I want to create an online 3D game of roads and city buildings (it is supposed to have good graphics). I would prefer that it work on all major OSes (Linux, Windows, Mac).
I know that, for example, Adobe Shockwave can do that, but unfortunately there is no Linux support.

Many mapping techniques, including normal (bump) mapping, parallax mapping and others, require a special per-vertex tangent-space basis (tangent, normal, binormal/bitangent).
This obviously means that my models should not only export vertex positions, texture coordinates and approximated per-vertex normals, but also one of the tangent space basis vectors (usually tangent), because the other one can be found directly in the shader using cross(tangent, normal).
Note that position, normal,
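For illustration, reconstructing the third basis vector from the two exported ones can be sketched like this (Python/NumPy standing in for shader math; the values are illustrative):

```python
import numpy as np

# Per-vertex data assumed to be exported by the model (illustrative,
# already orthonormalized).
normal = np.array([0.0, 0.0, 1.0])
tangent = np.array([1.0, 0.0, 0.0])

# The bitangent is the cross product of the other two basis vectors,
# exactly what cross(tangent, normal) would do in the shader.
bitangent = np.cross(tangent, normal)
print(bitangent)  # (0, -1, 0) for these inputs
```

If the result points the wrong way for your texture orientation, the tangent space is left-handed and the bitangent must be negated; many exporters store that handedness as a fourth component of the tangent for exactly this reason.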

I'm doing work on older GPUs which only support 16-bit indices for DrawIndexedPrimitive calls (and I assume the same holds for OpenGL, as it is a hardware limitation).
While I understand this obviously means all indices have to fit in 0xFFFF, and therefore at most ~65k vertices, I can't find a good answer to what limit this implies for the maximum number of polys you can draw at once. Does it mean the index buffer is also limited to 65k elements (about 22k triangles), or could you send 100k triangles at once?
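For what it's worth, the 0xFFFF ceiling constrains the index values (how many distinct vertices one draw can address), not directly how many indices you submit; the index count is governed by separate device caps (e.g. D3D9's MaxPrimitiveCount). A sketch of the distinction (the function name is mine):

```python
# With 16-bit indices the hard constraint is on index *values*: each index
# must fit in 0xFFFF, so one draw call can address at most 65536 distinct
# vertices. The number of indices submitted is a separate limit.
MAX_INDEX_16BIT = 0xFFFF

def drawable_with_16bit_indices(indices):
    """Hypothetical validity check: every index value fits in 16 bits."""
    return all(0 <= i <= MAX_INDEX_16BIT for i in indices)

# 100k triangles are fine as long as they reuse at most 65536 vertices:
indices = [v % 65536 for v in range(300_000)]   # 100 000 triangles
print(len(indices) // 3, drawable_with_16bit_indices(indices))  # 100000 True
```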

Tags: 3d, transformation, mesh, head
I'm looking for an optimal method to match the vertices of 2 different meshes, which differ in density, size and maybe in other ways.
The 2 meshes represent a human head, so the anatomical differences can be many.
The problem is one of minimization, perhaps of mean distance or of an energy.
I can find a few methods on the web for interest-point registration in 2D images, but not in 3D.
Does anyone have an idea for a non-rigid transformation?
I'm working especially to find each dual vertex of a

I have a rigged 3ds Max model that I want to use in Unity. I don't want to animate any motion in 3ds Max, since all the animations will be dynamic. All I want is to have access to the bones/joints of the model in Unity, so I can transform them with code at runtime.
I hear that I should import the model in FBX format, which I do, but I see no bones or joints in Unity. Also, from my research I need to "bake the animation in" before exporting from 3ds Max, but I don't want to animate in 3ds Max.

Tags: 3d, texture-mapping, 3d-modelling, assimp
I'm using Assimp to load 3D models into my program. Everything has gone dandy so far, except I've come across a Lightwave object that doesn't seem to make sense. Of course it renders nicely in Lightwave, but in Assimp there are no texture (UV) coordinates, yet textures end up getting loaded anyway. That doesn't help: they just sit in memory and never get used because, you guessed it, there are no texture coordinates.
I haven't found any helpful Assimp pages on this so far. Ot

Given an irregular tetrahedron's vertex coordinates A(x1,y1,z1), B(x2,y2,z2), C(x3,y3,z3), D(x4,y4,z4), I need to compute the 3D coordinates H(x,y,z) of the foot of the altitude dropped from vertex A. After much Googling I was only able to find the barycentric coordinates, not the foot of the altitude. Please help.
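One way to get the foot of the altitude is to project A onto the plane through B, C and D; a hedged sketch with NumPy (the function name is mine):

```python
import numpy as np

def altitude_foot(A, B, C, D):
    """Foot H of the perpendicular dropped from A onto the plane B, C, D."""
    A, B, C, D = map(np.asarray, (A, B, C, D))
    n = np.cross(C - B, D - B)          # normal of the base plane
    n = n / np.linalg.norm(n)
    H = A - np.dot(A - B, n) * n        # project A onto the plane
    return H

# Example: base triangle in the z=0 plane, apex straight above the origin.
H = altitude_foot([0, 0, 3], [1, 0, 0], [0, 1, 0], [-1, -1, 0])
print(H)  # [0. 0. 0.]
```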

Tags: 3d, directx, gpu, picking, ray-picking
I am working on a project which allows users to pick 3d objects in a scene and I was wondering what everyone thought would be the best way to approach this particular scenario.
Basically we have a scene with at least 100 objects (they are low-poly, but each made from at least ~12-15 triangles) and up to about 1000-2000 objects.
Not all the objects will be "pickable" at all times, because some objects will occlude others, so the number of "pickable" objects probably lands in the range of 800-1500 (depending on t

I'm trying to center a WebGL sphere on a specific x,y point. I can already position the globe using spherical coordinates (setting phi and theta), but I first need to convert x,y coordinates to this system (to phi and theta). The globe is a sphere textured with this image, and I can easily map lat/long to a position on that map myself (using a mapper function).
So in total what I want is:
convert lat/long to x/y position
convert x/y to phi/theta <-- can't figure out how to do this
positio
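Assuming the map image uses the usual equirectangular layout (longitude -180..180 left to right, latitude 90..-90 top to bottom; an assumption, since the actual mapper function isn't shown), the two conversions might look like:

```python
import math

def latlon_to_xy(lat, lon, width, height):
    """Equirectangular mapping from lat/long to map pixel coordinates."""
    x = (lon + 180.0) / 360.0 * width
    y = (90.0 - lat) / 180.0 * height
    return x, y

def xy_to_spherical(x, y, width, height):
    """Map pixel coordinates to polar angle phi and azimuth theta."""
    phi = (y / height) * math.pi          # 0 at the north pole, pi at the south
    theta = (x / width) * 2.0 * math.pi   # 0..2pi around the equator
    return phi, theta

x, y = latlon_to_xy(0.0, 0.0, 2048, 1024)   # equator, prime meridian
print(xy_to_spherical(x, y, 2048, 1024))    # (pi/2, pi)
```

Note that whether theta is offset by pi (and whether phi is measured from the pole or the equator) depends on the renderer's own convention, so the constants may need adjusting.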

Tags: 3d, gradient, terrain, perlin-noise, procedural-generation
I'd like to implement procedural generation of terrain.
After thorough research I came to the conclusion that it should be implemented with one of the gradient (coherent) noise algorithms, for instance the Perlin noise algorithm.
However, I don't want the generation to be completely random. I'd like to apply some constraints (like where a mountain range should be, or where a lowland should be, etc.).
Question:
For example I have a curve which represents some landscape element.

I have a function z(x,y,t) and I'd like to animate the time evolution in a GIF. I saved the space data into 3 column text file of the form
x y z
with file names 1.txt, 2.txt, ..., 100.txt
I'd like a 2d GIF with the z column colored as a temperature. From what I've read this should be possible but I'm having trouble implementing it as I've only done basic plotting in gnuplot. Something like this
http://www.gnuplotting.org/animation-gif/
but with all the data files I have. I've spent the bet

I have created a model in JavaFX using Spheres. I want to show a second window with controls to scale this 3D model.
But if this second window has some controls, the model is shown with the anchor pane where I put my shapes.
(I change the scene color to black in code.)
Any idea how to remove it?

Tags: 3d, textures, javafx-8, collada
So I'm trying to import a Collada .dae file into a JavaFX scene using ColModelImporterJFX by InteractiveMesh.
I've got the model importing from the jar and rendering into the scene, but there isn't any useful documentation regarding adding PhongMaterials to the returned MeshViews.
ColModelImporter station = new ColModelImporter();
station.setResourceBaseUrl(ClassLoader.getSystemResource("models/Station"));
station.read(ClassLoader.getSystemResource("models/station.dae"));
The only node in

Tags: 3d, pseudocode, data-conversion
I'm trying to optimize accessing and changing data of a 3D environment, because some operations have to be done millions of times. Currently I have the following optimizations in place:
Using a flat array (1D)
The dimensions are in powers of 2
Bit-shifting, where possible, instead of multiplication/division
The indexing of the 3D vectors is as follows:
A change in the X vector would increase/decrease the index by 1
A change in the Y vector would increase/decrease the index by
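The described layout can be sketched like this (Python; the grid sizes are hypothetical, since the question's actual dimensions aren't given):

```python
# Flat indexing of a 3D grid whose dimensions are powers of two
# (hypothetical 32 x 32 x N grid).
BITS_X, BITS_Y = 5, 5

def index(x, y, z):
    """index = x + y * 2^BITS_X + z * 2^(BITS_X+BITS_Y), done with shifts."""
    return x | (y << BITS_X) | (z << (BITS_X + BITS_Y))

def coords(i):
    """Inverse: recover (x, y, z) with masks and shifts."""
    x = i & ((1 << BITS_X) - 1)
    y = (i >> BITS_X) & ((1 << BITS_Y) - 1)
    z = i >> (BITS_X + BITS_Y)
    return x, y, z

assert coords(index(3, 7, 11)) == (3, 7, 11)
# A step in X changes the index by 1, in Y by 32, in Z by 1024:
assert index(4, 7, 11) - index(3, 7, 11) == 1
assert index(3, 8, 11) - index(3, 7, 11) == 32
```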

Tags: 3d, webgl, 3d-modelling, 3d-model
How do you texture a 3D model in WebGL? I understand all the basics of 3D in WebGL, I understand how to load textures and I understand how to load 3D models from external files, but I have no clue how to texture a model that is more complex than a basic cube.
I assume that the same principles apply in WebGL as in OpenGL or any other 3D rendering language, but I can't find any good information out there for WebGL, OpenGL, etc.
Any guidance/links/explanations would be greatly appreciated. Thank y

I have a question.
I load a .dae file into my scene, but I do not see any shadows on the models, and I don't know what I should do.
I have googled the problem and there are similar questions; however, I still cannot solve my problem.
Here is my code:
<!doctype html>
<html lang="en">
<head>
<title>Test von Web GL</title>
<meta charset="utf-8">
</head>
<body style="margin: 0;">
<script src="js/three.min.js">

The following code produces an array-out-of-bounds exception (ArrayIndexOutOfBoundsException: -2).
I have no idea why; I have been following a tutorial online. I have read through the references and the Processing Javadoc, but there is not much info on the method. Anyone have any ideas?
someImage.jpg is a 1200 x 600 image file.
class Ball
{
float size;
Ball(float size)
{
this.size = size;
}
void show(PImage img)
{
PShape my_ball;
my_ball = createShape(SPHERE, si

Tags: 3d, position, interpolation
I measured data in a 3D space, let's say temperature in a room for example. The 8 measurement points are arranged like the corners of a cube. I have room coordinates and temperature data for each point.
Now I track a moving person within the same space. Based on the person's current position I want to interpolate between the 8 values to estimate the temperature at that point in space.
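Assuming the 8 points really sit on the corners of an axis-aligned cube, this is trilinear interpolation; a sketch in plain Python (the corner temperatures are made up):

```python
def trilerp(c, fx, fy, fz):
    """Trilinear interpolation of 8 corner values.

    c[i][j][k] is the value at corner (x=i, y=j, z=k); fx, fy, fz are the
    person's fractional position (0..1) inside the cube.
    """
    def lerp(a, b, t):
        return a + (b - a) * t
    # Collapse the cube along x, then y, then z.
    c00 = lerp(c[0][0][0], c[1][0][0], fx)
    c10 = lerp(c[0][1][0], c[1][1][0], fx)
    c01 = lerp(c[0][0][1], c[1][0][1], fx)
    c11 = lerp(c[0][1][1], c[1][1][1], fx)
    c0 = lerp(c00, c10, fy)
    c1 = lerp(c01, c11, fy)
    return lerp(c0, c1, fz)

# Hypothetical temperatures at the 8 corners of the room:
corners = [[[20.0, 21.0], [22.0, 23.0]], [[24.0, 25.0], [26.0, 27.0]]]
print(trilerp(corners, 0.5, 0.5, 0.5))  # 23.5, the average of the 8 corners
```

For a non-cubic but box-shaped arrangement, fx, fy, fz are just (pos - min) / (max - min) per axis.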

I'm trying to transition my 2D view to a 3D view once I reach a defined level of zoom, for a PCL project (MAP 3D Project - Github).
So, what did I do (on the PCL part): I created a customMap.cs, an object which basically inherits from Xamarin.Forms.Map:
public class CustomMap : Map
{
/// <summary>
/// Just for my test since I can't find the way to find the current location of the camera
/// </summary>
public static readonly Bin

How can I create and save a 3D volume model from a volume image vtkImageData?
Some people recommend vtkImageDataGeometryFilter, so I tried the following:
vtkSmartPointer<vtkImageDataGeometryFilter> imageDataGeometryFilter =
vtkSmartPointer<vtkImageDataGeometryFilter>::New();
imageDataGeometryFilter->SetInputData(imageData);
imageDataGeometryFilter->SetThresholdCells(true);
imageDataGeometryFilter->ThresholdValueOn();
imageDataGeometryFilter->SetThresholdValue(90);
i

I have been struggling for the last several years to understand why the Internet has so few actually useful 3D web applications. It's 2009 and still everything looks like pages from a Sears catalog. You can turn on your TV and find flying logos every night. After that you can get nostalgic, flip on the ol' N64 and play some Zelda or Mario Kart. On the PC, The Sims 2 is approaching 6 years old already. And then there's WoW. The current generation of users - the Facebook crowd, let's say - has ~no~ probl

I have to make a project on distributed rendering of a 3D image. I can use standard algorithms; the aim is to learn Hadoop, not image processing. So can anyone suggest which language I should use, C++ or Java, and a standard implementation of a 3D renderer? Any other help would be highly useful.

Tags: 3d, mouse, camera, 2d, xna-4.0
So I am making a game in XNA 4.0 and I am having an issue with translating the mouse coordinates into the 3D world. I have used the Viewport.Unproject() method, and it almost works. The issue is that my projection is a "field of view" (perspective) projection, so the distance away from the center axes is nonlinear. If I change the projection to a standard perspective then my 3D objects are deformed. Is there a mathematical fix for using the field of view with the translated mouse-coordinate data?

Tags: 3d, line, intersection, plane
If given a line (represented by either a vector or two points on the line), how do I find the point at which the line intersects a plane? I've found loads of resources on this, but I can't understand the equations there (they don't seem to be standard algebra). I would like an equation (no matter how long) that can be implemented in a standard programming language (I'm using Java).
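For reference, the standard parametric form is: with the line written as l0 + t*d and the plane as all points X with n . (X - p0) = 0, the intersection is at t = n . (p0 - l0) / (n . d). A direct translation (Python, but plain arithmetic only, so it ports to Java one-to-one):

```python
def line_plane_intersection(l0, d, p0, n, eps=1e-9):
    """Intersection of the line l0 + t*d with the plane through p0
    that has normal n. Returns None if the line is parallel to the plane."""
    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    denom = dot(n, d)
    if abs(denom) < eps:
        return None                      # parallel (or lying in the plane)
    t = dot(n, (p0[0]-l0[0], p0[1]-l0[1], p0[2]-l0[2])) / denom
    return (l0[0] + t*d[0], l0[1] + t*d[1], l0[2] + t*d[2])

# Line pointing straight down from (1, 2, 5), hitting the z = 0 plane:
print(line_plane_intersection((1, 2, 5), (0, 0, -1), (0, 0, 0), (0, 0, 1)))
# (1.0, 2.0, 0.0)
```

If the line is given as two points P and Q instead of a point and a direction, use d = Q - P.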

How can I undo the effects of perspective projection (in Direct3D) on the item pointed to in the image below. I want it to look like a rectangular banner (get rid of the trapezoid effect visible).
The view matrix is setup with the camera at (0.0f, 0.8f, 2.5f) pointing at (0, 0, 0) and the item pointed to is drawn parallel with the x axis.
I've tried to draw that item with an orthogonal projection matrix, however I'm stuck on how to find out the screen coordinates of it when in perspective so

I'm working on an STL file importer and thought I'd make use of the supplied normal to determine the triangle winding order. Sample data with 4 triangles is included below (the original data has over 70k triangles). My code's logic computes the normal assuming the vertices are specified anticlockwise, then takes the dot product of this computed normal with the supplied normal. If the result is positive, I assume anticlockwise; otherwise clockwise.
tm.SetCCW(Dot(Cross(facet.getVertex2() - facet.
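The described test can be sketched as follows (Python/NumPy; names are illustrative, not the importer's actual API):

```python
import numpy as np

def is_ccw(v1, v2, v3, file_normal):
    """True if v1, v2, v3 wind anticlockwise as seen from the side the
    supplied STL facet normal points toward (the question's dot-product test)."""
    computed = np.cross(np.subtract(v2, v1), np.subtract(v3, v1))
    return np.dot(computed, file_normal) > 0.0

# Triangle in the z=0 plane, wound CCW when viewed from +z:
print(is_ccw([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]))  # True
```

One caveat worth noting: STL files in the wild often contain zero or unreliable normals, so a robust importer usually falls back to the right-hand rule order the STL spec prescribes when the dot product is (near) zero.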

Tags: 3d, xna, nvidia, stereo-3d, stereoscopy
I am trying to come up with an app that renders two video streams from webcams in a way that they are perceived as a stereoscopic image on a 3D display. I have never dealt with stereoscopic 3D before, but theoretically this should be as simple as rendering the streams to two different surfaces and showing each to the appropriate eye (sorry, not fully familiar with the terminology). I know that NVidia drivers can "stereoscopize" any 3D application. I also know that video games include this feature as a

I am pretty familiar with the Three.js 3D JavaScript library: I know how to add objects, lights and textures, group them, move them around a little bit, and play around with the camera.
What I want to know is how to take it to the next level by learning about the inner workings. I have already had a good look around for information on things like matrices, but all I find are complex mathematical examples.
I would like to know if anyone can point me to any sites or books which would help me

I'm trying to create 5 rectangles which get more and more rotated about the y-axis as they get farther away from the centered rectangle.
But I can't work out how to use the transform and perspective attributes to achieve it. I'm only able to rotate in 2D; when I specify rotateY or rotateX it does not respond and stays the same.
Any help? A tutorial? An example?

Tags: 3d, gnome, vala, clutter, cogl
Cogl is a modern 3D graphics API with associated utility APIs designed to expose the features of 3D graphics hardware using a more object oriented design than OpenGL. The library has primarily been driven by the practical needs of Clutter but it is not tied to any one toolkit or even constrained to developing UI toolkits.
I have known the names of the common GNOME libraries - cairo, pango, gtk, clutter and cogl - for a long time, but only recently did I actually find out what the libraries do. And t

I want to rotate an object around a custom pivot point, so I have this code:
private final EventHandler<MouseEvent> mouseEventHandler = new EventHandler<MouseEvent>() {
@Override
public void handle(MouseEvent mouseEvent) {
if (mouseEvent.getEventType() == MouseEvent.MOUSE_PRESSED) {
dragStartX = mouseEvent.getSceneX();
dragStartY = mouseEvent.getSceneY();
mousePosX = mouseEvent.getSceneX();

Is it possible to insert a JPG bitmap or SVG image inside HTML5 3D objects?
The 3D objects are these:
http://www.script-tutorials.com/triangle-mesh-for-3d-objects-in-html5

Tags: 3d, blender, jmonkeyengine
I am new to game development and am using the jMonkeyEngine. I started to develop an endless-runner type game. For the track, I have created a map in Blender and imported it into jME.
As the screenshot shows, I have added it to a terrain and made some mountains. Now I need to get the exact vector point on the map (point A, as in the screenshot).
This will help me detect whether the running object is on the map or not. Can someone give me an answer or tell me the alternatives that experts use?
this is t

Does anyone know if there is a site that has 3D images of different types of screw bits? I just ran into a bit that is supposedly called a triangle wedge special screw.
I'm sick of buying all these different tools and plan on making an adapter bit and printing it on a 3D printer. Example: I would have the triangle wedge special screw male section on one end and a Phillips/cross on the other; that way I would just need a Phillips-head screwdriver to unscrew this thing. If I run into ot

How can I enable or disable a button, or change a button's background color, from another button in Maya MEL?
This is my code so far:
global string $btn2;
global proc fun(string $btn){
button -label "button 2" -enable false $btn;
}
window -width 150;
columnLayout -adjustableColumn true;
$btn1 = `button -label "button 1" -c "fun $btn2"`;
$btn2 = `button -label "button 2" -enable true`;
showWindow;

I am currently creating a set of Java classes for working with basic 3D shapes, and need help with displaying a cuboid on a 2D drawing surface (i.e. a canvas).
I know this question is probably mostly mathematical, but how do you get the 2D vector/bounds of a corner point if you have the object's 3 rotation angles around each axis and the 3D vector of the point's position relative to the center of the object?
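A minimal sketch of that math in Python (assuming X-then-Y-then-Z rotation order and a simple orthographic projection, both of which are assumptions the question leaves open):

```python
import math

def rotation_xyz(ax, ay, az):
    """3x3 rotation matrix from Euler angles about X, then Y, then Z
    (one possible order; the question does not fix a convention)."""
    sx, cx = math.sin(ax), math.cos(ax)
    sy, cy = math.sin(ay), math.cos(ay)
    sz, cz = math.sin(az), math.cos(az)
    rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(rz, matmul(ry, rx))

def project_corner(corner, angles, center):
    """Rotate a corner (given relative to the object's center), then drop z
    for an orthographic screen position."""
    r = rotation_xyz(*angles)
    world = [sum(r[i][j] * corner[j] for j in range(3)) + center[i]
             for i in range(3)]
    return world[0], world[1]            # orthographic: ignore depth

# Corner (1, 1, 1) of a cuboid rotated 90 degrees about Z lands near (-1, 1):
print(project_corner((1, 1, 1), (0, 0, math.pi / 2), (0, 0, 0)))
```

For a perspective view you would instead divide x and y by the depth after adding a camera distance, but the rotation part is the same.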

I have many .ply files.
For some reason, the normals of some of them point outward from the object (which is what I want) and the normals of others point inward.
Does meshlabserver have a function that can detect the normal orientation of a .ply file, and another function that can unify all the normals to a specific orientation?
By the way, since I am new to 3D objects, I would like to know what factor determines the normal orientation? Hope someone is willing to answer my qu

I am working on a 3D project, and for that I use MeshLab. The problem is I can't get MeshLab to export the mesh in OBJ format; when I try, the program just freezes or crashes. I tried with a small cube and it worked, but when I tried with a model of about 6 MB it crashed (the models I am working on are 25-500 MB).
Any help would be amazing, thanks.

I'm trying to convert a textured mesh (a .3DS that I'm reading with MeshLab) to a 3D colored mesh (an RGB color per vertex). For this, I'm saving it with the .ply extension. However, when I open it, I don't find the RGB colors associated with each vertex. Is it possible to do this conversion?

Tags: 3d, geometry, computational-geometry, geometry-surface
I am interested in drawing spiral curves on a surface which look like this:
The equation of it is:
((1.0-z)*(((x-1.0)*(x-1.0))+y*y-(1.0/3.0))*(((x+1.0)*(x+1.0))+(y*y)-(1.0/3.0)))+(z*((x*x)+(y*y)-(1.0/3.0))) == 0.0
where
x=<-1.0-sqrt(1/3),+1.0+sqrt(1/3)>
y=<-sqrt(1/3),+sqrt(1/3)>
z=< 0.0,1.0>
I found the equation on this website:
https://www.quora.com/What-is-the-mathematical-expression-which-when-plotted-looks-like-a-pair-of-pants
Then I saw this post on Math Stack Exchange

Tags: 3d, 3d-modelling, 3d-model
Take a binary STL file.
Research suggests the 2-byte "Attribute Byte Count" should be kept as null bytes; however, some software packages adapt it for purposes such as
colour. I was hoping to use this field to store a 16-bit integer in individual elements when creating STL files in my software, without corrupting the file.
A user could then view the file in a viewer (e.g. Windows 10 3D Viewer) as they desire... but when imported into my own software I would be able to use this stored i

I have always wondered how game programmers tie together game characters made in external 3D modeling software like Maya or 3ds Max and the actual game logic written in their favourite programming language, e.g. C or C++.
How do you combine these two things, and what is the actual process of building a game, from modeling characters to programming?
Some of the things that make me wonder are, for example: do you program the characters' movements in the code or in the 3D model?
Examples w

Tags: 3d, high-level, hardware-acceleration, 3d-engine
I have been programming in Flash for a long time. It is interesting that most things, including open-source libraries, are very high-level in the Flash world. That is great because we can build things up quickly. But Flash is too slow (I want to do CV stuff, visual effects, generative art, etc.).
I have tried GLUT, Processing and openFrameworks, and I found them too different from Flash.
So, I want to know if there is any high-level (like Papervision3D), fast (preferably hardware-accelerated) 3D engine? It

I know about Papervision3D. However, a lot of the realism there comes from textures.
Does anyone know of a benchmark that shows how many single-color, flat-shaded 3D triangles Flash 10 can reasonably render? I can't find such a benchmark online, or an engine for this (most seem to really favour bitmaps/textures).

What are some advantages and disadvantages between the two. Especially for something like a 3D game.

I have a reference model (for example a triangle), with its 'up' vector defined as <0 1 0> and its 'over' vector defined as <1 0 0>.
Now I have another triangle of the same size, rotated and positioned arbitrarily in 3D space. What I want to do is find this new triangle's 'up' and 'over' vectors. I.e. if the triangle is rotated 180 degrees around the X-axis, its 'up' vector should be <0 -1 0> and its 'over' vector should be unchanged.
How do I find the rotation transformation

Tags: 3d, geometry, computational-geometry, concave-hull
I have a list of Surfaces defining a 3D Object.
Those surfaces have the following constraints:
each surface is defined by an array of vertices describing its border
no holes exist inside a surface
surfaces do not overlap or pass through other surfaces
each vertex on each edge of the surface is included
all surfaces are adjacent to at least two other surfaces
objects created by those surfaces may be concave
I want to get the outer hull of the 3D Objects created by those surfaces
- there i

I want to extract the 3D Flyover map data from Apple's Maps.app for use in 3D modeling apps. Has anyone attempted this? The only Maps files I could find were in:
/Library/Containers/com.apple.Maps/Data/Library/Saved Application State
This has a window_1.data file which is last modified today and is 2.8 MB, though I can't make out which format it is in.
Any pointers would be great!

I have a coordinate system A.
For example, the 3 principal direction vectors of system A are:
e0= [0.3898 -0.0910 0.9164]
e1= [0.6392 0.7431 -0.1981]
e2= [-0.6629 0.6630 0.3478]
And I have a Cartesian coordinate system B with three unit vectors:
nx=[1 0 0];
ny=[0 1 0];
nz=[0 0 1]
How can I find the transformation matrix C between the two coordinate systems A and B?
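If e0, e1, e2 are A's axes expressed in B's (standard) coordinates, then the matrix with those vectors as its columns maps A-coordinates to B-coordinates, and, for an orthonormal basis like this one, its transpose (= inverse) maps the other way. A NumPy sketch using the question's numbers:

```python
import numpy as np

# Basis vectors of system A, expressed in the standard system B
# (values taken from the question).
e0 = [0.3898, -0.0910, 0.9164]
e1 = [0.6392, 0.7431, -0.1981]
e2 = [-0.6629, 0.6630, 0.3478]

# Columns = A's axes, so C maps A-coordinates to B-coordinates;
# its transpose maps B-coordinates back into A-coordinates.
C = np.column_stack([e0, e1, e2])
B_to_A = C.T

# Sanity check: a point lying on A's first axis should come out as
# (1, 0, 0) in A-coordinates (up to the rounding of the printed values).
p_in_B = np.asarray(e0)
p_in_A = B_to_A @ p_in_B
print(np.allclose(p_in_A, [1, 0, 0], atol=1e-3))  # True
```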

My objective is to display OBJ files in a QWidget. This code used to work previously:
vtkSmartPointer<vtkOBJImporter> importer = vtkSmartPointer<vtkOBJImporter>::New();
importer->SetFileName(graphics_path_balanced.toStdString().c_str());
importer->SetFileNameMTL(graphics_path_balanced.append(".mtl").toStdString().c_str());
importer->Read();
importer->GetRenderer()->SetBackground(1.0, 1.0, 1.0);
#if VTK_MAJOR_VERSION >= 7 && VTK_MI
