Search Results

Search found 240 results on 10 pages for 'geodesic sphere'.


  • Sphere-Sphere intersection and Circle-Sphere intersection

    - by cagirici
    I have code for circle-circle intersection, but I need to extend it to 3D. How do I calculate: the radius and center of the intersection circle of two spheres, and the points of intersection of a sphere and a circle? Given two spheres (sc0, sr0) and (sc1, sr1), I need to calculate their circle of intersection, with center ci and radius ri. Moreover, given a sphere (sc0, sr0) and a circle (cc0, cr0), I need to calculate the two intersection points (pi0, pi1). I have checked this link and this link, but I could not understand the logic behind them or how to code them. I tried the ProGAL library for sphere-sphere-sphere intersection, but the resulting coordinates are rounded, and I need precise results.
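
    For reference, the sphere-sphere case has a simple closed form: the intersection circle lies in the plane perpendicular to the line of centers, at signed distance h = (d^2 + sr0^2 - sr1^2) / (2d) from the first center. A minimal numpy sketch (not from any particular library):

        import numpy as np

        def sphere_sphere_intersection(sc0, sr0, sc1, sr1):
            """Circle of intersection of two spheres: (center, radius, plane normal)."""
            sc0, sc1 = np.asarray(sc0, float), np.asarray(sc1, float)
            d = np.linalg.norm(sc1 - sc0)
            if d == 0 or d > sr0 + sr1 or d < abs(sr0 - sr1):
                return None  # separate, contained, or concentric: no circle
            h = (d*d + sr0*sr0 - sr1*sr1) / (2*d)  # signed distance to the plane
            n = (sc1 - sc0) / d                    # unit normal of the plane
            ci = sc0 + h * n
            ri = np.sqrt(sr0*sr0 - h*h)
            return ci, ri, n

    The circle-sphere case reduces to this: cut the sphere with the circle's supporting plane (giving a second circle in the same plane, whose center is the sphere center projected onto the plane and whose radius follows by Pythagoras), then solve the 2D circle-circle intersection inside that plane.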

  • Morph a sphere to a cube and a cube to a sphere with GLSL

    - by nkint
    I'm getting started with GLSL with Quartz Composer. I have a patch with a particle system in which each particle is mapped onto a sphere with a blend value. With blend=0 particles are in random positions; with blend=1 particles are on the sphere. The code is here:

        vec3 sphere(vec2 domain) {
            vec3 range;
            range.x = radius * cos(domain.y) * sin(domain.x);
            range.y = radius * sin(domain.y) * sin(domain.x);
            range.z = radius * cos(domain.x);
            return range;
        }

        // in main:
        vec2 p0 = gl_Vertex.xy * twopi;
        vec3 normal = sphere(p0);
        vec3 r0 = radius * normal;
        vec3 vertex = r0;
        normal = normal * blend + gl_Normal * (1.0 - blend);
        vertex = vertex * blend + gl_Vertex.xyz * (1.0 - blend);

    I'd like the particles to be on a cube if blend=0. I've tried to find some parametric equation for the cube, but I can't figure one out. Maybe it is not the right way?
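
    One standard trick (an assumption on my part, not from this thread): keep the same parameterization and radially project each sphere point onto the enclosing cube by dividing by its largest absolute component; in GLSL that projection is p / max(max(abs(p.x), abs(p.y)), abs(p.z)). A sketch of the same math in Python:

        import numpy as np

        def sphere(domain, radius=1.0):
            # same parameterization as the GLSL sphere() above:
            # domain[0] = polar angle, domain[1] = azimuth
            theta, phi = domain
            return radius * np.array([np.cos(phi) * np.sin(theta),
                                      np.sin(phi) * np.sin(theta),
                                      np.cos(theta)])

        def cube(domain, radius=1.0):
            # radially project the sphere point onto the axis-aligned cube
            # of half-extent `radius`: divide by the largest absolute component
            p = sphere(domain, 1.0)
            return radius * p / np.max(np.abs(p))

    Then vertex = mix(cube(p0), sphere(p0), blend) morphs between the two shapes, and the normals can be blended the same way (renormalized).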

  • OpenGL texture on sphere

    - by Cilenco
    I want to create a rolling, textured ball in OpenGL ES 1.0 for Android. With this function I can create a sphere:

        public Ball(GL10 gl, float radius) {
            ByteBuffer bb = ByteBuffer.allocateDirect(40000);
            bb.order(ByteOrder.nativeOrder());
            sphereVertex = bb.asFloatBuffer();
            points = build();
        }

        private int build() {
            double dTheta = STEP * Math.PI / 180;
            double dPhi = dTheta;
            int points = 0;

            for(double phi = -(Math.PI/2); phi <= Math.PI/2; phi += dPhi) {
                for(double theta = 0.0; theta <= (Math.PI * 2); theta += dTheta) {
                    sphereVertex.put((float) (radius * Math.sin(phi) * Math.cos(theta)));
                    sphereVertex.put((float) (radius * Math.sin(phi) * Math.sin(theta)));
                    sphereVertex.put((float) (radius * Math.cos(phi)));
                    points++;
                }
            }

            sphereVertex.position(0);
            return points;
        }

        public void draw() {
            texture.bind();
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, sphereVertex);
            gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        }

    My problem now is that I want to use this texture for the sphere, but then only a black ball is created (of course, because the top right corner is black). I use these texture coordinates because I want to use the whole texture:

        0|0  0|1  1|1  1|0

    That's what I learned from texturing a triangle. Is that incorrect if I want to use it with a sphere? What do I have to do to use the texture correctly?
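
    For reference, four corner coordinates only work for a quad; a sphere needs one (u, v) per vertex following its parameterization. With the loop above, the usual equirectangular choice is u = theta / 2pi and v = phi / pi + 0.5, supplied through a second buffer via glTexCoordPointer with GL_TEXTURE_COORD_ARRAY enabled. A sketch of the per-vertex math:

        import math

        def sphere_vertex(radius, phi, theta):
            """Position plus equirectangular UV; phi in [-pi/2, pi/2], theta in [0, 2*pi]."""
            x = radius * math.sin(phi) * math.cos(theta)
            y = radius * math.sin(phi) * math.sin(theta)
            z = radius * math.cos(phi)
            u = theta / (2 * math.pi)   # wraps once around horizontally
            v = phi / math.pi + 0.5     # 0 at one pole, 1 at the other
            return (x, y, z), (u, v)

    Note also that a single GL_TRIANGLE_FAN over the whole vertex set won't produce a correct sphere surface; a strip per latitude ring is the usual layout.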

  • Calculating adjacent quads on a quad sphere

    - by Caius Eugene
    I've been experimenting with generating a quad sphere. This sphere subdivides into a quadtree structure. Eventually I'm going to be applying some simplex noise to the vertices of each face to create a terrain-like surface. To solve the issue of cracks I want to be able to apply a geomipmapping technique of triangle fanning on the edges of each quad, but in order to know the subdivision level of the adjacent quads I need to identify which quads are adjacent to each other. Does anyone know any approaches to computing and storing these adjacent quads for quick lookup? Also, it's important that I know which direction they are in, so I can easily adjust the correct edge.
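
    One approach (a sketch of one option, assuming a single face for simplicity): key every quad by (level, x, y) in a hash map as the tree subdivides. A neighbour lookup steps one cell in the chosen direction and, if no quad exists at that level, walks up to coarser levels; the level at which it finds one is exactly the neighbour's subdivision level you need for stitching.

        # quads are keyed by (level, x, y), where (x, y) is the cell index at that level
        quads = {}

        DIRS = {"left": (-1, 0), "right": (1, 0), "up": (0, -1), "down": (0, 1)}

        def neighbour(level, x, y, direction):
            """Adjacent quad in `direction`, walking up to coarser levels if the
            neighbour is less subdivided. Returns its (level, x, y) key or None."""
            dx, dy = DIRS[direction]
            nx, ny = x + dx, y + dy
            while level >= 0:
                key = (level, nx, ny)
                if key in quads:
                    return key
                # not present at this level: try the parent cell
                level, nx, ny = level - 1, nx // 2, ny // 2
            return None

    Cross-face neighbours on a cube sphere need a small per-face edge table on top of this (which face each edge borders, and how its axes are flipped).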

  • Math for a geodesic sphere

    - by Marcelo Cantos
    I'm trying to create a very specific geodesic tessellation, but I can't find anything online about it. It is normal to subdivide the triangles of an icosahedron into triangle patches and project them onto the sphere. However, I noticed an animated GIF on the Wikipedia entry for geodesic domes that appears not to follow this scheme. Geodesic spheres generally comprise a mixture of mostly hexagonal triangle patches, with pentagonal patches forming at the vertices of the original icosahedron; in most cases, these pentagons are linked together; that is, following a straight edge from the center of one pentagon leads to the center of another pentagon. In the Wikipedia animation, however, the edge from the center of one pentagon doesn't appear to intersect the center of the adjacent pentagon; instead it intersects its side. Hopefully the drawing below makes this clear: Where can I go to learn about the math behind this particular geometry? Ideally, I'd like to know of an algorithm for generating such tessellations.
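
    What's described is most likely a Class III (skew) geodesic polyhedron. The usual notation is GP(m,n) (Goldberg polyhedra) or {3,5+}(m,n): starting at one pentagon you walk m lattice steps, turn 60 degrees, and walk n more to reach the next pentagon. Class I (n = 0) is the ordinary subdivision, where pentagon centers line up along edges; Class II (m = n) and especially the chiral Class III (0 < n < m) give exactly the skewed look where a line from one pentagon center hits the side of the next. Searching for "Goldberg polyhedron" or "geodesic notation" should turn up constructions. For contrast, a minimal sketch of the ordinary Class I subdivision:

        import numpy as np

        def icosphere(subdiv=2):
            """Class I geodesic sphere: repeated midpoint subdivision of an icosahedron."""
            t = (1 + 5 ** 0.5) / 2
            verts = [(-1,t,0),(1,t,0),(-1,-t,0),(1,-t,0),(0,-1,t),(0,1,t),
                     (0,-1,-t),(0,1,-t),(t,0,-1),(t,0,1),(-t,0,-1),(-t,0,1)]
            verts = [np.array(v, float) / np.linalg.norm(v) for v in verts]
            faces = [(0,11,5),(0,5,1),(0,1,7),(0,7,10),(0,10,11),
                     (1,5,9),(5,11,4),(11,10,2),(10,7,6),(7,1,8),
                     (3,9,4),(3,4,2),(3,2,6),(3,6,8),(3,8,9),
                     (4,9,5),(2,4,11),(6,2,10),(8,6,7),(9,8,1)]
            cache = {}
            def midpoint(i, j):
                key = (min(i, j), max(i, j))
                if key not in cache:
                    m = verts[i] + verts[j]
                    verts.append(m / np.linalg.norm(m))  # project back onto the sphere
                    cache[key] = len(verts) - 1
                return cache[key]
            for _ in range(subdiv):
                faces = [f for (a, b, c) in faces
                         for f in [(a, midpoint(a,b), midpoint(a,c)),
                                   (b, midpoint(b,c), midpoint(a,b)),
                                   (c, midpoint(a,c), midpoint(b,c)),
                                   (midpoint(a,b), midpoint(b,c), midpoint(a,c))]]
            return verts, faces

    For a general GP(m, n) you tile each icosahedron face with a triangular lattice skewed by the vector (m, n) instead of the straight midpoint split, then project as above.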

  • Trying to create a sphere in UDK on which I can stand

    - by Dave
    Trying to build a globe in UDK, but when I do (create a sphere), my player falls straight through it. How do I make a sphere that I can walk on? Every other shape (cube, cone, etc.) works just fine. Edit: Specifically, I want to build a CSG/brush sphere, not a mesh sphere. It appears to work just fine if I set the "sphere extrapolation" to 1 or 2, but if I bump it up to 3 or higher, I fall right through. I literally created 2 spheres next to each other, one set at "2" and one at "3"; I can walk from the top of the "2" sphere and jump onto the "3" sphere, but I fall right through it.

  • Routes on a sphere surface - Find geodesic?

    - by CaNNaDaRk
    I'm working with some friends on a browser-based game where people can move on a 2D map. It's been almost 7 years and people still play this game, so we are thinking of a way to give them something new. Since then the game map has been a limited plane, and people could move from (0, 0) to (MAX_X, MAX_Y) in quantized X and Y increments (just imagine it as a big chessboard). We believe it's time to give it another dimension, so, just a couple of weeks ago, we began to wonder how the game could look with other mappings:

    - Unlimited plane with continuous movement: this could be a step forward, but still I'm not convinced.
    - Toroidal world (continuous or quantized movement): honestly, I've worked with tori before, but this time I want something more...
    - Spherical world with continuous movement: this would be great!

    What we want: users' browsers are given a list of coordinates like (latitude, longitude) for each object on the spherical surface map; browsers must then render them on the user's screen inside a web element (canvas, maybe? this is not a problem). When people click on the map, we convert the (mouseX, mouseY) to (lat, lng) and send it to the server, which has to compute a route from the user's current position to the clicked point.

    What we have: we began writing a Java library with many useful math utilities for working with rotation matrices, quaternions, Euler angles, translations, etc. We put it all together and created a program that generates sphere points, renders them, and shows them to the user inside a JPanel. We managed to catch clicks and translate them to spherical coords, and to provide some other useful features like view rotation, scale, translation, etc. What we have now is like a little (very little indeed) engine that simulates client and server interaction. The client side shows points on the screen and catches other interactions; the server side renders the view and does other calculations, like interpolating the route between the current position and the clicked point.

    Where is the problem? Obviously, we want the interpolation to follow the shortest path between the two route points. We use quaternions to interpolate between two points on the surface of the sphere, and this seemed to work fine until I noticed that we weren't getting the shortest path on the sphere surface. We thought the problem was that the route was calculated as the sum of two rotations about the X and Y axes, so we changed the way we calculate the destination quaternion: we get a third angle (the first is latitude, the second is longitude, the third is the rotation about the vector which points toward our current position), which we called orientation. Now that we have the "orientation" angle, we rotate the Z axis and then use the resulting vector as the rotation axis for the destination quaternion (you can see the rotation axis in grey). What we get is the correct route (you can see it lies on a great circle), but we get this ONLY if the starting route point is at latitude, longitude (0, 0), which means the starting vector is (sphereRadius, 0, 0). With the previous version (image 1) we don't get a good result even when the starting point is (0, 0), so I think we're moving towards a solution, but the procedure we follow to get this route is maybe a little "strange"? In the following image you get a view of the problem we get when the starting point is not (0, 0): as you can see, the starting point is not the (sphereRadius, 0, 0) vector, and the destination point (which is correctly drawn!) is not on the route.

    The magenta point (the one which lies on the route) is the route's ending point rotated about the center of the sphere by (-startLatitude, 0, -startLongitude). This means that if I calculate a rotation matrix and apply it to every point on the route, maybe I'll get the real route, but I'm starting to think that there's a better way to do this. Maybe I should try to get the plane through the center of the sphere and the route points, intersect it with the sphere, and get the geodesic? But how? Sorry for being way too verbose, and maybe for incorrect English, but this thing is blowing my mind!

    EDIT: This code version is related to the first image:

        public void setRouteStart(double lat, double lng) {
            EulerAngles tmp = new EulerAngles(Math.toRadians(lat), 0, -Math.toRadians(lng));
            // set route start quaternion
            qtStart.setInertialToObject(tmp);
            // do other stuff like drawing start point...
        }

        public void impostaDestinazione(double lat, double lng) {
            EulerAngles tmp = new EulerAngles(Math.toRadians(lat), 0, -Math.toRadians(lng));
            qtEnd.setInertialToObject(tmp);
            // do other stuff like drawing dest point...
        }

        public V3D interpolate(double totalTime, double t) {
            double _t = t / totalTime;
            Quaternion q = Quaternion.Slerp(qtStart, qtEnd, _t);
            RotationMatrix.inertialQuatToIObject(q);
            V3D p = matInt.inertialToObject(V3D.Xaxis.scale(sphereRadius));
            // other stuff, like drawing point...
            return p;
        }

        // mostly taken from a book!
        public static Quaternion Slerp(Quaternion q0, Quaternion q1, double t) {
            double cosO = q0.dot(q1);
            double q1w = q1.w;
            double q1x = q1.x;
            double q1y = q1.y;
            double q1z = q1.z;
            if (cosO < 0.0) {
                q1w = -q1w; q1x = -q1x; q1y = -q1y; q1z = -q1z;
                cosO = -cosO;
            }
            double sinO = Math.sqrt(1.0 - cosO * cosO);
            double O = Math.atan2(sinO, cosO);
            double oneOverSinO = 1.0 / sinO;
            double k0 = Math.sin((1.0 - t) * O) * oneOverSinO;
            double k1 = Math.sin(t * O) * oneOverSinO;
            // Interpolate
            return new Quaternion(k0*q0.w + k1*q1w,
                                  k0*q0.x + k1*q1x,
                                  k0*q0.y + k1*q1y,
                                  k0*q0.z + k1*q1z);
        }

    A little dump of what I get (again, check image 1):

        Route info:
        Sphere radius and center: 200,000, (0.0, 0.0, 0.0)
        Route start: lat 0,000°, lng 0,000° @v: (200,000, 0,000, 0,000), |v| = 200,000
        Route end: lat 30,000°, lng 30,000° @v: (150,000, 86,603, 100,000), |v| = 200,000
        Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
        Qt start: (1,000, 0,000, -0,000, 0,000); 0,000°; (1,000, 0,000, 0,000)
        Qt end: (0,933, 0,067, -0,250, 0,250); 42,181°; (0,186, -0,695, 0,695)

        Route start: lat 30,000°, lng 10,000° @v: (170,574, 30,077, 100,000), |v| = 200,000
        Route end: lat 80,000°, lng -50,000° @v: (22,324, -26,604, 196,962), |v| = 200,000
        Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
        Qt start: (0,962, 0,023, -0,258, 0,084); 31,586°; (0,083, -0,947, 0,309)
        Qt end: (0,694, -0,272, -0,583, -0,324); 92,062°; (-0,377, -0,809, -0,450)
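
    For what it's worth, the geodesic between two surface points is exactly the great circle cut by the plane through the two points and the sphere's centre, and you can interpolate along it by slerping the position vectors directly, with no Euler angles or orientation fiddling. A sketch (Python/numpy, not the original Java):

        import numpy as np

        def great_circle_point(p0, p1, t):
            """Point at fraction t along the shortest great-circle arc p0 -> p1.
            p0, p1: points on a sphere centred at the origin, same radius."""
            p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
            r = np.linalg.norm(p0)
            u0, u1 = p0 / r, p1 / np.linalg.norm(p1)
            omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))  # arc angle
            if omega < 1e-9:
                return p0.copy()
            return r * (np.sin((1 - t) * omega) * u0 + np.sin(t * omega) * u1) / np.sin(omega)

    This is the same sin-weighted formula as the quaternion Slerp above, applied to the two position vectors themselves, so it always stays on the great circle regardless of where the start point is.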

  • fast sphere-grid intersection

    - by Mat
    Hi! Given a 3D grid, a 3D point as sphere center, and a radius, I'd like to quickly calculate all cells contained in or intersected by the sphere. Currently I take the (grid-aligned) bounding box of the sphere and calculate the two cells for the min and max points of this bounding box. Then, for each cell between those two cells, I do a box-sphere intersection test. It would be great if there were something more efficient. Thanks!
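
    For reference, the per-cell test can be very cheap: clamp the sphere centre to the cell's box and compare the squared distance to r squared (Arvo's test). With that, scanning the bounding box is usually fast enough. A sketch:

        import math

        def sphere_overlaps_cell(center, radius, cell_min, cell_max):
            """Sphere intersects the AABB iff the squared distance from the
            sphere centre to the box is <= radius^2."""
            d2 = 0.0
            for c, lo, hi in zip(center, cell_min, cell_max):
                if c < lo:
                    d2 += (lo - c) ** 2
                elif c > hi:
                    d2 += (c - hi) ** 2
            return d2 <= radius * radius

        def cells_hit_by_sphere(center, radius, cell_size):
            lo = [int(math.floor((c - radius) / cell_size)) for c in center]
            hi = [int(math.floor((c + radius) / cell_size)) for c in center]
            for i in range(lo[0], hi[0] + 1):
                for j in range(lo[1], hi[1] + 1):
                    for k in range(lo[2], hi[2] + 1):
                        cmin = (i * cell_size, j * cell_size, k * cell_size)
                        cmax = tuple(m + cell_size for m in cmin)
                        if sphere_overlaps_cell(center, radius, cmin, cmax):
                            yield (i, j, k)

    To prune further, for each x-slice you can shrink the y/z ranges to the circle that slice cuts from the sphere instead of using the full bounding box.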

  • Mapping A Sphere To A Cube

    - by petrocket
    There is a special way of mapping a cube to a sphere described here: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html It is not your basic "normalize the point and you're done" approach, and it gives a much more evenly spaced mapping. I've tried to do the inverse of the mapping, going from sphere coords to cube coords, but I have been unable to come up with working equations. It's a rather complex system of equations with lots of square roots. Any math geniuses want to take a crack at it? Here are the equations in C++ code:

        sx = x * sqrtf(1.0f - y * y * 0.5f - z * z * 0.5f + y * y * z * z / 3.0f);
        sy = y * sqrtf(1.0f - z * z * 0.5f - x * x * 0.5f + z * z * x * x / 3.0f);
        sz = z * sqrtf(1.0f - x * x * 0.5f - y * y * 0.5f + x * x * y * y / 3.0f);

    sx, sy, sz are the sphere coords and x, y, z are the cube coords.
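
    Short of a tidy closed form, a numeric inversion converges quickly here because the forward map is close to the identity on the cube surface. A sketch (Python with scipy; the starting guess simply pushes the sphere point out to the cube):

        import numpy as np
        from scipy.optimize import fsolve

        def cube_to_sphere(c):
            x, y, z = c
            return np.array([
                x * np.sqrt(1 - y*y/2 - z*z/2 + y*y*z*z/3),
                y * np.sqrt(1 - z*z/2 - x*x/2 + z*z*x*x/3),
                z * np.sqrt(1 - x*x/2 - y*y/2 + x*x*y*y/3),
            ])

        def sphere_to_cube(s):
            s = np.asarray(s, dtype=float)
            c0 = s / np.max(np.abs(s))  # initial guess on the cube surface
            return fsolve(lambda c: cube_to_sphere(c) - s, c0)

    A couple of Newton steps from that starting point is typically enough if you want to avoid the scipy dependency.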

  • Rotate sphere in Javascript / three.js while moving on x/z axes

    - by kaipr
    I have a sphere/ball in three.js which I want to "roll" around on the x/z plane. For the z axis I could simply do this, no matter what the current x and y rotation is:

        sphere.roll_z = function(distance) {
            sphere.position.z += distance;
            sphere.rotation.x += distance > 0 ? 0.05 : -0.05;
        }

    But how can I roll it along the x axis? And how could I do roll_z properly? I've found a lot about quaternions and matrices, but I can't figure out how to use them properly to achieve my (rather simple) goal. I'm aware that I have to update multiple rotations and that I have to calculate how far to rotate the sphere to match the distance, but the "how" is the question. It's probably just a lack of mathematical skills which I should train, but a working example/short explanation would help a lot to start with.
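
    For rolling without slipping in any ground direction, the rotation axis is up cross direction-of-motion and the angle is distance / radius, accumulated as a world-space rotation each frame. A language-agnostic sketch (Python; in three.js the equivalent update is sphere.quaternion.premultiply(q) with q built via setFromAxisAngle):

        import numpy as np

        def quat_from_axis_angle(axis, angle):
            axis = np.asarray(axis, float)
            axis /= np.linalg.norm(axis)
            return np.array([np.cos(angle / 2), *(axis * np.sin(angle / 2))])  # (w, x, y, z)

        def quat_mul(a, b):
            aw, ax, ay, az = a
            bw, bx, by, bz = b
            return np.array([aw*bw - ax*bx - ay*by - az*bz,
                             aw*bx + ax*bw + ay*bz - az*by,
                             aw*by - ax*bz + ay*bw + az*bx,
                             aw*bz + ax*by - ay*bx + az*bw])

        def roll(orientation, velocity, radius, up=(0, 1, 0)):
            """Roll without slipping: axis = up x motion, angle = distance / radius."""
            v = np.asarray(velocity, float)
            dist = np.linalg.norm(v)
            if dist < 1e-12:
                return orientation
            axis = np.cross(up, v / dist)
            dq = quat_from_axis_angle(axis, dist / radius)
            return quat_mul(dq, orientation)  # world-space rotation: premultiply

    For motion along +z this reduces to a rotation about +x, which matches the roll_z above, except the angle is now exact (distance / radius) instead of a fixed 0.05.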

  • exact point on a rotating sphere

    - by nkint
    I have a sphere that represents the Earth, textured with real pictures. It's rotating around the x axis, and when the user clicks, it has to show the exact place he clicked on. For example, if he clicked on Singapore, the system should be able to:

    - understand that the user clicked on the sphere (OK, I'll do it with unProject)
    - understand where the user clicked on the sphere (ray-sphere collision?) and take into account the rotation
    - transform the sphere coordinate to some coordinate system good for some web-API service
    - ask the API (OK, this is the simpler thing for me ;-)

    Some advice?
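
    A sketch of the middle two steps (Python; it assumes y up, spin about the x axis, and longitude measured from +z, so adjust the conventions to your texture orientation):

        import math
        import numpy as np

        def ray_sphere_hit(origin, direction, center, radius):
            """Nearest intersection of the ray origin + t*direction with the sphere."""
            o = np.asarray(origin, float)
            d = np.asarray(direction, float)
            d = d / np.linalg.norm(d)
            oc = o - np.asarray(center, float)
            b = np.dot(oc, d)
            disc = b*b - (np.dot(oc, oc) - radius*radius)
            if disc < 0:
                return None                  # ray misses the sphere
            t = -b - math.sqrt(disc)         # nearest of the two roots
            return o + t*d if t >= 0 else None

        def hit_to_lat_lng(p, center, rotation_angle):
            """Undo the sphere's spin about x, then convert to lat/lng (degrees)."""
            q = np.asarray(p, float) - np.asarray(center, float)
            ca, sa = math.cos(-rotation_angle), math.sin(-rotation_angle)
            y, z = q[1]*ca - q[2]*sa, q[1]*sa + q[2]*ca  # rotate by -angle about x
            q = np.array([q[0], y, z])
            r = np.linalg.norm(q)
            lat = math.degrees(math.asin(q[1] / r))
            lng = math.degrees(math.atan2(q[0], q[2]))
            return lat, lng

    The key point is the second function: undo the current rotation transform before converting to spherical coordinates, so the lat/lng is in the texture's frame rather than the world's.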

  • Quaternion Camera Orbiting around a Sphere

    - by jessejuicer
    Background: I'm trying to create a game where the camera is always rotating around a single sphere. I'm using the DirectX D3DX math functions in C++ on Windows.

    The problem: I cannot get both the camera position and orientation working properly at the same time. Either one works, but not both together. Here's the code for my quaternion camera that revolves around a sphere, always looking at the center point of the sphere, as far as I understand it (but which isn't working properly). (I'm only going to present rotation around the X axis here, to simplify this post.)

    Whenever the UP key is pressed or held down, the camera should rotate around the X axis, while looking at the center point of the sphere (which is at 0,0,0 in the world). So I build a quaternion that represents a small angle of rotation around the x axis like this (where 'deltaAngle' is a small enough number for a slow rotation):

        D3DXVECTOR3 rotAxis;
        D3DXQUATERNION tempQuat;
        tempQuat.x = 0.0f;
        tempQuat.y = 0.0f;
        tempQuat.z = 0.0f;
        tempQuat.w = 1.0f;
        rotAxis.x = 1.0f;
        rotAxis.y = 0.0f;
        rotAxis.z = 0.0f;
        D3DXQuaternionRotationAxis(&tempQuat, &rotAxis, deltaAngle);

    ...and I accumulate the result into the camera's current orientation quat, like this:

        D3DXQuaternionMultiply(&cameraOrientationQuat, &cameraOrientationQuat, &tempQuat);

    ...which all works fine. Now I need to build a view matrix to pass to the DirectX SetTransform function. So I build a rotation matrix from the camera orientation quat as follows:

        D3DXMATRIXA16 rotationMatrix;
        D3DXMatrixIdentity(&rotationMatrix);
        D3DXMatrixRotationQuaternion(&rotationMatrix, &cameraOrientationQuat);

    ...Now (as seen below), if I just transpose that rotationMatrix and plug it into the 3x3 section of the view matrix, then negate the camera's position and plug it into the translation section of the view matrix, the rotation magically works. Perfectly, even when I add in rotations for all three axes. There's no gimbal lock, just a smooth rotation all around in any direction. BUT this works even though I never change the camera's position. At all. Which sorta blows my mind. I even display the camera position and can watch it stay constant at its starting point (0.0, 0.0, -4000.0). It never moves, but the rotation around the sphere is perfect. I don't understand that. For proper view rotation, the camera position should be revolving around the sphere.

    Here's the rest of building the view matrix (I'll talk about the commented code below). Note that the camera starts out at (0.0, 0.0, -4000.0) and m_camDistToTarget is 4000.0:

        /*
        D3DXVECTOR3 vec1;
        D3DXVECTOR4 vec2;
        vec1.x = 0.0f;
        vec1.y = 0.0f;
        vec1.z = -1.0f;
        D3DXVec3Transform(&vec2, &vec1, &rotationMatrix);
        g_cameraActor->pos.x = vec2.x * g_cameraActor->m_camDistToTarget;
        g_cameraActor->pos.y = vec2.y * g_cameraActor->m_camDistToTarget;
        g_cameraActor->pos.z = vec2.z * g_cameraActor->m_camDistToTarget;
        */

        D3DXMatrixTranspose(&g_viewMatrix, &rotationMatrix);
        g_viewMatrix._41 = -g_cameraActor->pos.x;
        g_viewMatrix._42 = -g_cameraActor->pos.y;
        g_viewMatrix._43 = -g_cameraActor->pos.z;
        g_viewMatrix._44 = 1.0f;
        g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );

    ...(The world matrix is always an identity, and the perspective projection works fine.) So, without the commented code being compiled, the rotation works fine. But to be proper, for obvious reasons, the camera position should be rotating around the sphere, which it currently is not. That's what the commented code is supposed to do.

    And when I add in that chunk of code and look at all the data as I hold the keys down (using UP, DOWN, LEFT, RIGHT to rotate in different directions), all the values look correct! The camera position is rotating around the sphere just fine, and I can watch that happen visually too. The problem is that the camera orientation does not look at the center of the sphere. It always looks straight forward down the z axis (toward positive z) as it revolves around the sphere. Yet the values of both the rotation matrix and the view matrix seem to be behaving correctly. (The view matrix orientation is the same as the rotation matrix, just transposed.)

    For instance, if I just hold down the key to spin around the x axis, I can watch the values of the three axes represented in the view matrix: the view x-axis stays at (1.0, 0.0, 0.0), and the view y-axis and z-axis both spin around the x axis just fine. All the numbers are changing as they should be... well, almost. As far as I can tell, the position of the view matrix is spinning around the sphere in one direction (like clockwise), and the orientation (the axes in the view matrix) is spinning in the opposite direction (like counter-clockwise). Which I guess explains why the orientation appears to stay straight ahead. I know the position is correct. It revolves properly. It's the orientation that's wrong.

    Can anyone see what I'm doing wrong? Am I using these functions incorrectly? Or is my algorithm flawed? As usual, I've been combing my code for simple mistakes for many hours. I'm willing to post the actual code, and a video of the behavior, but that will take much more effort. Thought I'd ask this way first.
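
    The symptom described (rotation "working" with a fixed position, then position and orientation counter-rotating once the position moves) is exactly what filling the translation row with -pos produces. Once the upper-left 3x3 of the view matrix holds the transposed rotation, the translation row must be -pos times that transposed rotation, i.e. the dot products of the position with the camera axes, not the raw negated position. A numpy sketch of the row-vector (D3D-style) construction; D3DXMatrixLookAtLH builds the same thing for you:

        import numpy as np

        def view_matrix(rotation_matrix, eye):
            """Row-vector view matrix: V = translate(-eye) * R^T."""
            R = np.asarray(rotation_matrix, float)  # camera orientation, 3x3
            eye = np.asarray(eye, float)
            V = np.identity(4)
            V[:3, :3] = R.T
            V[3, :3] = -eye @ R.T                   # NOT simply -eye
            return V

    With the fixed camera at (0, 0, -4000) the buggy version happened to equal a correct orbiting view matrix, which is why the rotation looked perfect while the position never moved.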

  • Arbitrary Rotation about a Sphere

    - by Der
    I'm coding a mechanic which allows a user to move around the surface of a sphere. The position on the sphere is currently stored as theta and phi, where theta is the angle between the z-axis and the xz projection of the current position (i.e. rotation about the y axis), and phi is the angle from the y-axis to the position. I explained that poorly, but it is essentially theta = yaw, phi = pitch:

        Vector3 position = new Vector3(0,0,1);
        position.X = (float)Math.Sin(phi) * (float)Math.Sin(theta);
        position.Y = (float)Math.Sin(phi) * (float)Math.Cos(theta);
        position.Z = (float)Math.Cos(phi);
        position *= r;

    I believe this is accurate; however, I could be wrong. I need to be able to move in an arbitrary pseudo-two-dimensional direction around the surface of a sphere at the origin of world space with radius r. For example, holding W should move around the sphere in an upwards direction relative to the orientation of the player. I believe I should be using a Quaternion to represent the position/orientation on the sphere, but I can't think of the correct way of doing it. Spherical geometry is not my strong suit. Essentially, I need to fill the following block:

        public void Move(Direction dir)
        {
            switch (dir)
            {
                case Direction.Left:
                    // update quaternion to rotate left
                    break;
                case Direction.Right:
                    // update quaternion to rotate right
                    break;
                case Direction.Up:
                    // update quaternion to rotate upward
                    break;
                case Direction.Down:
                    // update quaternion to rotate downward
                    break;
            }
        }
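
    A sketch of the quaternion version (Python with scipy's Rotation; the signs are conventions to flip to taste): keep one orientation rotation per player, turn it about the player's own local right/up axes for up-down and left-right, and derive the position from the orientation each frame instead of storing theta/phi at all:

        import numpy as np
        from scipy.spatial.transform import Rotation as R

        def move(orientation, direction, step=0.02):
            """Rotate the player's orientation about its own local axes."""
            right = orientation.apply([1, 0, 0])  # player's local right
            up = orientation.apply([0, 1, 0])     # player's local up
            axis = {"Left": up, "Right": -up, "Up": right, "Down": -right}[direction]
            return R.from_rotvec(step * np.asarray(axis)) * orientation

        # derive everything else from the orientation, e.g. the position:
        #   position = orientation.apply([0, 0, 1]) * r

    Because the movement axes are re-derived from the orientation every step, "up" always means "up relative to the player", which is the behavior described for W.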

  • 2D shader to draw representation of rotating sphere.

    - by TheBigO
    I want to display a 3D textured sphere and then rotate it in one direction. The direction will never change, and the camera will never move. One way is to actually create a spherical mesh, map a texture to it, rotate the sphere, and render in 3D. My question is: is there a way to display a 2D circle that looks like a rotating sphere, with just a 2D shader? In other words, can someone think of a trick, like mapping a texture to the circle in a particular way, to give the appearance of an in-place rotating sphere that is always viewed from the side? I don't need exact shader code; I'm just looking for the right idea.
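
    Yes, and the trick is standard: for each pixel inside the circle, reconstruct the front-half sphere normal n = (x, y, sqrt(1 - x^2 - y^2)), convert it to equirectangular UVs, and scroll U with time. Since the view never changes, no mesh or camera is needed. A CPU sketch of the per-pixel math a fragment shader would run (Python/numpy):

        import numpy as np

        def fake_sphere(texture, size, t):
            """Render a 'rotating sphere' as a flat disc: per-pixel normal,
            equirectangular UV lookup, U scrolled by t."""
            h, w = texture.shape[:2]
            y, x = np.mgrid[-1:1:size*1j, -1:1:size*1j]
            r2 = x*x + y*y
            inside = r2 <= 1.0
            z = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))        # normal = (x, y, z)
            u = (np.arctan2(x, z) / (2*np.pi) + t) % 1.0    # longitude, scrolled
            v = np.arcsin(np.clip(y, -1, 1)) / np.pi + 0.5  # latitude
            out = np.zeros((size, size) + texture.shape[2:], texture.dtype)
            ui = (u * (w - 1)).astype(int)
            vi = (v * (h - 1)).astype(int)
            out[inside] = texture[vi[inside], ui[inside]]
            return out

    Shading (a simple N dot L with the reconstructed normal) can be layered on top with the same per-pixel normal, and the whole thing ports directly to a fragment shader.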

  • XNA shield effect with a Primative sphere problem

    - by Sparky41
    I'm having an issue with a shield effect I'm trying to develop. I want a shield effect that surrounds part of a model, like this: http://i.imgur.com/jPvrf.png I currently have this: http://i.imgur.com/Jdin7.png (The red lines are a simple texture, a black background with a red cross in it, for testing purposes: http://i.imgur.com/ODtzk.png where the smaller cross in the middle shows the contact point.) This sphere is drawn via a primitive (DrawIndexedPrimitives). This is how I calculate the pieces of the sphere, using a class I've called Sphere (based on the code here: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d):

        public class Sphere
        {
            // During the process of constructing a primitive model, vertex
            // and index data is stored on the CPU in these managed lists.
            List<VertexPositionNormal> vertices = new List<VertexPositionNormal>();
            List<ushort> indices = new List<ushort>();

            // Once all the geometry has been specified, the InitializePrimitive
            // method copies the vertex and index data into these buffers, which
            // store it on the GPU ready for efficient rendering.
            VertexBuffer vertexBuffer;
            IndexBuffer indexBuffer;
            BasicEffect basicEffect;

            public Vector3 position = Vector3.Zero;
            public Matrix RotationMatrix = Matrix.Identity;
            public Texture2D texture;

            /// <summary>
            /// Constructs a new sphere primitive,
            /// with the specified size and tessellation level.
            /// </summary>
            public Sphere(float diameter, int tessellation, Texture2D text, float up, float down, float portstar, float frontback)
            {
                texture = text;
                if (tessellation < 3) throw new ArgumentOutOfRangeException("tessellation");

                int verticalSegments = tessellation;
                int horizontalSegments = tessellation * 2;
                float radius = diameter / 2;

                // Start with a single vertex at the bottom of the sphere.
                AddVertex(Vector3.Down * ((radius / up) + 1), Vector3.Down, Vector2.Zero); // bottom position

                // Create rings of vertices at progressively higher latitudes.
                for (int i = 0; i < verticalSegments - 1; i++)
                {
                    float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2;
                    float dy = (float)Math.Sin(latitude / up); // (up)
                    float dxz = (float)Math.Cos(latitude);

                    // Create a single ring of vertices at this latitude.
                    for (int j = 0; j < horizontalSegments; j++)
                    {
                        float longitude = j * MathHelper.TwoPi / horizontalSegments;
                        float dx = (float)(Math.Cos(longitude) * dxz) / portstar; // port and starboard (right)
                        float dz = (float)(Math.Sin(longitude) * dxz) * frontback; // front and back
                        Vector3 normal = new Vector3(dx, dy, dz);
                        AddVertex(normal * radius, normal, new Vector2(j, i));
                    }
                }

                // Finish with a single vertex at the top of the sphere.
                AddVertex(Vector3.Up * ((radius / down) + 1), Vector3.Up, Vector2.One); // top position

                // Create a fan connecting the bottom vertex to the bottom latitude ring.
                for (int i = 0; i < horizontalSegments; i++)
                {
                    AddIndex(0);
                    AddIndex(1 + (i + 1) % horizontalSegments);
                    AddIndex(1 + i);
                }

                // Fill the sphere body with triangles joining each pair of latitude rings.
                for (int i = 0; i < verticalSegments - 2; i++)
                {
                    for (int j = 0; j < horizontalSegments; j++)
                    {
                        int nextI = i + 1;
                        int nextJ = (j + 1) % horizontalSegments;
                        AddIndex(1 + i * horizontalSegments + j);
                        AddIndex(1 + i * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + j);
                        AddIndex(1 + i * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + j);
                    }
                }

                // Create a fan connecting the top vertex to the top latitude ring.
                for (int i = 0; i < horizontalSegments; i++)
                {
                    AddIndex(CurrentVertex - 1);
                    AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments);
                    AddIndex(CurrentVertex - 2 - i);
                }

                //InitializePrimitive(graphicsDevice);
            }

            /// <summary>
            /// Adds a new vertex to the primitive model. This should only be called
            /// during the initialization process, before InitializePrimitive.
            /// </summary>
            protected void AddVertex(Vector3 position, Vector3 normal, Vector2 texturecoordinate)
            {
                vertices.Add(new VertexPositionNormal(position, normal, texturecoordinate));
            }

            /// <summary>
            /// Adds a new index to the primitive model. This should only be called
            /// during the initialization process, before InitializePrimitive.
            /// </summary>
            protected void AddIndex(int index)
            {
                if (index > ushort.MaxValue) throw new ArgumentOutOfRangeException("index");
                indices.Add((ushort)index);
            }

            /// <summary>
            /// Queries the index of the current vertex. This starts at
            /// zero, and increments every time AddVertex is called.
            /// </summary>
            protected int CurrentVertex
            {
                get { return vertices.Count; }
            }

            public void InitializePrimitive(GraphicsDevice graphicsDevice)
            {
                // Create a vertex declaration, describing the format of our vertex data.
                // Create a vertex buffer, and copy our vertex data into it.
                vertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormal), vertices.Count, BufferUsage.None);
                vertexBuffer.SetData(vertices.ToArray());

                // Create an index buffer, and copy our index data into it.
                indexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort), indices.Count, BufferUsage.None);
                indexBuffer.SetData(indices.ToArray());

                // Create a BasicEffect, which will be used to render the primitive.
                basicEffect = new BasicEffect(graphicsDevice);
                //basicEffect.EnableDefaultLighting();
            }

            /// <summary>
            /// Draws the primitive model, using the specified effect. Unlike the other
            /// Draw overload where you just specify the world/view/projection matrices
            /// and color, this method does not set any renderstates, so you must make
            /// sure all states are set to sensible values before you call it.
            /// </summary>
            public void Draw(Effect effect)
            {
                GraphicsDevice graphicsDevice = effect.GraphicsDevice;

                // Set our vertex declaration, vertex buffer, and index buffer.
                graphicsDevice.SetVertexBuffer(vertexBuffer);
                graphicsDevice.Indices = indexBuffer;
                graphicsDevice.BlendState = BlendState.Additive;

                foreach (EffectPass effectPass in effect.CurrentTechnique.Passes)
                {
                    effectPass.Apply();
                    int primitiveCount = indices.Count / 3;
                    graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Count, 0, primitiveCount);
                }

                graphicsDevice.BlendState = BlendState.Opaque;
            }

            /// <summary>
            /// Draws the primitive model, using a BasicEffect shader with default
            /// lighting. Unlike the other Draw overload where you specify a custom
            /// effect, this method sets important renderstates to sensible values
            /// for 3D model rendering, so you do not need to set these states before
            /// you call it.
            /// </summary>
            public void Draw(Camera camera, Color color)
            {
                // Set BasicEffect parameters.
                basicEffect.World = GetWorld();
                basicEffect.View = camera.view;
                basicEffect.Projection = camera.projection;
                basicEffect.DiffuseColor = color.ToVector3();
                basicEffect.TextureEnabled = true;
                basicEffect.Texture = texture;

                GraphicsDevice device = basicEffect.GraphicsDevice;
                device.DepthStencilState = DepthStencilState.Default;

                if (color.A < 255)
                {
                    // Set renderstates for alpha blended rendering.
                    device.BlendState = BlendState.AlphaBlend;
                }
                else
                {
                    // Set renderstates for opaque rendering.
                    device.BlendState = BlendState.Opaque;
                }

                // Draw the model, using BasicEffect.
                Draw(basicEffect);
            }

            public virtual Matrix GetWorld()
            {
                return /*world */ Matrix.CreateScale(1f) * RotationMatrix * Matrix.CreateTranslation(position);
            }
        }

        public struct VertexPositionNormal : IVertexType
        {
            public Vector3 Position;
            public Vector3 Normal;
            public Vector2 TextureCoordinate;

            /// <summary>
            /// Constructor.
            /// </summary>
            public VertexPositionNormal(Vector3 position, Vector3 normal, Vector2 textCoor)
            {
                Position = position;
                Normal = normal;
                TextureCoordinate = textCoor;
            }

            /// <summary>
            /// A VertexDeclaration object, which contains information about the vertex
            /// elements contained within this struct.
            /// </summary>
            public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
            (
                new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
                new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
            );

            VertexDeclaration IVertexType.VertexDeclaration
            {
                get { return VertexPositionNormal.VertexDeclaration; }
            }
        }

    A simple call to the class initialises it. The Draw method is called in the master draw method in the GameComponent. My current thoughts on this are:

    - the direction of the weapon hitting the ship is used to get the middle position for the texture
    - wrap a texture around the drawn sphere based on this point of contact

    The problem is I'm not sure how to do this. Can anyone help, or if you have a better idea, please tell me; I'm open to opinions. :-) Thanks.
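
    One workable scheme for the contact-centred wrap (a sketch with a made-up spread parameter, not the sample's API): build a tangent basis at the contact normal and use the projection of each vertex direction onto that basis as UVs, so the cross texture is centred on the hit and CLAMP/border addressing keeps it from repeating on the far side:

        import numpy as np

        def contact_uvs(vertex_normals, contact_normal, spread=0.5):
            """UVs that centre the texture on the contact point; `spread` scales
            how far the texture wraps around the sphere (tune to taste)."""
            n = np.asarray(contact_normal, float)
            n /= np.linalg.norm(n)
            helper = np.array([0.0, 1.0, 0.0])
            if abs(np.dot(helper, n)) > 0.99:   # contact near a pole: swap helper
                helper = np.array([1.0, 0.0, 0.0])
            t = np.cross(helper, n); t /= np.linalg.norm(t)  # tangent
            b = np.cross(n, t)                               # bitangent
            uvs = []
            for v in vertex_normals:
                v = np.asarray(v, float)
                v /= np.linalg.norm(v)
                # project the vertex direction into the contact's tangent plane
                uvs.append((0.5 + np.dot(v, t) / (2 * spread),
                            0.5 + np.dot(v, b) / (2 * spread)))
            return uvs

    A vertex at the contact point lands exactly on (0.5, 0.5), the centre of the test texture, and the same basis math moves straightforwardly into a vertex shader so the CPU never has to rebuild the buffer per hit.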

  • DirectX Sphere Texture Coordinates

    - by Rushyo
    I have a sphere with per-vertex normals, and I'm trying to derive the texture coordinates for the object using the algorithm:

        U = asin(Norm.X) / PI + 0.5
        V = asin(Norm.Y) / PI + 0.5

    With a polka dot texture, I get: Here's the same object without the texture applied: The issue I'm particularly looking at (I know there are a few) is the misalignment of the textures. I am inclined to believe the issue resides in my use of those algorithms, as the specular highlighting (which doesn't utilise any textures but does rely on the normals being correct) appears to have no artifacts. Any ideas?
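
    The misalignment is the giveaway: taking asin of X and Y independently is not a spherical parameterization, so the two coordinates fight each other away from the axes. Longitude needs both horizontal components through atan2; the usual normal-to-UV mapping (y treated as up) is:

        import math

        def sphere_uv(normal):
            """Equirectangular UVs from a unit normal."""
            nx, ny, nz = normal
            u = 0.5 + math.atan2(nz, nx) / (2 * math.pi)  # longitude
            v = 0.5 - math.asin(ny) / math.pi             # latitude
            return u, v

    A visible seam where u wraps from 1 back to 0 is expected with per-vertex UVs; the standard fix is a duplicated column of vertices along that meridian.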

  • Compute bounding quad of a sphere with vertex shader

    - by Ben Jones
    I'm trying to implement an algorithm from a graphics paper and part of the algorithm is rendering spheres of known radius to a buffer. They say that they render the spheres by computing the location and size in a vertex shader and then doing appropriate shading in a fragment shader. Any guesses as to how they actually did this? The position and radius are known in world coordinates and the projection is perspective. Does that mean that the sphere will be projected as a circle? Thanks!
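
    On the last question: in general no, a sphere under perspective projects to an ellipse (a conic section); it's a circle only when centred on the view axis. A common vertex-shader-friendly approach sizes a screen-aligned quad from the silhouette cone, inflating the radius by 1/cos of the subtended half-angle, which is exact through the centre and conservative-ish elsewhere. A sketch of the math (assumed conventions: view space, camera looking down -z):

        import math

        def sphere_screen_radius(center_view, r, fovy, viewport_h):
            """Approximate pixel radius of a sphere's silhouette."""
            d = math.sqrt(sum(c * c for c in center_view))
            if d <= r:
                return None                      # camera is inside the sphere
            half = r * d / math.sqrt(d*d - r*r)  # r / cos(half-angle): covers the silhouette
            depth = -center_view[2]
            ndc_half = half / (depth * math.tan(fovy / 2))
            return 0.5 * ndc_half * viewport_h   # pixels

    In the paper's setup the vertex shader would emit the four corners of that quad around the projected centre, and the fragment shader would ray-cast or analytically shade the sphere within it.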

  • C# XNA - Sky Sphere Question

    - by Wade
    I have been banging my head against the wall trying to get a sky sphere to work appropriately in XNA 4.0. I have the sphere loading correctly, and even textured, but I would like something a little more dynamic that can support a day/night cycle. My issue is that, while I know a good amount of C# and XNA, I know next to nothing about HLSL. (I could make an ambient light shader if my life depended on it...) I also have not been able to find a tutorial on how to build a sky sphere like this. Of course I don't expect to be able to make an amazing one right off the bat; I would like to start small, with a dynamically colored sky. I'll work out the clouds and sun later. My first question: does anyone know of any good tutorial sites that could help me get a decent grasp of HLSL? Second: does anyone have a good example of a gradient sky, or know where to find one, built with XNA and C#?
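
    For the gradient-sky half of the question, the whole shader boils down to: take the normalized view direction's y component, lerp between a horizon colour and a zenith colour, and drive both colours from the time of day. A sketch of that function (Python for clarity; every colour constant is a placeholder, and the same three lines of math port directly to an HLSL pixel shader):

        import math

        def lerp(a, b, t):
            return tuple(x + (y - x) * t for x, y in zip(a, b))

        def sky_colour(dir_y, time_of_day):
            """dir_y: view direction's y (0 horizon .. 1 zenith).
            time_of_day in [0, 1): 0 = midnight, 0.5 = noon."""
            day_horizon, day_zenith = (0.80, 0.85, 1.00), (0.25, 0.45, 0.90)
            night_horizon, night_zenith = (0.05, 0.05, 0.10), (0.00, 0.00, 0.02)
            daylight = 0.5 - 0.5 * math.cos(2 * math.pi * time_of_day)
            horizon = lerp(night_horizon, day_horizon, daylight)
            zenith = lerp(night_zenith, day_zenith, daylight)
            t = max(0.0, min(1.0, dir_y)) ** 0.5  # bias the gradient toward the horizon
            return lerp(horizon, zenith, t)

    Rendering the sky sphere with depth writes off and the camera's translation removed from the view matrix keeps it pinned to infinity while the gradient animates.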

  • Creating a curved mesh on inside of sphere based on texture image coordinates

    - by user5025
    In Blender, I have created a sphere with a panoramic texture on the inside. I have also manually created a plane mesh (curved to match the size of the sphere) that sits on the inside wall where I can draw a different texture. This is great, but I really want to reduce the manual labor, and do some of this work in a script -- like having a variable for the panoramic image, and coordinates of the area in the photograph that I want to replace with a new mesh. The hardest part of doing this is going to be creating a curved mesh in code that can sit on the inside wall of a sphere. Can anyone point me in the right direction?
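
    A sketch of the generation step (plain Python for the math; in Blender this would feed bpy mesh construction, and it assumes an equirectangular panorama, so a rectangle of image pixels maps linearly to a lat/lng rectangle):

        import math

        def curved_patch(lat0, lat1, lng0, lng1, radius, rows=8, cols=8, inset=0.99):
            """Grid of vertices lying just inside the sphere wall, spanning the
            given lat/lng rectangle (degrees); UVs run 0..1 across the patch."""
            verts, uvs = [], []
            r = radius * inset  # sit slightly off the wall to avoid z-fighting
            for i in range(rows + 1):
                for j in range(cols + 1):
                    s, t = i / rows, j / cols
                    lat = math.radians(lat0 + (lat1 - lat0) * s)
                    lng = math.radians(lng0 + (lng1 - lng0) * t)
                    x = r * math.cos(lat) * math.cos(lng)
                    y = r * math.sin(lat)
                    z = r * math.cos(lat) * math.sin(lng)
                    verts.append((x, y, z))
                    uvs.append((t, s))
            return verts, uvs

    The (row, col) grid indices then give the quad faces directly, so the only manual inputs left are the image-space corners of the region to replace.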

  • Camera placement sphere for an always fully visible object

    - by BengtR
    Given an object with the bounds [x, y, z, width, height, depth] and an orthographic projection [left, right, bottom, top, near, far], I want to determine the radius of a sphere which allows me to randomly place my camera on it so that:

    1. The object is fully visible from all positions on this sphere
    2. The sphere radius is the smallest possible value while still satisfying 1.

    Assume the object is centered around the origin. How can I find this radius? I'm currently using sqrt(width^2 + height^2 + depth^2), but I'm not sure that's the correct value, as it doesn't take the camera into account. Thanks for any advice. I'm sorry for confusing a few things here; my comments below should clarify what I'm trying to do.
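
    For what it's worth: with an orthographic camera, distance doesn't change apparent size, so the problem splits in two. The object's bounding sphere must fit the ortho window, and the orbit radius only has to keep the object between the near and far planes. Note that sqrt(w^2 + h^2 + d^2) is the full box diagonal; the bounding-sphere radius is half that. A sketch:

        import math

        def camera_sphere_radius(w, h, d, near, far, ortho_w, ortho_h):
            """Smallest orbit radius keeping the bounding sphere fully visible
            from every point of the orbit, for an orthographic camera."""
            r_obj = 0.5 * math.sqrt(w*w + h*h + d*d)  # half the box diagonal
            if r_obj > min(ortho_w, ortho_h) / 2:
                raise ValueError("object can never fit the ortho window")
            R = near + r_obj  # nearest depth of the object just clears the near plane
            if R + r_obj > far:
                raise ValueError("near/far range too small for this object")
            return R

    Any R between near + r_obj and far - r_obj works; near + r_obj is the smallest.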

  • Largest sphere inside a frustum

    - by Will
    How do you find the largest sphere that you can draw in perspective? Viewed from the top, it'd be this: Added: on the frustum on the right, I've marked four points I think we know something about. We can unproject all eight corners of the frustum, and the centres of the near and far ends. So we know points 1, 3 and 4. We also know that point 2 is the same distance from 3 as 4 is from 3. So then we can compute the nearest point on the line 1-to-4 to point 2 in order to get the centre? But the actual math and code escape me. I want to draw models (which are approximately spherical, and which I have a miniball bounding sphere for) as large as possible.

    Update: I've tried to implement the incircle-on-two-planes approach as suggested by bobobobo and Nathan Reed:

        function getFrustumsInsphere(viewport, invMvpMatrix) {
            var midX = viewport[0] + viewport[2]/2,
                midY = viewport[1] + viewport[3]/2,
                centre = unproject(midX, midY, null, null, viewport, invMvpMatrix),
                incircle = function(a, b) {
                    var c = ray_ray_closest_point_3(a, b);
                    a = a[1]; // far clip plane
                    b = b[1]; // far clip plane
                    c = c[1]; // camera
                    var A = vec3_length(vec3_sub(b, c)),
                        B = vec3_length(vec3_sub(a, c)),
                        C = vec3_length(vec3_sub(a, b)),
                        P = 1/(A+B+C),
                        x = ((A*a[0]) + (B*b[0]) + (C*c[0])) * P,
                        y = ((A*a[1]) + (B*b[1]) + (C*c[1])) * P,
                        z = ((A*a[2]) + (B*b[2]) + (C*c[2])) * P;
                    c = [x, y, z]; // now the centre of the incircle
                    c.push(vec3_length(vec3_sub(centre[1], c))); // add its radius
                    return c;
                },
                left = unproject(viewport[0], midY, null, null, viewport, invMvpMatrix),
                right = unproject(viewport[2], midY, null, null, viewport, invMvpMatrix),
                horiz = incircle(left, right),
                top = unproject(midX, viewport[1], null, null, viewport, invMvpMatrix),
                bottom = unproject(midX, viewport[3], null, null, viewport, invMvpMatrix),
                vert = incircle(top, bottom);
            return horiz[3] < vert[3] ? horiz : vert;
        }

    I admit I'm winging it; I'm trying to adapt 2D code by extending it into 3 dimensions. It doesn't compute the insphere correctly; the centre-point of the sphere seems to be on the line between the camera and the top-left each time, and it's too big (or too close). Are there any obvious mistakes in my code? Does the approach, if fixed, work?
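
    For a symmetric frustum there is also a closed form: a sphere centred on the view axis at depth d has distance d*sin(theta) to a side plane whose half-angle is theta, far - d to the far plane, and d - near to the near plane, so the largest insphere balances the tightest pair of side planes against the far (or, for long thin frustums, the near) plane. A sketch:

        import math

        def frustum_insphere(fovy, aspect, near, far):
            """Largest sphere inside a symmetric perspective frustum.
            Returns (depth of centre along the view axis, radius)."""
            half_v = fovy / 2
            half_h = math.atan(math.tan(half_v) * aspect)
            s = math.sin(min(half_v, half_h))  # tightest pair of side planes
            d = far / (1 + s)                  # sides and far plane touch together
            r = d * s
            if d - r < near:                   # sphere would poke out the near plane:
                d = near / (1 - s)             # back off until it just touches it
                r = min(d * s, far - d)
            return d, r

    Scaling the model so its miniball radius equals r, placed at depth d, then gives the largest drawable version.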

  • Deformation of Sphere using Transformations

    - by Mert Toka
    I have a graphics-related question. I need a transformation matrix, but I have no idea what it should be. The problem is to create the image on the right from the original sphere. I created those images in Maya, but I need some matrices for the graphics course. Here is the image: Our professor told us to use some sine and cosine in our transformations, but I have no idea what he meant. I thought of intersecting a plane from the grid (that is, the xz plane) with the sphere, and then scaling down the resulting circle. Would that work? I also checked this paper; however, it looks a bit advanced for me, and I guess it is not about the same type of information I was looking for. It would be great if you could help me.
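
    A guess at what the sine/cosine hint points to (this is an assumption, since the target image isn't reproduced here): Barr's classic global deformations, where a taper scales x/z by a sinusoidal function of height and a twist rotates about y by an angle proportional to height. Both are per-vertex matrices built from sin/cos of y:

        import math

        def deform(p, amplitude=0.3, twist=math.pi / 4, radius=1.0):
            """Sinusoidal taper plus height-driven twist about y (Barr-style);
            amplitude and twist are illustrative values."""
            x, y, z = p
            s = 1.0 + amplitude * math.sin(math.pi * y / radius)  # bulge/pinch by height
            a = twist * (y / radius)                              # twist angle by height
            x, z = x * s, z * s
            return (x * math.cos(a) - z * math.sin(a), y,
                    x * math.sin(a) + z * math.cos(a))

    Each height y gets its own 4x4 (scale times rotation), which is presumably the family of matrices the course is after; the plane-intersection idea in the question is the same thing restricted to a single slice.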

  • Square game map rendered as sphere

    - by Roflha
    For a hobby project of mine I have created a finite voxel world (similar to Minecraft), but as I said, mine is finite. When you reach the edge of it, you are sent to the other side. That is all working fine, along with rendering the far side of the map, but I want to be able to render this grid as a sphere. Looking down from above, the world is a square. I basically want to be able to represent a portion of that square as a sphere, as if you were looking at a planet. Right now I am experimenting with taking a circular section of the map and rendering that, but it looks too flat (no curvature around the edges). My question, then, is what would be the best way to add some curvature to the edges of a 2D circle to make it look like a hemisphere. However, I am not overly attached to this implementation, so if somebody has some other idea for representing the square as a planet, I am all ears.
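
    One cheap way to fake the curvature without leaving 2D (a sketch; it assumes an orthographic "space view" of the planet): a point at screen radius rho on a hemisphere of radius R corresponds to ground distance R*asin(rho/R) along the surface, so sample progressively more of the map per pixel toward the rim. The foreshortening near the silhouette is what sells the sphere:

        import math

        def disc_to_map_offset(sx, sy, R):
            """Map-space offset to sample at screen offset (sx, sy) from the
            planet's centre; returns None outside the silhouette."""
            rho = math.hypot(sx, sy)
            if rho >= R:
                return None
            arc = math.asin(rho / R) * R      # ground distance along the surface
            k = arc / rho if rho > 0 else 1.0
            return sx * k, sy * k             # stretched sampling near the rim

    Running every drawn tile position through this warp (or its inverse, map to screen) turns the flat circular section into a convincing hemisphere, and shading the rim darker strengthens the effect further.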
