Search Results

Search found 29201 results on 1169 pages for 'game development'.

  • How is the gimbal lock problem solved using accumulative matrix transformations?

    - by Luke San Antonio
    I am reading the online "Learning Modern 3D Graphics Programming" book by Jason L. McKesson. As of now, I am up to the gimbal lock problem and how to solve it using quaternions. However, right here at the Quaternions page, it says: Part of the problem is that we are trying to store an orientation as a series of 3 accumulated axial rotations. Orientations are orientations, not rotations. And orientations are certainly not a series of rotations. So we need to treat the orientation of the ship as an orientation, as a specific quantity. I guess this is the first spot where I start to get confused; the reason is that I don't see the dramatic difference between orientations and rotations. I also don't understand why an orientation cannot be represented by a series of rotations... Also: The first thought towards this end would be to keep the orientation as a matrix. When the time comes to modify the orientation, we simply apply a transformation to this matrix, storing the result as the new current orientation. This means that every yaw, pitch, and roll applied to the current orientation will be relative to that current orientation. Which is precisely what we need. If the user applies a positive yaw, you want that yaw to rotate them relative to where they are currently pointing, not relative to some fixed coordinate system. The concept I understand; however, I don't understand how, if accumulating matrix transformations is a solution to this problem, the code given on the previous page isn't just that. Here's the code: void display() { glClearColor(0.0f, 0.0f, 0.0f, 0.0f); glClearDepth(1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glutil::MatrixStack currMatrix; currMatrix.Translate(glm::vec3(0.0f, 0.0f, -200.0f)); currMatrix.RotateX(g_angles.fAngleX); DrawGimbal(currMatrix, GIMBAL_X_AXIS, glm::vec4(0.4f, 0.4f, 1.0f, 1.0f)); currMatrix.RotateY(g_angles.fAngleY); DrawGimbal(currMatrix, GIMBAL_Y_AXIS, glm::vec4(0.0f, 1.0f, 0.0f, 1.0f)); currMatrix.RotateZ(g_angles.fAngleZ); DrawGimbal(currMatrix, GIMBAL_Z_AXIS, glm::vec4(1.0f, 0.3f, 0.3f, 1.0f)); glUseProgram(theProgram); currMatrix.Scale(3.0, 3.0, 3.0); currMatrix.RotateX(-90); //Set the base color for this object. glUniform4f(baseColorUnif, 1.0, 1.0, 1.0, 1.0); glUniformMatrix4fv(modelToCameraMatrixUnif, 1, GL_FALSE, glm::value_ptr(currMatrix.Top())); g_pObject->Render("tint"); glUseProgram(0); glutSwapBuffers(); } To my understanding, isn't what he is doing (modifying a matrix on a stack) considered accumulating matrices, since the author combined all the individual rotation transformations into one matrix which is being stored on the top of the stack? My understanding of a matrix is that it is used to take a point which is relative to an origin (let's say... the model), and make it relative to another origin (the camera). I'm pretty sure this is a safe definition; however, I feel like there is something missing which is blocking me from understanding this gimbal lock problem. One thing that doesn't make sense to me is: if a matrix determines the relative difference between two "spaces," how come a rotation around the Y axis for, let's say, roll, doesn't put the point in "roll space," which can then be transformed once again in relation to this roll... In other words, shouldn't any further transformations to this point be in relation to this new "roll space" and therefore not have the rotation be relative to the previous "model space," which is causing the gimbal lock? That's why gimbal lock occurs, right?
    It's because we are rotating the object around set X, Y, and Z axes rather than rotating the object around its own, relative axes. Or am I wrong? Since apparently this code I linked isn't an accumulation of matrix transformations, can you please give an example of a solution using this method? So in summary: What is the difference between a rotation and an orientation? Why is the code linked in not an example of accumulation of matrix transformations? What is the real, specific purpose of a matrix, if I had it wrong? How could a solution to the gimbal lock problem be implemented using accumulation of matrix transformations? Also, as a bonus: Why are the transformations after the rotation still relative to "model space"? Another bonus: Am I wrong in the assumption that after a transformation, further transformations will occur relative to the current one? Also, if it wasn't implied, I am using OpenGL, GLSL, C++, and GLM, so examples and explanations in terms of these are greatly appreciated, if not necessary. The more detail the better! Thanks in advance...
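
    A minimal sketch of the accumulation the book is driving at, using GLM since the question asks for it (names are illustrative): keep one persistent orientation between frames and multiply each frame's small offset rotation onto it, instead of rebuilding the matrix every frame from three stored angles (g_angles), which is exactly the "series of 3 accumulated axial rotations" the quoted passage warns about.

      #include <cmath>
      #include <glm/glm.hpp>
      #include <glm/gtc/quaternion.hpp>

      // THE orientation: a stored quantity that persists across frames.
      glm::fquat g_orientation(1.0f, 0.0f, 0.0f, 0.0f);  // identity

      // Called from input handling: nudge the ship about one of its axes.
      void OffsetOrientation(const glm::vec3 &axis, float degrees)
      {
          float rad = glm::radians(degrees);
          glm::vec3 a = glm::normalize(axis) * std::sin(rad / 2.0f);
          glm::fquat offset(std::cos(rad / 2.0f), a.x, a.y, a.z);

          // Right-multiplying applies the offset relative to the CURRENT
          // orientation (the ship's own local axes); left-multiplying would
          // make it relative to fixed world axes, which is what produces
          // gimbal lock in the angles-based version.
          g_orientation = glm::normalize(g_orientation * offset);
      }

      // At draw time the orientation becomes a matrix exactly once, e.g.:
      //   currMatrix.ApplyMatrix(glm::mat4_cast(g_orientation));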

  • OpenGL ES Loading

    - by kuroutadori
    I want to know the norm for loading rendering code. Take a button. When the application is loaded, a texture is loaded which has the image of the button on it. When the button is tapped, it adds a loader into a queue, which is loaded on the render thread. It then loads up an array buffer with vertexes and tex coords when render is called. It then adds to a render tree. Then it renders. The render function looks like this: void render() { update(); mBaseRenderer->render(); } update() is when the queue is checked to see if anything needs loading. mBaseRenderer->render() is the render tree. What I am asking, then, is: should I even have the update() there at all and instead have everything preloaded before it renders? If I can have it loaded when needed, for instance when there is a tap, then how can it be done? (My current code causes a dequeueing buffer error (Unknown error: -75), which I assume is to do with OpenGL ES and the context.)
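
    One pattern for the "load on tap" case (illustrative names, not from any particular engine): GL objects may only be created on the thread whose context is current, so the tap handler never calls GL itself; it posts a task that the render thread drains at the top of render(), which is essentially what the update() call is for. Errors like the dequeueing one described typically appear when a glGen*/glBufferData call sneaks onto the UI thread. A sketch:

      #include <functional>
      #include <mutex>
      #include <queue>

      // Defer GL work to the thread that owns the GL context.
      class GlTaskQueue {
      public:
          void post(std::function<void()> task) {      // callable from any thread
              std::lock_guard<std::mutex> lock(mutex_);
              tasks_.push(std::move(task));
          }
          void drain() {                               // render thread only
              std::queue<std::function<void()>> pending;
              {
                  std::lock_guard<std::mutex> lock(mutex_);
                  std::swap(pending, tasks_);
              }
              while (!pending.empty()) {
                  pending.front()();                   // context is current here
                  pending.pop();
              }
          }
      private:
          std::mutex mutex_;
          std::queue<std::function<void()>> tasks_;
      };

      // void render() { queue.drain(); mBaseRenderer->render(); }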

  • XNA texture stretching at extreme coordinates

    - by Shaun Hamman
    I was toying around with infinitely scrolling 2D textures using the XNA framework and came across a rather strange observation. Using the basic draw code: spriteBatch.Begin(SpriteSortMode.Deferred, null, SamplerState.PointWrap, null, null); spriteBatch.Draw(texture, Vector2.Zero, sourceRect, Color.White, 0.0f, Vector2.Zero, 2.0f, SpriteEffects.None, 1.0f); spriteBatch.End(); with a small 32x32 texture and a sourceRect defined as: sourceRect = new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height); I was able to scroll the texture across the window infinitely by changing the X and Y coordinates of the sourceRect. Playing with different coordinate locations, I noticed that if I made either of the coordinates too large, the texture no longer drew and was instead replaced by either a flat color or alternating bands of color. Tracing the coordinates back down, I found the following at around (0, -16,777,000): As you can see, the texture in the top half of the image is stretched vertically. My question is why is this occurring? Certainly I can do things like bind the x/y position to some low multiple of 32 to give the same effect without this occurring, so fixing it isn't an issue, but I'm curious about why this happens. My initial thought was perhaps it was overflowing the coordinate value or some such thing, but looking at a data type size chart, the next closest below is an unsigned short with a range of about 32,000, and above is an unsigned int with a range of around 2,000,000,000 so that isn't likely the cause.
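
    A piece the data-type reasoning above is likely missing: texture coordinates pass through the GPU as 32-bit floats, and a float stops distinguishing consecutive integers at 2^24 = 16,777,216, which matches the ~16,777,000 where the artifact appears. Past that point, neighboring texel coordinates round to the same representable value, which shows up as stretching or banding. A quick check of the precision cliff:

      #include <cstdio>

      // 32-bit floats have a 24-bit significand, so past 2^24 adjacent
      // integers collapse onto the same representable value.
      int main() {
          float below = 16777000.0f;    // region from the question
          float limit = 16777216.0f;    // 2^24
          std::printf("%.1f + 1 = %.1f\n", below, below + 1.0f);  // distinct
          std::printf("%.1f + 1 = %.1f\n", limit, limit + 1.0f);  // unchanged
      }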

  • Unexpected results for projection onto a plane

    - by ravenspoint
    I want to use this projection matrix: GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); Now if I manually multiply these vertices with the matrix, I get: glBegin( GL_QUADS ); glVertex3f( 0.375,0,-0.375); glVertex3f( 0.375,0,-1.625); glVertex3f( 0,0,-2); glVertex3f( 0,0,0); glEnd(); Which produces a reasonable display (camera at 0,5,0 looking down the y axis). So rather than do the calculation manually, I should be able to use the OpenGL model transformation. I write this code: glMatrixMode (GL_MODELVIEW); GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; glLoadMatrixf( shadow ); glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result (of course!)
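
    For the requested "debug mode": the transform can be re-run on the CPU with GLM, including the w component that the manual check divided away. GLM's mat4 constructor consumes values column-major, the same way glLoadMatrixf reads the array, so this reproduces what the fixed pipeline computes:

      #include <cstdio>
      #include <glm/glm.hpp>

      int main() {
          glm::mat4 shadow(-1, 0, 0, 0,    // first four floats = first column
                            1, 0,-1, 1,
                            0, 0,-1, 0,
                            0, 0, 0,-1);
          const glm::vec4 verts[4] = {
              glm::vec4(0.5f, 0.2f, -0.5f, 1.0f),
              glm::vec4(0.5f, 0.2f, -1.5f, 1.0f),
              glm::vec4(0.5f, 0.5f, -1.5f, 1.0f),
              glm::vec4(0.5f, 0.5f, -0.5f, 1.0f),
          };
          for (const glm::vec4 &v : verts) {
              glm::vec4 c = shadow * v;   // what GL computes before clipping
              std::printf("clip (%g, %g, %g, w=%g) -> ndc (%g, %g, %g)\n",
                          c.x, c.y, c.z, c.w,
                          c.x / c.w, c.y / c.w, c.z / c.w);
          }
      }

    Running this shows the divided positions matching the manual result, but with w = -0.8 or -0.5 on every vertex. GL clips against -w <= x,y,z <= w before the divide, so a primitive whose vertices all have negative w is discarded, which would explain the blank screen; negating all 16 entries of shadow expresses the same projective map with positive w. Note also that glLoadMatrixf replaces whatever view transform was in the modelview, so the shadow matrix would normally be multiplied onto the camera transform rather than loaded over it.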

  • Question about component-based design: handling object interaction

    - by Milo
    I'm not sure how exactly objects do things to other objects in a component-based design. Say I have an Obj class. I do: Obj obj; obj.add(new Position()); obj.add(new Physics()); How could I then have another object not only move the ball but also have those physics applied? I'm not looking for implementation details, but rather, abstractly, how objects communicate. In an entity-based design, you might just have: obj1.emitForceOn(obj2,5.0,0.0,0.0); Any article or explanation that gives a better grasp of component-driven design and how to do basic things would be really helpful.
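
    One common shape for this (names here are hypothetical, not from any particular engine) is to route everything through the owning object: a component never reaches into another object's internals; it just sends a message, and whichever component cares reacts. A minimal sketch:

      #include <memory>
      #include <vector>

      struct ForceMsg { float x, y, z; };             // "emit force" payload

      struct Component {
          virtual ~Component() = default;
          virtual void handle(const ForceMsg&) {}     // ignored by default
      };

      struct Obj {
          std::vector<std::unique_ptr<Component>> components;
          void add(Component *c) { components.emplace_back(c); }
          void send(const ForceMsg &m) {              // route to all components
              for (auto &c : components) c->handle(m);
          }
      };

      struct Position : Component { float x = 0, y = 0, z = 0; };

      struct Physics : Component {
          float vx = 0, vy = 0, vz = 0;
          void handle(const ForceMsg &m) override {   // physics reacts to forces
              vx += m.x; vy += m.y; vz += m.z;
          }
      };

      // Usage: the equivalent of obj1.emitForceOn(obj2, 5.0, 0.0, 0.0) becomes
      //   obj2.send(ForceMsg{5.0f, 0.0f, 0.0f});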

  • What features does D3D have that OpenGL does not (and vice versa)?

    - by Tom
    Are there any feature comparisons of Direct3D 11 and the newest OpenGL versions? Well, simply put, Direct3D 11 introduced these main features (taken from Wikipedia): Tessellation Multithreaded rendering Compute shaders Increased texture cache Now I'm wondering: how do the newest versions of OpenGL cope with these features? And since I have this feeling that there are features OpenGL has that Direct3D lacks, what are those?

  • How do I increase the moving speed of a body?

    - by Siddharth
    How to move ball speedily on the screen using box2d in libGDX? package com.badlogic.box2ddemo; import com.badlogic.gdx.ApplicationListener; import com.badlogic.gdx.Gdx; import com.badlogic.gdx.graphics.GL10; import com.badlogic.gdx.graphics.Texture; import com.badlogic.gdx.graphics.g2d.Sprite; import com.badlogic.gdx.graphics.g2d.SpriteBatch; import com.badlogic.gdx.graphics.g2d.TextureRegion; import com.badlogic.gdx.math.Matrix4; import com.badlogic.gdx.math.Vector2; import com.badlogic.gdx.physics.box2d.Body; import com.badlogic.gdx.physics.box2d.BodyDef; import com.badlogic.gdx.physics.box2d.BodyDef.BodyType; import com.badlogic.gdx.physics.box2d.Box2DDebugRenderer; import com.badlogic.gdx.physics.box2d.CircleShape; import com.badlogic.gdx.physics.box2d.Fixture; import com.badlogic.gdx.physics.box2d.FixtureDef; import com.badlogic.gdx.physics.box2d.PolygonShape; import com.badlogic.gdx.physics.box2d.World; public class Box2DDemo implements ApplicationListener { private SpriteBatch batch; private TextureRegion texture; private World world; private Body groundDownBody, groundUpBody, groundLeftBody, groundRightBody, ballBody; private BodyDef groundBodyDef1, groundBodyDef2, groundBodyDef3, groundBodyDef4, ballBodyDef; private PolygonShape groundDownPoly, groundUpPoly, groundLeftPoly, groundRightPoly; private CircleShape ballPoly; private Sprite sprite; private FixtureDef fixtureDef; private Vector2 ballPosition; private Box2DDebugRenderer renderer; Vector2 vector2; @Override public void create() { texture = new TextureRegion(new Texture( Gdx.files.internal("img/red_ring.png"))); sprite = new Sprite(texture); sprite.setOrigin(sprite.getWidth() / 2, sprite.getHeight() / 2); batch = new SpriteBatch(); world = new World(new Vector2(0.0f, 0.0f), false); groundBodyDef1 = new BodyDef(); groundBodyDef1.type = BodyType.StaticBody; groundBodyDef1.position.x = 0.0f; groundBodyDef1.position.y = 0.0f; groundDownBody = world.createBody(groundBodyDef1); groundBodyDef2 = new BodyDef(); groundBodyDef2.type = BodyType.StaticBody; groundBodyDef2.position.x = 0f; groundBodyDef2.position.y = Gdx.graphics.getHeight(); groundUpBody = world.createBody(groundBodyDef2); groundBodyDef3 = new BodyDef(); groundBodyDef3.type = BodyType.StaticBody; groundBodyDef3.position.x = 0f; groundBodyDef3.position.y = 0f; groundLeftBody = world.createBody(groundBodyDef3); groundBodyDef4 = new BodyDef(); groundBodyDef4.type = BodyType.StaticBody; groundBodyDef4.position.x = Gdx.graphics.getWidth(); groundBodyDef4.position.y = 0f; groundRightBody = world.createBody(groundBodyDef4); groundDownPoly = new PolygonShape(); groundDownPoly.setAsBox(480.0f, 10f); fixtureDef = new FixtureDef(); fixtureDef.density = 0f; fixtureDef.restitution = 1f; fixtureDef.friction = 0f; fixtureDef.shape = groundDownPoly; fixtureDef.filter.groupIndex = 0; groundDownBody.createFixture(fixtureDef); groundUpPoly = new PolygonShape(); groundUpPoly.setAsBox(480.0f, 10f); fixtureDef = new FixtureDef(); fixtureDef.friction = 0f; fixtureDef.restitution = 0f; fixtureDef.density = 0f; fixtureDef.shape = groundUpPoly; fixtureDef.filter.groupIndex = 0; groundUpBody.createFixture(fixtureDef); groundLeftPoly = new PolygonShape(); groundLeftPoly.setAsBox(10f, 320f); fixtureDef = new FixtureDef(); fixtureDef.friction = 0f; fixtureDef.restitution = 0f; fixtureDef.density = 0f; fixtureDef.shape = groundLeftPoly; fixtureDef.filter.groupIndex = 0; groundLeftBody.createFixture(fixtureDef); groundRightPoly = new PolygonShape(); groundRightPoly.setAsBox(10f, 320f); fixtureDef 
= new FixtureDef(); fixtureDef.friction = 0f; fixtureDef.restitution = 0f; fixtureDef.density = 0f; fixtureDef.shape = groundRightPoly; fixtureDef.filter.groupIndex = 0; groundRightBody.createFixture(fixtureDef); ballPoly = new CircleShape(); ballPoly.setRadius(16f); fixtureDef = new FixtureDef(); fixtureDef.shape = ballPoly; fixtureDef.density = 1f; fixtureDef.friction = 1f; fixtureDef.restitution = 1f; ballBodyDef = new BodyDef(); ballBodyDef.type = BodyType.DynamicBody; ballBodyDef.position.x = (int) 200; ballBodyDef.position.y = (int) 200; ballBody = world.createBody(ballBodyDef); ballBody.setLinearVelocity(200f, 200f); // ballBody.applyLinearImpulse(new Vector2(250f, 250f), // ballBody.getLocalCenter()); ballBody.createFixture(fixtureDef); renderer = new Box2DDebugRenderer(true, false, false); } @Override public void dispose() { ballPoly.dispose(); groundLeftPoly.dispose(); groundUpPoly.dispose(); groundDownPoly.dispose(); groundRightPoly.dispose(); world.destroyBody(ballBody); world.dispose(); } @Override public void pause() { } @Override public void render() { world.step(1f/30f, 3, 3); Gdx.gl.glClearColor(1f, 1f, 1f, 1f); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); batch.begin(); vector2 = ballBody.getLinearVelocity(); System.out.println("X=" + vector2.x + " Y=" + vector2.y); ballPosition = ballBody.getPosition(); renderer.render(world,batch.getProjectionMatrix()); // int preX = (int) (vector2.x / Math.abs(vector2.x)); // int preY = (int) (vector2.y / Math.abs(vector2.y)); // // if (Math.abs(vector2.x) == 0.0f) // ballBody1.setLinearVelocity(1.4142137f, vector2.y); // else if (Math.abs(vector2.x) < 1.4142137f) // ballBody1.setLinearVelocity(preX * 5, vector2.y); // // if (Math.abs(vector2.y) == 0.0f) // ballBody1.setLinearVelocity(vector2.x, 1.4142137f); // else if (Math.abs(vector2.y) < 1.4142137f) // ballBody1.setLinearVelocity(vector2.x, preY * 5); batch.draw(sprite, (ballPosition.x - (texture.getRegionWidth() / 2)), (ballPosition.y - (texture.getRegionHeight() / 2))); batch.end(); } @Override public void resize(int arg0, int arg1) { } @Override public void resume() { } } I implemented the above code, but I cannot achieve a higher moving speed for the ball.
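
    Part of what caps the speed here is Box2D's internal limit: a body may not translate more than about 2 world units per time step (b2_maxTranslation), so with world.step(1f/30f, 3, 3) nothing moves faster than roughly 60 units per second, and this code uses pixels as its units, which makes 60 look slow. The usual cure is to work in meters and convert only when drawing. A sketch in native C++ Box2D (the libGDX wrapper mirrors these calls; PPM is an illustrative constant, not part of the API):

      #include <Box2D/Box2D.h>

      static const float PPM = 32.0f;                 // pixels per meter

      int main() {
          b2World world(b2Vec2(0.0f, 0.0f));          // Box2D >= 2.2 constructor

          b2BodyDef ballDef;
          ballDef.type = b2_dynamicBody;
          ballDef.position.Set(200.0f / PPM, 200.0f / PPM);  // pixels -> meters
          b2Body *ball = world.CreateBody(&ballDef);

          b2CircleShape shape;
          shape.m_radius = 16.0f / PPM;
          b2FixtureDef fd;
          fd.shape = &shape;
          fd.density = 1.0f;
          fd.restitution = 1.0f;
          ball->CreateFixture(&fd);

          // 12 m/s is well under the ~60 m/s cap implied by b2_maxTranslation
          // at 1/30 s steps, yet reads as 384 px/s once scaled for drawing.
          ball->SetLinearVelocity(b2Vec2(12.0f, 12.0f));

          for (int i = 0; i < 30; ++i)
              world.Step(1.0f / 30.0f, 3, 3);

          b2Vec2 p = ball->GetPosition();             // meters -> pixels to draw
          float pixelX = p.x * PPM, pixelY = p.y * PPM;
          (void)pixelX; (void)pixelY;
      }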

  • Vertex Array Object (OpenGL)

    - by Shin
    I've just started out with OpenGL, and I still haven't really understood what Vertex Array Objects are and how they can be employed. If Vertex Buffer Objects are used to store vertex data (such as positions and texture coordinates) and VAOs only contain status flags, where can they be used? What's their purpose? As far as I understood from the (very incomplete and unclear) GL Wiki, VAOs are used to set the flags/status for every vertex, following the order described in the Element Array Buffer, but the wiki was really ambiguous about it and I'm not really sure what VAOs really do and how I could employ them.
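
    A VAO stores no vertex data at all; it records the attribute configuration (which buffer each attribute pulls from, its format, stride, offset, and enabled state, plus the element array buffer binding) so that one bind restores the whole setup. A minimal GL 3.x-style sketch, assuming an interleaved position + texcoord buffer named vertices with vertexCount entries:

      GLuint vao, vbo;
      glGenVertexArrays(1, &vao);
      glGenBuffers(1, &vbo);

      glBindVertexArray(vao);                  // start recording attribute state
      glBindBuffer(GL_ARRAY_BUFFER, vbo);
      glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

      glEnableVertexAttribArray(0);            // position: 3 floats
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                            (void *)0);
      glEnableVertexAttribArray(1);            // texcoord: 2 floats
      glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                            (void *)(3 * sizeof(float)));
      glBindVertexArray(0);                    // done recording

      // Per frame, a single bind replaces all of the calls above:
      glBindVertexArray(vao);
      glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    Without VAOs, every glBindBuffer/glVertexAttribPointer call would have to be reissued for each object, every frame; avoiding that is their purpose.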

  • Blender to 3ds max to cal3d format

    - by Kaliber64
    There are quite a few questions on cal3d, but they are old and don't apply anymore. In Blender (must be 2.49a for the Python script to work!): I have a scene with 7 meshes, 1 armature, 10 bones. I tried going down to one mesh to simplify it, but that doesn't change anything. I found a small blend file that was used for cal3d and it exported just fine, so I tried to copy its setup, with no success. EDIT 8/13/2012: Here is what I have found in the last week. I made the mesh in the newest Blender (2.62?) and exported it to import it into the old one (2.49a). I did an animation in the old one because, when importing new blend files, old Blenders just say they will lose keyframe data, and all was good. Then you get the last problem: it not exporting meshes. BUT I found that meshes made in the old one export regardless; I can't find any that won't export. So if I used the old Blender to remake my model, I could get it to export :) At this point I found a modified release of cal3d (because the core model variable would not initiate, as I made a really small test subject in the old Blender instead of remaking my big one, which took 4 hours) which fixes the morph objects and adds what cal3d left off. Under their license they have to release the modification, but it has no documentation, so I have to figure it out on my own. It's mostly the same, but this lib came with a 3ds Max exporter. My question now is: how do I transfer armature and mesh information from Blender to 3ds Max in order to export into cal3d format? Every time I try, the models are see-through and small, and there are no bones. The formats I have tried to import are .3ds, .obj (mesh only), and COLLADA. In all of them the mesh is invisible, with no bones. It says the default texture is on, so I should be able to see it. All the vertices are present; I found a vertex highlighter, so I can see those. If any of this is confusing, let me know so I can clear it up. It's late.

  • Efficiently representing a dynamic transform hierarchy

    - by Mattia
    I'm looking for a way to represent a dynamic transform hierarchy (i.e. one where nodes can be inserted and removed arbitrarily) that's a bit more efficient than using a standard tree of pointers. I saw the answers to this question (Efficient structure for representing a transform hierarchy), but as far as I can determine, the tree-as-array approach only works for static hierarchies, or dynamic ones where nodes have a fixed number of children (both deal-breakers for me). I'm probably wrong about that, but could anyone point out how? If I'm not wrong, are there other alternatives that work for dynamic hierarchies?
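
    One array layout that does tolerate arbitrary inserts and removals (a sketch of the idea, not taken from the linked answers): store nodes in a flat vector with a parent index per node, keep the invariant that a parent's index is smaller than its children's, and update all world transforms in one forward pass. Removal can be handled with a free list, or by compacting and re-sorting, which stays cheap because everything is contiguous.

      #include <cstdint>
      #include <vector>
      #include <glm/glm.hpp>

      struct TransformHierarchy {
          std::vector<glm::mat4> local;    // node-local transforms
          std::vector<glm::mat4> world;    // derived world transforms
          std::vector<int32_t>   parent;   // -1 marks a root

          // Appending keeps the parent-before-child invariant automatically,
          // because the parent must already exist.
          int32_t add(const glm::mat4 &m, int32_t parentIndex) {
              local.push_back(m);
              world.push_back(m);
              parent.push_back(parentIndex);
              return static_cast<int32_t>(local.size()) - 1;
          }

          // One cache-friendly linear pass updates every world transform.
          void update() {
              for (std::size_t i = 0; i < local.size(); ++i)
                  world[i] = (parent[i] < 0) ? local[i]
                                             : world[parent[i]] * local[i];
          }
      };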

  • Problem with Ogre::Camera lookAt function when target is directly below.

    - by PigBen
    I am trying to make a class which controls a camera. It's pretty basic right now; it looks like this: class HoveringCameraController { public: void init(Ogre::Camera & camera, AnimatedBody & target, Ogre::Real height); void update(Ogre::Real time_delta); private: Ogre::Camera * camera_; AnimatedBody * target_; Ogre::Real height_; }; HoveringCameraController.cpp void HoveringCameraController::init(Ogre::Camera & camera, AnimatedBody & target, Ogre::Real height) { camera_ = &camera; target_ = &target; height_ = height; update(0.0); } void HoveringCameraController::update(Ogre::Real time_delta) { auto position = target_->getPosition(); position.y += height_; camera_->setPosition(position); camera_->lookAt(target_->getPosition()); } AnimatedBody is just a class that encapsulates an entity, its animations, and a scene node. The getPosition function is simply forwarded to its scene node. What I want (for now) is for the camera to simply follow the AnimatedBody overhead at the distance given (the height parameter), and look down at it. It follows the object around, but it doesn't look straight down; it's tilted quite a bit in the positive Z direction. Does anybody have any idea why it would do that? If I change this line: position.y += height_; to this: position.x += height_; or this: position.z += height_; it does exactly what I would expect. It follows the object from the side or front, and looks directly at it.
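
    A plausible cause: when the camera sits directly above the target, the view direction is parallel to the up/yaw axis, so lookAt has no unique roll to choose and resolves the ambiguity by tilting toward some axis. One workaround (Ogre 1.x API assumed) is to stop deriving the orientation from lookAt in this configuration and set it explicitly:

      void HoveringCameraController::update(Ogre::Real time_delta)
      {
          Ogre::Vector3 position = target_->getPosition();
          position.y += height_;
          camera_->setPosition(position);
          // Look straight down with a well-defined roll: a -90 degree pitch
          // about X maps the camera's forward (-Z) onto -Y, and world -Z
          // becomes "up" on screen, rather than whatever lookAt picks.
          camera_->setOrientation(
              Ogre::Quaternion(Ogre::Degree(-90), Ogre::Vector3::UNIT_X));
      }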

  • Texture errors in CubeMap

    - by shade4159
    I am trying to apply this texture as a cubemap. This is my result: Clearly I am doing something wrong with my texture coordinates, but I cannot for the life of me figure out what. I don't even see a pattern to the texture fragments. They just seem like a jumble of different faces. Can anyone shed some light on this? Vertex shader: #version 400 in vec4 vPosition; in vec3 inTexCoord; smooth out vec3 texCoord; uniform mat4 projMatrix; void main() { texCoord = inTexCoord; gl_Position = projMatrix * vPosition; } My fragment shader: #version 400 smooth in vec3 texCoord; out vec4 fColor; uniform samplerCube textures; void main() { fColor = texture(textures,texCoord); } Vertices of cube: point4 worldVerts[8] = { vec4( 15, 15, 15, 1 ), vec4( -15, 15, 15, 1 ), vec4( -15, 15, -15, 1 ), vec4( 15, 15, -15, 1 ), vec4( -15, -15, 15, 1 ), vec4( 15, -15, 15, 1 ), vec4( 15, -15, -15, 1 ), vec4( -15, -15, -15, 1 ) }; Cube rendering: void worldCube(point4* verts, int& Index, point4* points, vec3* texVerts) { quadInv( verts[0], verts[1], verts[2], verts[3], 1, Index, points, texVerts); quadInv( verts[6], verts[3], verts[2], verts[7], 2, Index, points, texVerts); quadInv( verts[4], verts[5], verts[6], verts[7], 3, Index, points, texVerts); quadInv( verts[4], verts[1], verts[0], verts[5], 4, Index, points, texVerts); quadInv( verts[5], verts[0], verts[3], verts[6], 5, Index, points, texVerts); quadInv( verts[4], verts[7], verts[2], verts[1], 6, Index, points, texVerts); } Backface function (since this is the inside of the cube): void quadInv( const point4& a, const point4& b, const point4& c, const point4& d , int& Index, point4* points, vec3* texVerts) { quad( a, d, c, b, Index, points, texVerts, a.to_3(), b.to_3(), c.to_3(), d.to_3()); } And the quad drawing function: void quad( const point4& a, const point4& b, const point4& c, const point4& d, int& Index, point4* points, vec3* texVerts, const vec3& tex_a, const vec3& tex_b, const vec3& tex_c, const vec3& tex_d) { texVerts[Index] = tex_a.normalized(); points[Index] = a; Index++; texVerts[Index] = tex_b.normalized(); points[Index] = b; Index++; texVerts[Index] = tex_c.normalized(); points[Index] = c; Index++; texVerts[Index] = tex_a.normalized(); points[Index] = a; Index++; texVerts[Index] = tex_c.normalized(); points[Index] = c; Index++; texVerts[Index] = tex_d.normalized(); points[Index] = d; Index++; } Edit: I forgot to mention: in the image, the camera is pointed directly at the back face of the cube. You can kind of see the diagonals leading out of the corners, if you squint.
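
    One detail worth checking in quadInv: the vertices are reordered to (a, d, c, b) to flip the winding, but the texture directions are still passed in the original (a, b, c, d) order, so two of the four corners sample with another corner's direction, which would jumble the faces exactly as described. Keeping the two argument lists in the same order would look like this:

      // Reorder the texture directions together with the vertices, so each
      // corner keeps its own cube-map direction.
      void quadInv( const point4& a, const point4& b,
                    const point4& c, const point4& d,
                    int& Index, point4* points, vec3* texVerts)
      {
          quad( a, d, c, b, Index, points, texVerts,
                a.to_3(), d.to_3(), c.to_3(), b.to_3());
      }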

  • Fast Java2D translucency

    - by mdriesen
    I'm trying to draw a bunch of translucent circles on a Swing JComponent. This isn't exactly fast, and I was wondering if there is a way to speed it up. My custom JComponent has the following paintComponent method: public void paintComponent(Graphics g) { Rectangle view = g.getClipBounds(); VolatileImage image = createVolatileImage(view.width, view.height); Graphics2D buffer = image.createGraphics(); // translate to camera location buffer.translate(-cx, -cy); // renderables contains all currently visible objects for(Renderable r : renderables) { r.paint(buffer); } g.drawImage(image.getSnapshot(), view.x, view.y, this); } The paint method of my circles is as follows: public void paint(Graphics2D graphics) { graphics.setPaint(paint); graphics.fillOval(x, y, radius, radius); } The paint is just an rgba color with a < 255: Color(int r, int g, int b, int a) It works fast enough for opaque objects, but is there a simple way to speed this up for translucent ones?

  • OpenGL Learning Material (that's up to date)

    - by Sauron
    So I'm sure there are topics on this, but a lot of them list older material. And the last book, http://www.amazon.com/OpenGL-SuperBible-Comprehensive-Tutorial-Reference/dp/0321712617/ref=sr_1_1?ie=UTF8&qid=1346116133&sr=8-1&keywords=opengl , REALLY REALLY disappointed me. I DO NOT want to use someone else's library to learn this stuff; that bothers me SOOO much. So I was hoping there was a newer book that goes into detail and doesn't use some sort of library "hiding" everything from you. Or should I just look at older material? If so... anything that's not "too" out of date. Terrain tutorials are a plus (that's kinda my "goal"). Thanks

  • Matrix.CreateBillboard centre rotation problem

    - by Chris88
    I'm having an issue with Matrix.CreateBillboard and a textured quad where the center axis seems to be positioned incorrectly relative to the quad object, which is rotating around a center point: Using: BasicEffect quadEffect; Drawing the quad shape: Left = Vector3.Cross(Normal, Up); Vector3 uppercenter = (Up * height / 2) + origin; LowerLeft = uppercenter + (Left * width / 2); LowerRight = uppercenter - (Left * width / 2); UpperLeft = LowerLeft - (Up * height); UpperRight = LowerRight - (Up * height); Where height and width are float values passed in (it draws a square). Draw method: quadEffect.View = camera.view; quadEffect.Projection = camera.projection; quadEffect.World = Matrix.CreateBillboard(Origin, camera.cameraPosition, Vector3.Up, camera.cameraDirection); GraphicsDevice.BlendState = BlendState.Additive; foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes) { pass.Apply(); GraphicsDevice.DrawUserIndexedPrimitives <VertexPositionNormalTexture>( PrimitiveType.TriangleList, Vertices, 0, 4, Indexes, 0, 2); } GraphicsDevice.BlendState = BlendState.Opaque; In the screenshots below I draw the image at Vector3(32f, 0f, 32f). The screenshots show the position of the quad in relation to the red cross; the red cross shows where it should be drawn: http://i.imgur.com/YwRYj.jpg http://i.imgur.com/ZtoHL.jpg It rotates around the red cross position.

  • How to use the float value from Noise function in voxel terrain?

    - by therealjohn
    I'm using Unity, although this question is not really specific to that engine. I'm also using an asset from the store called Coherent Noise. It has some neat noise functionality built in, and I am using those functions to produce some noise values. I am getting a value between 0 and 1 (floats). I have an array of blocks (for Minecraft-like voxel terrain) and I am confused about how to use this float value for terrain. Do I do something like <= 0 == solid block, etc.? I am confused about how to use the floating-point values that the noise functions produce as height values for an array of, say, a height of 16. Thanks for any guidance.
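
    Two common mappings, sketched below in C-style code that translates directly to C# (Noise2D/Noise3D are hypothetical stand-ins for the Coherent Noise samplers): for classic heightmap terrain, one 2D sample per column chooses how many of the 16 blocks are solid; for caves and overhangs, one 3D sample per block is compared against a threshold.

      #include <cmath>

      float Noise2D(float x, float z);      // assumed: returns [0, 1]
      float Noise3D(float x, float y, float z);

      const int H = 16;                     // column height in blocks

      // Heightmap mapping: a noise value in [0,1] scaled to a height [0,H].
      void FillColumn(bool solid[H], float x, float z) {
          int height = (int)std::floor(Noise2D(x, z) * H);
          for (int y = 0; y < H; ++y)
              solid[y] = (y < height);      // everything below is solid
      }

      // Density mapping: per-block threshold, enabling caves and overhangs.
      bool IsSolid(float x, float y, float z) {
          return Noise3D(x, y, z) > 0.5f;   // 0.5 is an arbitrary cutoff
      }

    So rather than "<= 0 == solid", the usual move is either "scale to a height and fill below it" or "pick a cutoff inside (0, 1) and compare", since a 0-to-1 noise value never goes below zero.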

  • Box2d world width and height ratio with screen width and height

    - by Sujith
    I have a view, for example GameView, which extends SurfaceView, and I have integrated Box2D physics in GameView. I have two sets of dimensions: the GameView width and height, and the Box2D physics world width and height. I need to get the position in the Box2D world corresponding to GameView co-ordinates. For example: total width of screen = 240, total height of screen = 320, screen point to be mapped onto Box2D co-ordinates (x,y) = 127, 139. For this I need to get the max width and height of the Box2D physics world. Is there any way to get the max width and height of the Box2D world? Or can I limit the width and height of the Box2D world to within the screen resolution?
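
    Box2D has no intrinsic width or height; you pick a pixels-per-meter scale and apply it in both directions, which also lets you size the world to exactly cover the screen. A sketch with an illustrative 32 px/m (any constant works, though bodies behave best in Box2D's roughly 0.1-10 m range):

      // The world/screen "ratio" is just a constant you choose.
      static const float PPM = 32.0f;        // pixels per meter (illustrative)

      // A 240 x 320 px screen spans 240/PPM = 7.5 m by 320/PPM = 10 m.

      struct Vec2 { float x, y; };

      Vec2 screenToWorld(float px, float py, float screenHeightPx) {
          // Flip y: screen y grows downward, Box2D y grows upward.
          return { px / PPM, (screenHeightPx - py) / PPM };
      }

      Vec2 worldToScreen(float mx, float my, float screenHeightPx) {
          return { mx * PPM, screenHeightPx - my * PPM };
      }

      // Example: the screen point (127, 139) on a 320 px tall view maps to
      // about (3.97, 5.66) in world meters.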

  • Win API Basic Paint program

    - by Tom Burman
    Just trying to learn a bit of Win API. I'm trying to make a basic drawing app, a bit like MS Paint. For the time being I'm trying to get one function to work: when you left-click and drag the mouse around the screen, a line is drawn behind the mouse. Here's what I have so far, but for some reason: 1) the line starts drawing straight away rather than waiting for the left click; 2) the line isn't solid, it's very dotty. case WM_MOUSEMOVE: { if(MK_LBUTTON){ hdc = GetDC(hwnd); hPen = CreatePen(PS_SOLID,5,RGB(0, 0, 255)); SelectObject(hdc, hPen); int x = LOWORD(lParam); int y = HIWORD(lParam); MoveToEx(hdc,x,y,NULL); LineTo(hdc, LOWORD(lParam), HIWORD(lParam)); ReleaseDC(hwnd,hdc); } else break; } } Thanks for any help!
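
    Both symptoms have a likely explanation: if(MK_LBUTTON) tests a nonzero constant, so it is always true (for WM_MOUSEMOVE the button state arrives in wParam), and drawing a zero-length line at each isolated mouse event produces the dotty look, because mouse moves arrive as discrete points. A sketch of both fixes (tracking the last point in a static is illustrative; real code might store it in window data):

      case WM_MOUSEMOVE:
      {
          static POINT last = { -1, -1 };        // previous stroke point
          if (wParam & MK_LBUTTON)               // actual button state
          {
              HDC hdc = GetDC(hwnd);
              HPEN hPen = CreatePen(PS_SOLID, 5, RGB(0, 0, 255));
              HGDIOBJ oldPen = SelectObject(hdc, hPen);

              int x = LOWORD(lParam);
              int y = HIWORD(lParam);
              if (last.x < 0) { last.x = x; last.y = y; }  // stroke start

              MoveToEx(hdc, last.x, last.y, NULL);  // connect from the
              LineTo(hdc, x, y);                    // previous point
              last.x = x; last.y = y;

              SelectObject(hdc, oldPen);            // restore, then free GDI
              DeleteObject(hPen);
              ReleaseDC(hwnd, hdc);
          }
          else
              last.x = -1;                          // button up: end stroke
          break;
      }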

  • Circle-rectangle collision in 2D, most efficient way

    - by john smith
    Suppose I have a circle intersecting a rectangle; which of the two methods below is ideally the least CPU-intensive? Method A: calculate the rectangle boundaries, then loop through all points of the circle and, for each of those, check if it is inside the rect. Method B: calculate the rectangle boundaries, check where the center of the circle is compared to the rectangle, make 9 switch/case statements for the following positions: top, bottom, left, right, top left, top right, bottom left, bottom right, inside rectangle, and check only one distance using the circle's radius, depending on where the circle happens to be. I know there are other ways that are definitely better than these two, and if you could point me to a link to them, that would be great; but, exactly between those two, which one would you consider to be better, regarding both performance and quality/precision? Thanks in advance.
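
    For reference, the standard constant-time test needs neither loops nor the nine cases: clamp the circle's center to the rectangle to get the rectangle's closest point, then compare squared distances. It is exact, branch-light, and avoids the sqrt:

      #include <algorithm>

      struct Circle { float x, y, r; };
      struct Rect   { float minX, minY, maxX, maxY; };

      bool intersects(const Circle &c, const Rect &r) {
          // Closest point of the rectangle to the circle's center.
          float px = std::max(r.minX, std::min(c.x, r.maxX));
          float py = std::max(r.minY, std::min(c.y, r.maxY));
          float dx = c.x - px;
          float dy = c.y - py;
          // Also correct when the center is inside (distance is then 0).
          return dx * dx + dy * dy <= c.r * c.r;
      }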

  • Is it normal to see these Xcode prompts/errors when you deploy to the iOS Simulator from Unity?

    - by Greg
    Just trying out the iOS build process... Is it normal to see: Q1 - "upgrade to latest project format - project currently in Xcode 3.1 format, this will upgrade to 3.2" - just click OK and let Xcode do its stuff? Q2 - same as Q1, but this time for the message "Remove obsolete build settings - will remove the build setting PREBINDING". Q3 - also, when deploying to "Latest iOS Simulator" you get the Simulator target produced, but also a non-simulator target which has lots of errors. So I assume you just ignore this target and not use it in Xcode, correct? (i.e. just use the simulator target that is produced) Q4 - you get a lot of warnings after the simulator target is built? The program works OK, however. (The original question attached images for Q1 and Q2, for Q4, the settings used in Unity, and the errors seen in Xcode.)

  • XNA GS 4 Animation Sample bone transforms not copying correctly

    - by annonymously
    I have a person model that is animated and a suitcase model that is not. The person can pick up the suitcase, and it should move to the location of the hand bone of the person model. Unfortunately the suitcase doesn't follow the animation correctly: it moves with the hand's animation, but its position is under the ground and way too far to the right. I haven't scaled any of the models myself. Thank you. The source code (forgive the rough prototype code): Matrix[] tran = new Matrix[man.model.Bones.Count];// The absolute transforms from the animation player man.model.CopyAbsoluteBoneTransformsTo(tran); Vector3 suitcasePos, suitcaseScale, tempSuitcasePos = new Vector3();// Placeholders for the Matrix.Decompose Quaternion suitcaseRot = new Quaternion(); // The transformation of the right hand bone is decomposed tran[man.model.Bones["HPF_RightHand"].Index].Decompose(out suitcaseScale, out suitcaseRot, out tempSuitcasePos); suitcasePos = new Vector3(); suitcasePos.X = tempSuitcasePos.Z;// The axes are inverted for some reason suitcasePos.Y = -tempSuitcasePos.Y; suitcasePos.Z = -tempSuitcasePos.X; suitcase.Position = man.Position + suitcasePos;// The actual Suitcase properties suitcase.Rotation = man.Rotation + new Vector3(suitcaseRot.X, suitcaseRot.Y, suitcaseRot.Z); I am also copying the bone transforms from the animation player in the Person class like so: // The transformations from the AnimationPlayer Matrix[] skinTrans = new Matrix[model.Bones.Count]; skinTrans = player.GetBoneTransforms(); // copy each transformation to its corresponding bone for (int i = 0; i < skinTrans.Length; i++) { model.Bones[i].Transform = skinTrans[i]; }

  • 3D terrain map with Hexagon Grids (XNA)

    - by Rob
    I'm working on a hobby project (I'm a web/backend developer by day) and I want to create a 3D tile (terrain) engine. I'm using XNA, but I can use MonoGame, OpenGL, or straight DirectX, so the answer does not have to be XNA-specific. I'm looking more for some high-level advice on how to approach this problem. I know about creating height maps and such; there are thousands of references out there on the net for that. This is a bit more specific: what I'm concerned with is the approach to get a 3D hexagon tile grid out of my terrain (since the terrain, and all 3D objects, are basically triangles). The first approach I thought about is to basically draw the triangles on the screen in the following order (blue numbers) to give me the triangles for terrain (black triangles) and then make hexes out of the triangles (red hex): http://screencast.com/t/ebrH2g5V This approach seems complicated to me since I'm basically having to draw 4 different types of triangles. The next approach I thought of was to use the existing triangles like I did for a square grid and get my hexes from 6 triangles, as follows: http://screencast.com/t/w9b7qKzVJtb8 This seems like the easier approach to me since there are only 2 types of triangles (I would have to play with the heights and widths to get a "perfect" hexagon, but the idea is the same). So I'm looking for: 1) Any suggestions on which approach I should take, and why. 2) How would I translate mouse position to a hexagon grid position (especially when moving the camera around)? For example, in the second image, if the mouse pointer were the green circle, how would I determine to highlight that hexagon, and how would I translate that into grid coordinates (assuming it is 0,0)? 3) Any references, articles, books, etc. to get me going in the right direction. Note: I've done hex grids and mouse-to-grid coordinate conversion before in 2D; I'm looking for some pointers on how to do the same in 3D. The result I would like to achieve is something similar to the following: http://www.youtube.com/watch?v=Ri92YkyC3fw (sorry about the YouTube link, but it will only let me post 2 links in this post... the same rep problem I mention below). Thanks for any help! P.S. Sorry for not posting the images inline; I apparently don't have enough rep on this Stack Exchange site.
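
    For question 2, the usual route is: unproject the mouse into a world-space ray, intersect it with the terrain (or a ground plane), and convert the hit point to hex coordinates. The conversion itself is the standard axial-coordinate math with cube rounding; the sketch below assumes pointy-top hexes of size s (center to corner) laid out in the XZ plane:

      #include <cmath>

      struct Hex { int q, r; };   // axial grid coordinates

      Hex worldToHex(float x, float z, float s) {
          // Fractional axial coordinates (pointy-top layout).
          float q = (std::sqrt(3.0f) / 3.0f * x - 1.0f / 3.0f * z) / s;
          float r = (2.0f / 3.0f * z) / s;

          // Cube rounding: round all three cube coordinates, then repair
          // the one with the largest error so x + y + z == 0 still holds.
          float cx = q, cz = r, cy = -cx - cz;
          float rx = std::round(cx), ry = std::round(cy), rz = std::round(cz);
          float dx = std::fabs(rx - cx);
          float dy = std::fabs(ry - cy);
          float dz = std::fabs(rz - cz);
          if (dx > dy && dx > dz)      rx = -ry - rz;
          else if (dy > dz)            ry = -rx - rz;
          else                         rz = -rx - ry;
          return { (int)rx, (int)rz };
      }

    Highlighting the hex under the green circle is then just worldToHex applied to the ray's terrain hit, independent of where the camera has moved.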

  • Starting out with OpenGL when most tutorials are out of date

    - by AUTO
    I'm sure there are already a bunch of questions like this asked, but the constant updating of the OpenGL library throws them all away, and in a month or two, the answers here will be worthless again. I am ready to start programming in OpenGL using C++. I've got a working compiler (DevCpp; do NOT ask me to switch to VC++, and don't ask me why). Now I'm just looking for a solid tutorial on how to program with OpenGL. My assistant found the tutorial provided by NeHe Productions, but as I've come to find out, it's WAY OUT OF DATE! (although I did pull together a basic window to support an OpenGL canvas) Then I went online and found the OpenGL SuperBible, which apparently uses freeglut? But what I'd like to know is whether or not the SuperBible 5th edition is still up to date. The freeglut suggestion I found said the latest version was 2.6.0, but now it's 2.8.0! Is the OpenGL SuperBible still a good, and fairly up-to-date, place to start? Is there a better place to go to learn OpenGL? Am I allowed to simply store freeglut in the DevCpp include directory (maybe in GL), or is there some important procedure? Are there any comments or suggestions that I didn't think to ask about, since I'm only just beginning? @dreta cleared some things up for me, so now I have a better idea of what to ask: I think I'd like to start out with OpenGL using a wrapper library instead of directly accessing OpenGL. I just think that, for a beginner, it would be easier for me to program and get good results while I don't yet have to understand all the grimy details (as @stephelton mentioned). The problem is, I can't find any library that doesn't have undefined references to no-longer-supported functions. Freeglut sounds operational, but it still uses GLU. Does anyone know what I can do? Also, I tried compiling the first SuperBible's source, but I got errors since GLAPI is not being defined as a type, the error originating in the GLU library. I'd like to use the SuperBible, but I don't know how to fix this.

  • Masking OpenGL texture by a pattern

    - by user1304844
    Tiled terrain. User wants to build a structure. He presses build and for each tile there is an "allow" or "disallow" tile sprite added to the scene. FPS drops right away, since there are 600+ tiles added to the screen. Since map equals screen, there is no scrolling. I came to an idea to make an allow grid covering the whole map and mask the disallow fields. Approach 1: Create allow and disallow grid textures. Draw a polygon on screen. Pass both textures to the fragment shader. Determine the position inside the polygon and use color from allowTexture if the fragment belongs to the allow field, disallow otherwise Problem: How do I know if I'm on the field that isn't allowed if I cannot pass the matrix representing the map (enum FieldStatus[][] (Allow / Disallow)) to the shader? Therefore, inside the shader I don't know which fragments should be masked. Approach 2: Create allow texture. Create an empty texture buffer same size as the allow texture Memset the pixels of the empty texture to desired color for each pixel that doesn't allow building. Draw a polygon on screen. Pass both textures to the fragment shader. Use texture2 color if alpha 0, texture1 color otherwise. Problem: I'm not sure what is the right way to manipulate pixels on a texture. Do I just make a buffer with width*height*4 size and memcpy the color[] to desired coordinates or is there anything else to it? Would I have to call glTexImage2D after every change to the texture? Another problem with this approach is that it takes a lot more work to get a prettier effect since I'm manipulating the color pixels instead of just masking two textures. varying vec2 TexCoordOut; uniform sampler2D Texture1; uniform sampler2D Texture2; void main(void){ vec4 allowColor = texture2D(Texture1, TexCoordOut); vec4 disallowColor = texture2D(Texture2, TexCoordOut); if(disallowColor.a > 0){ gl_FragColor= disallowColor; }else{ gl_FragColor= allowColor; }} I'm working with OpenGL on Windows. Any other suggestion is welcome.
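
    The FieldStatus[][] matrix can in fact reach the shader: encode it as a one-channel texture with one texel per tile and sample it to pick between the allow and disallow colors, which makes Approach 1 workable without per-pixel CPU work. A sketch of the upload side (GL_RED is the desktop GL 3+ single-channel format; on GLES 2.0 the equivalent would be GL_LUMINANCE):

      #include <vector>

      // status holds mapW*mapH bytes: 0 = allow, 255 = disallow.
      GLuint makeMaskTexture(const std::vector<unsigned char> &status,
                             int mapW, int mapH) {
          GLuint tex;
          glGenTextures(1, &tex);
          glBindTexture(GL_TEXTURE_2D, tex);
          glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed
          glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, mapW, mapH, 0,
                       GL_RED, GL_UNSIGNED_BYTE, status.data());
          // NEAREST keeps hard tile edges; no bleeding between tiles.
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
          return tex;
      }

    In the fragment shader, the mask is sampled with map-space UVs and used to choose between the two tile textures, much like the shader already shown; when the map changes, glTexSubImage2D updates only the affected texels instead of re-creating the texture.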

  • OpenGL ES GLSL shader attributes always bound to 0

    - by codemonkey
    So I have a very simple vertex shader as follows: #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } Which I load, as well as the fragment shader: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } Which I then load, compile, and link to my shader program. I check for link status using glGetProgramiv(shaderProgram, GL_LINK_STATUS, &shaderSuccess); which returns GL_TRUE, so I think it's OK. However, when I query the active attributes and uniforms using #ifdef DEBUG int totalAttributes = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &totalAttributes); for(int i=0; i<totalAttributes; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveAttrib(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetAttribLocation(shaderProgram, name); fprintf(stderr, "Attribute %s is bound at %d\n", name, location); } int totalUniforms = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_UNIFORMS, &totalUniforms); for(int i=0; i<totalUniforms; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveUniform(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetUniformLocation(shaderProgram, name); fprintf(stderr, "Uniform %s is bound at %d\n", name, location); } #endif I get: Attribute inColor is bound at 0 Attribute position is bound at 1 Uniform mvp is bound at 0 Which leads to failure when trying to use the shader to render the objects. I have tried switching the order of declaration of position & inColor, but still, only position is bound, with the other two giving 0. Can someone please explain why this is happening? Thanks
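
    One note on reading that output: it is actually self-consistent, since attribute locations 0 and 1 are both valid, and uniform locations live in a separate namespace, so mvp at 0 does not clash with inColor at 0. The render failure usually means the vertex arrays were bound to hard-coded indices that do not match the linker's choices. Locations can be pinned before linking so both sides agree (positions/colors below are assumed client-side arrays):

      glBindAttribLocation(shaderProgram, 0, "position");
      glBindAttribLocation(shaderProgram, 1, "inColor");
      glLinkProgram(shaderProgram);        // bindings take effect at link time

      // Vertex setup must then use the same indices:
      glEnableVertexAttribArray(0);
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, positions);
      glEnableVertexAttribArray(1);
      glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, colors);

      // Or skip the pinning and always query instead of assuming:
      //   GLint loc = glGetAttribLocation(shaderProgram, "position");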
