Search Results

Search found 39156 results on 1567 pages for 'device driver development'.

Page 601 of 1567

  • Toon shader with Texture. Can this be optimized?

    - by Alex
    I am quite new to OpenGL. After long trial and error I have managed to integrate NeHe's cel-shading rendering with my model loaders, and to draw the models using the toon shade and outline AND their original texture at the same time. The result is actually a very nice cel-shading effect on the model texture, but it is halving the speed of the program; it's very slow even with just 3 models on screen. Since the result was kind of hacked together, I am thinking that maybe I am performing some extra steps or extra rendering tasks that are not needed and are slowing down the game. Something unnecessary that maybe you guys could spot? Both the MD2 and 3DS loaders have an initToon() function called upon creation to load the shader:

        bool initToon() {
            int i;                                           // Looping Variable ( NEW )
            char Line[255];                                  // Storage For 255 Characters ( NEW )
            float shaderData[32][3];                         // Storage For The 96 Shader Values ( NEW )
            FILE *In = fopen("Shader.txt", "r");             // Open The Shader File ( NEW )
            if (In) {                                        // Check To See If The File Opened ( NEW )
                for (i = 0; i < 32; i++) {                   // Loop Through The 32 Greyscale Values ( NEW )
                    if (feof(In))                            // Check For The End Of The File ( NEW )
                        break;
                    fgets(Line, 255, In);                    // Get The Current Line ( NEW )
                    shaderData[i][0] = shaderData[i][1] = shaderData[i][2] = float(atof(Line)); // Copy Over The Value ( NEW )
                }
                fclose(In);                                  // Close The File ( NEW )
            } else {
                return false;                                // It Went Horribly Horribly Wrong ( NEW )
            }
            glGenTextures(1, &shaderTexture[0]);             // Get A Free Texture ID ( NEW )
            glBindTexture(GL_TEXTURE_1D, shaderTexture[0]);  // Bind This Texture. From Now On It Will Be 1D ( NEW )
            // For Crying Out Loud Don't Let OpenGL Use Bi/Trilinear Filtering! ( NEW )
            glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 32, 0, GL_RGB, GL_FLOAT, shaderData); // Upload ( NEW )
            return true;                                     // Success
        }

    This is the drawing for the animated MD2 model:

        void MD2Model::drawToon() {
            float outlineWidth = 3.0f;                       // Width Of The Lines ( NEW )
            float outlineColor[3] = { 0.0f, 0.0f, 0.0f };    // Color Of The Lines ( NEW )

            // ORIGINAL PART OF THE FUNCTION
            // Figure out the two frames between which we are interpolating
            int frameIndex1 = (int)(time * (endFrame - startFrame + 1)) + startFrame;
            if (frameIndex1 > endFrame) {
                frameIndex1 = startFrame;
            }
            int frameIndex2;
            if (frameIndex1 < endFrame) {
                frameIndex2 = frameIndex1 + 1;
            } else {
                frameIndex2 = startFrame;
            }
            MD2Frame* frame1 = frames + frameIndex1;
            MD2Frame* frame2 = frames + frameIndex2;
            // Figure out the fraction that we are between the two frames
            float frac = (time - (float)(frameIndex1 - startFrame) /
                         (float)(endFrame - startFrame + 1)) * (endFrame - startFrame + 1);

            // I ADDED THESE FROM NEHE'S TUTORIAL FOR FIRST PASS (TOON SHADE)
            glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);          // Use The Good Calculations ( NEW )
            glEnable(GL_LINE_SMOOTH);
            // Cel-Shading Code
            glEnable(GL_TEXTURE_1D);                         // Enable 1D Texturing ( NEW )
            glBindTexture(GL_TEXTURE_1D, shaderTexture[0]);  // Bind Our Texture ( NEW )
            glColor3f(1.0f, 1.0f, 1.0f);                     // Set The Color Of The Model ( NEW )

            // ORIGINAL DRAWING CODE
            // Draw the model as an interpolation between the two frames
            glBegin(GL_TRIANGLES);
            for (int i = 0; i < numTriangles; i++) {
                MD2Triangle* triangle = triangles + i;
                for (int j = 0; j < 3; j++) {
                    MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                    MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                    Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                    Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                    if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) {
                        normal = Vec3f(0, 0, 1);
                    }
                    glNormal3f(normal[0], normal[1], normal[2]);
                    MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                    glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY);
                    glVertex3f(pos[0], pos[1], pos[2]);
                }
            }
            glEnd();

            // ADDED THESE FROM NEHE'S FOR SECOND PASS (OUTLINE)
            glDisable(GL_TEXTURE_1D);                        // Disable 1D Textures ( NEW )
            glEnable(GL_BLEND);                              // Enable Blending ( NEW )
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Set The Blend Mode ( NEW )
            glPolygonMode(GL_BACK, GL_LINE);                 // Draw Backfacing Polygons As Wireframes ( NEW )
            glLineWidth(outlineWidth);                       // Set The Line Width ( NEW )
            glCullFace(GL_FRONT);                            // Don't Draw Any Front-Facing Polygons ( NEW )
            glDepthFunc(GL_LEQUAL);                          // Change The Depth Mode ( NEW )
            glColor3fv(&outlineColor[0]);                    // Set The Outline Color ( NEW )

            // HERE I AM PARSING THE VERTICES AGAIN (NOT IN THE ORIGINAL FUNCTION)
            // FOR THE OUTLINE, AS PER NEHE'S TUTORIAL
            glBegin(GL_TRIANGLES);
            for (int i = 0; i < numTriangles; i++) {
                // (exactly the same per-vertex interpolation loop as in the
                // first pass above, resubmitting every normal/texcoord/vertex)
            }
            glEnd();

            glDepthFunc(GL_LESS);                            // Reset The Depth-Testing Mode ( NEW )
            glCullFace(GL_BACK);                             // Reset The Face To Be Culled ( NEW )
            glPolygonMode(GL_BACK, GL_FILL);                 // Reset Back-Facing Polygon Drawing Mode ( NEW )
            glDisable(GL_BLEND);
        }

    Whereas this is the drawToon function in the 3DS loader:

        void Model_3DS::drawToon() {
            float outlineWidth = 3.0f;                       // Width Of The Lines ( NEW )
            float outlineColor[3] = { 0.0f, 0.0f, 0.0f };    // Color Of The Lines ( NEW )

            // ORIGINAL CODE
            if (visible) {
                glPushMatrix();
                // Move the model
                glTranslatef(pos.x, pos.y, pos.z);
                // Rotate the model
                glRotatef(rot.x, 1.0f, 0.0f, 0.0f);
                glRotatef(rot.y, 0.0f, 1.0f, 0.0f);
                glRotatef(rot.z, 0.0f, 0.0f, 1.0f);
                glScalef(scale, scale, scale);

                // Loop through the objects
                for (int i = 0; i < numObjects; i++) {
                    // Enable texture coordinates, normals, and vertices arrays
                    if (Objects[i].textured) glEnableClientState(GL_TEXTURE_COORD_ARRAY);
                    if (lit) glEnableClientState(GL_NORMAL_ARRAY);
                    glEnableClientState(GL_VERTEX_ARRAY);
                    // Point them to the objects arrays
                    if (Objects[i].textured) glTexCoordPointer(2, GL_FLOAT, 0, Objects[i].TexCoords);
                    if (lit) glNormalPointer(GL_FLOAT, 0, Objects[i].Normals);
                    glVertexPointer(3, GL_FLOAT, 0, Objects[i].Vertexes);

                    // Loop through the faces as sorted by material and draw them
                    for (int j = 0; j < Objects[i].numMatFaces; j++) {
                        // Use the material's texture
                        Materials[Objects[i].MatFaces[j].MatIndex].tex.Use();

                        // AFTER THE TEXTURE IS APPLIED I INSERT THE TOON FUNCTIONS HERE (FIRST PASS)
                        glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);          // Use The Good Calculations ( NEW )
                        glEnable(GL_LINE_SMOOTH);
                        // Cel-Shading Code
                        glEnable(GL_TEXTURE_1D);                         // Enable 1D Texturing ( NEW )
                        glBindTexture(GL_TEXTURE_1D, shaderTexture[0]);  // Bind Our Texture ( NEW )
                        glColor3f(1.0f, 1.0f, 1.0f);                     // Set The Color Of The Model ( NEW )

                        glPushMatrix();
                        // Move and rotate this object
                        glTranslatef(Objects[i].pos.x, Objects[i].pos.y, Objects[i].pos.z);
                        glRotatef(Objects[i].rot.z, 0.0f, 0.0f, 1.0f);
                        glRotatef(Objects[i].rot.y, 0.0f, 1.0f, 0.0f);
                        glRotatef(Objects[i].rot.x, 1.0f, 0.0f, 0.0f);
                        // Draw the faces using an index to the vertex array
                        glDrawElements(GL_TRIANGLES, Objects[i].MatFaces[j].numSubFaces,
                                       GL_UNSIGNED_SHORT, Objects[i].MatFaces[j].subFaces);
                        glPopMatrix();
                    }
                    glDisable(GL_TEXTURE_1D);                // Disable 1D Textures ( NEW )

                    // THIS IS AN ADDED SECOND PASS AT THE VERTICES FOR THE OUTLINE
                    glEnable(GL_BLEND);                      // Enable Blending ( NEW )
                    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Set The Blend Mode ( NEW )
                    glPolygonMode(GL_BACK, GL_LINE);         // Draw Backfacing Polygons As Wireframes ( NEW )
                    glLineWidth(outlineWidth);               // Set The Line Width ( NEW )
                    glCullFace(GL_FRONT);                    // Don't Draw Any Front-Facing Polygons ( NEW )
                    glDepthFunc(GL_LEQUAL);                  // Change The Depth Mode ( NEW )
                    glColor3fv(&outlineColor[0]);            // Set The Outline Color ( NEW )

                    for (int j = 0; j < Objects[i].numMatFaces; j++) {
                        // (exactly the same push/translate/rotate/glDrawElements
                        // block as in the first pass above)
                    }

                    glDepthFunc(GL_LESS);                    // Reset The Depth-Testing Mode ( NEW )
                    glCullFace(GL_BACK);                     // Reset The Face To Be Culled ( NEW )
                    glPolygonMode(GL_BACK, GL_FILL);         // Reset Back-Facing Polygon Drawing Mode ( NEW )
                    glDisable(GL_BLEND);
                }
                glPopMatrix();
            }
        }

    Finally, this is the tex.Use() function that loads a BMP texture and somehow gets blended perfectly with the toon shading:

        void GLTexture::Use() {
            glEnable(GL_TEXTURE_2D);                   // Enable texture mapping
            glBindTexture(GL_TEXTURE_2D, texture[0]);  // Bind the texture as the current one
        }
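    For what it's worth, one cost is visible in the code itself: every vertex is interpolated and submitted twice per frame through immediate-mode glBegin/glEnd. A possible restructuring, sketched against the question's MD2Model members (untested, a sketch rather than a drop-in): interpolate once into client-side arrays, then issue both passes from the same arrays, so the outline pass costs no extra CPU work.

        #include <vector>
        // Sketch of the drawToon() body, assuming frame1/frame2/frac and the
        // triangle/texcoord members from the question are in scope.
        std::vector<float> positions, normals, uvs;
        for (int i = 0; i < numTriangles; i++) {
            MD2Triangle* tri = triangles + i;
            for (int j = 0; j < 3; j++) {
                MD2Vertex* v1 = frame1->vertices + tri->vertices[j];
                MD2Vertex* v2 = frame2->vertices + tri->vertices[j];
                Vec3f p = v1->pos * (1 - frac) + v2->pos * frac;
                Vec3f n = v1->normal * (1 - frac) + v2->normal * frac;
                MD2TexCoord* t = texCoords + tri->texCoords[j];
                positions.insert(positions.end(), { p[0], p[1], p[2] });
                normals.insert(normals.end(), { n[0], n[1], n[2] });
                uvs.insert(uvs.end(), { t->texCoordX, t->texCoordY });
            }
        }
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, positions.data());
        glNormalPointer(GL_FLOAT, 0, normals.data());
        glTexCoordPointer(2, GL_FLOAT, 0, uvs.data());

        // pass 1: toon-shade state from the question, then one draw call
        glDrawArrays(GL_TRIANGLES, 0, numTriangles * 3);
        // pass 2: outline state from the question, then simply redraw
        glDrawArrays(GL_TRIANGLES, 0, numTriangles * 3);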

  • Manipulating Perlin Noise

    - by Numeri
    I've been learning about procedurally generated content lately (in particular, Perlin noise). Perlin noise works great for making things like landscapes, height maps, and stuff like that. But now I am trying to generate structures more like mountain ranges (in 2D, as 3D would be way over my head right now) or underground veins of ore. I can't manage to manipulate Perlin noise to do this. A cut-off point (i.e. using only the tops of the 'mountains' of a heightmap) wouldn't work, because I would get round lumps of mountains/veins rather than long ranges. Any suggestions? Thanks, Numeri
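    One standard trick for exactly this is "ridged" noise: fold the raw noise with an absolute value, so its zero-crossings (which form connected curves, not blobs) become crest lines. A minimal sketch, assuming some perlin(x, y) returning values in [-1, 1]:

        #include <cmath>

        float perlin(float x, float y);   // any 2D gradient noise in [-1, 1]

        // The zero-crossings of gradient noise are connected curves, so
        // folding with abs() turns them into continuous crest lines.
        float ridged(float x, float y) {
            float n = 1.0f - std::fabs(perlin(x, y));  // 1.0 exactly on a crest
            return n * n;                              // sharpen the ridge
        }

        // Threshold high ridge values to get mountain ranges / ore veins;
        // lower the threshold for thicker veins.
        bool isVein(float x, float y) { return ridged(x, y) > 0.9f; }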

  • Zelda-style Top-down RPG. Storing data for each tile type

    - by Delerat
    I'm creating a Zelda-style RPG using Tiled, C#, and MonoGame. When my code parses the .tmx file, it will get a number to associate with each tile type based on its position in the tile sheet. If I ever need to change my sprite sheet, this number will change for many of the tiles. How can I guarantee that when I parse my .tmx file, I will know exactly what tile type I'm getting, so that I can associate the proper data with it (transparency, animation, collision, etc.)?
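    One common answer is to stop keying game data on the raw GID entirely: give each tile a custom property (e.g. a name) in Tiled, build a GID-to-name map while parsing the tileset, and keep the per-type data keyed by name; reshuffling the sprite sheet then only changes the map, not the data. Sketched here in C++ for concreteness (the C#/MonoGame version is a Dictionary in the same shape); all names are hypothetical:

        #include <string>
        #include <unordered_map>

        struct TileInfo { bool collides; bool animated; float opacity; };

        // Stable game data, keyed by the name you assign in Tiled.
        std::unordered_map<std::string, TileInfo> byName = {
            {"grass", {false, false, 1.0f}},
            {"water", {true,  true,  0.8f}},
        };

        // Filled while parsing the tileset: gid -> custom "name" property.
        std::unordered_map<int, std::string> gidToName;

        TileInfo lookup(int gid) { return byName.at(gidToName.at(gid)); }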

  • 2d Ice movement

    - by Jeremy Clarkson
    I am building a top-down 2D RPG like Zelda. I have been trying to implement ice sliding. I have a tile with the slide property. I thought it would be easy to get working: I figured that I would read the slide property and move the character forward until the slide property no longer exists. So I tried a loop, but all it did was stop at the first tile in an infinite loop. I then took the loop out and tried taking direct control of the character to move him along the slide path, but I couldn't get it to move. Is there an easy way to do ice-sliding, tile-based movement in libGDX? I looked for a tutorial but none exists.
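    The usual fix is to make sliding a state advanced once per update rather than a loop inside one frame (the in-frame loop never lets anything render, which is why it looks like a stall). An engine-agnostic sketch with hypothetical helpers; the libGDX/Java version has the same shape:

        struct Player { float x, y; float dirX, dirY; bool sliding; };

        bool tileHasSlide(float x, float y);   // reads the "slide" tile property
        bool blocked(float x, float y);        // wall test
        const float SLIDE_SPEED = 4.0f;        // tiles per second, tune to taste

        void update(Player& p, float dt) {
            if (p.sliding) {
                float nx = p.x + p.dirX * SLIDE_SPEED * dt;
                float ny = p.y + p.dirY * SLIDE_SPEED * dt;
                if (blocked(nx, ny)) {
                    p.sliding = false;          // hit a wall: stop in place
                } else {
                    p.x = nx; p.y = ny;         // keep gliding, ignore input
                    if (!tileHasSlide(nx, ny))
                        p.sliding = false;      // stepped off the ice
                }
                return;
            }
            // ... normal input-driven movement sets p.dirX / p.dirY ...
            if (tileHasSlide(p.x, p.y)) p.sliding = true;  // entered the ice
        }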

  • Game Code Design for Rendering

    - by kuroutadori
    I first created a game on the iPhone and I'm now porting it to Android. I wrote most of the code in C++, but when it came to porting it wasn't so easy. The Android way is to have two threads, one for rendering and one for updating; this is due to some devices blocking when updating the hardware. My problem is that I am coming from the iPhone, where, when I transition, say, from the Menu to the Game, I would stop the Animation (rendering) and load up the next Manager (the Menu has a Manager and so has the Game). I could implement the same thing on Android, but I have noticed that game ports like Quake don't do this, as far as I can tell. I have learnt that I cannot just dynamically add another Renderer class to the tree, because I will probably get a dequeuing buffer error, which I believe to be a problem on the OpenGL ES side. So how is it done?
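    One common pattern is to never touch GL state from the update thread at all: the update side only posts a pending scene, and the render thread adopts it at a frame boundary, so every GL resource is created on the thread that owns the context. A sketch under that assumption, with hypothetical Scene names:

        #include <atomic>
        #include <memory>

        struct Scene {
            virtual void load() = 0;   // create textures/buffers (render thread!)
            virtual void draw() = 0;
            virtual ~Scene() {}
        };

        std::atomic<Scene*> pendingScene{nullptr};
        std::unique_ptr<Scene> currentScene;

        // update thread: request the switch (last writer wins; a real version
        // must free a previously posted, never-consumed scene)
        void requestScene(Scene* next) { pendingScene.store(next); }

        // render thread: apply it at a frame boundary
        void renderFrame() {
            if (Scene* next = pendingScene.exchange(nullptr)) {
                currentScene.reset(next);
                currentScene->load();   // GL calls stay on the GL thread
            }
            if (currentScene) currentScene->draw();
        }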

  • Multiplayer game communication framework for mac/ios

    - by ishaq
    (Cross-post from Stack Overflow.) I am creating a multiplayer 2D game for Mac and iOS devices. I'll be using cocos2d for the graphics/game engine, however I am largely blank on what to use for multiplayer communication. Please note that I cannot use central servers (e.g. SmartFox, RedDwarf, etc.), since I want the players to "host" games for others and be able to play on their LAN, VPN or my own servers. Any pointers? I checked Lidgren, but it's for .NET only and hence not an option for me. EDIT: just in case it wasn't clear, the messaging has to be real-time, hence it's probably going to be over UDP.

  • Euler angles to Cartesian Coordinates for use with gluLookAt

    - by notrodash
    I have searched all of the internet but just couldn't find the answer. I am using libGDX and this is part of my code that loops over and over:

        public void render() {
            GL11 gl = Gdx.gl11;
            float centerX = (float)Math.cos(yaw) * (float)Math.cos(pitch);
            float centerY = (float)Math.sin(yaw) * (float)Math.cos(pitch);
            float centerZ = (float)Math.sin(pitch);
            System.out.println(centerX + " " + centerY + " " + centerZ + " ~ "
                + GDXRacing.camera.position.x + " " + GDXRacing.camera.position.y
                + " " + GDXRacing.camera.position.z);
            Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y,
                GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0);
            if (Gdx.input.isKeyPressed(Keys.A)) { yaw--; }
            if (Gdx.input.isKeyPressed(Keys.D)) { yaw++; }
        }

    I might just be bad at the math, but I don't get it. Does someone have a good explanation and an idea about how to deal with this? I am trying to make a first-person camera. By the way, the camera is translated by +10 on the Z axis. Currently, when I run the application, everything shakes in a clockwise/anticlockwise motion, depending on whether I increase or decrease the yaw value. Thank you. [edit] And with this code:

        public void render() {
            GL11 gl = Gdx.gl11;
            float centerX = (float)(MathUtils.cosDeg(yaw) * 4);
            float centerY = 0;
            float centerZ = (float)(MathUtils.sinDeg(yaw) * 4);
            System.out.println(centerX + " " + centerY + " " + centerZ + " ~ "
                + GDXRacing.camera.position.x + " " + GDXRacing.camera.position.y
                + " " + GDXRacing.camera.position.z);
            Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y,
                GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0);
            if (Gdx.input.isKeyPressed(Keys.A)) { yaw--; }
            if (Gdx.input.isKeyPressed(Keys.D)) { yaw++; }
        }

    it slowly swings from left to right. This approach worked for turning left and right in 2D games, though. What am I doing wrong?
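    A likely fix, sketched below: gluLookAt's center arguments are a point to look AT, not a direction, so the forward vector has to be added to the eye position; and Math.cos/sin expect radians while yaw here is stepped in degree-sized units. Shown with the C gluLookAt (the libGDX wrapper takes the same nine values), using one common yaw/pitch convention:

        #include <cmath>
        #include <GL/glu.h>

        const float DEG2RAD = 3.14159265f / 180.0f;

        void applyCamera(float eyeX, float eyeY, float eyeZ,
                         float yawDeg, float pitchDeg) {
            float yaw = yawDeg * DEG2RAD, pitch = pitchDeg * DEG2RAD;
            float fx =  std::cos(pitch) * std::sin(yaw);
            float fy =  std::sin(pitch);
            float fz = -std::cos(pitch) * std::cos(yaw);  // -Z is forward in GL
            gluLookAt(eyeX, eyeY, eyeZ,
                      eyeX + fx, eyeY + fy, eyeZ + fz,    // center = eye + forward
                      0.0, 1.0, 0.0);
        }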

  • box2d resize bodies around point

    - by philipp
    I have a compound object consisting of a b2Body, vector graphics, and a list of polygons which describe the b2Body's shapes. This object has its own transformation matrix to centralize the storage of transformations. So far everything is working quite fine; even scaling works, but not if I scale around a point. In the initialization phase of the object it is scaled around a point. This happens in this order:

    1. transform the main matrix
    2. transform the vector graphics and the polygons
    3. recreate the b2Body

    After this function ran, the shapes and all the graphics are exactly where they should be, BUT after the first steps of the b2World the graphical stuff moves away from the body. When I ran the debugger I found out that the position of the body is 0/0 (the red dot shows the center of scaling; the first image shows the basic setup and the second the final position of the graphics). This distance stays constant for the rest of the simulation. If I set the position via myBody.SetPosition( sx, sy ); the whole scenario just plays out a bit more distant from the origin. Any idea how to fix this?

    EDIT: I dug deeper into the problem, and it lies in the fact that I must not scale the transform matrix for the b2Body's shapes around the center, but instead set the b2Body's position back to the right point after scaling. But how can I calculate that point?

    EDIT 2: I dug even deeper and solved it, but this is a slow solution and I hope somebody understands what formula I need. Assuming a set of polygons relative to an origin as basis shapes for a b2Body, scaling the whole object around a certain point is done in the following steps:

    1. scale everything around the center except the polygons
    2. create a clone of the polygons' matrix
    3. scale this clone around the point
    4. calculate dx, dy as the differences clone.tx - original.tx and clone.ty - original.ty
    5. scale the original polygon matrix NOT around the point
    6. recreate the body
    7. create the fixture
    8. set the position of the body to dx and dy
    9. done!

    So what I am interested in is a formula for dx and dy that avoids cloning a matrix, scaling the clone around a point, and reading the difference back out.
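    The clone can be replaced by a closed-form offset. Scaling by s around point c is T(c) * S(s) * T(-c), which maps a matrix translation t to c + s*(t - c); the difference the clone measures is therefore (1 - s)*(c - t). A sketch with C++ Box2D types (the Flash port's SetPosition takes the same values; older C++ versions include Box2D/Box2D.h instead):

        #include <box2d/box2d.h>

        // Offset of a translation t after scaling by s around point c:
        //   T(c) * S(s) * T(-c) maps t to c + s * (t - c), so
        //   dx = (c.x - t.x) * (1 - s),  dy = (c.y - t.y) * (1 - s)
        b2Vec2 scaleOffset(const b2Vec2& c, const b2Vec2& t, float s) {
            return b2Vec2((c.x - t.x) * (1.0f - s),
                          (c.y - t.y) * (1.0f - s));
        }

        // usage: scale the polygon matrix in place (NOT around the point),
        // recreate body and fixture, then shift the body by the offset:
        //   b2Vec2 d = scaleOffset(scaleCenter, oldTranslation, s);
        //   body->SetTransform(body->GetPosition() + d, body->GetAngle());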

  • Deferred Shading - Toolkit

    - by AliveDevil
    I recently managed to get some lights rendered in a scene by using a buffer and a for-loop. The problem with this method is the performance drop when more lights are used. I tried to convert Deferred Rendering in XNA4.0 | ROY-T.NL, but it is not working, because I am not using any models. I know I have to render color, normals and lights separately, but I don't know how I could get it working.

    For a better understanding of my structure: I'm using a world class which holds some chunks. These chunks load all vertices from their items. These items have a property which returns the vertices as VertexPositionNormalTexture[]. The chunk loads these vertices and combines them into one large array of VertexPositionNormalTexture via someList.AsParallel().SelectMany(m => m).ToArray(), where m is a VertexPositionNormalTexture and someList is a List<VertexPositionNormalTexture>. I have my own shader to draw these vertices the way I want them to be drawn.

    The first thing I would try is setting up two RenderTarget2Ds for rendering the color and normal parts, with two different shaders. Then I would have to render the lights, and there's the problem: I don't know how. I set up a structure to simplify working with lights, but it didn't really help:

        public struct Light {
            public Vector3 Position;
            public Color4 Color;
            public float Range;
            public float Intensity;

            public Light(Vector3 position, Color color, float range, float intensity) : this() {
                this.Position = position;
                this.Color = color;
                this.Range = range;
                this.Intensity = intensity;
            }

            public float[] Definition {
                get {
                    return new[] {
                        Position.X, Position.Y, Position.Z,
                        Color.Red, Color.Green, Color.Blue,
                        Intensity, Range
                    };
                }
            }
        }

    The next part is equally difficult, because I don't know how to combine the colorMap, normalMap and lightMap into one finalMap. Some information about the system: I'm using SharpDX (a nightly build from some months ago) and the SharpDX.Toolkit (I don't want to mess with Direct3DDevice and similar things). Can someone help me with this problem? If things are missing or I provided insufficient information, tell me; I need to get deferred shading working. Things I'm not able to do: create a rendertarget which holds all lights, merge colorMap, normalMap and lightMap into one finalMap, and present this to the user.
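    Not a SharpDX.Toolkit recipe, but the shape of the missing passes, sketched with plain GL calls since the flow is the same in any API; every helper below is hypothetical. The idea: render albedo and normals once (geometry pass), then additively accumulate each light into a light target with a full-screen quad, then multiply albedo by the accumulated light in a final combine pass.

        #include <GL/glew.h>   // any GL function loader
        #include <vector>

        struct Light { float pos[3], color[3], range, intensity; };

        void setLightUniforms(const Light&);   // hypothetical: upload pos/color/range
        void drawFullscreenQuad();             // hypothetical: quad sampling the G-buffer

        // Pass 2 of 3: accumulate every light into one target. Because the
        // blend mode is additive, N lights cost N quads, not N shader branches.
        void lightPass(GLuint lightFbo, const std::vector<Light>& lights) {
            glBindFramebuffer(GL_FRAMEBUFFER, lightFbo);
            glClear(GL_COLOR_BUFFER_BIT);
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE);       // additive: contributions sum up
            for (const Light& l : lights) {
                setLightUniforms(l);
                drawFullscreenQuad();          // reads normal/depth, writes light
            }
            glDisable(GL_BLEND);
            // pass 3 then computes: final = albedo * accumulatedLight
        }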

  • Setting the values of a struct array from JS to GLSL

    - by mikidelux
    I've been trying to make a structure that will contain all the lights of my WebGL app, and I'm having trouble setting up its values from JS. The structure is as follows:

        struct Light {
            vec4 position;
            vec4 ambient;
            vec4 diffuse;
            vec4 specular;
            vec3 spotDirection;
            float spotCutOff;
            float constantAttenuation;
            float linearAttenuation;
            float quadraticAttenuation;
            float spotExponent;
            float spotLightCosCutOff;
        };
        uniform Light lights[numLights];

    After testing LOTS of things I made it work, but I'm not happy with the code I wrote:

        program.uniform.lights = [];
        program.uniform.lights.push({
            position: "", diffuse: "", specular: "", ambient: "",
            spotDirection: "", spotCutOff: "", constantAttenuation: "",
            linearAttenuation: "", quadraticAttenuation: "",
            spotExponent: "", spotLightCosCutOff: ""
        });
        program.uniform.lights[0].position = gl.getUniformLocation(program, "lights[0].position");
        program.uniform.lights[0].diffuse = gl.getUniformLocation(program, "lights[0].diffuse");
        program.uniform.lights[0].specular = gl.getUniformLocation(program, "lights[0].specular");
        program.uniform.lights[0].ambient = gl.getUniformLocation(program, "lights[0].ambient");
        ... and so on

    I'm sorry for making you look at this code, I know it's horrible, but I can't find a better way. Is there a standard or recommended way of doing this properly? Can anyone enlighten me?
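    The standard answer is that GL really does require one getUniformLocation per "lights[i].member" string, but nothing stops you from generating those strings in a loop. Sketched in C++ against desktop GL for concreteness; the identical string-building loop works in JS with gl.getUniformLocation and string concatenation:

        #include <GL/glew.h>   // any GL 2.0+ function loader
        #include <cstdio>
        #include <vector>

        static const char* kMembers[] = {
            "position", "ambient", "diffuse", "specular", "spotDirection",
            "spotCutOff", "constantAttenuation", "linearAttenuation",
            "quadraticAttenuation", "spotExponent", "spotLightCosCutOff",
        };

        // locs[i][m] = location of lights[i].<kMembers[m]>
        std::vector<std::vector<GLint>> queryLightLocations(GLuint program, int numLights) {
            std::vector<std::vector<GLint>> locs(numLights);
            for (int i = 0; i < numLights; ++i) {
                for (const char* member : kMembers) {
                    char name[64];
                    std::snprintf(name, sizeof name, "lights[%d].%s", i, member);
                    locs[i].push_back(glGetUniformLocation(program, name));
                }
            }
            return locs;
        }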

  • isometric background that covers the viewport [on hold]

    - by Richard
    The background image should cover the viewport. The technique I use now is a loop with an inner loop that draws diamond-shaped images on a canvas element, but it looks like a rotated square; a nice result would cover the whole viewport. I have heard something about click-through maps, but what other approaches are there that are efficient on mobile devices with JavaScript? Any advice on grid design out there?

  • OpenGL profiling with AMD PerfStudio 2

    - by Aurus
    I'm rendering just a really small number of polygons for my UI, but I still tried to increase the FPS. In the end I removed redundant calls, which increased the FPS. I really don't want to lose FPS for nothing, so I keep looking for more improvements. The first thing I noticed is the "huge" time where no calls are made before SwapBuffers (the black one). I know that OpenGL works asynchronously, so SwapBuffers has to wait until everything is done. But shouldn't PerfStudio mark this time as black too? Correct me if I am wrong. The second thing I noticed is that some glUniform2f calls just take longer (the brown ones). They should all upload 2 floats to the GPU, so how can the time differ so much from call to call? The program isn't even changed or anything like that. I also tried other tools like gDebugger or CodeXL, but they often crashed and they show fewer statistics (only # of calls, redundant calls, etc.). EDIT: I also realized that the draw calls have different durations, which was obvious to me, but sometimes drawing more vertices is faster than drawing fewer vertices.

  • Creating a curved mesh on inside of sphere based on texture image coordinates

    - by user5025
    In Blender, I have created a sphere with a panoramic texture on the inside. I have also manually created a plane mesh (curved to match the size of the sphere) that sits on the inside wall where I can draw a different texture. This is great, but I really want to reduce the manual labor, and do some of this work in a script -- like having a variable for the panoramic image, and coordinates of the area in the photograph that I want to replace with a new mesh. The hardest part of doing this is going to be creating a curved mesh in code that can sit on the inside wall of a sphere. Can anyone point me in the right direction?
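    A sketch of the patch math, assuming the photo region is expressed as longitude/latitude ranges of the panorama; inside Blender this would be a Python script building a mesh, but the vertex math is the same:

        #include <cmath>
        #include <vector>

        struct V3 { float x, y, z; };

        // Sample an (nu x nv) grid over the chosen lon/lat rectangle and
        // project each sample onto the sphere's inside wall.
        std::vector<V3> spherePatch(float lon0, float lon1, float lat0, float lat1,
                                    float r, int nu, int nv) {
            std::vector<V3> verts;
            for (int j = 0; j < nv; ++j) {
                for (int i = 0; i < nu; ++i) {
                    float lon = lon0 + (lon1 - lon0) * i / (nu - 1);
                    float lat = lat0 + (lat1 - lat0) * j / (nv - 1);
                    // slightly inside the wall so it draws in front of the panorama
                    float rr = r * 0.995f;
                    verts.push_back({ rr * std::cos(lat) * std::cos(lon),
                                      rr * std::sin(lat),
                                      rr * std::cos(lat) * std::sin(lon) });
                }
            }
            return verts;   // connect as (nu-1) x (nv-1) quads / triangle pairs
        }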

  • How is the gimbal locked problem solved using accumulative matrix transformations

    - by Luke San Antonio
    I am reading the online "Learning Modern 3D Graphics Programming" book by Jason L. McKesson. As of now, I am up to the gimbal lock problem and how to solve it using quaternions. However, right here, at the Quaternions page:

    "Part of the problem is that we are trying to store an orientation as a series of 3 accumulated axial rotations. Orientations are orientations, not rotations. And orientations are certainly not a series of rotations. So we need to treat the orientation of the ship as an orientation, as a specific quantity."

    I guess this is the first spot where I start to get confused; the reason is that I don't see the dramatic difference between orientations and rotations. I also don't understand why an orientation cannot be represented by a series of rotations... Also:

    "The first thought towards this end would be to keep the orientation as a matrix. When the time comes to modify the orientation, we simply apply a transformation to this matrix, storing the result as the new current orientation. This means that every yaw, pitch, and roll applied to the current orientation will be relative to that current orientation. Which is precisely what we need. If the user applies a positive yaw, you want that yaw to rotate them relative to where they are currently pointing, not relative to some fixed coordinate system."

    The concept I understand; however, I don't understand how, if accumulating matrix transformations is a solution to this problem, the code given on the previous page isn't just that. Here's the code:

        void display() {
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            glClearDepth(1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glutil::MatrixStack currMatrix;
            currMatrix.Translate(glm::vec3(0.0f, 0.0f, -200.0f));
            currMatrix.RotateX(g_angles.fAngleX);
            DrawGimbal(currMatrix, GIMBAL_X_AXIS, glm::vec4(0.4f, 0.4f, 1.0f, 1.0f));
            currMatrix.RotateY(g_angles.fAngleY);
            DrawGimbal(currMatrix, GIMBAL_Y_AXIS, glm::vec4(0.0f, 1.0f, 0.0f, 1.0f));
            currMatrix.RotateZ(g_angles.fAngleZ);
            DrawGimbal(currMatrix, GIMBAL_Z_AXIS, glm::vec4(1.0f, 0.3f, 0.3f, 1.0f));

            glUseProgram(theProgram);
            currMatrix.Scale(3.0, 3.0, 3.0);
            currMatrix.RotateX(-90);
            // Set the base color for this object.
            glUniform4f(baseColorUnif, 1.0, 1.0, 1.0, 1.0);
            glUniformMatrix4fv(modelToCameraMatrixUnif, 1, GL_FALSE,
                               glm::value_ptr(currMatrix.Top()));
            g_pObject->Render("tint");
            glUseProgram(0);
            glutSwapBuffers();
        }

    To my understanding, isn't what he is doing (modifying a matrix on a stack) considered accumulating matrices, since the author combined all the individual rotation transformations into one matrix which is stored on the top of the stack? My understanding of a matrix is that it is used to take a point which is relative to an origin (let's say... the model) and make it relative to another origin (the camera). I'm pretty sure this is a safe definition; however, I feel like there is something missing which is blocking me from understanding this gimbal lock problem.

    One thing that doesn't make sense to me is: if a matrix determines the relative difference between two "spaces", how come a rotation around the Y axis for, let's say, roll doesn't put the point in "roll space", which can then be transformed once again in relation to this roll? In other words, shouldn't any further transformations to this point be in relation to this new "roll space" and therefore not be relative to the previous "model space", which is what causes the gimbal lock? That's why gimbal lock occurs, right? It's because we are rotating the object around set X, Y, and Z axes rather than rotating the object around its own, relative axes. Or am I wrong? Since apparently the code linked isn't an accumulation of matrix transformations, can you please give an example of a solution using this method?

    So in summary:

    1. What is the difference between a rotation and an orientation?
    2. Why is the code linked not an example of accumulation of matrix transformations?
    3. What is the real, specific purpose of a matrix, if I had it wrong?
    4. How could a solution to the gimbal lock problem be implemented using accumulation of matrix transformations?

    Also, as a bonus: why are the transformations after the rotation still relative to "model space"? Another bonus: am I wrong in the assumption that after a transformation, further transformations will occur relative to the current one? Also, if it wasn't implied, I am using OpenGL, GLSL, C++, and GLM, so examples and explanations in terms of these are greatly appreciated, if not necessary. The more detail the better! Thanks in advance...
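    For the last summary question, a minimal sketch of what true accumulation looks like with GLM. The display() above is not accumulation because it re-applies the TOTAL X/Y/Z angles to a fresh matrix stack every frame, so the rotation axes stay fixed; accumulation instead multiplies each frame's small delta into one stored orientation, so every rotation acts about the object's current local axes:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        glm::mat4 g_orientation(1.0f);   // the ship's orientation, kept between frames

        // Post-multiplying rotates about the ship's CURRENT local axis.
        void offsetOrientation(const glm::vec3& localAxis, float degrees) {
            g_orientation = g_orientation *
                glm::rotate(glm::mat4(1.0f), glm::radians(degrees), localAxis);
        }

        // per frame: offsetOrientation(glm::vec3(0, 1, 0), yawDelta); then use
        // cameraMatrix * g_orientation as the model-to-camera transform.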

  • DX11 - Weird shader behavior with and without branching

    - by Martin Perry
    I have found a problem in my shader code which I don't know how to solve. I want to rewrite this code without "ifs":

        tmp = evaluate...        // result is 0 or 1 (nothing else)
        if (tmp == 1) val = X1;
        if (tmp == 0) val = X2;

    I rewrote it this way, but this piece of code doesn't work correctly:

        tmp = evaluate...        // result is 0 or 1 (nothing else)
        val = tmp * X1;
        val = !tmp * X2;

    However, if I change it to:

        tmp = evaluate...        // result is 0 or 1 (nothing else)
        val = tmp * X1;
        if (!tmp) val = !tmp * X2;

    it works fine... but it is useless because of the "if", which needs to be eliminated. I honestly don't understand it. I tried compiling with NO and FULL optimization; the result is the same.
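    The branchless version fails because the second assignment overwrites the first: "val = !tmp * X2;" discards whatever "val = tmp * X1;" produced. The two weighted terms have to be summed, or expressed as a lerp. A sketch in C-like form, equally valid as HLSL:

        // tmp is exactly 0.0 or 1.0, so the weights select one term.
        float select(float tmp, float X1, float X2) {
            return tmp * X1 + (1.0f - tmp) * X2;   // HLSL: lerp(X2, X1, tmp)
        }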

  • How do I display a "mucus spreading" effect in a 2D environment?

    - by nathan
    Here is an example of such mucus spreading: the substance is spread around the source (in this example, the source would be the main alien building). The game is StarCraft; the purple substance is called creep. How would this kind of substance spreading be achieved in a top-down 2D environment? Should I recalculate the substance's progression and regenerate the effect on the fly each frame, use a large collection of tiles, or something else?
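    One cheap way to get the creeping look is a breadth-first frontier grown a few tiles per frame outward from the source, rather than re-simulating the whole map every frame. A sketch with hypothetical grid sizes:

        #include <queue>
        #include <utility>
        #include <vector>

        const int W = 256, H = 256;                 // map size, hypothetical
        std::vector<bool> creep(W * H, false);
        std::queue<std::pair<int, int>> frontier;   // seeded with the source tile

        void spreadStep(int cellsPerFrame) {
            static const int dx[] = { 1, -1, 0, 0 };
            static const int dy[] = { 0, 0, 1, -1 };
            for (int n = 0; n < cellsPerFrame && !frontier.empty(); ++n) {
                std::pair<int, int> c = frontier.front();
                frontier.pop();
                for (int d = 0; d < 4; ++d) {
                    int nx = c.first + dx[d], ny = c.second + dy[d];
                    if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                    if (creep[ny * W + nx]) continue;   // already covered
                    creep[ny * W + nx] = true;          // newly covered tile
                    frontier.push({ nx, ny });          // grows outward next steps
                }
            }
        }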

  • How do I pass an object location into a vertex shader?

    - by Greg Kassapidis
    I am using Blender Game Engine. I want to create a large flat plane, and deform it locally near a moving object. So far (despite being a beginner at shaders) I've written a vertex shader for the plane which moves the vertices to their correct positions (constant positions, for now). I cannot find a way to swap that constant location with an object's location updated every frame, while the shader is running. I am not even sure if it's possible. I only want to access a specific object's center from the shader.
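    In plain GL terms this is just a uniform refreshed every frame; a sketch is below (the Blender Game Engine wraps the same mechanism in its Python shader API, whose exact setUniform* method names should be checked against the BGE docs rather than taken from here):

        #include <GL/glew.h>   // any GL 2.0+ function loader

        GLuint program;        // assumed: compiled and linked elsewhere
        GLint deformCenterLoc; // query once: glGetUniformLocation(program, "deformCenter")

        void drawFrame(float ox, float oy, float oz) {
            glUseProgram(program);
            glUniform3f(deformCenterLoc, ox, oy, oz);  // object's world-space center
            // ... draw the plane; the vertex shader declares
            //     uniform vec3 deformCenter;
            // and offsets vertices whose distance to it is small.
        }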

  • Texturize a shape of multiple triangles in 2D

    - by Deukalion
    This is an example of a shape consisting of multiple points, triangles and, eventually, a shape: red dots = Vector3 (X, Y, Z) or Vector2 (X, Y). If I have a texture of a certain size, how do I texture this area in the best way, so that the texture inside the shape matches the shape and does not overlap anywhere? Perhaps also with a chance to scale the texture in case it's too small or too big for the shape, but still so that it gets rendered correctly. Do I treat the shape as a rectangle and figure out its 4 corners? Or do I calculate the distance between Center - (Texture Width / 2) and each point, to see how many times the texture can fit on that axis and estimate what coordinates the texture should have at that certain point? I've looked at texture mapping but haven't found any concrete examples that explain it well; it's also confusing with the 0.0-1.0 values for texture coordinates.
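    One straightforward scheme is planar mapping over the shape's bounding box: every vertex gets u = (x - minX) / (maxX - minX), and similarly for v, so all triangles sample one continuous image and nothing can overlap; a repeat factor handles scaling. A sketch:

        #include <algorithm>
        #include <vector>

        struct P { float x, y, u, v; };

        // Map the texture's 0..1 range across the shape's bounding box;
        // repeat > 1 tiles the texture, repeat < 1 stretches it.
        void assignUVs(std::vector<P>& pts, float repeat = 1.0f) {
            float minX = pts[0].x, maxX = minX, minY = pts[0].y, maxY = minY;
            for (const P& p : pts) {
                minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
                minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
            }
            for (P& p : pts) {
                p.u = (p.x - minX) / (maxX - minX) * repeat;
                p.v = (p.y - minY) / (maxY - minY) * repeat;
            }
        }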

  • Detecting extremely fast joystick button presses?

    - by DBRalir
    Is it usually possible for the player to press and release a button within a single frame, so that the game engine doesn't have time to detect it? How do programmers usually handle this situation? Is it even necessary to handle it? Specifically, I am asking about GLFW's joystick input capabilities. I am currently using GLFW to make a game, and I've noticed that keyboard and mouse have callback functions, while joysticks do not. Also, it does not appear to be possible to enable "sticky keys" for a joystick. (I have only recently started using GLFW, so please correct me if I am wrong, as having either of those would solve the problem.)
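    Since GLFW only polls joysticks (no callbacks, no sticky buttons), the usual handling is to keep the previous snapshot and derive press/release edges yourself; a genuinely sub-poll tap can still be missed, and polling more often than the frame rate is the common mitigation. A sketch against the GLFW 3 API:

        #include <GLFW/glfw3.h>
        #include <algorithm>
        #include <cstring>

        void onButtonPressed(int button);    // hypothetical game handlers
        void onButtonReleased(int button);

        static unsigned char prev[32];

        void pollJoystickEdges() {
            int count = 0;
            const unsigned char* cur = glfwGetJoystickButtons(GLFW_JOYSTICK_1, &count);
            if (!cur) return;                // joystick not present this frame
            count = std::min(count, 32);
            for (int i = 0; i < count; ++i) {
                if (cur[i] == GLFW_PRESS && prev[i] == GLFW_RELEASE) onButtonPressed(i);
                if (cur[i] == GLFW_RELEASE && prev[i] == GLFW_PRESS) onButtonReleased(i);
            }
            std::memcpy(prev, cur, count);   // remember for the next poll
        }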

  • Tips for communication between JS browser game and node.js server?

    - by Petteri Hietavirta
    I am tinkering around with a simple Canvas-based cave-flyer game and I would like to make it multiplayer eventually. The plan is to use Node.js on the server side. The data sent over would consist of the position of each player, direction, velocity, and such. The player movements are simple force physics, so I should be able to extrapolate movements before the next update from the server. Any tips or best practices on the communications side? I guess WebSockets are the way to go. Should I send information on every pass of the game loop or at specified intervals? Also, I don't mind if it doesn't work with older browsers.

  • XNA Moddable Game - Architecture Design and Reflection

    - by David K
    I've decided to embark on an XNA moddable game project of a simple rogue style. For all purposes of this question, I'm not going to be using a scripting engine, but rather allow modders to directly compile assemblies that are loaded by the game at run time. I know about the security problems this may raise.

    So, in order to expose the moddable content, I have gone about creating a generic project in XNA called MyModel. This contains a number of interfaces that all inherit from IPlugin, such as IGameSystem, IRenderingSystem, IHud, IInputSystem etc. Then I've created another project called MyRogueModel. This references the MyModel project and holds interfaces such as IMonster, IPlayer, IDungeonGenerator, IInventorySystem: more rogue-specific interfaces, but again, all interfaces in this project inherit from IPlugin.

    Then finally, I've created another project called MyRogueGame, which references both the MyModel and MyRogueModel projects. This project will be the game that you run and play. Here I have put the actual implementation of the Monster, DungeonGenerator, InputSystem and RenderingSystem classes. This project will also scan the mods directory at run time and load any IPlugins it finds using reflection, overriding anything it finds from the default. For example, if it finds a new implementation of the DungeonGenerator it will use that one instead.

    Now my question is: to get this far, I have effectively 2 projects that contain nothing but interfaces... which seems a little... strange? For people to create mods for the game, I would give them both the MyModel and MyRogueModel assemblies to reference. I'm not sure whether this is the right way to do it, but my reasoning goes as follows: if I write 1 input system, I can use it in any game I write; and if I create 3 rogue-like games and a modder writes 1 rendering system, that modder could use the rendering system for all 3 games, because it all comes from the MyModel project.

    I come from a more web-based C# role, so having empty interface projects doesn't seem wrong; it's just something I haven't done before. Before I embark on something that might be crazy, I'd just like to know whether this is a foolish idea and whether there's a better (or established) design principle I should be following.

  • Calculate gears rotation for a realtime simulation

    - by nkint
    Hi, I'm trying to make a game with real-time simulations of gears. There is a big gear with a smaller gear inside. I managed to draw gears with different diameters but equal-size teeth, but if I try to move the smaller one inside the bigger one, the movement is odd (see the animated gif). The biggest gear is at center C1 and the small one at center C2. I calculate C2's position this way:

        C2.x = C1.x + (C1_RADIUS - C2_RADIUS) * cos(t);
        C2.y = C1.y - (C1_RADIUS - C2_RADIUS) * sin(t);

    for t going from 0 to TWO_PI in n steps. I apply the angle t as the small gear's rotation, but maybe that is wrong and I have to calculate another rotation to get a perfect joint.
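    Rolling without slipping pins down the missing rotation: the contact arcs must match, so the small gear's spin about its own center is -t * (R1 - R2) / R2, opposite in sense to the orbit (the sign can flip with a y-down screen axis or a reversed rotation convention, so treat it as something to verify visually). Equivalently, with tooth counts: -t * (N1 - N2) / N2. A sketch:

        #include <cmath>

        const float C1x = 0.0f, C1y = 0.0f;     // big gear center, as in the question
        const float R1 = 200.0f, R2 = 60.0f;    // pitch radii, hypothetical values

        // Matching contact arcs inside a fixed ring of radius R1 gives the
        // hypocycloid spin: spin = -t * (R1 - R2) / R2.
        void placeSmallGear(float t, float& cx, float& cy, float& spin) {
            cx = C1x + (R1 - R2) * std::cos(t); // center, as in the question
            cy = C1y - (R1 - R2) * std::sin(t);
            spin = -t * (R1 - R2) / R2;         // apply this instead of t itself
        }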

  • OpenGL ES 2.0 example for JOGL

    - by fjdutoit
    I've scoured the internet for the last few hours looking for an example of how to run even the most basic OpenGL ES 2 example using JOGL, but "by Jupiter!" it has been a total fail. I tried converting the Android example from the OpenGL ES 2.0 Programming Guide examples (and at the same time looking at the WebGL example, which worked fine), yet without any success. Are there any examples out there? If anyone else wants some extra help regarding this question, see this thread on the official JogAmp forum.

  • 3D Model not translating correctly (visually)

    - by ChocoMan
    In my first image, my model displays correctly, but when I move the model's position along the Z-axis (forward), the model visually sinks, even though its Y value doesn't change; and if I keep going, the model disappears into the ground. Any suggestions as to how I can get the model to translate properly visually? Here is how I'm calling the model and the terrain in Draw():

        cameraPosition = new Vector3(camX, camY, camZ);

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[mShockwave.Bones.Count];
        mShockwave.CopyAbsoluteBoneTransformsTo(transforms);
        Matrix[] ttransforms = new Matrix[terrain.Bones.Count];
        terrain.CopyAbsoluteBoneTransformsTo(ttransforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in mShockwave.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = transforms[mesh.ParentBone.Index]
                    * Matrix.CreateRotationY(modelRotation)
                    * Matrix.CreateTranslation(modelPosition);
                // Looking at the model (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, modelPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }
            // Draw the mesh, using the effects set above.
            prepare3d();
            mesh.Draw();
        }

        // Terrain test
        foreach (ModelMesh meshT in terrain.Meshes)
        {
            foreach (BasicEffect effect in meshT.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = ttransforms[meshT.ParentBone.Index]
                    * Matrix.CreateRotationY(0)
                    * Matrix.CreateTranslation(terrainPosition);
                effect.View = Matrix.CreateLookAt(cameraPosition, terrainPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }
            // Draw the mesh, using the effects set above.
            prepare3d();
            meshT.Draw();
            DrawText();
        }
        base.Draw(gameTime);

    I'm suspecting that there may be something wrong with how I'm handling my camera. The model rotates fine on its Y-axis.

  • PokeMMO: How do they do it?

    - by RufioLJ
    Well, PokeMMO is a Java game project which is basically the original FireRed title for the GBA, made online. They know this type of project doesn't last long because of the copyrighted material used, but they somehow made their client extract resources from ROMs, so they don't offer any copyrighted material in their download. I wonder what technique they could be using for this? All I know is that they use LWJGL.
