Search Results

Search found 28914 results on 1157 pages for 'cloud development'.

Page 476/1157

  • Physics timestep questions

    - by SSL
    I've got a projectile working perfectly using the code below: //initialised in loading screen 60 is the FPS - projectilePosition and velocity are Vector3 types gravity = new Vector3(0, -(float)9.81 / 60, 0); //called every frame projectilePosition += projectileVelocity; This seems to work fine, but I've noticed in various projectile examples I've seen that the elapsed time per update is taken into account. What's the difference between the two, and how can I convert the above to take the elapsed time into account? (I'm using XNA - should I use ElapsedTime.TotalSeconds or TotalMilliseconds?) Edit: Forgot to add my attempt at using the elapsed time, which seemed to break the physics: projectileVelocity.Y += -(float)((9.81 * gameTime.ElapsedGameTime.TotalSeconds * gameTime.ElapsedGameTime.TotalSeconds) * 0.5f); Thanks for the help
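
    A minimal sketch of time-based integration (semi-implicit Euler), written in C++ for concreteness; in XNA, dt would come from gameTime.ElapsedGameTime.TotalSeconds. The frame-locked version above effectively bakes dt = 1/60 into gravity and the velocity, so once dt is passed in explicitly the motion looks the same at 60 FPS but no longer depends on the frame rate.

        // Sketch: integrate with an explicit timestep instead of per-frame amounts.
        struct Vector3f { float x, y, z; };

        void UpdateProjectile(Vector3f& position, Vector3f& velocity, float dt)
        {
            const float gravity = -9.81f;    // units per second squared
            velocity.y += gravity * dt;      // acceleration scaled by seconds, not frames
            position.x += velocity.x * dt;   // velocity is in units per second
            position.y += velocity.y * dt;
            position.z += velocity.z * dt;
        }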

    Read the article

  • Camera closes in on the fixed point

    - by V1ncam
    I've been trying to create a camera that is controlled by the mouse and rotates around a fixed point (read: (0,0,0)), both vertically and horizontally. This is what I've come up with: camera.Eye = Vector3.Transform(camera.Eye, Matrix.CreateRotationY(camRotYFloat)); Vector3 customAxis = new Vector3(-camera.Eye.Z, 0, camera.Eye.X); camera.Eye = Vector3.Transform(camera.Eye, Matrix.CreateFromAxisAngle(customAxis, camRotXFloat * 0.0001f)); This works quite well, except for the fact that when I 'use' the second transformation (go up and down with the mouse) the camera not only goes up and down, it also closes in on the point. It zooms in. How do I prevent this? Thanks in advance.
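
    One likely culprit, offered as an assumption since the rest of the camera code isn't shown: Matrix.CreateFromAxisAngle expects a unit-length axis, and customAxis here has the length of the eye vector, so the second transform is not a pure rotation and can slowly scale the eye toward the target. A drift-free alternative is to store yaw, pitch and a fixed radius and rebuild the eye every frame; a minimal C++ sketch (the XNA version is the same trigonometry):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Rebuild the eye from spherical angles each frame instead of repeatedly
        // transforming the previous eye, so the distance to (0,0,0) can never drift.
        // yaw and pitch are accumulated from mouse input elsewhere (assumption).
        Vec3 OrbitEye(float yaw, float pitch, float radius)
        {
            const float limit = 1.55f;                // just under pi/2: don't flip over the poles
            if (pitch >  limit) pitch =  limit;
            if (pitch < -limit) pitch = -limit;

            Vec3 eye;
            eye.x = radius * std::cos(pitch) * std::sin(yaw);
            eye.y = radius * std::sin(pitch);
            eye.z = radius * std::cos(pitch) * std::cos(yaw);
            return eye;                               // camera still looks at the origin
        }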

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game we have to set up a scene with a minimum of 30 bone-animated models (shooter). The problem is that the update process for the animated models takes too long. That's what I do: each character has ~30 bones, and for every update tick the animation gets calculated and every bone fires an event with the new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time that it takes to build the animation isn't that bad (0.2ms for 30 bones - 6ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix for transformation, so it's necessary to convert it. Code for matrix conversion: (~0.005ms) btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat ) { btMatrix3x3 bulletRotation; btVector3 bulletPosition; XMFLOAT4X4 matData = mat.GetStorage(); // copy rotation matrix for ( int row=0; row<3; ++row ) for ( int column=0; column<3; ++column ) bulletRotation[row][column] = matData.m[column][row]; for ( int column=0; column<3; ++column ) bulletPosition[column] = matData.m[3][column]; return btTransform( bulletRotation, bulletPosition ); } The function for updating the transform (physics): void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove(Mat mat, ActorId aid) { if ( btRigidBody * const body = FindActorBody( aid ) ) { btTransform tmp = Mat_to_btTransform( mat ); body->setWorldTransform( tmp ); } } The real problem is the function FindActorBody(id): ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id ); if ( found != m_actorBodies.end() ) return found->second; All physics actors are stored in m_actorBodies and that's why the updating process takes too long. But I have no idea how I could avoid this. Friendly greetings, Mathias
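
    Since the expensive part is the m_actorBodies lookup performed once per bone per tick, one option (an assumption about the surrounding engine, not code from the question) is to resolve the btRigidBody pointer once when a bone is bound to physics and keep it with the event subscription, so the per-frame path touches no map at all. A minimal C++ sketch; only btRigidBody::setWorldTransform is actual Bullet API here:

        #include <btBulletDynamicsCommon.h>

        // Resolve the rigid body once at bind time, then the per-tick update is a
        // single pointer call instead of a map find() per bone.
        class BonePhysicsLink
        {
        public:
            explicit BonePhysicsLink(btRigidBody* body) : m_body(body) {}

            // Called from the bone's "new matrix" event every update tick.
            void OnBoneMoved(const btTransform& worldTransform)
            {
                if (m_body)
                    m_body->setWorldTransform(worldTransform);  // kinematic update, no lookup
            }

        private:
            btRigidBody* m_body;  // obtained via FindActorBody() exactly once, at bind time
        };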

    Read the article

  • Which data structure would you use for a witness list?

    - by mateen
    I'm making a game where the plot is a bank robbery. Lots of people witness that robbery. The game loads a list of suspects, and the players (witnesses) have to identify the suspects of the robbery as quickly as possible. An admin can add/remove suspects in the lists, and two or more lists of suspects can also be merged into one (to show to the player). The question is: which data structure would be suitable for these lists?
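
    Since the dominant operations are membership checks, add/remove, and merging, a hash set per list keeps all of them cheap: lookup, insert and erase are O(1) on average, and a merge is linear in the smaller list. A minimal C++ sketch, assuming each suspect is identified by an integer id (if on-screen order matters, a separate vector of ids can be kept alongside for display):

        #include <unordered_set>

        using SuspectList = std::unordered_set<int>;

        void AddSuspect(SuspectList& list, int id)     { list.insert(id); }
        void RemoveSuspect(SuspectList& list, int id)  { list.erase(id); }
        bool Contains(const SuspectList& list, int id) { return list.count(id) != 0; }

        // Merge two lists; always fold the smaller set into the larger one.
        SuspectList Merge(SuspectList a, SuspectList b)
        {
            if (a.size() < b.size()) a.swap(b);
            for (int id : b) a.insert(id);
            return a;
        }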

    Read the article

  • Implementing automatic navigation mesh generation for a 2D top-down map?

    - by J2V
    I am currently in the middle of implementing A* pathfinding for enemies. In order to implement the actual A* logic, I need a navigation mesh for my map. I am working on a 2D top-down RPG map. The world is static, meaning there is no requirement for dynamic runtime mesh generation. My world objects are pixel based, not tile based, and have associated data with them such as scale, rotation, origin, etc. I will obviously need some vertex data generated from my world objects - maybe generate polygons from color data? I could create a colormap with objects for my whole map, but I have no idea how to begin creating nav mesh polygons. What would actual navigation mesh generation look like with this kind of information available? Can anyone maybe point to some great resources? I have looked into some 3D nav mesh tools, but they seem overly complex for my situation and also have a lot of their required data available from models. Thanks a lot in advance! I have been trying to get my head around it for some time now.

    Read the article

  • Looking for literature about graphics pipeline optimization

    - by zacharmarz
    I am looking for some books, articles or tutorials about graphics architecture and graphics pipeline optimizations. It shouldn't be too old (2008 or newer) - the newer, the better. I have found something in [Optimising the Graphics Pipeline, NVIDIA, Koji Ashida] - too old, [Real-Time Rendering, Akenine-Möller], [OpenGL Bindless Extensions, NVIDIA, Jeff Bolz], [Efficient multifragment effects on graphics processing units, Louis Frederic Bavoil] and some internet discussions. But there is not too much information and I want to read more. It should contain something about communication and data transfers between the application, driver, memory and shader units, and about vertices and attributes. Also pre- and post-T&L caches (if they still exist in today's architectures), etc. I don't need anything about textures, frame buffers and rasterization. It can also be about OpenGL (not about DirectX) and optimizing extensions (not old extensions like VBOs, but newer ones like vertex_buffer_unified_memory).

    Read the article

  • Why does my sprite glitch when moving? [closed]

    - by rphello101
    Using Slick 2D/Java, I'm using the mouse to rotate a sprite and WASD to move it (A and D are used to strafe). I finally got the directional keys and rotation to work in sync, but I'm having problems with sporadic movement. It seems that the move speed is not always set to the value I have it at. Sometimes the sprite will just shoot across the screen. Furthermore, it seems that at 0 degrees, when the left key is pressed, the sprite moves backwards, not to the left. There also seems to be quite a bit of glitching when two keys are pressed, like left and up. Anyone see anything obvious? Here is the rotational code: int mX = Mouse.getX(); int mY = HEIGHT - Mouse.getY(); int pX = sprite.x+sprite.image.getWidth()/2; int pY = sprite.y+sprite.image.getHeight()/2; double mAng; if(mX!=pX){ mAng = Math.toDegrees(Math.atan2(mY - pY, mX - pX)); if(mAng==0 && mX<=pX) mAng=180; } else{ if(mY>pY) mAng=90; else mAng=270; } sprite.angle = mAng; sprite.image.setRotation((float) mAng); Movement code: Input input = gc.getInput(); Vector2f direction = new Vector2f(); Vector2f velocity = new Vector2f(); Vector2f left; Vector2f right; direction.x = (float) Math.cos(Math.toRadians(sprite.angle)); direction.y = (float) Math.sin(Math.toRadians(sprite.angle)); if(direction.length()>0) direction = direction.normalise(); left = new Vector2f(-direction.y, direction.x); right = new Vector2f(direction.y, -direction.x); velocity.x = (float) (direction.x * sprite.moveSpeed); velocity.y = (float) (direction.y * sprite.moveSpeed); if(input.isKeyDown(sprite.up)){ sprite.x += velocity.x*delta; sprite.y += velocity.y*delta; }if (input.isKeyDown(sprite.down)){ sprite.x -= velocity.x*delta; sprite.y -= velocity.y*delta; }if (input.isKeyDown(sprite.left)){ sprite.x += left.x * sprite.moveSpeed * delta; sprite.y += left.y * sprite.moveSpeed * delta; }if (input.isKeyDown(sprite.right)){ sprite.x += right.x * sprite.moveSpeed * delta; sprite.y += right.y * sprite.moveSpeed * delta; }
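
    One common cause of sporadic speed, offered as an assumption: each pressed key adds its own full-speed displacement, so two keys held together move the sprite noticeably faster than one. Accumulating a single direction vector from all pressed keys, normalising it once, and scaling by moveSpeed * delta keeps the speed constant no matter how many keys are down. Sketched in C++ for concreteness (the Slick/Java version is the same arithmetic); the key booleans and the forward/right vectors are assumed to come from the surrounding code:

        #include <cmath>

        struct Vec2 { float x = 0.0f, y = 0.0f; };

        // Accumulate input in the sprite's local frame, normalise once, scale once.
        Vec2 MoveDelta(bool up, bool down, bool strafeLeft, bool strafeRight,
                       Vec2 forward, Vec2 right, float moveSpeed, float delta)
        {
            Vec2 dir;
            if (up)          { dir.x += forward.x; dir.y += forward.y; }
            if (down)        { dir.x -= forward.x; dir.y -= forward.y; }
            if (strafeLeft)  { dir.x -= right.x;   dir.y -= right.y;   }
            if (strafeRight) { dir.x += right.x;   dir.y += right.y;   }

            float len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
            if (len > 0.0f)                      // nothing pressed: return (0, 0)
            {
                float scale = moveSpeed * delta / len;
                dir.x *= scale;
                dir.y *= scale;
            }
            return dir;                          // add to the sprite position once per frame
        }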

    Read the article

  • 2D SAT: How to find the collision center, point, or area?

    - by Felipe Cypriano
    I've just implemented collision detection using SAT, with this article as a reference for my implementation. The detection is working as expected, but I need to know where the two rectangles are colliding. I need to find the center of the intersection, the black point on the image above. I've found some articles about this, but they all involve avoiding the overlap or some kind of velocity, which I don't need. I just need to put an image on top of it - like two cars crashed, so I put an image on top of the collision. Any ideas? Update: The information I have about the rectangles is the four points that represent them: the upper right, upper left, lower right and lower left coordinates. I'm trying to find an algorithm that can give me the intersection of these points.
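
    Since both shapes are convex, one way to get a collision centre without any velocity information is to clip one rectangle against the other (Sutherland-Hodgman) and average the vertices of the resulting overlap polygon. A minimal C++ sketch, assuming both rectangles' corners are supplied in counter-clockwise order:

        #include <vector>

        struct Vec2 { double x, y; };

        // (e - o) x (p - o): positive when p lies to the left of the line o -> e.
        static double Cross(const Vec2& o, const Vec2& e, const Vec2& p)
        {
            return (e.x - o.x) * (p.y - o.y) - (e.y - o.y) * (p.x - o.x);
        }

        // Sutherland-Hodgman: clip convex polygon 'subject' against convex 'clip'.
        std::vector<Vec2> ClipConvex(std::vector<Vec2> subject, const std::vector<Vec2>& clip)
        {
            for (size_t i = 0; i < clip.size() && !subject.empty(); ++i)
            {
                const Vec2& a = clip[i];
                const Vec2& b = clip[(i + 1) % clip.size()];
                std::vector<Vec2> out;
                for (size_t j = 0; j < subject.size(); ++j)
                {
                    const Vec2& p = subject[j];
                    const Vec2& q = subject[(j + 1) % subject.size()];
                    bool pIn = Cross(a, b, p) >= 0.0;
                    bool qIn = Cross(a, b, q) >= 0.0;
                    if (pIn) out.push_back(p);
                    if (pIn != qIn)  // edge pq crosses the clip line: keep the crossing point
                    {
                        double t = Cross(a, b, p) / (Cross(a, b, p) - Cross(a, b, q));
                        out.push_back({ p.x + t * (q.x - p.x), p.y + t * (q.y - p.y) });
                    }
                }
                subject = out;
            }
            return subject;
        }

        // Centre of the overlap: the average of the clipped polygon's vertices is
        // usually good enough for placing a crash sprite.
        bool OverlapCentre(const std::vector<Vec2>& rectA, const std::vector<Vec2>& rectB, Vec2& centre)
        {
            std::vector<Vec2> overlap = ClipConvex(rectA, rectB);
            if (overlap.empty()) return false;
            centre = { 0.0, 0.0 };
            for (const Vec2& v : overlap) { centre.x += v.x; centre.y += v.y; }
            centre.x /= overlap.size();
            centre.y /= overlap.size();
            return true;
        }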

    Read the article

  • Painting with pixel shaders

    - by Gustavo Maciel
    I have an almost full understanding of how 2D lighting works; I saw this post and was tempted to try implementing it in HLSL. I planned to paint each of the layers with shaders and then combine them by just drawing one on top of another, or pass the 3 textures to the shader and find a better way to combine them. It's working almost as planned, but I have a small question about it. I'm drawing each layer this way: GraphicsDevice.SetRenderTarget(lighting); GraphicsDevice.Clear(Color.Transparent); //... Setup shader SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullNone, lightingShader); SpriteBatch.Draw(texture, fullscreen, Color.White); SpriteBatch.End(); GraphicsDevice.SetRenderTarget(darkMask); GraphicsDevice.Clear(Color.Transparent); //... Setup shader SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullNone, darkMaskShader); SpriteBatch.Draw(texture, fullscreen, Color.White); SpriteBatch.End(); Where lightingShader and darkMaskShader are shaders that, with parameters (view and proj matrices, light pos, color and range, etc.) generate a texture meant to be that layer. It works fine, but I'm not sure if drawing a transparent quad on top of a transparent render target is the best way of doing it, because I actually just need the position and params. Concluding: can I paint a texture with shaders without having to clear it and then draw a transparent texture on top of it?

    Read the article

  • What's the best way to generate an NPC's face using web technologies?

    - by Vael Victus
    I'm in the process of creating a web app. I have many randomly-generated non-player characters in a database. I can pull a lot of information about them - their height, weight, down to eye color, hair color, and hair style. For this, I am solely interested in generating a graphical representation of the face. Currently the information is displayed with text in the nicest way possible, but I believe it's worth generating these faces for a more... human experience. Problem is, I'm not an artist. I wouldn't mind commissioning an artist for this system, but I wouldn't know where to start. Were it 2007, I'd naturally think to myself that using Flash would be the best choice. I'd love to see "breathing" simulated. However, since Flash is on its way out, I'm not sure of a solid solution. With a previous game, I simply used layered .pngs to represent various aspects of the player's body: their armor, the face, the skin color. However, these solutions weren't very dynamic and felt very amateurish. I can't go deep into this project feeling like that's an inferior way to present these faces, and I'm certain there's a better way. Can anyone give some suggestions on how to pull this off well?

    Read the article

  • how can I specify interleaved vertex attributes and vertex indices

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords etc), then links the shader components into a shader program, for use with glDrawArrays. My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer: class CustomVertex { public: float m_Position[3]; // x, y, z // offset 0, size = 3*sizeof(float) float m_TexCoords[2]; // u, v // offset 3*sizeof(float), size = 2*sizeof(float) float m_Normal[3]; // nx, ny, nz; float colour[4]; // r, g, b, a float padding[20]; // padded for performance }; I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is said to be interleaved. It renders successfully with the following code: void VertexBufferObject::Draw() { if( ! m_bInitialized ) return; glBindBuffer( GL_ARRAY_BUFFER, m_nVboId ); glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex ); glEnableClientState( GL_VERTEX_ARRAY ); glEnableClientState( GL_TEXTURE_COORD_ARRAY ); glEnableClientState( GL_NORMAL_ARRAY ); glEnableClientState( GL_COLOR_ARRAY ); glVertexPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0) ); glTexCoordPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12)); glNormalPointer(GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20)); glColorPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32)); glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) ); glDisableClientState( GL_VERTEX_ARRAY ); glDisableClientState( GL_TEXTURE_COORD_ARRAY ); glDisableClientState( GL_NORMAL_ARRAY ); glDisableClientState( GL_COLOR_ARRAY ); glBindBuffer( GL_ARRAY_BUFFER, 0 ); glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 ); } Back to the Vertex Array Object though. My code for creating the Vertex Array object is as follows. This is performed before the ShaderProgram runtime linking stage, and no glErrors are reported after its steps. // Specify the shader arg locations (e.g. their order in the shader code) for( int n = 0; n < vShaderArgs.size(); n ++) glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() ); // Create and bind to a vertex array object, which stores the relationship between // the buffer and the input attributes glGenVertexArrays( 1, &m_nVaoHandle ); glBindVertexArray( m_nVaoHandle ); // Enable the vertex attribute array (we're using interleaved array, since its faster) glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId ); glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId ); // vertex data for( int n = 0; n < vShaderArgs.size(); n ++ ) { glEnableVertexAttribArray(n); glVertexAttribPointer( n, vShaderArgs[n].nFieldSize, GL_FLOAT, GL_FALSE, vShaderArgs[n].nStride, (GLubyte *) NULL + vShaderArgs[n].nFieldOffset ); AppLog::Ref().OutputGlErrors(); } This doesn't render correctly at all. I get a pattern of white specks onscreen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering: void ShaderProgram::Draw() { using namespace AntiMatter; if( ! m_nShaderProgramId || ! 
m_nVaoHandle ) { AppLog::Ref().LogMsg("ShaderProgram::Draw() Couldn't draw object, as initialization of ShaderProgram is incomplete"); return; } glUseProgram( m_nShaderProgramId ); glBindVertexArray( m_nVaoHandle ); glDrawArrays( GL_TRIANGLES, 0, m_nNumTris ); glBindVertexArray(0); glUseProgram(0); } Can anyone see errors or omissions in either the VAO creation code or rendering code? thanks!
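
    Independent of whatever is going wrong above, field sizes and byte offsets are the usual suspects with interleaved attributes (the texture coordinate field is two floats, the colour four). A sketch of describing the same CustomVertex with offsetof, so the offsets can never drift out of sync with the struct; it assumes GLEW-style headers and that attribute locations 0-3 match the shader's bind order:

        #include <cstddef>      // offsetof
        #include <GL/glew.h>

        struct CustomVertex
        {
            float position[3];
            float texCoords[2];
            float normal[3];
            float colour[4];
            float padding[20];
        };

        // Call with the VBO (and element buffer) already bound, inside the VAO.
        void DescribeInterleavedVertex()
        {
            const GLsizei stride = sizeof(CustomVertex);

            glEnableVertexAttribArray(0);   // position: 3 floats
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride,
                                  (const void*)offsetof(CustomVertex, position));

            glEnableVertexAttribArray(1);   // texcoords: 2 floats, not 3
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                                  (const void*)offsetof(CustomVertex, texCoords));

            glEnableVertexAttribArray(2);   // normal: 3 floats
            glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride,
                                  (const void*)offsetof(CustomVertex, normal));

            glEnableVertexAttribArray(3);   // colour: 4 floats
            glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, stride,
                                  (const void*)offsetof(CustomVertex, colour));
        }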

    Read the article

  • How would I be able to get a game over screen using the pause function?

    - by Joachim Velzel
    I am having problems with my snake game: when the snake collides with itself it draws a "game over" image in the background, but only while it's colliding with itself. I want it to behave like the pause function, so that as soon as the snake collides with itself it draws an image on the screen and stops the gameplay. And then how would you be able to restart or quit the game? I just have this for the detection at the moment: if (snakeHeadRectangle.Intersects(snakeBodyRectangleArray[bodyNumber])) { spriteBatch.Draw(textureGameOver, gameOverPosition, Color.White); } Thanks
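
    One common pattern is to keep an explicit game state and branch Update and Draw on it, exactly like a pause flag: the collision only flips the state, and the "game over" image is drawn for as long as the state says so, not just on the colliding frame. A minimal sketch in C++ (the XNA version is the same idea, with the state checked in Update() and Draw()); all names are illustrative:

        enum class GameState { Playing, Paused, GameOver };

        struct SnakeGame
        {
            GameState state = GameState::Playing;

            void Update(bool headHitsBody, bool restartPressed, bool quitPressed, bool& quitRequested)
            {
                switch (state)
                {
                case GameState::Playing:
                    if (headHitsBody) state = GameState::GameOver;  // flip once, stop simulating
                    else { /* move the snake, eat food, ... */ }
                    break;
                case GameState::GameOver:
                    if (restartPressed) { Reset(); state = GameState::Playing; }
                    if (quitPressed)    quitRequested = true;       // caller exits the game loop
                    break;
                case GameState::Paused:
                    break;                                          // nothing moves while paused
                }
            }

            void Draw()
            {
                /* drawBoard(); */                                  // always draw the board
                if (state == GameState::GameOver) { /* drawGameOverImage(); */ }
            }

            void Reset() { /* put the snake back at its starting position */ }
        };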

    Read the article

  • Strange 3D game engine camera with X,Y,Zoom instead of X,Y,Z

    - by Jenko
    I'm using a 3D game engine that uses a 4x4 matrix to modify the camera projection, in this format:

        r r r x
        r r r y
        r r r z
        - - - zoom

    Strangely though, the camera does not respond to the Z translation parameter, and so you're forced to use X, Y, Zoom to move the camera around. Technically this is plausible for isometric-style games such as Age Of Empires III. But this is a 3D engine, so why would they have designed the camera to ignore Z and respond only to zoom? Am I missing something here? I've tried every method of setting the camera and it really seems to ignore Z. So currently I have to resort to moving the main object in the scene graph instead of moving the camera in relation to the objects. My question: Do you have any idea why the engine would use such a scheme? Is it common? Why? Or does it seem like I'm missing something and the SetProjection(Matrix) function is broken and somehow ignores the Z translation in the matrix? (unlikely, but possible) Anyhow, what are the workarounds? Is moving objects around the only way? Edit: I'm sorry I cannot reveal much about the engine because we're in a binding contract. It's a locally developed engine (Australia) written in managed C# used for data visualizations. Edit: The default mode of the engine is orthographic, although I've switched it into perspective mode. It's probably more effective to use X, Y, Zoom in orthographic mode, but I need to use perspective mode to render everyday objects as well.

    Read the article

  • Using orientation to calculate position on Windows Phone 7

    - by Lavinski
    I'm using the motion API and I'm trying to figure out a control scheme for the game I'm currently developing. What I'm trying to achieve is for an orientation of the device to correlate directly to a position, such that tilting the phone forward and to the left represents the top-left position, and back to the right would be the bottom-right position. Photos to make it clearer (the red dot would be the calculated position): "Forward and Left", "Back and Right". Now for the tricky bit: I also have to make sure that the values take into account left landscape and right landscape device orientations (portrait is the default, so no calculations would be needed for it). Has anyone done anything like this? Notes: I've tried using the yaw, pitch, roll and Quaternion readings. Sample: // Get device facing vector public static Vector3 GetState() { lock (lockable) { var down = Vector3.Forward; var direction = Vector3.Transform(down, state); switch (Orientation) { case Orientation.LandscapeLeft: return Vector3.TransformNormal(direction, Matrix.CreateRotationZ(-rightAngle)); case Orientation.LandscapeRight: return Vector3.TransformNormal(direction, Matrix.CreateRotationZ(rightAngle)); } return direction; } }

    Read the article

  • Rendering Unity across multiple monitors

    - by N0xus
    At the moment I am trying to get Unity to run across 2 monitors. I've done some research and know that this is, strictly, possible. There is a workaround where you basically have to fluff your window size in order to get Unity to render across both monitors. What I've done is create a new custom screen resolution that takes in the width of both of my monitors, as seen in the following image; it's the 3840 x 1080 one. However, when I go to run my Unity game exe, that size isn't available. All I get is the following: my custom size should be at the very bottom, but isn't. Is there something I haven't done, or missed, that will get Unity to take in my custom screen size when it comes to running my game through its exe? Oddly enough, inside the Unity editor, my custom screen size is picked up and I can have it set to that in my game window. Is there something that I have forgotten to do when I build and run the game from the file menu? Has anyone beaten this issue before?

    Read the article

  • GLSL Shader Texture Performance

    - by Austin
    I currently have a project that renders OpenGL video using a vertex and fragment shader. The shaders work fine as-is, but in trying to add in texturing, I am running into performance issues and can't figure out why. Before adding texturing, my program ran just fine and loaded my CPU between 0%-4%. When adding texturing (specifically textures AND color -- noted by comment below), my CPU is 100% loaded. The only code I have added is the relevant texturing code to the shader, and the "glBindTexture()" calls to the rendering code. Here are my shaders and relevant rending code. Vertex Shader: #version 150 uniform mat4 mvMatrix; uniform mat4 mvpMatrix; uniform mat3 normalMatrix; uniform vec4 lightPosition; uniform float diffuseValue; layout(location = 0) in vec3 vertex; layout(location = 1) in vec3 color; layout(location = 2) in vec3 normal; layout(location = 3) in vec2 texCoord; smooth out VertData { vec3 color; vec3 normal; vec3 toLight; float diffuseValue; vec2 texCoord; } VertOut; void main(void) { gl_Position = mvpMatrix * vec4(vertex, 1.0); VertOut.normal = normalize(normalMatrix * normal); VertOut.toLight = normalize(vec3(mvMatrix * lightPosition - gl_Position)); VertOut.color = color; VertOut.diffuseValue = diffuseValue; VertOut.texCoord = texCoord; } Fragment Shader: #version 150 smooth in VertData { vec3 color; vec3 normal; vec3 toLight; float diffuseValue; vec2 texCoord; } VertIn; uniform sampler2D tex; layout(location = 0) out vec3 colorOut; void main(void) { float diffuseComp = max( dot(normalize(VertIn.normal), normalize(VertIn.toLight)) ), 0.0); vec4 color = texture2D(tex, VertIn.texCoord); colorOut = color.rgb * diffuseComp * VertIn.diffuseValue + color.rgb * (1 - VertIn.diffuseValue); // FOLLOWING LINE CAUSES PERFORMANCE ISSUES colorOut *= VertIn.color; } Relevant Rendering Code: // 3 textures have been successfully pre-loaded, and can be used // texture[0] is a 1x1 white texture to effectively turn off texturing glUseProgram(program); // Draw squares glBindTexture(GL_TEXTURE_2D, texture[1]); // Set attributes, uniforms, etc glDrawArrays(GL_QUADS, 0, 6*4); // Draw triangles glBindTexture(GL_TEXTURE_2D, texture[0]); // Set attributes, uniforms, etc glDrawArrays(GL_TRIANGLES, 0, 3*4); // Draw reference planes glBindTexture(GL_TEXTURE_2D, texture[0]); // Set attributes, uniforms, etc glDrawArrays(GL_LINES, 0, 4*81*2); // Draw terrain glBindTexture(GL_TEXTURE_2D, texture[2]); // Set attributes, uniforms, etc glDrawArrays(GL_TRIANGLES, 0, 501*501*6); // Release glBindTexture(GL_TEXTURE_2D, 0); glUseProgram(0); Any help is greatly appreciated!

    Read the article

  • Blur gets displaced compared to original image

    - by user1294203
    I have implemented SSAO and I'm using a blur step to smooth it out. The problem is that the blurred texture is slightly displaced compared to the original. I'm blurring using a 4x4 kernel since that was my noise kernel in SSAO. The following is the blurring shader: float result = 0.0; for(int i = 0; i < 4; i++){ for(int j = 0; j < 4; j++){ vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j); result += texture(aoSampler, TexCoord + offset).r; } } out_AO = vec4(vec3(0.0), result * 0.0625); Where TEXEL_SIZE is one over my window resolution. I was thinking that this was an error based on how OpenGL defines the texel center, so I tried displacing the texture coordinate I was using by 0.5 * TEXEL_SIZE, but there was still a slight displacement. The texture input to my blur shader has wrap parameters: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); When I tell the blur shader to just output the value of the pixel, the result is not displaced, so it must have something to do with how neighboring pixels are sampled. Any thoughts?
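
    A possible explanation, offered as an assumption: the loop samples offsets 0..3 texels, so the 4x4 footprint is centred 1.5 texels down and to the right of the fragment rather than on it, which would show up as exactly this kind of uniform shift. A sketch of a centred tap layout, written as host-side C++ that precomputes the sixteen UV offsets (for example, to upload as a uniform array); the in-shader equivalent is the same arithmetic with (i - 1.5) and (j - 1.5):

        #include <array>

        struct Vec2f { float x, y; };

        // 4x4 box-blur taps centred on the fragment: -1.5, -0.5, +0.5, +1.5 texels
        // per axis instead of 0..3. texelSize is 1 / resolution.
        std::array<Vec2f, 16> CenteredBlurOffsets(Vec2f texelSize)
        {
            std::array<Vec2f, 16> offsets{};
            int n = 0;
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    offsets[n++] = { (i - 1.5f) * texelSize.x,
                                     (j - 1.5f) * texelSize.y };
            return offsets;
        }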

    Read the article

  • Virtual Economy Setup - Virtual currencies advice

    - by Sarah Simpson
    I'm trying to figure out how to build my virtual economy. It seems like some games have one currency and some of them have up to 3 and 4 different ones. The game is an action game which is currently single player but I'm planning on adding a tournament mode that allows users to compete against each other. The virtual goods that a user would be able to purchase would be either customization to the character or powerups and utilities that give the character more abilities in the game. The character is able to gain coins during game play. The advice I'm trying to get is whether or not it makes sense to set up more than one currency and more than two currencies? What are the pros and cons? Reference to some resources that indicate research would be great.

    Read the article

  • Game thread, render thread, animation/inverse kinematics, and synchronization

    - by user782220
    In a multithreaded setup with a game logic thread and a render thread, with some kind of skinned mesh animation with inverse kinematics etc., how does animation work? Does the game logic thread just update a number saying time T in the animation, and the render thread infers the rest? Who owns the skinned mesh animation, the game logic thread or the render thread? How is it stored in the scene graph, if it is stored there at all? When the game logic updates, does it do the computation of the skinned mesh animation and the computation of the inverse kinematics and then store the result directly in the scene graph, or is it stored indirectly and the render thread does the computation?

    Read the article

  • 3D Camera Problem

    - by Chris
    I allow the user to look around the scene by holding down the left mouse button and moving the mouse. The problem that I have is: I can be facing one direction, I move the mouse up and the view tilts up, I move down and the view tilts down. If I spin around 180 degrees, my left and right still work fine, but when I move the mouse up the view tilts down, and when I move the mouse down the view tilts up. This is the code I am using; can anyone see what the problem with the logic is? var viewDir = g_math.subVector(target, g_eye); var rotatedViewDir = []; rotatedViewDir[0] = (Math.cos(g_mouseXDelta * g_rotationDelta) * viewDir[0]) - (Math.sin(g_mouseXDelta * g_rotationDelta) * viewDir[2]); rotatedViewDir[1] = viewDir[1]; rotatedViewDir[2] = (Math.cos(g_mouseXDelta * g_rotationDelta) * viewDir[2]) + (Math.sin(g_mouseXDelta * g_rotationDelta) * viewDir[0]); viewDir = rotatedViewDir; rotatedViewDir[0] = viewDir[0]; rotatedViewDir[1] = (Math.cos(g_mouseYDelta * g_rotationDelta * -1) * viewDir[1]) - (Math.sin(g_mouseYDelta * g_rotationDelta * -1) * viewDir[2]); rotatedViewDir[2] = (Math.cos(g_mouseYDelta * g_rotationDelta * -1) * viewDir[2]) + (Math.sin(g_mouseYDelta * g_rotationDelta * -1) * viewDir[1]); g_lookingDir = rotatedViewDir; var newtarget = g_math.addVector(rotatedViewDir, g_eye);
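
    This is the classic symptom of pitching around a fixed world axis instead of the camera's current right axis: after turning 180 degrees, "up" around that fixed axis is the camera's "down". One fix, sketched in C++ as an assumption about the surrounding code, is to yaw around world up and then pitch around the view direction's own right vector:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 Cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
        Vec3 Normalize(Vec3 v) { float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); return { v.x/l, v.y/l, v.z/l }; }

        // Rodrigues' formula: rotate v around the unit axis k by 'angle' radians.
        Vec3 RotateAroundAxis(Vec3 v, Vec3 k, float angle)
        {
            float c = std::cos(angle), s = std::sin(angle);
            float d = k.x*v.x + k.y*v.y + k.z*v.z;
            Vec3 kxv = Cross(k, v);
            return { v.x*c + kxv.x*s + k.x*d*(1 - c),
                     v.y*c + kxv.y*s + k.y*d*(1 - c),
                     v.z*c + kxv.z*s + k.z*d*(1 - c) };
        }

        // Yaw around world up, then pitch around the *current* right axis, so moving
        // the mouse up always tilts the view up regardless of facing direction.
        Vec3 RotateViewDir(Vec3 viewDir, float yawDelta, float pitchDelta)
        {
            const Vec3 worldUp = { 0.0f, 1.0f, 0.0f };
            viewDir = RotateAroundAxis(viewDir, worldUp, yawDelta);
            Vec3 right = Normalize(Cross(viewDir, worldUp));
            return RotateAroundAxis(viewDir, right, pitchDelta);
        }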

    Read the article

  • Constructive criticism on my linear sampling Gaussian blur

    - by Aequitas
    I've been attempting to implement a gaussian blur utilising linear sampling, I've come across a few articles presented on the web and a question posed here which dealt with the topic. I've now attempted to implement my own Gaussian function and pixel shader drawing reference from these articles. This is how I'm currently calculating my weights and offsets: int support = int(sigma * 3.0) weights.push_back(exp(-(0*0)/(2*sigma*sigma))/(sqrt(2*pi)*sigma)); total += weights.back(); offsets.push_back(0); for (int i = 1; i <= support; i++) { float w1 = exp(-(i*i)/(2*sigma*sigma))/(sqrt(2*pi)*sigma); float w2 = exp(-((i+1)*(i+1))/(2*sigma*sigma))/(sqrt(2*pi)*sigma); weights.push_back(w1 + w2); total += 2.0f * weights[i]; offsets.push_back(w1 / weights[i]); } for (int i = 0; i < support; i++) { weights[i] /= total; } Here is an example of my vertical pixel shader: vec3 acc = texture2D(tex_object, v_tex_coord.st).rgb*weights[0]; vec2 pixel_size = vec2(1.0 / tex_size.x, 1.0 / tex_size.y); for (int i = 1; i < NUM_SAMPLES; i++) { acc += texture2D(tex_object, (v_tex_coord.st+(vec2(0.0, offsets[i])*pixel_size))).rgb*weights[i]; acc += texture2D(tex_object, (v_tex_coord.st-(vec2(0.0, offsets[i])*pixel_size))).rgb*weights[i]; } gl_FragColor = vec4(acc, 1.0); Am I taking the correct route with this? Any criticism or potential tips to improving my method would be much appreciated.

    Read the article

  • Finding Z given X & Y coordinates on terrain?

    - by mrky
    I need to know the most efficient way of finding Z given X & Y coordinates on terrain. My terrain is set up as a grid, each grid block consisting of two triangles, which may be flipped in any direction. I want to move game objects smoothly along the floor of the terrain without "stepping." I'm currently using the following method with unexpected results: double mapClass::getZ(double x, double y) { int vertexIndex = ((floor(y))*width*2)+((floor(x))*2); vec3ray ray = {glm::vec3(x, y, 2), glm::vec3(x, y, 0)}; vec3triangle tri1 = { glmFrom(vertices[vertexIndex].v1), glmFrom(vertices[vertexIndex].v2), glmFrom(vertices[vertexIndex].v3) }; vec3triangle tri2 = { glmFrom(vertices[vertexIndex+1].v1), glmFrom(vertices[vertexIndex+1].v2), glmFrom(vertices[vertexIndex+1].v3) }; glm::vec3 intersect; if (!intersectRayTriangle(tri1, ray, intersect)) { intersectRayTriangle(tri2, ray, intersect); } return intersect.z; } intersectRayTriangle() and glmFrom() are as follows: bool intersectRayTriangle(vec3triangle tri, vec3ray ray, glm::vec3 &worldIntersect) { glm::vec3 barycentricIntersect; if (glm::intersectLineTriangle(ray.origin, ray.direction, tri.p0, tri.p1, tri.p2, barycentricIntersect)) { // Convert barycentric to world coordinates double u, v, w; u = barycentricIntersect.x; v = barycentricIntersect.y; w = 1 - (u+v); worldIntersect.x = (u * tri.p0.x + v * tri.p1.x + w * tri.p2.x); worldIntersect.y = (u * tri.p0.y + v * tri.p1.y + w * tri.p2.y); worldIntersect.z = (u * tri.p0.z + v * tri.p1.z + w * tri.p2.z); return true; } else { return false; } } glm::vec3 glmFrom(s_point3f point) { return glm::vec3(point.x, point.y, point.z); } My convenience structures are defined as: struct s_point3f { GLfloat x, y, z; }; struct s_triangle3f { s_point3f v1, v2, v3; }; struct vec3ray { glm::vec3 origin, direction; }; struct vec3triangle { glm::vec3 p0, p1, p2; }; vertices is defined as: std::vector<s_triangle3f> vertices; Basically, I'm trying to get the intersection of a ray (positioned at the specified x and y coordinates, pointing downwards toward the terrain) and one of the two triangles on the grid. getZ() rarely returns anything but 0. Other times, the numbers it generates seem to be completely off. Am I taking the wrong approach? Can anyone see a problem with my code? Any help or critique is appreciated!
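
    Because the terrain is a regular grid, the height under (x, y) can also be read directly from the cell's triangle with plane (barycentric) interpolation, with no ray cast or triangle-intersection test at all. A minimal sketch, assuming one height value per grid vertex, unit cell spacing, and the cell diagonal running from the lower-left to the upper-right corner (a flipped diagonal only changes which triangle is picked); bounds checks are omitted:

        #include <cmath>
        #include <vector>

        // heights[y * width + x] is the terrain height at integer grid vertex (x, y).
        float HeightAt(const std::vector<float>& heights, int width, float x, float y)
        {
            int cx = static_cast<int>(std::floor(x));
            int cy = static_cast<int>(std::floor(y));
            float fx = x - cx;                       // position inside the cell, 0..1
            float fy = y - cy;

            float h00 = heights[cy * width + cx];
            float h10 = heights[cy * width + cx + 1];
            float h01 = heights[(cy + 1) * width + cx];
            float h11 = heights[(cy + 1) * width + cx + 1];

            if (fx + fy <= 1.0f)                     // lower-left triangle (h00, h10, h01)
                return h00 + (h10 - h00) * fx + (h01 - h00) * fy;
            else                                     // upper-right triangle (h11, h10, h01)
                return h11 + (h01 - h11) * (1.0f - fx) + (h10 - h11) * (1.0f - fy);
        }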

    Read the article

  • Capitalizing on JavaScript's prototypal inheritance

    - by keithjgrant
    JavaScript has a class-free object system in which objects inherit properties directly from other objects. This is really powerful, but it is unfamiliar to classically trained programmers. If you attempt to apply classical design patterns directly to JavaScript, you will be frustrated. But if you learn to work with JavaScript's prototypal nature, your efforts will be rewarded. ... It is Lisp in C's clothing. -Douglas Crockford What does this mean for a game developer working with canvas and HTML5? I've been looking over this question on useful design patterns in gaming, but prototypal inheritance is very different than classical inheritance, and there are surely differences in the best way to apply some of these common patterns. For example, classical inheritance allows us to create a moveableEntity class, and extend that with any classes that move in our game world (player, monster, bullet, etc.). Sure, you can strongarm JavaScript to work that way, but in doing so, you are kind of fighting against its nature. Is there a better approach to this sort of problem when we have prototypal inheritance at our fingertips?

    Read the article

  • Balancing game difficulty against player progression

    - by Raven Dreamer
    The current climate of games seems to cater to an obvious progression of player power, whether that means getting a bigger, more explosive gun in Halo, leveling up in an RPG, or unlocking new options in Command and Conquer 4. Yet this concept is not exclusive to video or computer games -- even in Dungeons and Dragons players can strive to acquire a +2 sword to replace the +1 weapon they've been using. Yet as a systems designer, the concept of player progression is giving me headache after headache. Should I balance around the player's exact capabilities and give up on a simple linear progression? (I think ESIV:Oblivion is a good example of this) Is it better to throw the players into an "arms race" with their opponents, where if the players don't progress in an orderly manner, it is only a matter of time until gameplay is unbearably difficult? (4th Edition DnD strikes me as a good example of this) Perhaps it would make most sense to untether the core gameplay mechanics from progression altogether -- give them flashier, more interesting (but not more powerful!) ways to grow?

    Read the article

  • Meaning of offset in pygame Mask.overlap methods

    - by Alan
    I have a situation in which two rectangles collide, and I have to detect how much did they collide so so I can redraw the objects in a way that they are only touching each others edges. It's a situation in which a moving ball should hit a completely unmovable wall and instantly stop moving. Since the ball sometimes moves multiple pixels per screen refresh, it it possible that it enters the wall with more that half its surface when the collision is detected, in which case i want to shift it position back to the point where it only touches the edges of the wall. Here is the conceptual image it: I decided to implement this with masks, and thought that i could supply the masks of both objects (wall and ball) and get the surface (as a square) of their intersection. However, there is also the offset parameter which i don't understand. Here are the docs for the method: Mask.overlap Returns the point of intersection if the masks overlap with the given offset - or None if it does not overlap. Mask.overlap(othermask, offset) -> x,y The overlap tests uses the following offsets (which may be negative): +----+----------.. |A | yoffset | +-+----------.. +--|B |xoffset | | : :

    Read the article
