Search Results

Search found 16410 results on 657 pages for 'game component'.

Page 344/657 | < Previous Page | 340 341 342 343 344 345 346 347 348 349 350 351  | Next Page >

  • Level design - Are games development degrees worth it? [on hold]

    - by Tristan
    I want to go into level design or environment design, and I'm wondering whether degrees are worth it at all for this area, provided you have a good portfolio. I'm currently on a "Computing and Games Development" course and I feel like dropping out because I am not enjoying it. It's mostly computing, which I'm not doing that great at, with very little games development. But I don't know what to do... I do already have some higher-level education with a "Level 3 extended diploma". Thanks.

    Read the article

  • Triangle picking is picking back faces

    - by Tangeleno
    I'm having a bit of trouble with 3D picking, at first I thought my ray was inaccurate but it turns out that the picking is happening on faces facing the camera and faces facing away from the camera which I'm currently culling. Here's my ray creation code, I'm pretty sure the problem isn't here but I've been wrong before. private uint Pick() { Ray cursorRay = CalculateCursorRay(); Vector3? point = Control.Mesh.RayCast(cursorRay); if (point != null) { Tile hitTile = Control.TileMesh.GetTileAtPoint(point); return hitTile == null ? uint.MaxValue : (uint)(hitTile.X + hitTile.Y * Control.Generator.TilesWide); } return uint.MaxValue; } private Ray CalculateCursorRay() { Vector3 nearPoint = Control.Camera.Unproject(new Vector3(Cursor.Position.X, Control.ClientRectangle.Height - Cursor.Position.Y, 0f)); Vector3 farPoint = Control.Camera.Unproject(new Vector3(Cursor.Position.X, Control.ClientRectangle.Height - Cursor.Position.Y, 1f)); Vector3 direction = farPoint - nearPoint; direction.Normalize(); return new Ray(nearPoint, direction); } public Vector3 Camera.Unproject(Vector3 source) { Vector4 result; result.X = (source.X - _control.ClientRectangle.X) * 2 / _control.ClientRectangle.Width - 1; result.Y = (source.Y - _control.ClientRectangle.Y) * 2 / _control.ClientRectangle.Height - 1; result.Z = source.Z - 1; if (_farPlane - 1 == 0) result.Z = 0; else result.Z = result.Z / (_farPlane - 1); result.W = 1f; result = Vector4.Transform(result, Matrix4.Invert(ProjectionMatrix)); result = Vector4.Transform(result, Matrix4.Invert(ViewMatrix)); result = Vector4.Transform(result, Matrix4.Invert(_world)); result = Vector4.Divide(result, result.W); return new Vector3(result.X, result.Y, result.Z); } And my triangle intersection code. Ripped mainly from the XNA picking sample. public float? Intersects(Ray ray) { float? closestHit = Bounds.Intersects(ray); if (closestHit != null && Vertices.Length == 3) { Vector3 e1, e2; Vector3.Subtract(ref Vertices[1].Position, ref Vertices[0].Position, out e1); Vector3.Subtract(ref Vertices[2].Position, ref Vertices[0].Position, out e2); Vector3 directionCrossEdge2; Vector3.Cross(ref ray.Direction, ref e2, out directionCrossEdge2); float determinant; Vector3.Dot(ref e1, ref directionCrossEdge2, out determinant); if (determinant > -float.Epsilon && determinant < float.Epsilon) return null; float inverseDeterminant = 1.0f/determinant; Vector3 distanceVector; Vector3.Subtract(ref ray.Position, ref Vertices[0].Position, out distanceVector); float triangleU; Vector3.Dot(ref distanceVector, ref directionCrossEdge2, out triangleU); triangleU *= inverseDeterminant; if (triangleU < 0 || triangleU > 1) return null; Vector3 distanceCrossEdge1; Vector3.Cross(ref distanceVector, ref e1, out distanceCrossEdge1); float triangleV; Vector3.Dot(ref ray.Direction, ref distanceCrossEdge1, out triangleV); triangleV *= inverseDeterminant; if (triangleV < 0 || triangleU + triangleV > 1) return null; float rayDistance; Vector3.Dot(ref e2, ref distanceCrossEdge1, out rayDistance); rayDistance *= inverseDeterminant; if (rayDistance < 0) return null; return rayDistance; } return closestHit; } I'll admit I don't fully understand all of the math behind the intersection and that is something I'm working on, but my understanding was that if rayDistance was less than 0 the face was facing away from the camera, and shouldn't be counted as a hit. 
    So my question is: is there an issue with my intersection or ray creation code, or is there another check I need to perform to tell whether the face is facing away from the camera? If so, any hints on what that check might contain would be appreciated.
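
    One thing worth checking, offered as a suggestion rather than something from the post: in a Möller-Trumbore style test like the one above, rejecting only near-zero determinants makes the test two-sided, so back faces register hits. Keeping only positive determinants turns it into a one-sided, front-face-only test (assuming counter-clockwise winding for front faces). A minimal sketch of that variant in XNA-style C#; names are illustrative:

      using Microsoft.Xna.Framework;

      static class OneSidedPick
      {
          // Returns the ray distance to triangle (v0, v1, v2), or null when the
          // ray misses it or approaches it from behind (back face, CCW winding).
          public static float? IntersectFrontFace(Ray ray, Vector3 v0, Vector3 v1, Vector3 v2)
          {
              Vector3 e1 = v1 - v0;
              Vector3 e2 = v2 - v0;
              Vector3 dirCrossE2 = Vector3.Cross(ray.Direction, e2);
              float determinant = Vector3.Dot(e1, dirCrossE2);

              // The sign check is the whole trick: a determinant <= epsilon means the
              // triangle is degenerate, edge-on, or facing away from the ray.
              if (determinant < float.Epsilon)
                  return null;

              float invDet = 1.0f / determinant;
              Vector3 dist = ray.Position - v0;
              float u = Vector3.Dot(dist, dirCrossE2) * invDet;
              if (u < 0f || u > 1f) return null;

              Vector3 distCrossE1 = Vector3.Cross(dist, e1);
              float v = Vector3.Dot(ray.Direction, distCrossE1) * invDet;
              if (v < 0f || u + v > 1f) return null;

              float t = Vector3.Dot(e2, distCrossE1) * invDet;
              return t < 0f ? (float?)null : t;
          }
      }

    The rayDistance < 0 case, by contrast, only means the triangle lies behind the ray origin, not that it faces away from the camera.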

    Read the article

  • Java : 2D Collision Detection

    - by neko
    I've been working on 2D rectangle collision for weeks and still cannot get this problem fixed. The problem I'm having is how to adjust the player against obstacles when it collides. I'm referencing this link. The player sometimes does not get adjusted to obstacles, and it sometimes gets stuck in the obstacle guy after colliding. Here, the player and the obstacle both inherit from the superclass Sprite. I can detect a collision between the two rectangles and get the collision point with:

      public Point getSpriteCollision(Sprite sprite, double newX, double newY) {
          // set each rectangle
          Rectangle spriteRectA = new Rectangle( (int)getPosX(), (int)getPosY(), getWidth(), getHeight());
          Rectangle spriteRectB = new Rectangle( (int)sprite.getPosX(), (int)sprite.getPosY(), sprite.getWidth(), sprite.getHeight());
          // if a sprite is colliding with the other sprite
          if (spriteRectA.intersects(spriteRectB)){
              System.out.println("Colliding");
              return new Point((int)getPosX(), (int)getPosY());
          }
          return null;
      }

    and to adjust sprites after a collision:

      // Update the sprite's conditions
      public void update() {
          // only the player is moving for simplicity
          // collision detection on x-axis (just x-axis collision detection at this moment)
          double newX = x + vx; // calculate the x-coordinate of sprite move
          Point sprite = getSpriteCollision(map.getSprite().get(1), newX, y); // collision coordinates (x,y)
          if (sprite == null) { // if the player is not colliding with the obstacle guy
              x = newX; // move
          } else { // if collided
              if (vx > 0) { // if the player was moving from left to right
                  x = (sprite.x - vx); // this works but a bit strange
              } else if (vx < 0) {
                  x = (sprite.x + vx); // there's something wrong with this too
              }
          }
          vx=0;
          y+=vy;
          vy=0;
      }

    I think there is something wrong in update() but I cannot fix it. Right now I only have a collision between the player and one obstacle guy, but in the future I'm planning to have more of them and to make them all collide with each other. What would be a good way to do that? Thanks in advance.
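
    Not taken from the question, but one common way to avoid the player getting stuck is to resolve each axis separately and snap the player flush against the obstacle's edge instead of backing it off by the velocity. A rough sketch of the idea, written in C# for brevity (the same logic maps directly onto the Java Sprite code above; all field names here are hypothetical stand-ins):

      // Hedged sketch: per-axis AABB resolution with edge snapping.
      class Sprite
      {
          public double X, Y, Vx, Vy;
          public double Width, Height;
      }

      static class Collision
      {
          public static void MoveX(Sprite player, Sprite obstacle)
          {
              double newX = player.X + player.Vx;

              bool overlaps =
                  newX < obstacle.X + obstacle.Width &&
                  newX + player.Width > obstacle.X &&
                  player.Y < obstacle.Y + obstacle.Height &&
                  player.Y + player.Height > obstacle.Y;

              if (!overlaps)
                  player.X = newX;                         // no hit: just move
              else if (player.Vx > 0)
                  player.X = obstacle.X - player.Width;    // snap to the obstacle's left edge
              else if (player.Vx < 0)
                  player.X = obstacle.X + obstacle.Width;  // snap to the obstacle's right edge

              player.Vx = 0;
              // Resolve Y the same way in a second pass, using Vy.
          }
      }

    For many sprites, running the same per-axis pass against each nearby obstacle (ideally filtered by a grid or other broad phase) keeps the snapping behaviour consistent as the obstacle count grows.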

    Read the article

  • Optimized algorithm for line-sphere intersection in GLSL

    - by fernacolo
    Well, hello then! I need to find intersection between line and sphere in GLSL. Right now my solution is based on Paul Bourke's page and was ported to GLSL this way: // The line passes through p1 and p2: vec3 p1 = (...); vec3 p2 = (...); // Sphere center is p3, radius is r: vec3 p3 = (...); float r = ...; float x1 = p1.x; float y1 = p1.y; float z1 = p1.z; float x2 = p2.x; float y2 = p2.y; float z2 = p2.z; float x3 = p3.x; float y3 = p3.y; float z3 = p3.z; float dx = x2 - x1; float dy = y2 - y1; float dz = z2 - z1; float a = dx*dx + dy*dy + dz*dz; float b = 2.0 * (dx * (x1 - x3) + dy * (y1 - y3) + dz * (z1 - z3)); float c = x3*x3 + y3*y3 + z3*z3 + x1*x1 + y1*y1 + z1*z1 - 2.0 * (x3*x1 + y3*y1 + z3*z1) - r*r; float test = b*b - 4.0*a*c; if (test >= 0.0) { // Hit (according to Treebeard, "a fine hit"). float u = (-b - sqrt(test)) / (2.0 * a); vec3 hitp = p1 + u * (p2 - p1); // Now use hitp. } It works perfectly! But it seems slow... I'm new at GLSL. You can answer this questions in two ways: Tell me there is no solution, showing some proof or strong evidence. Tell me about GLSL features (vector APIs, primitive operations) that makes the above algorithm faster, showing some example. Thanks a lot!
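
    A hedged illustration of the usual rewrite, not part of the original post: the quadratic's coefficients can be expressed with dot products instead of unrolled components, which is exactly the form that maps onto GLSL's built-in dot() and vector types. Sketched here in C# with System.Numerics; in GLSL the same three dot() calls replace the dx/dy/dz arithmetic:

      using System;
      using System.Numerics;

      static class LineSphere
      {
          // Returns the smaller ray parameter u of the hit, or null on a miss.
          // Algebraically equivalent to the expanded form in the question.
          public static float? Intersect(Vector3 p1, Vector3 p2, Vector3 center, float r)
          {
              Vector3 d = p2 - p1;         // line direction (not normalized)
              Vector3 m = p1 - center;     // from sphere center to line start

              float a = Vector3.Dot(d, d);
              float b = 2f * Vector3.Dot(m, d);
              float c = Vector3.Dot(m, m) - r * r;

              float disc = b * b - 4f * a * c;
              if (disc < 0f)
                  return null;             // no real roots: the line misses the sphere

              return (-b - MathF.Sqrt(disc)) / (2f * a);
          }
      }

    Whether this measurably speeds up the shader depends on the driver's compiler, but it is at least the idiomatic vectorized form and far shorter than the component-by-component version.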

    Read the article

  • Directional and orientation problem

    - by Ahmed Saleh
    I have drawn 5 tentacles, which are shown in red. I drew those tentacles on a 2D circle and positioned them on 5 vertices of that circle. By the way, the circle is never drawn; I only use it to simplify the problem. Now I want to attach that circle of tentacles underneath the jellyfish. There is a problem with the current code but I don't know what it is. You can see that the circle is parallel to the base of the jellyfish. I want it shifted so that it sits inside the jellyfish, but I don't know how. I tried multiplying the direction vector to extend it, but that didn't work.

      // One tentacle is constructed from nodes
      // Get the direction of the first tentacle's node 0 to node 39 of that tentacle;
      Vec3f dir = m_tentacle[0]->geNodesPos()[0] - m_tentacle[0]->geNodesPos()[39];
      // Draw the circle with tentacles on it
      Vec3f pos = m_SpherePos;
      drawCircle(pos,dir,30,m_tentacle.size());
      for (int i=0; i<m_tentacle.size(); i++)
      {
          m_tentacle[i]->Draw();
      }
      // Draw the jelly fish, and orient it on the 2D Circle
      gl::pushMatrices();
      Quatf q;
      // assign quaternion to rotate the jelly fish around the tentacles
      q.set(Vec3f(0,-1,0),Vec3f(dir.x,dir.y,dir.z));
      // translate it to the position of the whole creature every frame
      gl::translate(m_SpherePos.x,m_SpherePos.y,m_SpherePos.z);
      gl::rotate(q);
      // draw the jelly fish at center 0,0,0
      drawHemiSphere(Vec3f(0,0,0),m_iRadius,90);
      gl::popMatrices();
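
    A hedged guess at the missing piece, not something from the post: once the quaternion aligns the hemisphere with the tentacle direction, the draw position itself still has to be offset along that same direction by some fraction of the radius, otherwise the bell sits exactly on the plane of the tentacle circle. A tiny sketch with hypothetical names (XNA-style vectors; Cinder's Vec3f works the same way):

      using Microsoft.Xna.Framework;

      static class JellyfishPlacement
      {
          // dir: tentacle direction (node 0 - node 39), spherePos: creature position,
          // radius: hemisphere radius. 'inset' is purely a tuning value controlling how
          // far the tentacle ring is pulled up into the bell; its sign and size are
          // assumptions, not taken from the original code.
          public static Vector3 BellOrigin(Vector3 dir, Vector3 spherePos, float radius, float inset = 0.5f)
          {
              Vector3 up = Vector3.Normalize(dir);
              // Shift the draw origin along the oriented axis instead of drawing the
              // hemisphere exactly on the tentacle circle's plane.
              return spherePos + up * (radius * inset);
          }
      }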

    Read the article

  • What's a good setup/toolchain for a project?

    - by acidzombie24
    I was thinking: what is needed for a good setup, and what are good (free) tools to use? Some of what I came up with:
    - Bug tracking
    - Some good (distributed :P) source control (which means no SVN, fellas)
    - Automated nightly builds or continuous integration (or anything that automates builds and possibly sends emails when there are build errors)
    - A wiki to document decisions, the road map, or milestones
    - Something to back up assets (art, sound, etc.)
    What else? And do you have suggestions for any of the above? I'm pretty much clueless about all of these except for source control.

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and I want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

      for each slice
          for each texel in depth texture
              process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least up to the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

      for each texel in depth texture
          figure out what slice it should end up in
          process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.

    Read the article

  • Movement of body after applying weld joint

    - by ved
    I have two rectangular bodies and I've successfully applied a WeldJoint to them. I want to move the joined body by applying a linear impulse. After the weld joint, these two bodies become a single body, right? How do I apply a force/impulse to the joined body? I am using Box2D with LibGDX. I've tried this: polygon1.applyLinearImpulse(new Vector2(-5, 0), polygon1.getWorldCenter(), true); I thought that if I move polygon1, then polygon2 would also move because of the weld joint, but it is not working properly. Why don't they move together after being welded?

    Read the article

  • XNA hlsl tex2D() only reads 3 channels from normal maps and specular maps

    - by cubrman
    Our engine uses deferred rendering and, in the main draw phase, gathers plenty of data from the objects it draws. In order to save on tex2D calls, we packed our objects' specular maps with all sorts of data, so three out of four channels are already taken. To make it clear: I am talking about the assets that come with the models and are stored in their material's Specular Level channel, not about the RenderTarget. So now I need one more piece of information stored in the alpha channel, but I cannot make the shader read it properly! No matter what I write into alpha, it ends up being 1 (255)! I tried: saving the textures in PNG/TGA formats; turning off pre-computed alpha in the model's properties. Out of every texture available to me (we use a Diffuse Map, Normal Map and Specular Map) I was only able to read alpha successfully from the Diffuse Map! Here is how I add specular and normal maps to my model's material in the content processor: if (geometry.Material.Textures.ContainsKey(normalMapKey)) { ExternalReference<TextureContent> texRef = geometry.Material.Textures[normalMapKey]; geometry.Material.Textures.Remove("NormalMap"); geometry.Material.Textures.Add("NormalMap", texRef); } ... foreach (KeyValuePair<String, ExternalReference<TextureContent>> texture in material.Textures) { if ((texture.Key == "Texture") || (texture.Key == "NormalMap") || (texture.Key == "SpecularMap")) mat.Textures.Add(texture.Key, texture.Value); } In the shader I obviously use: float4 data = tex2D(specularMapSampler, TexCoords); so data.a is always 1 in my case. Could you suggest a reason?
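
    One possibility worth ruling out, offered as an assumption rather than anything stated in the question: if the content pipeline compresses the specular map to DXT1, the alpha channel is reduced to 1 bit and tends to read back as 1.0 regardless of what was authored. A hedged sketch of a processor that forces uncompressed texture output and skips alpha premultiplication (XNA 4.0 content pipeline extension; the class name is illustrative):

      using Microsoft.Xna.Framework.Content.Pipeline;
      using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
      using Microsoft.Xna.Framework.Content.Pipeline.Processors;

      // Hedged sketch: keep the model's textures in raw Color format so a
      // hand-packed alpha channel survives the build (DXT1 keeps only 1-bit alpha).
      [ContentProcessor(DisplayName = "Model - Uncompressed Textures")]
      public class UncompressedTextureModelProcessor : ModelProcessor
      {
          public override ModelContent Process(NodeContent input, ContentProcessorContext context)
          {
              TextureFormat = TextureProcessorOutputFormat.Color; // no block compression
              PremultiplyTextureAlpha = false;                    // read alpha back unmodified
              return base.Process(input, context);
          }
      }

    If the alpha starts coming through with this processor selected, the compression setting was the culprit; DXT5 would also preserve smooth alpha while staying compressed.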

    Read the article

  • How to capture the screen in DirectX 9 to a raw bitmap in memory without using D3DXSaveSurfaceToFile

    - by cloudraven
    I know that in OpenGL I can do something like this: glReadBuffer( GL_FRONT ); glReadPixels( 0, 0, _width, _height, GL_RGB, GL_UNSIGNED_BYTE, _buffer ); and it's pretty fast; I get the raw bitmap in _buffer. Now I'm trying to do the same in DirectX. Assuming that I have a D3DDevice object, I can do something like this: if (SUCCEEDED(D3DDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackbuffer))) { HRESULT hr = D3DXSaveSurfaceToFileA(filename, D3DXIFF_BMP, pBackbuffer, NULL, NULL); } But D3DXSaveSurfaceToFile is pretty slow, and I don't need to write the capture to disk anyway, so I was wondering if there is a faster way to do this.

    Read the article

  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question. If I were to use multiple images in a single texture for a 3D cube, how would I go about re-using each vertex (having 8 total vs 24)? With a single buffer of 8 vertices, I don't see how I'd properly reuse the UV values. Any help on that? I know it's not terribly clear, but I figured it was a simple question. The 2D method is pretty easy: the next coordinates would be the same as the first (0,0 and 0,1 respectively). However, the 3D version above has me quite befuddled.
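
    For what it's worth, the standard answer (not something stated in the post) is that a corner position shared by three faces needs a different UV on each face, so a single-buffer atlas cube really does need 24 vertices even though only 8 distinct positions exist; only the positions can be reused conceptually, not the vertex records. A hedged XNA-style sketch of building one face, to show where the duplication comes from; the helper name and layout are illustrative:

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      static class AtlasCube
      {
          // Emits the four vertices of one cube face, each with its own UV rectangle
          // inside the atlas. The same corner position reappears on adjacent faces
          // with *different* UVs, which is why 8 shared vertices cannot work.
          public static void AppendFace(
              VertexPositionTexture[] vertices, int offset,
              Vector3 bottomLeft, Vector3 right, Vector3 up,
              Vector2 uvMin, Vector2 uvMax)
          {
              vertices[offset + 0] = new VertexPositionTexture(bottomLeft,              new Vector2(uvMin.X, uvMax.Y));
              vertices[offset + 1] = new VertexPositionTexture(bottomLeft + up,         new Vector2(uvMin.X, uvMin.Y));
              vertices[offset + 2] = new VertexPositionTexture(bottomLeft + right + up, new Vector2(uvMax.X, uvMin.Y));
              vertices[offset + 3] = new VertexPositionTexture(bottomLeft + right,      new Vector2(uvMax.X, uvMax.Y));
              // Indices for this face: offset+0, offset+1, offset+2, offset+0, offset+2, offset+3.
          }
      }

    Called six times, once per face with that face's uvMin/uvMax cell from the atlas, this fills the 24-entry vertex array, and the index buffer ends up with 36 entries.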

    Read the article

  • Best C++ containers for UI in games

    - by Vijayendra
    I am writing some UI code for my games in C++. Basically it's a very common problem, but I don't know the best answer yet. Suppose that inside my UI library I have a view class which renders a 2D/3D scene. This view can contain many subviews. I need a container which allows me to iterate over these views quickly and also to insert/delete subviews. I am not sure which container is best for the job: list, vector or something else?

    Read the article

  • C# Collision test of a ship and asteroid, angle confusion

    - by Cherry
    We are trying to to do a collision detection for the ship and asteroid. If success than it should detect the collision before N turns. However it is confused between angle 350 and 15 and it is not really working. Sometimes it is moving but sometime it is not moving at all. On the other hand, it is not shooting at the right time as well. I just want to ask how to make the collision detection working??? And how to solve the angle confusion problem? // Get velocities of asteroid Console.WriteLine("lol"); // IF equation is between -2 and -3 if (equation1a <= -2) { // Calculate no. turns till asteroid hits float turns_till_hit = dx / vx; // Calculate angle of asteroid float asteroid_angle_rad = (float)Math.Atan(Math.Abs(dy / dx)); float asteroid_angle_deg = (float)(asteroid_angle_rad * 180 / Math.PI); float asteroid_angle = 0; // Calculate angle if asteroid is in certain positions if (asteroid.Y > ship.Y && asteroid.X > ship.X) { asteroid_angle = asteroid_angle_deg; } else if (asteroid.Y < ship.Y && asteroid.X > ship.X) { asteroid_angle = (360 - asteroid_angle_deg); } else if (asteroid.Y < ship.Y && asteroid.X < ship.X) { asteroid_angle = (180 + asteroid_angle_deg); } else if (asteroid.Y > ship.Y && asteroid.X < ship.X) { asteroid_angle = (180 - asteroid_angle_deg); } // IF turns till asteroid hits are less than 35 if (turns_till_hit < 50) { float angle_between = 0; // Calculate angle between if asteroid is in certain positions if (asteroid.Y > ship.Y && asteroid.X > ship.X) { angle_between = ship_angle - asteroid_angle; } else if (asteroid.Y < ship.Y && asteroid.X > ship.X) { angle_between = (360 - Math.Abs(ship_angle - asteroid_angle)); } else if (asteroid.Y < ship.Y && asteroid.X < ship.X) { angle_between = ship_angle - asteroid_angle; } else if (asteroid.Y > ship.Y && asteroid.X < ship.X) { angle_between = ship_angle - asteroid_angle; } // If angle less than 0, add 360 if (angle_between < 0) { //angle_between %= 360; angle_between = Math.Abs(angle_between); } // Calculate no. of turns to face asteroid float turns_to_face = angle_between / 25; if (turns_to_face < turns_till_hit) { float ship_angle_left = ShipAngle(ship_angle, "leftKey", 1); float ship_angle_right = ShipAngle(ship_angle, "rightKey", 1); float angle_between_left = Math.Abs(ship_angle_left - asteroid_angle); float angle_between_right = Math.Abs(ship_angle_right - asteroid_angle); if (angle_between_left < angle_between_right) { leftKey = true; } else if (angle_between_right < angle_between_left) { rightKey = true; } } if (angle_between > 0 && angle_between < 25) { spaceKey = true; } } }
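
    Not from the question itself, but the 350-versus-15 confusion is the classic angle-wrapping problem: subtracting raw headings near the 0/360 seam can report a 335-degree separation when the real one is 25 degrees. A small hedged helper that wraps the difference into [-180, 180), which both the turn direction and the turns-to-face estimate can then use; names are illustrative:

      using System;

      static class AngleMath
      {
          // Smallest signed difference between two headings in degrees.
          // Result is in [-180, 180): positive means the target heading is reached
          // by increasing the angle, negative by decreasing it.
          public static float SignedDelta(float currentDeg, float targetDeg)
          {
              float delta = (targetDeg - currentDeg) % 360f;
              if (delta < -180f) delta += 360f;
              if (delta >= 180f) delta -= 360f;
              return delta;
          }
      }

      // Example: SignedDelta(350f, 15f) == 25f and SignedDelta(15f, 350f) == -25f,
      // so turning decisions stay consistent across the 0/360 seam.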

    Read the article

  • View Frustum Alternative

    - by Kuros
    I am working on a simulation project that requires entities walking around in a 3D world. I have all that working: matrix transformations, etc. I'm at the point where I need what is essentially a view frustum, so I can give each entity a visible area. However, looking over the calculations required to do it, it seems like a perspective frustum is only needed in order to project onto a 2D screen. Is there another, easier-to-code solution that would work better, such as an orthogonal volume? Could I just define a shape mathematically and test whether the coordinates of the objects are inside or outside it? I am not really a 3D coder (and I am doing this all from scratch, not using an engine or anything), so I would like the simplest solution possible for my needs. Thank you!
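
    Since only visibility is needed rather than projection to a screen, one hedged shortcut (my sketch, not from the post) is to treat each entity's visible area as a view cone: a distance check plus a dot product against the entity's facing direction, with no frustum planes or projection matrices involved. Names here are illustrative:

      using System;
      using System.Numerics;

      static class Vision
      {
          // True when 'point' lies within 'range' of the viewer and within
          // 'halfAngleRadians' of the viewer's facing direction.
          public static bool CanSee(Vector3 viewerPos, Vector3 viewerForward,
                                    Vector3 point, float range, float halfAngleRadians)
          {
              Vector3 toPoint = point - viewerPos;
              float distSq = toPoint.LengthSquared();
              if (distSq > range * range)
                  return false;                    // out of range

              if (distSq < 1e-6f)
                  return true;                     // effectively at the viewer's position

              Vector3 dir = toPoint / MathF.Sqrt(distSq);
              // cos(angle between forward and direction-to-point) >= cos(half angle)
              return Vector3.Dot(Vector3.Normalize(viewerForward), dir) >= MathF.Cos(halfAngleRadians);
          }
      }

    If a boxier visible area is genuinely wanted, the same "define a shape and test containment" idea works with any convex set of planes; the cone is just the cheapest to write.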

    Read the article

  • XNA 4.0 Point Vertex Rendering

    - by luis
    I have a buffer of about 134 million particles and a very powerful computer to render them smoothly, but I am getting an error when trying to render them as primitive lines: it says I cannot render more than around 1 million. I wonder how I can do this, and also whether there is a better way to render this other than with lines; I'm comfortable with 1-pixel points or anything else, as long as the vertices are shown all the time. I'm basically just plotting the points. Thanks.
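
    One hedged workaround, not taken from the post: the roughly one-million figure is a per-draw-call primitive cap, not a limit on the buffer itself, so the data can be drawn in chunks that each stay below the cap. A sketch against the XNA 4.0 API, assuming the vertices already sit in a bound VertexBuffer and are drawn as a line list; very large datasets may also need to be split across several vertex buffers:

      using Microsoft.Xna.Framework.Graphics;

      static class ChunkedDraw
      {
          // XNA's HiDef profile caps a single draw call at 1,048,575 primitives.
          const int MaxPrimitivesPerCall = 1048575;

          // Draws 'totalVertices' vertices from the currently set vertex buffer as a
          // line list (2 vertices per primitive), split across several draw calls.
          public static void DrawLineList(GraphicsDevice device, int totalVertices)
          {
              int totalPrimitives = totalVertices / 2;
              int primitivesDrawn = 0;

              while (primitivesDrawn < totalPrimitives)
              {
                  int batch = System.Math.Min(MaxPrimitivesPerCall, totalPrimitives - primitivesDrawn);
                  device.DrawPrimitives(PrimitiveType.LineList, primitivesDrawn * 2, batch);
                  primitivesDrawn += batch;
              }
          }
      }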

    Read the article

  • Learning OpenGL GLSL - VAO buffer problems?

    - by Bleary
    I've just started digging through OpenGL and GLSL, and now stumbled on something I can't get my head around this one!? I've stepped back to loading a simple cube and using a simple shader on it, but the result is triangles drawn incorrectly and/or missing. The code I had working perfectly on meshes, but was attempting to move to using VAOs so none of the code for storing the vertices and indices has changed. http://i.stack.imgur.com/RxxZ5.jpg http://i.stack.imgur.com/zSU50.jpg What I have for creating the VAO and buffers is this //Create the Vertex array object glGenVertexArrays(1, &vaoID); // Finally create our vertex buffer objects glGenBuffers(VBO_COUNT, mVBONames); glBindVertexArray(vaoID); // Save vertex attributes into GPU glBindBuffer(GL_ARRAY_BUFFER, mVBONames[VERTEX_VBO]); // Copy data into the buffer object glBufferData(GL_ARRAY_BUFFER, lPolygonVertexCount*VERTEX_STRIDE*sizeof(GLfloat), lVertices, GL_STATIC_DRAW); glEnableVertexAttribArray(pos); glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, VERTEX_STRIDE*sizeof(GLfloat),0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBONames[INDEX_VBO]); glBufferData(GL_ELEMENT_ARRAY_BUFFER, lPolygonCount*sizeof(unsigned int), lIndices, GL_STATIC_DRAW); glBindVertexArray(0); And the code for drawing the mesh. glBindVertexArray(vaoID); glUseProgram(shader->programID); GLsizei lOffset = mSubMeshes[pMaterialIndex]->IndexOffset*sizeof(unsigned int); const GLsizei lElementCount = mSubMeshes[pMaterialIndex]->TriangleCount*TRIAGNLE_VERTEX_COUNT; glDrawElements(GL_TRIANGLES, lElementCount, GL_UNSIGNED_SHORT, reinterpret_cast<const GLvoid*>(lOffset)); // All the points are indeed in the correct place!? //glPointSize(10.0f); //glDrawElements(GL_POINTS, lElementCount, GL_UNSIGNED_SHORT, 0); glUseProgram(0); glBindVertexArray(0); Eyes have become bleary looking at this today so any thoughts or a fresh set of eyes would be greatly appreciated.
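
    One detail worth double-checking (my observation, not something stated in the post): the element buffer above is filled with unsigned int indices but drawn with GL_UNSIGNED_SHORT, and the type passed to glDrawElements has to match the uploaded data exactly, otherwise the indices are read as garbage. A hedged C# sketch of a matching upload/draw pair, assuming OpenTK 3-style bindings:

      using System;
      using OpenTK.Graphics.OpenGL;

      static class IndexedDraw
      {
          // Uploads 32-bit indices and draws with a matching element type.
          // (BufferData's size parameter is IntPtr in these bindings.)
          public static void UploadAndDraw(int ebo, uint[] indices)
          {
              GL.BindBuffer(BufferTarget.ElementArrayBuffer, ebo);
              GL.BufferData(BufferTarget.ElementArrayBuffer,
                            (IntPtr)(indices.Length * sizeof(uint)),
                            indices, BufferUsageHint.StaticDraw);

              // The element type must match the data uploaded above: UnsignedInt for
              // uint[] indices; UnsignedShort is only valid for ushort[] data.
              GL.DrawElements(PrimitiveType.Triangles, indices.Length,
                              DrawElementsType.UnsignedInt, 0);
          }
      }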

    Read the article

  • Player sprite moving slower on iPhone 4

    - by nvillec
    I just finished getting movement/jump animation working for a player sprite in Xcode using Cocos2D. The basic movement algorithm is a timer that updates every 0.01 sec, changing the sprite position to (sprite.position.x + xVel, sprite.position.y + yVel). Each time a movement button is tapped, the appropriate velocity (initialized to 0) is changed to whatever speed I choose; then a stop-movement button returns the velocity to 0. It's not an ideal solution, but I'm very new at this and stoked to at least have that working with little help from the internet. So I may not have explained that perfectly, but it is in fact working to my satisfaction in Xcode's iPhone Simulator. However, when I build it for my device and run it on my phone, the sprite's movement speed is noticeably slower than in Xcode. At first I thought it must have to do with the resolution of the iPhone 4, making the sprite's movement path twice as long, but I found that if I pull up the multitask bar and then return to the app, the speed will sometimes jump back to normal. My second theory was that the code is just inefficient and is bogging things down, but I would see this reflected in the frame rate, wouldn't I? It stays at 59-60 the whole time, and the spritesheet animation runs at the correct speed. Has anyone experienced this? Is this a really obvious issue that I'm completely missing? Any help (or tips for optimizing my approach to movement) would be much appreciated!
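
    Not from the post, but the usual suspect when identical code runs at different speeds on different devices is a fixed per-tick step: if the 0.01 s timer cannot actually fire 100 times per second on the phone, anything added per tick falls behind. Scaling the step by the elapsed time makes the speed device-independent; cocos2d's scheduled update callback already hands you that delta. An engine-agnostic C# sketch of the idea, with illustrative names:

      // Hedged sketch: frame-rate independent movement.
      // 'dt' is the elapsed time in seconds since the previous update, as supplied
      // by the engine's update callback (cocos2d's scheduleUpdate passes it in).
      sealed class PlayerMover
      {
          public float X, Y;          // sprite position
          public float VelX, VelY;    // speed in points per *second*, not per tick

          public void Update(float dt)
          {
              // Same distance per second whether the device manages 30, 60, or 100 updates.
              X += VelX * dt;
              Y += VelY * dt;
          }
      }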

    Read the article

  • Falling CCSprites

    - by Coder404
    Im trying to make ccsprites fall from the top of the screen. Im planning to use a touch delegate to determine when they fall. How could I make CCSprites fall from the screen in a way like this: -(void)addTarget { Monster *target = nil; if ((arc4random() % 2) == 0) { target = [WeakAndFastMonster monster]; } else { target = [StrongAndSlowMonster monster]; } // Determine where to spawn the target along the Y axis CGSize winSize = [[CCDirector sharedDirector] winSize]; int minY = target.contentSize.height/2; int maxY = winSize.height - target.contentSize.height/2; int rangeY = maxY - minY; int actualY = (arc4random() % rangeY) + minY; // Create the target slightly off-screen along the right edge, // and along a random position along the Y axis as calculated above target.position = ccp(winSize.width + (target.contentSize.width/2), actualY); [self addChild:target z:1]; // Determine speed of the target int minDuration = target.minMoveDuration; //2.0; int maxDuration = target.maxMoveDuration; //4.0; int rangeDuration = maxDuration - minDuration; int actualDuration = (arc4random() % rangeDuration) + minDuration; // Create the actions id actionMove = [CCMoveTo actionWithDuration:actualDuration position:ccp(-target.contentSize.width/2, actualY)]; id actionMoveDone = [CCCallFuncN actionWithTarget:self selector:@selector(spriteMoveFinished:)]; [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]]; // Add to targets array target.tag = 1; [_targets addObject:target]; } This code makes CCSprites move from the right side of the screen to the left. How could I change this to make the CCSprites to move from the top of the screen to the bottom?

    Read the article

  • Point Light Soft Shadows

    - by notabene
    How do I implement soft shadows for an omnidirectional (point) light? We use the typical shadow mapping technique: depth is rendered to a texture cube, and addressing is then pretty simple, just using the vector from the light to the fragment's world position. It works perfectly, until you want soft shadows. In our engine we use the PCSS technique for spot lights, but for point lights the trouble begins: how do you sample in 3D? I developed a technique where an orthonormal basis is created from a direction and the up vector (0,1,0), and then the sampling vector (something like (1.0, i/depthMapSize, j/depthMapSize)) is multiplied by this basis. But this (of course :)) looks pretty bad for vectors near (0,1,0) and (0,-1,0). I will appreciate any help on this.
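
    A hedged note on the degenerate-up problem, offered as a suggestion rather than something from the post: instead of always building the basis from (0,1,0), pick the helper axis per direction based on which world axis the light-to-fragment vector is least aligned with; the basis then never collapses near (0,1,0) or (0,-1,0). A C# sketch of the construction (the few relevant lines translate directly to HLSL/GLSL):

      using System;
      using System.Numerics;

      static class ShadowBasis
      {
          // Builds two vectors perpendicular to 'dir' (assumed normalized) for offsetting
          // cube-map sample directions. Choosing the helper axis per direction avoids the
          // degenerate cases near (0, 1, 0) and (0, -1, 0).
          public static (Vector3 tangent, Vector3 bitangent) Build(Vector3 dir)
          {
              // Pick whichever world axis is least parallel to 'dir'.
              Vector3 helper = MathF.Abs(dir.Y) < 0.99f
                  ? new Vector3(0f, 1f, 0f)
                  : new Vector3(1f, 0f, 0f);

              Vector3 tangent = Vector3.Normalize(Vector3.Cross(helper, dir));
              Vector3 bitangent = Vector3.Cross(dir, tangent);
              return (tangent, bitangent);
          }
      }

      // Usage: each 2D kernel offset (u, v) becomes a sampling direction
      //   dir + (tangent * u + bitangent * v) * filterRadius
      // which is then fed to the cube-map lookup.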

    Read the article

  • SDL mouse wheel not picking up

    - by Chris
    Running Ubuntu 11.04 with SDL 1.2, I'm trying to pick up mouse wheel up/down movement with this (stripped down) code: int main( int argc, char **argv ) { SDL_MouseButtonEvent *mousebutton = NULL; while ( !done ) { if(mousebutton != NULL && mousebutton->button == SDL_BUTTON_LEFT) yrot += 0.75f; else if(mousebutton != NULL && mousebutton->button == SDL_BUTTON_RIGHT) yrot -= 0.75f; else if(mousebutton != NULL && mousebutton->button == SDL_BUTTON_WHEELUP){ xrot += 0.75f; }else if(mousebutton != NULL && mousebutton->button == SDL_BUTTON_WHEELDOWN){ xrot -= 0.75f; } while ( SDL_PollEvent( &event ) ) { switch( event.type ) { case SDL_MOUSEBUTTONDOWN: mousebutton = &event.button; break; case SDL_MOUSEBUTTONUP: mousebutton = NULL; break; default: break; } } } return 0; } The strange thing is that scrolling the mouse wheel does nothing, but if I hold down a mouse button or two and then move the mouse, it hits the SDL_BUTTON_WHEEL code occasionally. This honestly reeks of a pointer issue, which would make sense since I've been spoiled by C# for the past couple of years, but I am just not seeing it. How do I correctly find mouse scroll events in SDL?

    Read the article

  • Debugging HLSL for Windows 8 application [migrated]

    - by Shervanator
    I'm currently in the process of creating a Windows 8 application using SharpDX (the managed C# DirectX wrapper). However, I have run into problems with one of my shaders and I want to know if it's possible to debug such applications. PIX doesn't seem to work on these DirectX apps, as the executable does not like opening directly, and the new Visual Studio graphics debugging toolkit in VS2012 always states "unable to start the experiment" when I try to capture any information about my session. Thanks!

    Read the article

  • Intersection points of plane set forming convex hull

    - by Toji
    Mostly looking for a nudge in the right direction here. Given a set of planes (defined as a normal and a distance from the origin) that form a convex hull, I would like to find the intersection points that form the corners of that hull. More directly, I'm looking for a way to generate a point cloud appropriate to provide to Bullet. Bonus points if someone knows of a way I could give Bullet the plane list directly, since I somewhat suspect that's what it's building on the backend anyway.
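
    For the nudge, and as my sketch rather than anything from the post: every corner of the hull is the intersection of at least three of the planes, so brute-forcing all plane triples, solving each one with the scalar-triple-product formula, and keeping only the points that lie on the inner side of every plane yields the point cloud. The plane convention assumed here is dot(normal, p) = distance with outward-pointing unit normals:

      using System;
      using System.Collections.Generic;
      using System.Numerics;

      struct HullPlane
      {
          public Vector3 Normal;   // assumed unit length, pointing out of the hull
          public float Distance;   // plane equation: Dot(Normal, p) == Distance
      }

      static class HullCorners
      {
          const float Eps = 1e-4f;

          public static List<Vector3> Find(IReadOnlyList<HullPlane> planes)
          {
              var corners = new List<Vector3>();
              for (int i = 0; i < planes.Count; i++)
              for (int j = i + 1; j < planes.Count; j++)
              for (int k = j + 1; k < planes.Count; k++)
              {
                  Vector3 n1 = planes[i].Normal, n2 = planes[j].Normal, n3 = planes[k].Normal;
                  float denom = Vector3.Dot(n1, Vector3.Cross(n2, n3));
                  if (MathF.Abs(denom) < Eps)
                      continue;                    // two of the planes are (nearly) parallel

                  // Standard three-plane intersection point.
                  Vector3 p = (planes[i].Distance * Vector3.Cross(n2, n3) +
                               planes[j].Distance * Vector3.Cross(n3, n1) +
                               planes[k].Distance * Vector3.Cross(n1, n2)) / denom;

                  // Keep the point only if no other plane of the hull cuts it off.
                  bool inside = true;
                  foreach (var plane in planes)
                      if (Vector3.Dot(plane.Normal, p) > plane.Distance + Eps) { inside = false; break; }

                  if (inside)
                      corners.Add(p);
              }
              return corners;
          }
      }

    Where four or more planes meet in one corner, several triples produce nearly the same point, so the list usually wants a small-epsilon de-duplication pass before being handed to something like btConvexHullShape.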

    Read the article

  • ssao implementation

    - by Irbis
    I try to implement a ssao based on this tutorial: link I use a deferred rendering and world coordinates for shading calculations. When saving gbuffer a vertex shader output looks like this: worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0)); normal = normalize(normalModelMatrix * inNormal); gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0); Next for a ssao calculations I render a scene as a full screen quad and I save an occlusion parameter in a texture. (Vertex positions in the world space: link Normals in the world space: link) SSAO implementation: subroutine (RenderPassType) void ssao() { vec2 texCoord = CalcTexCoord(); vec3 worldPos = texture(texture0, texCoord).xyz; vec3 normal = normalize(texture(texture1, texCoord).xyz); vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4); vec3 rvec = texture(texture2, texCoord * noiseScale).xyz; vec3 tangent = normalize(rvec - normal * dot(rvec, normal)); vec3 bitangent = cross(normal, tangent); mat3 tbn = mat3(tangent, bitangent, normal); float occlusion = 0.0; float radius = 4.0; for (int i = 0; i < kernelSize; ++i) { vec3 pix = tbn * kernel[i]; pix = pix * radius + worldPos; vec4 offset = vec4(pix, 1.0); offset = ProjectionMatrix * ViewMatrix * offset; offset.xy /= offset.w; offset.xy = offset.xy * 0.5 + 0.5; float sample_depth = texture(texture0, offset.xy).z; float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0; occlusion += (sample_depth <= pix.z ? 1.0 : 0.0); } outputColor = vec4(occlusion, occlusion, occlusion, 1); } That code gives following results: camera looking towards -z world space: link camera looking towards +z world space: link I wonder if it is possible to use world coordinates in the above code ? When I move camera I get different results because world space positions don't change. Can I treat worldPos.z as a linear depth ? What should I change to get a correct results ? I except the white areas in place of occlusion, so the ground should has the white areas only near to the object.

    Read the article

  • HLSL: An array of textures and sampler states

    - by nate142
    The shader must switch between multiple textures depending on the alpha value of the original texture for each pixel. Now, this would work fine if I didn't have to worry about SamplerStates. I have created my array of textures and can select a texture based on the alpha value of the pixel. But how do I create an array of SamplerStates and link it to my array of textures? I attempted to treat the SamplerState as a function by adding the (int i), but that didn't work. Also, I can't use Texture.Sample since this is shader model 2.0.

      // shader model 2.0 (DX9)
      texture subTextures[255];
      SamplerState MeshTextureSampler(int i)
      {
          Texture = (subTextures[i]);
      };
      float4 SampleCompoundTexture(float2 texCoord, float4 diffuse)
      {
          float4 SelectedColor = SAMPLE_TEXTURE(Texture, texCoord);
          int i = SelectedColor.a;
          texture SelectedTx = subTextures[i];
          return tex2D(MeshTextureSampler(i), texCoord) * diffuse;
      }

    Read the article

  • Algorithms for rainfall + river creation in procedurally generated terrain

    - by Peck
    I've recently become fascinated by the things that can be done with procedurally generated terrain and have started experimenting with world building a bit. I'd like to be able to make worlds something like Dwarf Fortress, with biomes created by meshing together various maps. The first step is done: using the diamond-square algorithm I've created some nice heightmaps. As the next step, I would like to add some water features and have them generated somewhat realistically from rainfall. I've read about a few different approaches, such as starting at the high points of the map and "stepping" down to the lowest neighboring point, pooling/eroding as it works its way down to sea level. Are there any documented algorithms for this, or are they more off the cuff? Would love any advice/thoughts.
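
    Not from the post, but as a concrete starting point for the "step down to the lowest neighbour" approach it describes: drop a particle on a high cell, repeatedly move it to the lowest of its eight neighbours while accumulating flow, and stop at sea level or in a pit (where a pooling/lake step would take over). A hedged C# sketch with illustrative names:

      static class RiverTrace
      {
          // Traces one river path over a heightmap by always stepping to the lowest
          // neighbouring cell. 'height' is row-major [y * width + x]; 'flow' counts
          // how much water passed each cell (useful later for river width/erosion).
          public static void TraceRiver(float[] height, int[] flow, int width, int rows,
                                        int startX, int startY, float seaLevel)
          {
              int x = startX, y = startY;
              for (int steps = 0; steps < width * rows; steps++)   // hard stop against loops
              {
                  flow[y * width + x]++;
                  if (height[y * width + x] <= seaLevel)
                      return;                                      // reached the ocean

                  int bestX = x, bestY = y;
                  float best = height[y * width + x];
                  for (int dy = -1; dy <= 1; dy++)
                  for (int dx = -1; dx <= 1; dx++)
                  {
                      int nx = x + dx, ny = y + dy;
                      if (nx < 0 || ny < 0 || nx >= width || ny >= rows) continue;
                      if (height[ny * width + nx] < best)
                      {
                          best = height[ny * width + nx];
                          bestX = nx; bestY = ny;
                      }
                  }

                  if (bestX == x && bestY == y)
                      return;   // local minimum: pool into a lake here (or fill and continue)
                  x = bestX; y = bestY;
              }
          }
      }

    Running many such droplets from random high cells and thresholding the accumulated flow gives a plausible first-pass river network; lake filling and erosion can then be layered on top.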

    Read the article
