Search Results

Search found 2562 results on 103 pages for 'vector'.

Page 77 of 103.

  • Flickering problem with world matrix

    - by gnomgrol
I have a pretty weird problem today. As soon as I change my translation or rotation matrix for an object to anything other than (0,0,0), the object starts to flicker (scaling works fine). It rapidly and randomly switches between the spot it should be in and a crippled something. I first thought the problem was z-fighting, but now I'm pretty sure it isn't. I have no clue at all what it could be; here are two screenshots of the two states the plant is switching between. I already used PIX, but couldn't find anything of use (I'm not a very good debugger anyway). I would appreciate any help, thanks a lot! Important code:

    D3DXMatrixIdentity(&World);

    D3DXVECTOR3 rotaxisX = D3DXVECTOR3(1.0f, 0.0f, 0.0f);
    D3DXVECTOR3 rotaxisY = D3DXVECTOR3(0.0f, 1.0f, 0.0f);
    D3DXVECTOR3 rotaxisZ = D3DXVECTOR3(0.0f, 0.0f, 1.0f);

    D3DXMATRIX temprot1, temprot2, temprot3;
    D3DXMatrixRotationAxis(&temprot1, &rotaxisX, 0);
    D3DXMatrixRotationAxis(&temprot2, &rotaxisY, 0);
    D3DXMatrixRotationAxis(&temprot3, &rotaxisZ, 0);
    Rotation = temprot1 * temprot2 * temprot3;

    D3DXMatrixTranslation(&Translation, 0.0f, 10.0f, 0.0f);
    D3DXMatrixScaling(&Scale, 0.02f, 0.02f, 0.02f);

    // Set the object's world matrix from the transformations
    World = Translation * Rotation * Scale;

Shader:

    cbuffer cbPerObject
    {
        matrix worldMatrix;
        matrix viewMatrix;
        matrix projectionMatrix;
    };

    // Extend the position vector to 4 components for proper matrix calculations.
    input.position.w = 1.0f;

    // Transform the vertex by the world, view, and projection matrices.
    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);
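One thing worth ruling out before anything else: D3DX works with row vectors, so a vertex is transformed as v * World and the matrix factors apply left to right. The conventional composition is therefore scale, then rotation, then translation; the line above applies them in the opposite order, which scales and rotates the translation itself. A minimal sketch of the usual order:

    // D3DX row-vector convention: the leftmost factor is applied to the vertex first.
    World = Scale * Rotation * Translation;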

    Read the article

  • backface culling error

    - by acrilige
I'm writing a simple software renderer. My pipeline has a backface culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation. (I can't insert a picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

    Vector3F view_dir(0.0f, 0.0f, 1.0f);
    std::vector<Triangle> to_remove;

    for (Triangle &t : m_triangles)
    {
        Vector4F e1 = t.v2 - t.v1;
        Vector4F e2 = t.v3 - t.v1;
        Vector3F normal(
            e1.y * e2.z - e1.z * e2.y,
            e1.z * e2.x - e1.x * e2.z,
            e1.x * e2.y - e1.y * e2.x
        );
        normal.Normalize();

        float dot = Dot(view_dir, normal);
        if (dot <= 0)
            to_remove.push_back(t);
    }

    for (Triangle &t : to_remove)
        m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

The camera sits at the origin and points into the screen (RH). What is the reason?
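A common cause of exactly this artifact: testing the normal against a fixed view direction is only valid for an orthographic projection. With a perspective camera, triangles near the screen edges face the eye at an angle, so the test has to use the vector from the eye to a point on the triangle. A hedged sketch, reusing Vector3F and Dot from the code above (the comparison sign depends on your winding order):

    // Perspective-correct backface test: compare the normal against the
    // direction from the camera to the triangle, not a constant view axis.
    Vector3F eye(0.0f, 0.0f, 0.0f);  // camera at the origin
    Vector3F to_tri(t.v1.x - eye.x, t.v1.y - eye.y, t.v1.z - eye.z);
    if (Dot(to_tri, normal) >= 0)    // facing away from the eye
        to_remove.push_back(t);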

    Read the article

  • Central renderer for a given scene

    - by Loggie
When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. Suppose the scene is managed by an arbitrary structure, e.g. an octree, BSP tree, quad-tree, kd-tree, etc. What is the best way to pass this to the render system? The obvious problem is that if it were simply given the root node of the structure, the render system would need intrinsic knowledge of that structure in order to traverse it. My solution is to clip all objects outside the frustum in the scene manager, build a list of the objects that remain, and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (This would be a structure required by the render system as a means of knowing which objects should be rendered.) The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations. I have been thinking a lot about this and have searched various terms on here and followed any additional information/links, but I have not really found a definitive answer. It may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about? Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?
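A minimal sketch of the interface this describes, with hypothetical RenderItem/RenderQueue names and assumed Mesh/Material/Matrix4 types: the scene manager walks its own structure, culls against the frustum, and emits a flat, sort-friendly list that the renderer consumes without knowing anything about octrees or BSP trees.

    #include <algorithm>
    #include <vector>

    // Hypothetical glue between a scene structure and the renderer.
    struct RenderItem {
        Mesh*     mesh;
        Material* material;   // sort key: group by material to cut state changes
        Matrix4   transform;
    };

    using RenderQueue = std::vector<RenderItem>;

    // The scene manager owns the traversal and the culling...
    RenderQueue queue;
    sceneManager.collectVisible(frustum, queue);          // fills the queue
    std::sort(queue.begin(), queue.end(),
              [](const RenderItem& a, const RenderItem& b) {
                  return a.material < b.material;         // coarse state grouping
              });

    // ...and the renderer only ever sees the flat list.
    renderer.draw(queue);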

    Read the article

  • Better way to go up/down slope based on yaw?

    - by CyanPrime
Alright, so I have a bit of movement code, and I'm thinking I'm going to need to manually decide when to go up or down a slope. All I have to work with are the slope's normal and vector, my current and previous position, and my yaw. Is there a better way to decide whether I go up or down the slope based on my yaw?

    Vector3f move = new Vector3f(0, 0, 0);
    move.x = (float) -Math.cos(Math.toRadians(yaw)); // note: no Math.toDegrees here --
    move.z = (float) -Math.sin(Math.toRadians(yaw)); // cos/sin already give unit components
    move.normalise();

    if (move.z < 0 && slopeNormal.z > 0 || move.z > 0 && slopeNormal.z < 0) {
        if (move.x < 0 && slopeNormal.x > 0 || move.x > 0 && slopeNormal.x < 0) {
            move.y += slopeVec.y;
        }
    }
    if (move.z > 0 && slopeNormal.z > 0 || move.z < 0 && slopeNormal.z < 0) {
        if (move.x > 0 && slopeNormal.x > 0 || move.x < 0 && slopeNormal.x < 0) {
            move.y -= slopeVec.y;
        }
    }

    move.scale(movementSpeed * delta);
    Vector3f.add(pos, move, pos);
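A more robust alternative to the per-axis sign checks is to project the flat movement direction onto the slope plane using its normal; the heading then follows the slope up or down automatically for any yaw. A hedged sketch in C++-style pseudocode (Vec3, dot, and normalized are assumed helpers; the same math drops straight into the vector class above):

    // Project the flat movement direction onto the slope plane:
    // v' = v - n * dot(v, n), with n the unit slope normal.
    Vec3 n = slopeNormal.normalized();
    Vec3 v(-cos(yaw), 0.0f, -sin(yaw));        // flat heading from yaw (radians)
    Vec3 onSlope = v - n * dot(v, n);          // now tangent to the slope surface
    onSlope = onSlope.normalized() * movementSpeed * delta;
    pos += onSlope;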

    Read the article

  • What's a way to implement a flexible buff/debuff system?

    - by gkimsey
Overview: Lots of games with RPG-like statistics allow for character "buffs", ranging from a simple "deal 25% extra damage" to more complicated things like "deal 15 damage back to attackers when hit." The specifics of each type of buff aren't really relevant; I'm looking for a (presumably object-oriented) way to handle arbitrary buffs. Details: In my particular case, I have multiple characters in a turn-based battle environment, so I envisioned buffs being tied to events like "OnTurnStart", "OnReceiveDamage", etc. Perhaps each buff is a subclass of an abstract Buff class, where only the relevant events are overridden, and each character has a vector of the buffs currently applied to them. Does this solution make sense? I can certainly see dozens of event types being necessary, making a new subclass for each buff feels like overkill, and it doesn't seem to allow for any buff "interactions". For example, I might want to cap damage boosts so that even if you had 10 different buffs that each give 25% extra damage, you would only deal 100% extra instead of 250% extra. And there are more complicated situations that, ideally, I could control. I'm sure everyone can come up with examples of how more sophisticated buffs can potentially interact with each other in ways that, as a game developer, I may not want. As a relatively inexperienced C++ programmer (I have generally used C in embedded systems), I feel like my solution is simplistic and probably doesn't take full advantage of the object-oriented language. Thoughts? Has anyone here designed a fairly robust buff system before?
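A minimal sketch of one possible shape for this, with all names hypothetical: buffs stay small subclasses with virtual event hooks, but the interaction rules (like the damage cap) live in the code that aggregates them, not in the buffs themselves.

    #include <algorithm>
    #include <memory>
    #include <vector>

    class Character;  // forward declaration

    class Buff {
    public:
        virtual ~Buff() = default;
        virtual void onTurnStart(Character&) {}
        virtual void onReceiveDamage(Character&, int& damage) {}
        virtual float damageBonus() const { return 0.0f; }  // 0.25f = +25% damage
    };

    class Character {
        std::vector<std::unique_ptr<Buff>> buffs;
    public:
        int modifyOutgoingDamage(int base) const {
            float bonus = 0.0f;
            for (const auto& b : buffs)
                bonus += b->damageBonus();
            bonus = std::min(bonus, 1.0f);   // interaction rule: cap at +100%
            return static_cast<int>(base * (1.0f + bonus));
        }
    };

Because the aggregation happens in one place, stacking caps, diminishing returns, or "strongest buff wins" rules can be changed without touching any Buff subclass.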

    Read the article

  • How to achieve selection of a tile from a tile sheet based on an ID?

    - by Bugster
Let's say I have a tile sheet that contains 8 sprites per sheet, and each sprite is a 30x30 tile. I wrote my own custom map parser/loader, but I'm having trouble extracting a certain tile sprite from the file. I'll describe the problem so everyone can understand. I wrote an enum of materials, and each material has a value according to its location in the tile sheet. For example, void is 1, grass is 2, rock is 3, etc. So in my tile sheet they are represented as such:

    +---+---+---+---+---+
    | 1 | 2 | 3 | 4 | 5 |
    +---+---+---+---+---+

Which is equivalent to:

    +------+-------+-------+
    | void | grass | stone |
    +------+-------+-------+

For rendering I created a tile class; each tile has two coordinates, X and Y (calculated automatically), and a material, which can be represented either as a number or as a value (ID). When rendering, I have a vector of sprites which are all taken from one file called tilesheet.png, but each of them must only draw a certain portion of the tile sheet. For example, say I have something like this:

    tile coordinateBounds(topLeftX, topLeftY, tileWidth, tileHeight);

During the initialization of the map I build an array of tiles, and I give each of them their position, their material (based on the values in a map file), and a few other variables such as collision. I need to apply the coordinateBounds to each of them according to their material value; for example, if the material is grass, it should only take the grass sprite from the tile sheet. I should also mention that I'm using SFML, and there are no borders or spacing between the tiles.
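Since the materials are numbered left to right along a single row with no spacing, the sub-rectangle can be computed directly from the ID. A hedged SFML sketch (tileX, tileY, material, and window are assumed variables; IDs here are 1-based while sheet cells are 0-based):

    const int tileSize = 30;

    // One shared texture for the whole sheet.
    sf::Texture tilesheet;
    tilesheet.loadFromFile("tilesheet.png");

    // Map a material ID to its cell in the sheet.
    sf::Sprite sprite(tilesheet);
    int id = material;   // e.g. 2 for grass
    sprite.setTextureRect(sf::IntRect((id - 1) * tileSize, 0, tileSize, tileSize));
    sprite.setPosition(tileX * tileSize, tileY * tileSize);
    window.draw(sprite);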

    Read the article

  • 2D Smooth Turning in a Tile-Based Game

    - by ApoorvaJ
I am working on a 2D top-view, grid-based game: a ball rolls on a grid made up of different tiles, and the tiles interact with the ball in a variety of ways. I am having difficulty cleanly implementing the turning tile. The image below represents a single tile in the grid, which turns the ball by a right angle. If the ball rolls in from the bottom, it smoothly changes direction and rolls out to the right. If it rolls in from the right, it is turned smoothly toward the bottom. If the ball rolls in from the top or left, its trajectory is unchanged by the tile. The tile shouldn't change the magnitude of the ball's velocity, only its direction. The ball has Velocity and Position vectors, and the tile has Position and Dimension vectors. I have already implemented this, but the code is messy and buggy. What is an elegant way to achieve this, preferably by modifying the ball's Velocity vector with a formula?
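One formula-based approach, hedged as a sketch: treat the turn as a quarter circle whose center is the tile corner shared by the two open edges (the bottom-right corner for a bottom-to-right turn). While the ball is inside the tile, replace its velocity each frame with the tangent of that circle at the ball's position, keeping the magnitude (Vec2, normalize, dot, and length are assumed helpers):

    // Steer the velocity along a circular arc around 'center', the turn corner.
    Vec2 r = ball.position - center;                   // radial vector
    Vec2 tangent = normalize(Vec2(-r.y, r.x));         // perpendicular to the radius
    if (dot(tangent, ball.velocity) < 0)               // pick the tangent that continues
        tangent = Vec2(-tangent.x, -tangent.y);        // the current travel direction
    ball.velocity = tangent * length(ball.velocity);   // direction changes, speed doesn't

The tile still needs to check the entry edge and apply the steering only to balls that entered from the bottom or the right; balls entering from the top or left pass through unchanged, as specified.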

    Read the article

  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame:

    // Bind VAO
    glBindVertexArray(m_vao);

    // Update the vertex buffer with the new data (copy data into the vertex buffer object)
    glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

    // Update the index buffer with the new data (copy data into the index buffer object)
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short), indices.data(), GL_DYNAMIC_DRAW);

    glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

    // Unbind VAO
    glBindVertexArray(0);

What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size:

    std::array<VertexPosition, maxVertices> m_vertices;

The index buffer has its elements set from an array of defined maximum size:

    std::array<unsigned short, maxIndices> indices = { 0 };

A running total is kept of the number of vertices and indices needed for each draw call: numVertices and numIndices. Can I not specify that the buffer data contain the entire array and only read from part of it when drawing? For example, using the vertex buffer object:

    glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

Here m_vertices.data() is the entire array, and numVertices * sizeof(VertexPosition) is the amount of data to read from it. Is this not the correct way to approach this? I would rather not use std::vector if possible.
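On the draw side this already works as hoped: glDrawElements(GL_LINES, numIndices, ...) reads only numIndices entries, no matter how large the buffer is. On the upload side, the usual pattern is to allocate the full-size buffer once and then update just the live range with glBufferSubData instead of reallocating with glBufferData every frame. A hedged sketch:

    // One-time allocation at maximum size (a null data pointer is allowed).
    glBufferData(GL_ARRAY_BUFFER, maxVertices * sizeof(VertexPosition),
                 nullptr, GL_DYNAMIC_DRAW);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, maxIndices * sizeof(unsigned short),
                 nullptr, GL_DYNAMIC_DRAW);

    // Per frame: overwrite only the portion actually in use...
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    numVertices * sizeof(VertexPosition), m_vertices.data());
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0,
                    numIndices * sizeof(unsigned short), indices.data());

    // ...and draw only numIndices of the allocated maxIndices.
    glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));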

    Read the article

  • Coordinate spaces and transformation matrices

    - by Belgin
I'm trying to get an object from object space into projected space using these intermediate matrices. The first matrix (I) transforms from object space into inertial space, but since my object is not rotated or translated in any way inside object space, this matrix is the 4x4 identity matrix. The second matrix (W) transforms from inertial space into world space; it is just a scale matrix with factor a = 14.1 on all coordinates, since the inertial-space origin coincides with the world-space origin.

        / a 0 0 0 \
    W = | 0 a 0 0 |
        | 0 0 a 0 |
        \ 0 0 0 1 /

The third matrix (C) transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units along the z axis.

        / 1 0 0 0  \
    C = | 0 1 0 0  |
        | 0 0 1 10 |
        \ 0 0 0 1  /

And finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of world space and the projection plane is defined by z = 1, the projection matrix is

        / 1 0 0   0 \
    P = | 0 1 0   0 |
        | 0 0 1   0 |
        \ 0 0 1/d 0 /

where d is the distance from the eye to the projection plane, so d = 1. I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column-vector form:

        / x \
    V = | y |
        | z |
        \ 1 /

After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon. Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.
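As a sanity check, the composed matrix can be multiplied out by hand. With a = 14.1 and d = 1, M = P x C x W works out to (same ASCII convention as above):

        / a 0 0  0 \
    M = | 0 a 0  0 |
        | 0 0 a 10 |
        \ 0 0 a 10 /

so a vertex (x, y, z, 1) maps to (ax, ay, az + 10, az + 10), and after the divide the screen coordinates are simply x' = ax / (az + 10) and y' = ay / (az + 10). Note that any vertex with az + 10 <= 0 lies behind the eye, and the divide flips its coordinates; if the dragon's bounding volume (after the 14.1x scale) crosses that plane, triangles have to be clipped against the near plane before the divide, which is a common source of exactly this kind of garbled output in a software renderer.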

    Read the article

  • How should I start refactoring my mostly-procedural C++ application?

    - by oob
We have a program written in C++ that is mostly procedural, but we do use some C++ containers from the standard library (vector, map, list, etc.). We are constantly making changes to this code, so I wouldn't call it a stagnant piece of legacy code that we can just wrap up. There are a lot of issues with this code making it harder and harder for us to make changes, but I see the three biggest being:

    1. Many of the functions do more (way more) than one thing.
    2. We violate the DRY principle left and right.
    3. We have global variables and global state up the wazoo.

I was thinking we should attack areas 1 and 2 first. Along the way, we can "de-globalize" our smaller functions from the bottom up, by passing information that is currently global into the lower-level functions as parameters from the higher-level functions, and then concentrate on removing the need for global variables as much as possible. I just finished reading Code Complete 2 and The Pragmatic Programmer, and I learned a lot, but I am feeling overwhelmed. I would like to implement unit testing, change from a procedural to an OO approach, automate testing, use a better logging system, fully validate all input, implement better error handling, and many other things, but I know that if we started all of this at once, we would screw ourselves. I am thinking the three I listed are the most important to start with. Any suggestions are welcome. We are a team of two programmers, mostly with experience in in-house scripting. It is going to be hard to justify taking the time to refactor, especially if we can't bill the time to a client. Believe it or not, this project has been successful enough to keep us busy full time and also keep several consultants busy using it for client work.

    Read the article

  • XNA - 2D Rotation of an object to a selected direction

    - by lobsterhat
I'm trying to figure out the best way of rotating an object toward the directional input of the user; I'm attempting to mimic making turns on ice skates. For instance, if the player is moving right and the input is down and left, the player should start rotating to the right a set amount each tick. I'll calculate a new vector based on the current velocity and rotation and apply that to the current velocity; that should give me nice arcing turns, correct? At the moment I've got eight if/else statements, one for each key combination, which in turn check the current rotation:

    // Rotate to 225
    if (keyboardState.IsKeyDown(Keys.Up) && keyboardState.IsKeyDown(Keys.Left))
    {
        // Rotate right
        if (rotation >= 45 || rotation < 225)
        {
            rotation += ROTATION_PER_TICK;
        }
        // Rotate left
        else if (rotation < 45 || rotation > 225)
        {
            rotation -= ROTATION_PER_TICK;
        }
    }

This seems like a sloppy way to do it, and eventually I'll need to run this check about 10 times a tick. Any help toward a more efficient solution is appreciated.
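A more compact alternative, hedged as a sketch: derive the target angle from the input vector with atan2 and always rotate through the shortest signed difference, which collapses the eight key-combination cases into one code path (inputX/inputY are assumed to be -1/0/+1 from the arrow keys):

    #include <cmath>

    // Desired heading from the input axes (e.g. up+left -> (-1, -1)).
    float targetAngle = std::atan2(inputY, inputX) * 180.0f / 3.14159265f;

    // Shortest signed difference, wrapped into [-180, 180).
    float diff = std::fmod(targetAngle - rotation + 540.0f, 360.0f) - 180.0f;

    // Step toward the target without overshooting.
    if (std::fabs(diff) <= ROTATION_PER_TICK)
        rotation = targetAngle;
    else
        rotation += (diff > 0) ? ROTATION_PER_TICK : -ROTATION_PER_TICK;

The wrap trick (add 540, mod 360, subtract 180) keeps the difference in [-180, 180), so the turn always goes the short way around regardless of which of the eight directions is requested.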

    Read the article

  • Why aren't my other two constant buffers being updated in the shader?

    - by Paul Ske
I posted previously about my two dynamic buffers not updating in the shader. The tessellation buffer isn't working, because I have to manually update the tessellation factor inside the hull shader, and I believe the camera position isn't updating either, because when I perform distance adaptation the far edges are more tessellated than what's actually in front of the camera. I have all the buffers set to dynamic. Inside the render loop I have them set as:

    ID3D11Buffer *multiBuffers[3];
    devcon->VSSetConstantBuffers(0, 3, multiBuffers);
    ...
    devcon->DSSetConstantBuffers(0, 3, multiBuffers);

I only got that from a DirectX sample. Inside the shader file I have the three cbuffer structs:

    cbuffer ConstantBuffer
    {
        float4x4 WorldMatrix;
        float4x4 viewMatrix;
        float4x4 projectionMatrix;
        float4x4 modelWorldMatrix; // the rotation matrix
        float3 lightvec;           // the light's vector
        float4 lightcol;           // the light's color
        float4 ambientcol;         // the ambient light's color
        bool isSelected;
    }

    cbuffer cameraBuffer
    {
        float3 cameraDirection;
        float padding;
    }

    cbuffer TessellationBuffer
    {
        float tessellationAmount;
        float3 padding2;
    }

Am I missing something, or does anyone know why my buffers wouldn't update in the shader file?
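One thing the fragment above never shows is the array being filled: VSSetConstantBuffers binds whatever pointers are in multiBuffers, so each slot has to hold a created and updated ID3D11Buffer, and the slots have to line up with the cbuffer registers. A hedged sketch, assuming the three buffers already exist and are D3D11_USAGE_DYNAMIC:

    // Slot order must match the registers in the shader
    // (cbuffer ... : register(b0) / register(b1) / register(b2)).
    multiBuffers[0] = constantBuffer;      // per-object matrices, light, etc.
    multiBuffers[1] = cameraBuffer;        // cameraDirection + padding
    multiBuffers[2] = tessellationBuffer;  // tessellationAmount + padding

    // Update the dynamic buffers each frame before binding them.
    D3D11_MAPPED_SUBRESOURCE mapped;
    devcon->Map(cameraBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    memcpy(mapped.pData, &cameraData, sizeof(cameraData));
    devcon->Unmap(cameraBuffer, 0);

    devcon->VSSetConstantBuffers(0, 3, multiBuffers);
    devcon->HSSetConstantBuffers(0, 3, multiBuffers);  // the hull shader reads the
    devcon->DSSetConstantBuffers(0, 3, multiBuffers);  // tessellation factor, too

Two things worth checking: the hull shader needs its own HSSetConstantBuffers call (binding VS and DS alone never reaches it), and explicit register(bN) annotations are worth adding, since without them the compiler assigns registers only to cbuffers that are actually used and the slot numbers can differ from the binding order.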

    Read the article

  • Point in Polygon, Ray Method: ending infinite line

    - by user2878528
Having a bit of trouble with point-in-polygon collision detection using the ray method, i.e. http://en.wikipedia.org/wiki/Point_in_polygon. My problem is that I need to give an end to the infinite line created: with a truly infinite line I always get an even number of intersections and hence an invalid result. That is, I need to ignore any intersection to the right of the point being checked. (Images: what I have / what I want.) My current code, based on Mecki's awesome response:

    for (int side = 0; side < vertices.Length; side++)
    {
        // Test if the current side intersects with the ray.
        // Create an infinite line; see: http://en.wikipedia.org/wiki/Linear_equation
        a = end_point.Y - start_point.Y;
        b = start_point.X - end_point.X;
        c = end_point.X * start_point.Y - start_point.X * end_point.Y;

        // Insert the side's end points into the line equation.
        d2 = a * vertices[side].Position.X + b * vertices[side].Position.Y + c;
        if (side - 1 < 0)
            d1 = a * vertices[vertices.Length - 1].Position.X + b * vertices[vertices.Length - 1].Position.Y + c;
        else
            d1 = a * vertices[side - 1].Position.X + b * vertices[side - 1].Position.Y + c;

        // If the points are on opposite sides, count an intersection.
        if (d1 > 0 && d2 < 0) intersections++;
        if (d1 < 0 && d2 > 0) intersections++;
    }

    // If the intersection count is odd, the point is inside.
    inside = ((intersections % 2) == 1);
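The classic fix is to make the ray a half-line by construction instead of testing side-of-line for the full infinite line. The widely used even-odd crossing test (PNPOLY-style) casts a horizontal ray to one side of the point, counting only edges that straddle the point's Y and whose crossing lies on that side. A hedged adaptation to the structures above ("point" is the assumed test point):

    bool inside = false;
    int j = vertices.Length - 1;
    for (int i = 0; i < vertices.Length; j = i++)
    {
        float xi = vertices[i].Position.X, yi = vertices[i].Position.Y;
        float xj = vertices[j].Position.X, yj = vertices[j].Position.Y;

        // The edge must straddle the ray's Y, and the crossing must lie to
        // the right of the test point -- this is what "ends" the ray.
        bool straddles = (yi > point.Y) != (yj > point.Y);
        if (straddles &&
            point.X < (xj - xi) * (point.Y - yi) / (yj - yi) + xi)
        {
            inside = !inside;
        }
    }

Because the straddle check requires one endpoint above and one below the ray, the division is never by zero, and intersections on the wrong side of the point are rejected by the X comparison rather than counted.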

    Read the article

  • Translating an object along its heading

    - by Kuros
I am working on a simulation that requires me to have several objects moving around in 3D space (text output of their current position on the grid and heading is fine; I do not need graphics), and I am having some trouble getting objects to move along their relative headings. I have a basic understanding of vectors and matrices. I am using a vector to represent position, and I am also using Euler angles. I can translate one of my entities with a matrix along any axis, and I can alter their heading. For example, if I have an entity at (order is XYZ) 1, 1, 1 with a heading of 0, I can apply a translation matrix to get it to walk to 1, 1, 2 fine. However, if I change its heading to 270, it still walks to 1, 1, 3 instead of 2, 1, 2 as I desire. I have a feeling my problem lies in not translating my matrix from world space to object space, but I am not sure how to go about that. How can I do this? Addition: I am using 3D vectors to represent the current position and the heading (using the three Euler angles). For now, all I want to do is have an entity walk in a square, reporting its current position at each step. So, assuming it starts at 10, 10, 10, I want it to walk as follows:

    10, 10, 10 -> 10, 10, 15
    10, 10, 15 ->  5, 10, 15
     5, 10, 15 ->  5, 10, 10
     5, 10, 10 -> 10, 10, 10

My 1-Z-unit translation matrix is as follows:

    [1 0 0 0]
    [0 1 0 0]
    [0 0 1 1]
    [0 0 0 1]

My rotation matrix is as follows:

    [ 0 0 1 0]
    [ 0 1 0 0]
    [-1 0 0 0]
    [ 0 0 0 1]
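A hedged sketch of the usual fix: instead of translating along the world Z axis and then rotating, derive the forward direction from the yaw and add it to the position directly. Equivalently, transform the local (0, 0, 1) offset by the rotation matrix first, then translate by the result; for a yaw rotation about Y, that offset is just sin/cos of the angle.

    #include <cmath>

    // Unit forward vector for the current yaw (radians) about the Y axis:
    // yaw = 0 walks +Z, yaw = 90 degrees walks +X (matching the matrix above).
    float fx = std::sin(yaw);
    float fz = std::cos(yaw);

    // Step 'distance' units along the current heading.
    pos.x += fx * distance;
    pos.z += fz * distance;

To walk the square in the example, the entity then just alternates "step 5 units" with "turn 90 degrees" (added to yaw, converted to radians); the positions fall out of sin/cos without any explicit world-to-object conversion.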

    Read the article

  • Informing GUI objects about screen size - Designing

    - by Mosquito
I have a problem with designing classes for the game I am creating. In my app there is:

    class CGame, which contains all the information about the game itself, e.g. screen width, screen height, etc. In the main() function I create a pointer to a CGame instance.

    class CGUIObject, which includes fields specifying its position and a draw() method, which should know how to draw an object according to the screen size.

    class CGUIManager, which is a singleton and includes a list of CGUIObjects. For each object in the list it just calls the draw() method.

For clarity's sake, here is some simple code:

    class CGame
    {
        int screenWidth;
        int screenHeight;
    };

    class CGUIObject
    {
        CPoint position;
        void draw(); // this one needs to know the screen's width and height
    };

    class CGUIManager // it's a singleton
    {
        vector<CGUIObject*> guiObjects;
        void drawObjects();
    };

And the main.cpp:

    CGame* g;

    int main()
    {
        g = new CGame();
        while (1)
        {
            CGUIManager::Instance().drawObjects();
        }
        return 0;
    }

Now the problem is that each CGUIObject needs to know the screen size, which is held by CGame, but I find it very dumb to include a pointer to the CGame instance in every object. Could anyone please tell me what would be the best approach to achieve this?
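One common middle ground, hedged as a sketch: keep CGame out of the GUI entirely and pass the screen size down through the draw call, so the information travels with the call instead of living in every object (ScreenInfo is a hypothetical name):

    #include <vector>

    // The only thing the GUI needs from CGame.
    struct ScreenInfo {
        int width;
        int height;
    };

    class CGUIObject {
        CPoint position;
    public:
        void draw(const ScreenInfo& screen);   // size arrives as a parameter
    };

    class CGUIManager {
        std::vector<CGUIObject*> guiObjects;
    public:
        void drawObjects(const ScreenInfo& screen) {
            for (CGUIObject* obj : guiObjects)
                obj->draw(screen);
        }
    };

    // In the main loop (assumes the width/height fields are made accessible;
    // in the original they are private):
    // CGUIManager::Instance().drawObjects({ g->screenWidth, g->screenHeight });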

    Read the article

  • Space Invaders-type game: Keeping the enemies aligned with each other as they turn around?

    - by CorundumGames
    OK, so here's the lowdown of the problem I'm trying to solve. I'm developing a game in PyGame that's a cross between Space Invaders and Columns. I'm trying to make the motion of the enemies similar to that of the aliens in Space Invaders; that is, they're all clustered in a grid, and if even one hits the side of the screen, the entire formation moves down and turns around. However, the motion of these aliens is continuous (as continuous as a monitor can be, anyway), not on a discrete grid like in the original. The enemies are instances of an Enemy class, and in turn they're held by a 2D array in a enemysquadron module (which, if you don't use Python, is in this case essentially a singleton due to the way Python modules work). Inside the Enemy class I have a class-scope velocity vector that is reversed every time an Enemy object touches the edge of the screen. This won't do, though, because as time goes on the enemies just become disorganized and jumbled (i.e. not in a grid as planned). I haven't implemented the Enemies going downward yet, so let's not worry about that right now. Any tips?
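One structure that keeps the grid intact by construction, sketched here in C++-style pseudocode since the idea ports directly to the Enemy class and the enemysquadron module: store a single formation origin plus per-enemy grid offsets, move only the origin, and derive every enemy's position from it.

    struct Formation {
        Vec2  origin;   // moves horizontally, steps down at screen edges
        float vx;       // one shared horizontal velocity for the whole grid
    };

    // Each enemy only remembers its slot in the grid.
    struct Enemy {
        int row, col;
        Vec2 position(const Formation& f) const {
            return Vec2(f.origin.x + col * SPACING_X,
                        f.origin.y + row * SPACING_Y);
        }
    };

    // Per frame: bounce the formation, not individual enemies.
    formation.origin.x += formation.vx * dt;
    if (leftmostAliveEdge() < 0 || rightmostAliveEdge() > screenWidth)
        formation.vx = -formation.vx;   // one turnaround for everyone

Because positions are derived from one origin, the enemies can never drift out of alignment, and the edge test uses the bounding box of the surviving columns (the helper names here are hypothetical) rather than per-enemy checks that can fire at slightly different times.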

    Read the article

  • How can I render a semi transparent model with OpenGL correctly?

    - by phobitor
I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders, and I wrote a simple diffuse shader for the model without any issues, but I don't know how to add transparency to it. I tried to set my fragment shader's output (gl_FragColor) to a non-opaque alpha value, but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong, so please watch this short video I recorded: http://www.youtube.com/watch?v=s0JqA0rZabE I thought this was a depth-testing issue, so I tried playing around with enabling/disabling depth testing and back-face culling. Enabling back-face culling changes the output slightly, but the problem in the video is still there; enabling/disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing, and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order-independent transparency implementations. Edit: Vertex shader:

    // color varying for fragment shader
    varying mediump vec3 LightIntensity;
    varying highp vec3 VertexInModelSpace;

    void main()
    {
        // vec4 LightPosition = vec4(0.0, 0.0, 0.0, 1.0);
        vec3 LightColor = vec3(1.0, 1.0, 1.0);
        vec3 DiffuseColor = vec3(1.0, 0.25, 0.0);

        // find the vector from the given vertex to the light source
        vec4 vertexInWorldSpace = gl_ModelViewMatrix * vec4(gl_Vertex);
        vec3 normalInWorldSpace = normalize(gl_NormalMatrix * gl_Normal);
        vec3 lightDirn = normalize(vec3(LightPosition - vertexInWorldSpace));

        // save vertexInWorldSpace
        VertexInModelSpace = vec3(gl_Vertex);

        // calculate light intensity
        LightIntensity = LightColor * DiffuseColor * max(dot(lightDirn, normalInWorldSpace), 0.0);

        // calculate projected vertex position
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

Fragment shader:

    // varyings to define color
    varying vec3 LightIntensity;
    varying vec3 VertexInModelSpace;

    void main()
    {
        gl_FragColor = vec4(LightIntensity, 0.5);
    }
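What the video shows is the classic unsorted-transparency artifact: with depth testing on, triangles drawn first occlude later ones even where they should show through, and with it off, draw order wins regardless of depth. The usual minimal recipe for simple cases, hedged: draw opaque geometry first, then enable blending, keep depth testing on but turn depth writes off, and draw transparent objects sorted back to front.

    // Opaque pass first, with normal depth state...

    // ...then the transparent pass:
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);           // test against opaque depth, don't write it

    // draw transparent meshes here, sorted far-to-near from the camera

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);

For a single roughly convex model, sorting per object is often close enough; per-triangle sorting or order-independent techniques only become necessary when self-overlap artifacts are unacceptable.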

    Read the article

  • Client-server application design issue

    - by user2547823
I have a collection of clients on the server's side, and there are some objects that need to work with that collection: adding and removing clients, sending messages to them, updating connection settings, and so on. They may perform these actions simultaneously, so a mutex or another synchronization primitive is required. I want to share one instance of the collection between these objects, but all of them require access to the collection's private fields. I hope this code sample makes it clearer [C++]:

    class Collection
    {
        std::vector<Client*> clients;
        Mutex mLock;
        ...
    };

    class ClientNotifier
    {
        void sendMessage()
        {
            mLock.lock();
            // loop over clients and send a message to each of them
        }
    };

    class ConnectionSettingsUpdater
    {
        void changeSettings(const std::string& name)
        {
            mLock.lock();
            // if a client with this name is inside the collection, change its settings
        }
    };

As you can see, all these classes require direct access to the Collection's private fields. Can you give me advice on how to implement such behaviour correctly, i.e. keeping the Collection's interface simple without it knowing about its users?
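One way to keep the interface small is to invert it: the collection owns the mutex and exposes a few thread-safe operations, and the users hand it work instead of reaching into its fields. A hedged sketch (Client::name() and Client::send() are assumed methods):

    #include <functional>
    #include <mutex>
    #include <string>
    #include <vector>

    class Collection {
        std::vector<Client*> clients;
        std::mutex mLock;
    public:
        // Run an action on every client while holding the lock.
        void forEach(const std::function<void(Client&)>& action) {
            std::lock_guard<std::mutex> guard(mLock);
            for (Client* c : clients)
                action(*c);
        }

        // Run an action on one client, looked up by name.
        void withClient(const std::string& name,
                        const std::function<void(Client&)>& action) {
            std::lock_guard<std::mutex> guard(mLock);
            for (Client* c : clients)
                if (c->name() == name) { action(*c); return; }
        }
    };

    // ClientNotifier::sendMessage() then becomes:
    //   collection.forEach([&](Client& c) { c.send(message); });

With this shape, ClientNotifier and ConnectionSettingsUpdater need neither friendship nor direct access to the vector or the mutex, and the lock/unlock pairing can no longer be forgotten.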

    Read the article

  • Following a set of points?

    - by user1010005
Let's assume I have a set of path points that an entity should follow:

    const int Paths = 2;
    Vector2D<float> Path[Paths] = { Vector2D<float>(100, 0), Vector2D<float>(100, 50) };

Now I define my entity's position as a 2D vector as follows:

    Vector2D<float> FollowerPosition(0, 0);

And now I would like to move the "follower" toward the first path point:

    int PathPosition = 0; // start with the first path point

Currently I do this:

    Vector2D<float>& Target = Path[PathPosition];
    bool Changed = false;

    if (FollowerPosition.X < Target.X) FollowerPosition.X += Vel, Changed = true;
    if (FollowerPosition.X > Target.X) FollowerPosition.X -= Vel, Changed = true;
    if (FollowerPosition.Y < Target.Y) FollowerPosition.Y += Vel, Changed = true;
    if (FollowerPosition.Y > Target.Y) FollowerPosition.Y -= Vel, Changed = true;

    if (!Changed)
    {
        PathPosition = PathPosition + 1;
        if (PathPosition >= Paths) // note: >= avoids reading past the last index
            PathPosition = 0;
    }

This works except for one little detail: the movement is not smooth! So I would like to ask if anyone sees anything wrong with my code. Thanks, and sorry for my English.
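The jitter comes from stepping each axis by a fixed Vel: near the target the position overshoots, the sign checks flip it back the next frame, and the follower oscillates instead of settling. The standard fix, hedged against the types above (Length() is an assumed helper), is to move along the normalized direction to the target and snap when closer than one step:

    Vector2D<float>& Target = Path[PathPosition];
    Vector2D<float> toTarget = Target - FollowerPosition;
    float dist = toTarget.Length();

    if (dist <= Vel) {
        FollowerPosition = Target;                    // arrive exactly, no overshoot
        PathPosition = (PathPosition + 1) % Paths;    // advance and wrap
    } else {
        FollowerPosition += toTarget * (Vel / dist);  // unit direction * speed
    }

This also removes the diagonal-speed artifact of the per-axis version, where moving on both axes at once effectively travels at Vel * sqrt(2).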

    Read the article

  • How to make other semantics behave like SV_Position?

    - by object
I'm having a lot of trouble with shadow mapping, and I believe I've found the problem: when passing vectors from the vertex shader to the pixel shader, does the hardware automatically change any of the values based on the semantic? I've put together a barebones pair of shaders that illustrate the problem. Vertex shader:

    struct Vertex
    {
        float3 position : POSITION;
    };

    struct Pixel
    {
        float4 position       : SV_Position;
        float4 light_position : POSITION;
    };

    cbuffer Matrices
    {
        matrix projection;
    };

    Pixel RenderVertexShader(Vertex input)
    {
        Pixel output;
        output.position = mul(float4(input.position, 1.0f), projection);
        // We simply pass the same vector in screen space through a different semantic.
        output.light_position = output.position;
        return output;
    }

And a simple pixel shader to go along with it:

    struct Pixel
    {
        float4 position       : SV_Position;
        float4 light_position : POSITION;
    };

    float4 RenderPixelShader(Pixel input) : SV_Target
    {
        // At this point, (input.position.z / input.position.w) is a normal depth value.
        // However, (input.light_position.z / input.light_position.w) is 0.999f or similar.
        // If the primitive is touching the near plane, it very quickly goes to 0.
        return (0.0f).rrrr;
    }

How can I make the hardware treat light_position the same way position is treated between the vertex and pixel shaders? EDIT: Aha! (input.position.z) without dividing by w is the same as (input.light_position.z / input.light_position.w). Not sure why this is.
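A hedged note on the EDIT: this is expected behavior. The rasterizer applies the perspective divide and viewport transform only to the value bound to SV_Position before the pixel shader runs; every other interpolant is perspective-correctly interpolated but never divided, so it arrives still in clip space. There is no way to make POSITION behave like SV_Position automatically; the usual pattern is to do the divide manually on the extra semantic:

    // In the pixel shader: recover the same depth the SV_Position input shows.
    float depth = input.light_position.z / input.light_position.w;

Doing the divide in the vertex shader instead would also work numerically, but it sacrifices perspective-correct interpolation across the triangle, so the per-pixel divide is the safer default.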

    Read the article

  • How to configure Bullet for LookAt?

    - by AllCoder
I'm having problems positioning Bullet objects. I am doing:

    ToolVec3 origin = ToolVec3( obj_posx, obj_posy, obj_posz );
    ToolVec3 vmod = ToolVec3( object_sizex / 2.0f, object_sizey / 2.0f, object_sizez / 2.0f );

    btTransform shapeTransform = btTransform::getIdentity();
    shapeTransform.setOrigin( btVector3(origin.x + vmod.x, origin.y + vmod.y, origin.z + vmod.z) );

    btDefaultMotionState* myMotionState = new btDefaultMotionState(shapeTransform);
    btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, myMotionState, m_collisionShapes[2], localInertia);
    btRigidBody* body = new btRigidBody(rbInfo);

I then do:

    btCollisionObject* colObj = m_dynamicsWorld->getCollisionObjectArray()[i];
    btRigidBody* body = btRigidBody::upcast(colObj);
    if (body && body->getMotionState())
    {
        btDefaultMotionState* myMotionState = (btDefaultMotionState*)body->getMotionState();
        myMotionState->m_graphicsWorldTrans.getOpenGLMatrix(m);
    }
    else
    {
        colObj->getWorldTransform().getOpenGLMatrix(m);
    }

After obtaining the matrix m, I use it as the model matrix. I am observing a few things:

    • I must add a weird "size / 2" offset to the object's position to have it drawn normally.
    • I have the following "up" look-at vector defined: (0.0f, -1.0f, 0.0f). Basically, Y grows up and Z grows forward (into the monitor), BUT X grows LEFT; I think there is some conflict with the X direction.
    • I cannot obtain consistent positioning with the world set up like this.

How do I configure this in Bullet? Why the weird + size/2 requirement?
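On the size/2 part, a hedged observation: Bullet shapes are defined around the body's origin, e.g. btBoxShape takes half extents and is centered on it, while model files often keep their origin at a corner or at the base. If that is the case here, the offset belongs with the rendering (or baked into the mesh), not with the body transform:

    // Box centered on the body's origin, half extents = size / 2.
    btBoxShape* box = new btBoxShape(
        btVector3(object_sizex * 0.5f, object_sizey * 0.5f, object_sizez * 0.5f));

    // Keep the rigid body at the true center of the object...
    shapeTransform.setOrigin(btVector3(origin.x, origin.y, origin.z));

    // ...and apply the corner-to-center offset on the graphics side instead,
    // so physics and rendering agree on where the object is.

On the axes: Bullet itself is coordinate-system agnostic and simply works in whatever frame it is fed, so a mirrored X usually points at the view matrix rather than the physics. An up vector of (0, -1, 0) in a lookAt-style view effectively rotates the image 180 degrees about the view axis, which flips both screen X and screen Y and can produce exactly the "X grows left" impression described above.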

    Read the article

  • 3D Box Collision Data Import

    - by cboe
I'm trying to implement a collision system using oriented bounding boxes, with a center for the box, its extents as a 3D vector, and a rotation matrix, which is all stuff I picked up online and seems to be somewhat the standard. Detecting the center is no problem, so I'm going to leave that out here. My problem, however, is importing the data from a 3D file. Say I've placed a box with two units length on each side, aligned to the world axes. The logical result here is extents of (1, 1, 1), and I use an identity matrix for rotation; easy. However, I'm stuck when I rotate the box in the 3D program, say 30 degrees on each axis. How would I parse the box? I only have the 8 vertices as information, and I guess what I would need to do is find the rotation of the box, apply it to the vertices so they are aligned to the world axes, and then calculate the extents from that. How do I get the rotation of the box when I only have the vertex information available?
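A hedged sketch of the usual recovery, assuming the exporter preserves vertex order so that the three edges sharing one corner can be identified (Vec3/Mat3 are assumed math types): the normalized edge directions are exactly the box's local axes, i.e. the columns of the rotation matrix, and half the edge lengths are the extents.

    // v0..v7: the 8 corners; v1, v2, v3 each share an edge with v0.
    Vec3 e1 = v1 - v0;   // local X edge
    Vec3 e2 = v2 - v0;   // local Y edge
    Vec3 e3 = v3 - v0;   // local Z edge

    Vec3 extents(length(e1) * 0.5f, length(e2) * 0.5f, length(e3) * 0.5f);

    // Rotation matrix columns = unit edge directions.
    Mat3 rotation = Mat3::fromColumns(normalize(e1), normalize(e2), normalize(e3));

    // Center = corner plus half of the three edges (or average of all 8 corners).
    Vec3 center = v0 + (e1 + e2 + e3) * 0.5f;

If the vertex order is not guaranteed, the edge neighbors of v0 can still be picked out by orthogonality: the three edges at a box corner are mutually perpendicular, while face and body diagonals are not perpendicular to the edges, so starting from any candidate neighbor, keep only directions orthogonal to the ones already chosen.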

    Read the article

  • Implementing Explosions

    - by Xkynar
I want to add explosions to my 2D game, but I'm having a hard time with the architecture. Several game elements might be responsible for explosions, say, explosive barrels and bullets (and there might be chain reactions with nearby barrels). The only options I can come up with are:

1 - Having an array of explosions and treating them as a game element as important as any other.

Pros: Having a single array which is updated and drawn with all the other game element arrays keeps things organized and simple to update, and the explosive barrels would at first glance be easy to create, simply by passing the explosion array as a pointer to each explosive barrel constructor.

Cons: It might be hard for the bullets to add an explosion to the vector, since bullets are shot by a Weapon class which is located in every mob. Say I create a new enemy and add it to the enemy array; that enemy has a weapon and the functions to use it, and if I want the weapon (a rocket launcher in this case) to have access to the explosions array so it can add a new one, I'd have to pass the explosion array as a pointer to the enemy, which would then pass it to the weapon, which would pass it to the bullets (an ugly chain). Another problem I can think of is a little more subtle: if I'm checking collisions between explosions and barrels (to create a chain reaction) and I add a new explosion while I'm iterating over the explosions, Java will throw an exception. So that's annoying: I can't iterate through the explosions and add a new explosion at the same time; I must do it another way...

2 - The other option, which isn't really well thought out yet, is to just add an explosive component to every element that might explode, so that when it dies, it explodes or something; but I don't have good ideas for implementing this theory either.

Honestly, I don't like either solution, so I'd like to know how actual game developers usually do this. Sorry if my problem seems trivial.
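For the chain-reaction problem specifically, the standard trick is to never add to the list being iterated: collect newly triggered explosions in a pending queue and drain it after (or instead of) the pass. A hedged sketch in C++-style pseudocode; the same shape works in Java with an ArrayDeque.

    #include <deque>
    #include <vector>

    std::deque<Explosion> pending;   // explosions requested this frame

    // Anyone (bullets, barrels) asks the world for an explosion:
    void requestExplosion(const Vec2& pos) { pending.push_back(Explosion(pos)); }

    // Update: drain the queue; handlers may push new requests safely.
    while (!pending.empty()) {
        Explosion e = pending.front();
        pending.pop_front();
        explosions.push_back(e);               // becomes a live game element
        for (Barrel& b : barrels)
            if (!b.exploded && e.overlaps(b)) {
                b.exploded = true;
                requestExplosion(b.position);  // chain reaction, no iterator issues
            }
    }

A single requestExplosion entry point also dissolves the ugly pointer chain: weapons and barrels hold a reference to whatever owns the queue (or a callback to it), instead of having the explosion array threaded through every constructor.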

    Read the article

  • C++ Parallel Asynchronous Task

    - by Doodlemeat
I am currently building a randomly generated terrain game where terrain is created automatically around the player. I am experiencing lag when the generation process is active, as I am running quite heavy tasks with post-processing and creating physics bodies. It then came to mind to use a parallel asynchronous task to do the post-processing for me, but I have no idea how to go about that. I have searched for C++ std::async, but I believe that is not what I want: in the examples I found, a task returned something, whereas I want the task to change objects in the main program. This is what I want:

    // Main program

    // Chunks that need to be processed.
    // NOTE: these chunks are already generated, but need post-processing only!
    std::vector<Chunk*> unprocessedChunks;

And then my task could look something like this, running like a loop that constantly checks whether there are chunks to process:

    // Asynced task
    if (unprocessedChunks.size() > 0)
    {
        processChunk(unprocessedChunks.pop());
    }

I know it's far from easy as I wrote it, but it would be a huge help if you could push me in the right direction. In Java, I could type something like this:

    asynced_task = startAsyncTask(new PostProcessTask());

And that task would run until I do this:

    asynced_task.cancel();
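std::async can do this, but the more natural fit for a run-forever background worker is a thread draining a shared queue under a mutex, with a condition variable so it sleeps when there is nothing to do. A hedged sketch, reusing processChunk from above:

    #include <atomic>
    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <thread>

    std::deque<Chunk*> unprocessedChunks;
    std::mutex mtx;
    std::condition_variable cv;
    std::atomic<bool> running{true};

    void workerLoop() {
        while (running) {
            Chunk* chunk = nullptr;
            {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [] { return !unprocessedChunks.empty() || !running; });
                if (!running) break;
                chunk = unprocessedChunks.front();
                unprocessedChunks.pop_front();
            }
            processChunk(chunk);   // heavy work happens off the main thread
        }
    }

    // Main thread: hand over a chunk and wake the worker.
    void enqueue(Chunk* c) {
        { std::lock_guard<std::mutex> lock(mtx); unprocessedChunks.push_back(c); }
        cv.notify_one();
    }

    // Shutdown: running = false; cv.notify_all(); workerThread.join();

One caveat: if processChunk creates physics bodies or GPU resources, those APIs are often not thread-safe, so the worker should produce finished data and hand it back for the main thread to do the final registration step.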

    Read the article

  • Scale an image with unscalable parts

    - by Uko
Brief description of the problem: imagine having some vector picture(s) with text annotations on the sides, outside of the picture(s). The task is to scale the whole composition, preserving the aspect ratio, in order to fit some view-port. The tricky part is that the text is not scalable, only the picture(s). The distance between the text and the image stays relative to the whole image, but the text size is always constant. Example: let's assume our composition is two times larger than the view-port; then we can just scale it by 1/2. But because the text parts have a fixed font size, they will come out larger than we expect and won't fit in the view-port. One option I can think of is an iterative process in which we repeatedly scale the composition until the delta between it and the view-port satisfies some precision. But this algorithm is quite costly, as it involves working with the graphics, and the image may be composed of many components, which leads to a lot of matrix computations. What's more, this solution seems hard to debug, extend, etc. Are there any other approaches to this scaling problem?
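If only the picture and the gaps scale while the text keeps its size, the iteration can usually be avoided: the final extent is an affine function of the scale factor, total(s) = s * (w_image + gaps) + w_text, so the factor that exactly fits the view-port is a one-step solve. A hedged sketch of the idea, assuming the annotation text does not re-wrap as space changes:

    // Solve s per dimension, then take the smaller to preserve the aspect ratio.
    // textWidthTotal / textHeightTotal: summed fixed-size text extents at the borders;
    // scaledWidth / scaledHeight: the parts that scale (image plus relative gaps).
    float sx = (viewportWidth  - textWidthTotal)  / scaledWidth;
    float sy = (viewportHeight - textHeightTotal) / scaledHeight;
    float s  = (sx < sy) ? sx : sy;

The equation stays linear in s no matter how many annotated components there are, so a single measurement pass of the fixed text extents replaces the whole iterate-and-remeasure loop.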

    Read the article
