Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.

Page 497 of 1016

  • OpenGL depth texture wrong

    - by CoffeeandCode
    I have been writing a game engine for a while now and have decided to reconstruct my positions from depth... but how I read the depth seems to be wrong. :/ What is wrong in my rendering?

    How I init my depth texture in the FBO:

        gl::BindTexture(gl::TEXTURE_2D, this->textures[0]); // Depth
        gl::TexImage2D(
            gl::TEXTURE_2D, 0, gl::DEPTH32F_STENCIL8, width, height, 0,
            gl::DEPTH_STENCIL, gl::FLOAT_32_UNSIGNED_INT_24_8_REV, nullptr
        );
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MAG_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MIN_FILTER, gl::NEAREST);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_S, gl::CLAMP_TO_EDGE);
        gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_T, gl::CLAMP_TO_EDGE);
        gl::FramebufferTexture2D(
            gl::FRAMEBUFFER, gl::DEPTH_STENCIL_ATTACHMENT, gl::TEXTURE_2D,
            this->textures[0], 0
        );

    Linear depth readings in my shader. Vertex:

        #version 150
        layout(location = 0) in vec3 position;
        layout(location = 1) in vec2 uv;
        out vec2 uv_f;
        void main(){
            uv_f = uv;
            gl_Position = vec4(position, 1.0);
        }

    Fragment (where the issue probably is):

        #version 150
        uniform sampler2D depth_texture;
        in vec2 uv_f;
        out vec4 Screen;
        void main(){
            float n = 0.00001;
            float f = 100.0;
            float z = texture(depth_texture, uv_f).x;
            float linear_depth = (n * z)/(f - z * (f - n));
            Screen = vec4(linear_depth); // It ISN'T because I don't separate alpha
        }

    When rendered, the result looks wrong. So, what's wrong with my rendering/GLSL?
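
    For reference, a commonly used linearization, assuming a standard perspective projection and a depth sample in the usual [0, 1] range (the function and parameter names below are illustrative, not from the engine above):

        // Convert a [0, 1] depth-buffer sample back to positive eye-space depth,
        // assuming a standard perspective projection with near plane n and far plane f.
        float LinearizeDepth(float depthSample, float n, float f)
        {
            float zNdc = depthSample * 2.0f - 1.0f;           // back to NDC [-1, 1]
            return (2.0f * n * f) / (f + n - zNdc * (f - n)); // eye-space distance
        }

    Note also that a near plane as small as 0.00001 concentrates nearly all depth precision right next to the camera, which makes any linearized result hard to judge visually.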

    Read the article

  • 2D game big background images for maps

    - by WhiteCat
    Update: this question is general, not specific to Sprite Kit or a single language/platform. I'm toying with Sprite Kit with an idea to make a 2D side-scroller. Now, the backgrounds for the maps are going to be hand-drawn and surely bigger than a retina display, so the maps could span more than 1 screen on both axes. I imagine loading such a huge image could mean trouble, and I don't plan to use tiling. I'm not sure how Sprite Kit splits images bigger than the max texture size, if it does. I could split the images myself and use more sprites for each part of the background. What is the usual way to handle this?
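
    A common approach, independent of Sprite Kit: cut the large background into chunks that each fit within the GPU's maximum texture size, create one sprite per chunk, and only draw (or keep loaded) the chunks that overlap the camera. A minimal sketch of the chunking arithmetic, with hypothetical names:

        #include <algorithm>
        #include <vector>

        struct Rect { int x, y, w, h; };

        // Split an imageW x imageH background into sub-rectangles no larger than
        // maxTexSize (e.g. 2048 or 4096 on typical hardware).
        std::vector<Rect> SplitIntoChunks(int imageW, int imageH, int maxTexSize)
        {
            std::vector<Rect> chunks;
            for (int y = 0; y < imageH; y += maxTexSize)
                for (int x = 0; x < imageW; x += maxTexSize)
                    chunks.push_back({ x, y,
                                       std::min(maxTexSize, imageW - x),
                                       std::min(maxTexSize, imageH - y) });
            return chunks;
        }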

    Read the article

  • AABB vs OBB Collision Resolution jitter on corners

    - by patt4179
    I've implemented a collision library for a character who is an AABB, and am resolving collisions between AABB vs AABB and AABB vs OBB. I wanted slopes for certain sections, so I've toyed around with using several OBBs to make one, and it's working great except for one glaring issue: the collision resolution on the corner of an OBB makes the player's AABB jitter up and down constantly. I've tried a few things I've thought of, but I just can't wrap my head around what's going on exactly. Here's a video of what's happening, as well as my code.

    Here's the function to get the collision resolution (I'm likely not doing this the right way, so this may be where the issue lies):

        public Vector2 GetCollisionResolveAmount(RectangleCollisionObject resolvedObject, OrientedRectangleCollisionObject b)
        {
            Vector2 overlap = Vector2.Zero;
            LineSegment edge = GetOrientedRectangleEdge(b, 0);

            if (!SeparatingAxisForRectangle(edge, resolvedObject))
            {
                LineSegment rEdgeA = new LineSegment(), rEdgeB = new LineSegment();
                Range axisRange = new Range(), rEdgeARange = new Range(), rEdgeBRange = new Range(), rProjection = new Range();
                Vector2 n = edge.PointA - edge.PointB;

                rEdgeA.PointA = RectangleCorner(resolvedObject, 0);
                rEdgeA.PointB = RectangleCorner(resolvedObject, 1);
                rEdgeB.PointA = RectangleCorner(resolvedObject, 2);
                rEdgeB.PointB = RectangleCorner(resolvedObject, 3);
                rEdgeARange = ProjectLineSegment(rEdgeA, n);
                rEdgeBRange = ProjectLineSegment(rEdgeB, n);
                rProjection = GetRangeHull(rEdgeARange, rEdgeBRange);
                axisRange = ProjectLineSegment(edge, n);

                float axisMid = (axisRange.Maximum + axisRange.Minimum) / 2;
                float projectionMid = (rProjection.Maximum + rProjection.Minimum) / 2;

                if (projectionMid > axisMid)
                {
                    overlap.X = axisRange.Maximum - rProjection.Minimum;
                }
                else
                {
                    overlap.X = rProjection.Maximum - axisRange.Minimum;
                    overlap.X = -overlap.X;
                }
            }

            edge = GetOrientedRectangleEdge(b, 1);
            if (!SeparatingAxisForRectangle(edge, resolvedObject))
            {
                LineSegment rEdgeA = new LineSegment(), rEdgeB = new LineSegment();
                Range axisRange = new Range(), rEdgeARange = new Range(), rEdgeBRange = new Range(), rProjection = new Range();
                Vector2 n = edge.PointA - edge.PointB;

                rEdgeA.PointA = RectangleCorner(resolvedObject, 0);
                rEdgeA.PointB = RectangleCorner(resolvedObject, 1);
                rEdgeB.PointA = RectangleCorner(resolvedObject, 2);
                rEdgeB.PointB = RectangleCorner(resolvedObject, 3);
                rEdgeARange = ProjectLineSegment(rEdgeA, n);
                rEdgeBRange = ProjectLineSegment(rEdgeB, n);
                rProjection = GetRangeHull(rEdgeARange, rEdgeBRange);
                axisRange = ProjectLineSegment(edge, n);

                float axisMid = (axisRange.Maximum + axisRange.Minimum) / 2;
                float projectionMid = (rProjection.Maximum + rProjection.Minimum) / 2;

                if (projectionMid > axisMid)
                {
                    overlap.Y = axisRange.Maximum - rProjection.Minimum;
                    overlap.Y = -overlap.Y;
                }
                else
                {
                    overlap.Y = rProjection.Maximum - axisRange.Minimum;
                }
            }

            return overlap;
        }

    And here is what I'm doing to resolve it right now:

        if (collisionDetection.OrientedRectangleAndRectangleCollide(obb, player.PlayerCollision))
        {
            var resolveAmount = collisionDetection.GetCollisionResolveAmount(player.PlayerCollision, obb);

            if (Math.Abs(resolveAmount.Y) < Math.Abs(resolveAmount.X))
            {
                var roundedAmount = (float)Math.Floor(resolveAmount.Y);
                player.PlayerCollision._position.Y -= roundedAmount;
            }
            else if (Math.Abs(resolveAmount.Y) <= 30.0f) // Catch cases where the player should be able to step over the top of something
            {
                var roundedAmount = (float)Math.Floor(resolveAmount.Y);
                player.PlayerCollision._position.Y -= roundedAmount;
            }
            else
            {
                var roundedAmount = (float)Math.Floor(resolveAmount.X);
                player.PlayerCollision._position.X -= roundedAmount;
            }
        }

    Can anyone see what might be the issue here, or has anyone experienced this before and knows a possible solution? I've tried for a few days to figure this out on my own, but I'm just stumped.
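
    Not a diagnosis of the code above, but for comparison: the usual minimum-translation approach resolves along the axis of least penetration and applies the correction without rounding. Flooring a small correction (as Math.Floor does above) turns sub-pixel overlaps into whole-pixel pushes, which is a classic source of corner jitter. A language-neutral sketch with hypothetical types:

        #include <cmath>

        struct Vec2 { float x, y; };

        // Given the signed overlap on each axis, push the box out along the axis
        // of least penetration, keeping the fractional part of the correction.
        Vec2 ResolveLeastPenetration(Vec2 overlap)
        {
            Vec2 correction{ 0.0f, 0.0f };
            if (std::fabs(overlap.x) < std::fabs(overlap.y))
                correction.x = -overlap.x;   // push out horizontally
            else
                correction.y = -overlap.y;   // push out vertically
            return correction;               // apply as-is, without flooring
        }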

    Read the article

  • Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when I was creating my rivers. I would set the river's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here are 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used.

    This code creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and its 4 vertices:

        public DTileMap (int size_x, int size_y)
        {
            this.size_x = size_x;
            this.size_y = size_y;

            //Initialize Map_Data Array of Tile Objects
            map_data = new Tile[size_x, size_y];
            for (int j = 0; j < size_y; j++) {
                for (int i = 0; i < size_x; i++) {
                    map_data [i, j] = new Tile ();
                    map_data[i,j].coordinate.x = (int)i;
                    map_data[i,j].coordinate.y = (int)j;
                    map_data[i,j].vertices[0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data[i,j].vertices[1] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j) * GTileMap.TileMap.tileSize);
                    map_data[i,j].vertices[2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize);
                    map_data[i,j].vertices[3] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize);
                }
            }
        }

    This code sets the river tiles to height 0:

        foreach (Tile t in map_data) {
            if (t.realType == "Water") {
                t.vertices[0].y = 0f;
                t.vertices[1].y = 0f;
                t.vertices[2].y = 0f;
                t.vertices[3].y = 0f;
            }
        }

    And below is the code to generate the actual graphics from the data:

        public void BuildMesh ()
        {
            DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z);

            int numTiles = size_x * size_z;
            int numTris = numTiles * 2;
            int vsize_x = size_x + 1;
            int vsize_z = size_z + 1;
            int numVerts = vsize_x * vsize_z;

            // Generate the mesh data
            Vector3[] vertices = new Vector3[ numVerts ];
            Vector3[] normals = new Vector3[numVerts];
            Vector2[] uv = new Vector2[numVerts];
            int[] triangles = new int[ numTris * 3 ];

            int x, z;
            for (z=0; z < vsize_z; z++) {
                for (x=0; x < vsize_x; x++) {
                    normals [z * vsize_x + x] = Vector3.up;
                    uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z);
                }
            }

            for (z=0; z < vsize_z; z+=1) {
                for (x=0; x < vsize_x; x+=1) {
                    if (x == vsize_x - 1 && z == vsize_z - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3];
                    } else if (z == vsize_z - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2];
                    } else if (x == vsize_x - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1];
                    } else {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0];
                        vertices [z * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [1];
                        vertices [(z+1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2];
                        vertices [(z+1) * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [3];
                    }
                }
            }

            for (z=0; z < size_z; z++) {
                for (x=0; x < size_x; x++) {
                    int squareIndex = z * size_x + x;
                    int triOffset = squareIndex * 6;
                    triangles [triOffset + 0] = z * vsize_x + x + 0;
                    triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0;
                    triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1;
                    triangles [triOffset + 3] = z * vsize_x + x + 0;
                    triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1;
                    triangles [triOffset + 4] = z * vsize_x + x + 1;
                }
            }

            // Create a new Mesh and populate with the data
            Mesh mesh = new Mesh ();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.normals = normals;
            mesh.uv = uv;

            // Assign our mesh to our filter/renderer/collider
            MeshFilter mesh_filter = GetComponent<MeshFilter> ();
            MeshCollider mesh_collider = GetComponent<MeshCollider> ();
            mesh_filter.mesh = mesh;
            mesh_collider.sharedMesh = mesh;
            calculateMeshTangents (mesh);
            BuildTexture (map);
        }

    If this looks familiar to you, it's because I got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.
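
    As a point of comparison (not a fix for the exact code above), keeping one shared mapping between tile indices and geometry, used both when filling the data and when building the mesh, avoids this kind of mirroring. A minimal sketch with hypothetical names; note it uses +j for the z coordinate rather than -j:

        struct Vec3 { float x, y, z; };

        // One shared convention for a (sizeX + 1) x (sizeY + 1) grid of vertices.
        struct TileMapping {
            float tileSize;
            int   sizeX, sizeY;

            // World position of the shared corner (x, j); using +j keeps the mesh
            // in the same orientation as the tile data.
            Vec3 CornerPosition(int x, int j, float height) const {
                return { x * tileSize, height, j * tileSize };
            }

            // Index of that corner in a flat vertex array of (sizeX+1)*(sizeY+1).
            int CornerIndex(int x, int j) const {
                return j * (sizeX + 1) + x;
            }
        };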

    Read the article

  • Character with several colliders and rigidbodies

    - by Lautaro
    I am doing a PvP fighting game. This is the GameObject hierarchy of the player character. Player contains:

    - Legs
    - Sword
    - Torso
    - Head

    I want to be able to:

    - Register impacts of the sword on a specific body part
    - Use AddForce on the whole player entity when a body part is struck
    - Change the animation of the player that owns the sword that hit

    Questions:

    - Is it correct that the only rigidbody should be on the root Player GameObject?
    - Is it correct that the body parts should have colliders and be triggers?
    - Is it correct that the swords should have colliders but not be triggers?

    Read the article

  • How do I implement a selectable world map?

    - by Clay
    I want to have a selectable map of the world, preferably zoomable, in a cocos2d project. When I tap on a country, I want that country to be selected so that I can perform some other operations with it. It seems that the best approach would be to use a vector world map, but I'm unsure how to implement this with cocos2d. Other options include using map tiles, but it seems that would still require implementing country polygons for tap/click detection. Depending on user input, I want to add icons to various countries on the map. What is a good way to approach the implementation of this type of map?
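
    Whichever rendering route is chosen, the tap test usually reduces to a point-in-polygon check against each country's outline. A minimal sketch of the standard ray-casting test, using plain hypothetical types rather than any cocos2d API:

        #include <vector>

        struct Point { float x, y; };

        // Standard ray-casting point-in-polygon test: cast a ray to the right and
        // count how many polygon edges it crosses; an odd count means "inside".
        bool ContainsPoint(const std::vector<Point>& polygon, Point p)
        {
            bool inside = false;
            for (size_t i = 0, j = polygon.size() - 1; i < polygon.size(); j = i++)
            {
                const Point& a = polygon[i];
                const Point& b = polygon[j];
                bool crosses = (a.y > p.y) != (b.y > p.y) &&
                               p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x;
                if (crosses)
                    inside = !inside;
            }
            return inside;
        }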

    Read the article

  • Using multiplication and division with delta time

    - by tesselode
    Using delta time with addition and subtraction is easy:

        player.x += 100 * dt

    However, multiplication and division complicate things a bit. For example, let's say I want the player to double his speed every second:

        player.x = player.x * 2 * dt

    I can't do this because it'll slow down the player (unless delta time is really high). Division is the same way, except it'll speed things way up. How can I handle multiplication and division with delta time?
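
    For what it's worth, "double the speed every second" is exponential growth, so the per-frame factor should be 2 raised to the power of the elapsed time in seconds, not 2 scaled by dt. A minimal sketch:

        #include <cmath>

        // Doubling every second: after dt seconds, multiply by 2^dt.
        // Halving every second (the "division" case) is the same with 0.5^dt.
        void UpdateSpeed(float& speed, float dt)
        {
            speed *= std::pow(2.0f, dt);
        }

    Over any set of frames whose dt values add up to one second, these factors multiply out to exactly 2, regardless of frame rate.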

    Read the article

  • How to get the blocks seen by the player?

    - by m4tx
    I'm writing a Minecraft-like game using the Ogre engine and I have a problem. I must optimize my game, because when I try to draw 10000 blocks, I get 2 FPS... So I got the idea to display only the blocks on the surface and to hide the invisible blocks. But I have a problem: how do I know which blocks are visible to the player at any given time? And if you know of other optimization methods for such a game, please describe what they are and how to use them in Ogre.
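
    A common first step, independent of Ogre: skip every block that is completely surrounded by solid blocks, since none of its faces can ever be seen. A sketch, where IsSolid is an assumed bounds-checked lookup into the world data:

        // Returns true if the block at (x, y, z) is enclosed on all six sides by
        // solid blocks and can therefore never be visible.
        bool IsFullyEnclosed(int x, int y, int z,
                             bool (*IsSolid)(int, int, int))
        {
            return IsSolid(x + 1, y, z) && IsSolid(x - 1, y, z) &&
                   IsSolid(x, y + 1, z) && IsSolid(x, y - 1, z) &&
                   IsSolid(x, y, z + 1) && IsSolid(x, y, z - 1);
        }

    The same idea applied per face (only emit faces whose neighbour is air), plus batching whole chunks of blocks into a single mesh and relying on frustum culling, is what typically takes this kind of game from a few FPS to a smooth frame rate.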

    Read the article

  • I Need Help With A Game (Well The API)! [closed]

    - by user1758938
    I'm not "sure" which API (or language) I should use for a little 3D FPS game I'm gonna make although I don't have helpers lol. Anyway I'm ok with Java, C# and C++ but I need a good setup (easy to use) with the tools I need to make the game. I tried things like XNA but I want to check other options first because I don't like how it makes a installer and stuff, it's really annoying. I Need A API That Can Do These Things: 3D Rendering Input Sound And If It's Not Too Much To Ask Some Cool Shaders, Dynamic Lighting And A 3D Sound System Im "Ok" If I Have To Use Multiple APIs To Do This But Please Help Me!

    Read the article

  • How to achieve a smooth 2D lighting effect?

    - by Cyral
    I'm making a tile-based game in XNA, so currently my lighting looks like this: How can I get it to look like this? Instead of each block having its own tint, it has a smooth overlay. I'm assuming some sort of shader, and that I need to tell it the lighting and blur it somehow, but I'm not an expert with shaders. My current lighting calculates the light and then passes it to a spritebatch, which draws with a color parameter. EDIT: It no longer uses the spritebatch tint; I was testing, and now I pass parameters to set the light values. But I'm still looking for a way to smooth it.
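
    One shader-free way to get the smooth look is to keep a low-resolution light map (one value per tile), blur it on the CPU, and let bilinear filtering smooth it further when it is drawn over the scene. A language-neutral sketch of a single 3x3 box-blur pass over such a grid (types and names are illustrative):

        #include <vector>

        // One pass of a 3x3 box blur over a per-tile light map (values 0..1).
        // width/height are the grid dimensions; edges simply use fewer neighbours.
        std::vector<float> BlurLightMap(const std::vector<float>& light,
                                        int width, int height)
        {
            std::vector<float> result(light.size());
            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x)
                {
                    float sum = 0.0f;
                    int count = 0;
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                        {
                            int nx = x + dx, ny = y + dy;
                            if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                                continue;
                            sum += light[ny * width + nx];
                            ++count;
                        }
                    result[y * width + x] = sum / count;
                }
            return result;
        }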

    Read the article

  • Architectural approaches to creating a game menu/shell overlay on PC/Linux?

    - by Ghopper21
    I am working on a collection of games for a custom digital tabletop installation (similar to Microsoft Surface tables). Each game will be an individual executable that runs full-screen. In addition, there needs to be a menu/shell overlay program running simultaneously. The menu/shell will allow users to pause games, switch to other games, check their game history, etc. Some key requirements of the shell: it intercepts all user input (mainly multitouch) first before passing it on to the currently running game (so that it can, for instance, know to pop up at a "pause" command); it can be revealed over arbitrary portions of the screen, with the currently running (but presumably paused) game still showing underneath, ideally with its shape/size being dynamic, to allow for creation of an animated in/out drawer effect over the game. I'm currently looking into different architectural approaches to this problem, including Fraps and DirectX overlays, but I'm sure I'm missing some ways to think about this. What are the main approaches I should be considering? (Note: the table is currently being run by a Windows PC, but it could potentially be a Linux box instead.)

    Read the article

  • Technology stack to create a soccer game visualization on a web page [on hold]

    - by Lambrusco
    I want to create a soccer game visualization. What technologies would be best for building such a thing for a web page? As input I have two teams with players. I have a theory about their movements, the movement of the ball on the field, and so on. I just want to visualize their movements. What would be the best technology stack? I mean programming languages (C++, Ruby, Java, PHP) and visualization options (Flash, HTML5, JS).

    Read the article

  • LibGDX drawing map using tiles without space

    - by Enayat Muradi
    I am making a board game. To draw the map on the board I use different tiles. On some screens the map looks good, but on other screens there is a space between the tiles. How can I make sure there won't be any space between the tiles? I am designing my game at the size 480x800; to fit other screens I stretch it. My tiles look like this: I draw the map using a for loop that draws each tile at a different (x, y) position on screen. Here is what I mean by space between tiles: on a 240x400 screen the gaps appear; on a 360x600 screen there is no spacing between the tiles. I use a camera and draw to the screen directly; I don't use a Stage. I have also tried to use a Viewport, but I get the same results.

        cam = new OrthographicCamera();
        cam.setToOrtho(true, gameWidth, gameHeight);
        batcher = new SpriteBatch();
        batcher.setProjectionMatrix(cam.combined);
        shapeRenderer = new ShapeRenderer();
        shapeRenderer.setProjectionMatrix(cam.combined);

    What can I do to solve the problem?
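
    One common mitigation, assuming the gaps come from non-integer scaling: snap each tile's destination rectangle to whole pixels, deriving its size from the next tile's snapped edge so neighbours always butt together; adding a pixel or two of duplicated border (padding) around each tile in the sheet also hides filtering bleed. A language-neutral sketch of the snapping for one axis:

        #include <cmath>

        // Snap one axis of a tile's destination to whole pixels. Deriving the size
        // from the next tile's snapped edge guarantees adjacent tiles share pixels.
        void SnapAxis(float world, float tileSize, float scale, int& outPos, int& outSize)
        {
            outPos       = static_cast<int>(std::lround(world * scale));
            int nextEdge = static_cast<int>(std::lround((world + tileSize) * scale));
            outSize      = nextEdge - outPos;
        }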

    Read the article

  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
    I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame:

        // Bind VAO
        glBindVertexArray(m_vao);

        // Update the vertex buffer with the new data (copy data into the vertex buffer object)
        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

        // Update the index buffer with the new data (copy data into the index buffer object)
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short), indices.data(), GL_DYNAMIC_DRAW);

        glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

        // Unbind VAO
        glBindVertexArray(0);

    What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size:

        std::array<VertexPosition, maxVertices> m_vertices;

    The index buffer has its elements set from an array of defined maximum size:

        std::array<unsigned short, maxIndices> indices = { 0 };

    A running total is kept of the number of vertices and indices needed for each draw call: numVertices and numIndices. Can I not specify that the buffer data contains the entire array and only read from part of it when drawing? For example, using the vertex buffer object:

        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

    Here m_vertices.data() is the entire array, and numVertices * sizeof(VertexPosition) is the amount of data to read from it. Is this not the correct way to approach this? I do not wish to use std::vector if possible.
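
    For what it's worth, the count passed to glDrawElements already limits how much of the index buffer is read, so drawing from only part of the buffers is possible. The buffers can also be allocated once at their maximum size and then partially updated each frame, which avoids re-allocating storage. A sketch of that pattern using standard GL calls, assuming the same VAO setup as above:

        // One-time setup: allocate the full-size buffers without supplying data.
        glBufferData(GL_ARRAY_BUFFER,
                     maxVertices * sizeof(VertexPosition), nullptr, GL_DYNAMIC_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                     maxIndices * sizeof(unsigned short), nullptr, GL_DYNAMIC_DRAW);

        // Each frame: overwrite only the range that is actually in use...
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        numVertices * sizeof(VertexPosition), m_vertices.data());
        glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0,
                        numIndices * sizeof(unsigned short), indices.data());

        // ...and draw only numIndices elements; the rest of the buffer is ignored.
        glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, nullptr);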

    Read the article

  • Using Instance Nodes, worth it?

    - by Twitch
    I am making a 2D game where there are various environments with lots and lots of objects. There is a forest scene with around 1200 objects in total (trees, mainly), of which around 100 are visible on the camera at any given time as you move through the level. These are comprised of around 20 different kinds of trees and other props. Each object is usually 2-6 triangles with a transparent texture. My developer asked me to replace each object in the scene with a node, keeping only a minimal number of actual objects, which would be 300+ or so (?), since there are a few modified unique meshes. That way he can instantiate the actual objects to keep the game light. Is this actually effective? And if so, how much? I've read about draw calls and such, and I suppose that if I combine each texture (10 kinds of trees) into 1 mesh it will have the same effect?

    Read the article

  • Questions before I revamp my rendering engine to use shaders (GLSL)

    - by stephelton
    I've written a fairly robust rendering engine using OpenGL ES 1.1 (fixed-function.) I've been looking into revamping the engine to use OpenGL ES 2.0, which necessitates that I use shaders. I've been absorbing information all day long and still have some questions. Firstly, lighting. The fixed-function pipeline is guaranteed to have at least 8 lights available. My current engine finds lights that are "close" to the primitives being drawn and enables them; I don't know how many lights are going to be enabled until I draw a given model. Nothing is dynamically allocated in GLSL, so I have to define in a shader some number of lights to be used, right? So if I want to stick with 8, should I write my general purpose shader to have 8 lights and then use uniforms to tell it how many / which lights to use? Which brings me to another question: should I be concerned with the amount of data I'm allocating in a shader? Recent video cards have hundreds of "stream processors." If I've got a fragment shader being used on some number of fragments in a given triangle, I assume they must each have their own stack to work on. Are read-only variables copied here, or read when needed? My initial goal is to rework my code so that it is virtually identical to the current implementation. What I have in mind is to create my own matrix stack so that I can implement something along the lines of push/popMatrix and apply all my translations, rotations, and scales to this matrix, then provide the matrix to the vertex shader so that it can make very quick vertex translations. Is this approach sound? Edit: My original intention was to ask if there was a tutorial that would explain the bare minimum necessary to jump from fixed-function to using shaders. Thanks!
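
    On the lighting question: the usual pattern is indeed to declare a fixed-size light array in the shader (say 8 entries) and pass the number actually in use as a uniform, so the shader only loops over the active lights. A sketch of the application side using standard GL calls; the uniform names and the lightPositions array are illustrative:

        // The shader declares e.g.:  uniform int numLights;  uniform vec4 lightPos[8];
        GLint numLightsLoc = glGetUniformLocation(program, "numLights");
        glUniform1i(numLightsLoc, activeLightCount);

        // Upload the positions of the active lights only; glUniform4fv writes
        // 'activeLightCount' consecutive elements starting at lightPos[0].
        GLint lightPosLoc = glGetUniformLocation(program, "lightPos[0]");
        glUniform4fv(lightPosLoc, activeLightCount, lightPositions);  // float[4 * count]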

    Read the article

  • Does it make the game more fun when the user is forced to progress through the levels sequentially rather than letting them pick and play?

    - by BeachRunnerJoe
    Hello. For the first time in my game, I'm stuck with a real design dilemma. I guess that's a good thing ;) I'm building a word puzzle game that has five levels, each with 30 puzzles. Currently, the user has to solve one puzzle at a time before moving to the next. However, I'm finding the user occasionally gets stuck on a puzzle, at which point they can no longer play until they solve it. This is obviously bad because many people will just quit playing the game and delete the app, since they get frustrated and can't play any other puzzles until the current puzzle is solved. The only elegant solution I can find to helping the player get unstuck is changing the design of the game to allow the users to pick any puzzle to play at any time. This way, if they get stuck, they can come back to it later, and at least they have other puzzles to play in the meantime. It's my opinion, however, that this new flow design doesn't make the game as fun as the original flow design where the player has to complete a puzzle before moving to the next. To me, it's like anything else: when you only have one of something, it's more enjoyable, but when you have 30 of something, it's far less enjoyable. In fact, when I present the user with 30 puzzles to choose from that they need to solve before unlocking the next level, it almost seems as though I'm making them feel like it's work they have to do. I even had a tester voluntarily tell me that being forced to complete a puzzle before moving to the next is more motivating. My questions are... Do you agree/disagree? Do you have any suggestions for how I can help the player get unstuck? Thanks so much in advance for your thoughts!

    EDIT: I should mention that I've already considered a few other solutions to helping the user get unstuck, but none of them seem like good ideas. They are:

    - Add more hints: Currently, the user gets two hints per puzzle. If I increase the hint count, it only makes the game easier and still leaves the possibility of the user getting stuck.
    - Add a "Show Solution" button: This seems like a bad idea because, in my opinion, it takes the fun out of the game for many people who would probably otherwise solve the puzzle if they didn't have the quick option to see the solution.

    Read the article

  • How can I access bitmaps created in another activity?

    - by user22241
    I am currently loading my game bitmaps when the user presses 'start' in my animated splash screen activity (the first/launch activity), and the app progresses from this activity to the main game activity. This is causing choppy animation in the splash screen while it loads/creates the bitmaps for the new activity. I've been told that I should load all my bitmaps in one go at the very beginning. However, I can't work out how to do this; could anyone please point me in the right direction? I have 2 activities, a splash screen and the main game. Each consists of a class that extends Activity and a class that extends SurfaceView (with an inner class for the rendering / logic updating). So, for example, at the moment I am creating my bitmaps in the constructor of my SurfaceView class like so:

        public class OptionsScreen extends SurfaceView implements SurfaceHolder.Callback {
            // Create variables here

            public OptionsScreen(Context context) {
                // Create bitmaps here
            }

            public void intialise() {
                // This method is called from onCreate() of the corresponding application context
                // Create scaled bitmaps here (from bitmaps previously created)
            }
        }

    Read the article

  • Texture not drawing on cubes

    - by Christian Frantz
    I can draw the cubes fine, but they are just solid black apart from the occasional lighting that shows up. The BasicEffect is being set for each cube as well.

        public void Draw(BasicEffect effect)
        {
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                device.SetVertexBuffer(vertexBuffer);
                device.Indices = indexBuffer;
                device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 8, 0, 12);
            }
        }

    That is the cube's Draw method. TextureEnabled is set to true in my main draw method, and my texture is also loading fine.

        public Cube(GraphicsDevice graphicsDevice, Vector3 Position, Texture2D Texture)
        {
            device = graphicsDevice;
            texture = Texture;
            cubePosition = Position;
            effect = new BasicEffect(device);
        }

    The constructor seems fine too. Could this be caused by the Vector2s of my VertexPositionNormalTexture? Even if they were out of order, something should still be drawn other than a black cube.

    Read the article

  • How can I model a pendulum blade?

    - by Micah Delane Bolen
    Like this one from Saw V: What primitive shape/s would you start out with? How would you transform the primitive shape/s to give it a nice, smooth, sharp blade on one side without distorting the entire object in a weird way? I tried starting out with a cylinder and then subtracting the top half using a duplicate cylinder and a difference modifier, but I ended up distorting the entire object when I tried to pull the "blade" edges together. I think I need to add lattices to smoothly "sharpen" the edge of the blade.

    Read the article

  • How to achieve selection of a tile from a tile sheet based on an ID?

    - by Bugster
    Let's say I have a tile sheet that contains 8 sprites per sheet. Each sprite is a tile of 30x30. I wrote my own custom map parser/map loader; however, I'm having trouble extracting a certain tile sprite from the file. I'll describe my problem better so that everyone can understand. I wrote an enum of materials; each material has a value according to its location relative to the tile sheet. For example, void is 1, grass is 2, rock is 3, etc. So in my tile sheet they are represented as such:

        +---+---+---+---+---+
        | 1 | 2 | 3 | 4 | 5 |
        +---+---+---+---+---+

    Which is equivalent to:

        +------+-------+-------+
        | void | grass | stone |
        +------+-------+-------+

    Basically, when rendering, I created a tile class; each tile has 2 coordinates, X and Y (they are calculated automatically), and a material which can be represented either as a number or as a value (ID). When rendering, I have a vector of sprites which are all taken from 1 file called tilesheet.png; however, each of them must only draw a certain portion of the tile sheet. For example, say I have something like this:

        tile coordinateBounds(topLeftX, topLeftY, tileWidth, tileHeight);

    During the initialization of the map I calculate an array of tiles, and I give each of them their position, their materials based on the values in a map file, and a few other variables such as collision. I need to apply the coordinateBounds to each of them according to their material value. For example, if the material is grass it should only take the grass sprite from the tilesheet. I must also mention I'm using SFML, and there are no borders or spacing between the tiles.
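
    With SFML this usually comes down to turning the material ID into a sub-rectangle of the sheet and assigning it with sf::Sprite::setTextureRect (SFML 2.x API). A sketch, assuming IDs start at 1 and the sheet is a single row of 30x30 tiles:

        #include <SFML/Graphics.hpp>

        // Map a 1-based material ID to its 30x30 cell in a single-row tile sheet
        // and apply it to a sprite that uses the sheet texture.
        void SelectTile(sf::Sprite& sprite, int materialId, int tileSize = 30)
        {
            int column = materialId - 1;                     // IDs start at 1
            sprite.setTextureRect(sf::IntRect(column * tileSize, 0,
                                              tileSize, tileSize));
        }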

    Read the article

  • How do I access the PhysicalMaterial from the Actor class?

    - by EmAdpres
    I use Projectile for my weapon system, and UDKProjectile has two main functions to handle hits of projectiles (= the bullets of my weapon):

        simulated function ProcessTouch(Actor Other, Vector HitLocation, Vector HitNormal) // For Actors
        simulated event HitWall(vector HitNormal, actor Wall, PrimitiveComponent WallComp) // Everything except Actors (I guess)

    In the first method, the function just gives me the actor which I hit, and my question is how I can get that actor's physical material from the first parameter (Other), in order to react properly to it (for example, playing a proper collision sound)... A tricky (but hateful) way which I know works is to make a Trace from a little behind that actor to the actor, and use the HitInfo parameter, which includes the physical material! But there should be a more standard way!

    Read the article

  • Extremely simple online multiplayer game

    - by Postscripter
    I am considering creating a simple multiplayer game which focuses on physics and can accommodate up to 30 players per session. Very simple graphics, but smart physics (pushing, weight and gravity, balance) is required. After some research I found a good JavaScript physics framework called box2d.js. I found the demo to be excellent; this is the kind of physics I am looking for in my game. Now, what other frameworks will I need? Node.js? Prototype.js? (By the way, I found that the latest version of prototype.js was released in 2010... Is this still supported? Should I avoid using it?) What about HTML5 and Canvas? Would I need them? WebSockets? I am a beginner in the web programming + game programming world, but I will learn fast; I am a computer science graduate (not much web experience, but I know the essentials: JavaScript, HTML, CSS...). I just need a guiding path to build my game. Thanks.

    Read the article

  • How can I read a portion of one Minecraft world file and write it into another?

    - by RapierMother
    I'm looking to read block data from one Minecraft world and write the data into certain places in another. I have a Minecraft world, let's say "TemplateWorld", and a 2D list of Point objects. I'm developing an application that should use the x and y values of these Points as x and z reference coordinates from which to read constant-sized areas of blocks from the TemplateWorld. It should then write these blocks into another Minecraft world at constant y coordinates, with x & z coordinates determined based on each Point's index in the 2D list. The issue is that, while I've found a decent amount of information online regarding Minecraft world formats, I haven't found what I really need: more of a breakdown by hex address of where/what everything is. For example, I could have the TemplateWorld actually be a .schematic file rather than a world; I just need to be able to read the bytes of the file, know that the actual block data starts always at a certain address (or after a certain instance of FF, etc.), and how it's stored. Once I know that, it's easy as pie to just read the bytes and store them.

    Read the article

  • Draw contour around object in OpenGL

    - by Maciekp
    I need to draw a contour around 2D objects in 3D space. I tried drawing lines around the object (plus points to fill the gaps), but due to the line width, some part of it (~50%) was covering the object. I tried to use the stencil buffer to eliminate this problem, but I got something like this (the contour is green): http://goo.gl/OI5uc (sorry, I can't post images due to my reputation). You can see (where the arrow points) that some parts of the line are behind the object, and some are above. This changes when I move the camera, but there is always some part that is covering it. Here is the code that I use for drawing the object:

        glColorMask(1,1,1,1);
        std::list<CObjectOnScene*>::iterator objIter=ptr->objects.begin(),objEnd=ptr->objects.end();
        int countStencilBit=1;
        while(objIter!=objEnd) {
            glColorMask(1,1,1,1);
            glStencilFunc(GL_ALWAYS,countStencilBit,countStencilBit);
            glStencilOp(GL_REPLACE,GL_KEEP,GL_REPLACE);
            (*objIter)->DrawYourVertices();

            glStencilFunc(GL_NOTEQUAL,countStencilBit,countStencilBit);
            glStencilOp(GL_KEEP,GL_KEEP,GL_REPLACE);
            (*objIter)->DrawYourBorder();

            ++objIter;
            ++countStencilBit;
        }

    I've tried different settings of the stencil buffer, but I was always getting something like that. Here are my questions:

    1. Am I setting the stencil buffer wrong?
    2. Are there any other simple ways to create a contour on such objects?

    Thanks in advance.

    EDIT:
    1. I don't have normals for the objects.
    2. Objects can be concave.
    3. I can't use shaders (see below why).
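
    For comparison, the classic fixed-function outline is a two-pass stencil trick: draw the object normally while writing 1s into the stencil buffer, then draw a slightly enlarged copy in the outline colour only where the stencil is still 0, so the outline can never cover the object. A sketch of the state setup using standard GL calls; drawObject and drawObjectScaled are assumed helpers, and for concave shapes extruding each edge outward along its 2D normal gives a more even border than uniform scaling:

        glEnable(GL_STENCIL_TEST);
        glClear(GL_STENCIL_BUFFER_BIT);

        // Pass 1: draw the object normally and mark its pixels in the stencil buffer.
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        drawObject();                       // assumed: draws the 2D shape as usual

        // Pass 2: draw the same shape slightly enlarged, but only where the
        // stencil is still 0, i.e. strictly outside the object.
        glStencilFunc(GL_EQUAL, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glColor3f(0.0f, 1.0f, 0.0f);        // outline colour (green)
        drawObjectScaled(1.05f);            // assumed: same shape, enlarged ~5%

        glDisable(GL_STENCIL_TEST);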

    Read the article
