Search Results

Search found 16410 results on 657 pages for 'game component'.

Page 344/657 | < Previous Page | 340 341 342 343 344 345 346 347 348 349 350 351  | Next Page >

  • Per-pixel collision detection - why does XNA transform matrix return NaN when adding scaling?

    - by JasperS
    I looked at the TransformCollision sample on MSDN and added the Matrix.CreateTranslation part to a property in my collision detection code but I wanted to add scaling. The code works fine when I leave scaling commented out but when I add it and then do a Matrix.Invert() on the created translation matrix the result is NaN ({NaN,NaN,NaN},{NaN,NaN,NaN},...). Can anyone tell me why this is happening please? Here's the code from the sample:

      // Build the block's transform
      Matrix blockTransform =
          Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
          // Matrix.CreateScale(block.Scale) * would go here
          Matrix.CreateRotationZ(blocks[i].Rotation) *
          Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

      public static bool IntersectPixels(
          Matrix transformA, int widthA, int heightA, Color[] dataA,
          Matrix transformB, int widthB, int heightB, Color[] dataB)
      {
          // Calculate a matrix which transforms from A's local space into
          // world space and then into B's local space
          Matrix transformAToB = transformA * Matrix.Invert(transformB);

          // When a point moves in A's local space, it moves in B's local space with a
          // fixed direction and distance proportional to the movement in A.
          // This algorithm steps through A one pixel at a time along A's X and Y axes.
          // Calculate the analogous steps in B:
          Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, transformAToB);
          Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, transformAToB);

          // Calculate the top left corner of A in B's local space.
          // This variable will be reused to keep track of the start of each row.
          Vector2 yPosInB = Vector2.Transform(Vector2.Zero, transformAToB);

          // For each row of pixels in A
          for (int yA = 0; yA < heightA; yA++)
          {
              // Start at the beginning of the row
              Vector2 posInB = yPosInB;

              // For each pixel in this row
              for (int xA = 0; xA < widthA; xA++)
              {
                  // Round to the nearest pixel
                  int xB = (int)Math.Round(posInB.X);
                  int yB = (int)Math.Round(posInB.Y);

                  // If the pixel lies within the bounds of B
                  if (0 <= xB && xB < widthB && 0 <= yB && yB < heightB)
                  {
                      // Get the colors of the overlapping pixels
                      Color colorA = dataA[xA + yA * widthA];
                      Color colorB = dataB[xB + yB * widthB];

                      // If both pixels are not completely transparent,
                      if (colorA.A != 0 && colorB.A != 0)
                      {
                          // then an intersection has been found
                          return true;
                      }
                  }

                  // Move to the next pixel in the row
                  posInB += stepX;
              }

              // Move to the next row
              yPosInB += stepY;
          }

          // No intersection found
          return false;
      }
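    A hedged guess at the cause, based only on the behaviour described above: Matrix.Invert fills the result with NaN when the matrix is singular (determinant 0), and a scale factor of 0, for example a Scale field that was never initialized, makes the combined transform non-invertible. A minimal sketch of a guard, assuming the XNA 4.0 Matrix API; block.Scale, blockOrigin and blocks[i] are taken from the question's snippet:

      // Sketch: clamp a degenerate scale so the transform stays invertible.
      // A zero scale gives a zero determinant, and Matrix.Invert on that yields NaN everywhere.
      float safeScale = Math.Max(block.Scale, 0.0001f);   // block.Scale assumed from the question

      Matrix blockTransform =
          Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
          Matrix.CreateScale(safeScale) *
          Matrix.CreateRotationZ(blocks[i].Rotation) *
          Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

      // Optional sanity check before handing the matrix to IntersectPixels:
      if (Math.Abs(blockTransform.Determinant()) < 1e-6f)
      {
          // Transform is not invertible; skip the per-pixel test for this block.
      }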

    Read the article

  • Java keyboard input [on hold]

    - by dØd
    I'm trying to implement an input system that can detect whether a certain key was held down or only pressed briefly. So far I have this:

      KEY_INTERACTION_TRESHOLD = 400ms

      // inside a constructor
      shouldMeasure = true;

      @Override
      public void keyPressed(KeyEvent e) {
          if (shouldMeasure) {
              startTime = System.currentTimeMillis();
              shouldMeasure = false;
              return;
          }
          System.out.println("Button is held down");
          e.consume();
      }

      @Override
      public void keyReleased(KeyEvent e) {
          if (System.currentTimeMillis() - startTime < KEY_INTERACTION_TRESHOLD) {
              System.out.println("Button was only pressed briefly");
          }
          startTime = 0;
          shouldMeasure = true;
          e.consume();
      }

    Now this works, but the problem is that there is a delay between when I press a key to hold and when the message 'Button is held down' gets displayed. I understand why this delay occurs (for example, when you press and hold a letter there will be a similar delay between the first and the second letter printed out), but I would like to somehow avoid it. I'm using only the Java API.

    Read the article

  • Learning OpenGL GLSL - VAO buffer problems?

    - by Bleary
    I've just started digging through OpenGL and GLSL, and now I've stumbled on something I can't get my head around!? I've stepped back to loading a simple cube and using a simple shader on it, but the result is triangles drawn incorrectly and/or missing. I had this code working perfectly on meshes, but I was attempting to move to using VAOs, so none of the code for storing the vertices and indices has changed. http://i.stack.imgur.com/RxxZ5.jpg http://i.stack.imgur.com/zSU50.jpg What I have for creating the VAO and buffers is this:

      //Create the Vertex array object
      glGenVertexArrays(1, &vaoID);

      // Finally create our vertex buffer objects
      glGenBuffers(VBO_COUNT, mVBONames);
      glBindVertexArray(vaoID);

      // Save vertex attributes into GPU
      glBindBuffer(GL_ARRAY_BUFFER, mVBONames[VERTEX_VBO]);

      // Copy data into the buffer object
      glBufferData(GL_ARRAY_BUFFER, lPolygonVertexCount*VERTEX_STRIDE*sizeof(GLfloat), lVertices, GL_STATIC_DRAW);
      glEnableVertexAttribArray(pos);
      glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, VERTEX_STRIDE*sizeof(GLfloat), 0);

      glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBONames[INDEX_VBO]);
      glBufferData(GL_ELEMENT_ARRAY_BUFFER, lPolygonCount*sizeof(unsigned int), lIndices, GL_STATIC_DRAW);
      glBindVertexArray(0);

    And the code for drawing the mesh:

      glBindVertexArray(vaoID);
      glUseProgram(shader->programID);
      GLsizei lOffset = mSubMeshes[pMaterialIndex]->IndexOffset*sizeof(unsigned int);
      const GLsizei lElementCount = mSubMeshes[pMaterialIndex]->TriangleCount*TRIAGNLE_VERTEX_COUNT;
      glDrawElements(GL_TRIANGLES, lElementCount, GL_UNSIGNED_SHORT, reinterpret_cast<const GLvoid*>(lOffset));

      // All the points are indeed in the correct place!?
      //glPointSize(10.0f);
      //glDrawElements(GL_POINTS, lElementCount, GL_UNSIGNED_SHORT, 0);

      glUseProgram(0);
      glBindVertexArray(0);

    My eyes have become bleary looking at this today, so any thoughts or a fresh set of eyes would be greatly appreciated.

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a VertexBuffer, and preferably an indexBuffer, but now that I'm attempting to test my code, I'm getting the error: The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing. Which makes absolutely no sense to me, as my VertexDeclaration is:

      public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
          new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
          new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
          new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
      );

    Which clearly contains the information. I am attempting to draw with the following lines:

      VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration,
          c.VertexList.Count, BufferUsage.WriteOnly);
      IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
      vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
      ib.SetData<int>(c.IndexList.ToArray());
      GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count/3);

    Where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (Contains the VertexDeclaration) and Game1.cs (Draw() is in Lines 230-250). Not much else of relevance to this problem anywhere else. Note that large commented sections are from old version of drawing.
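    One thing worth checking, offered as a hedged suggestion rather than a confirmed fix: the snippet above fills vb and ib but never binds them to the GraphicsDevice, and XNA raises exactly this error when a draw call runs without a vertex stream carrying Position0. A minimal sketch of the usual XNA 4.0 draw sequence; the effect variable stands in for whatever Effect the project actually uses and is not from the question's code:

      // Bind the buffers to the device before drawing (assumed XNA 4.0 API).
      GraphicsDevice.SetVertexBuffer(vb);
      GraphicsDevice.Indices = ib;

      foreach (EffectPass pass in effect.CurrentTechnique.Passes)   // 'effect' is assumed, not shown in the question
      {
          pass.Apply();   // commits the shader that expects Position0
          GraphicsDevice.DrawIndexedPrimitives(
              PrimitiveType.TriangleList,
              0,                       // baseVertex
              0,                       // minVertexIndex
              vb.VertexCount,          // numVertices
              0,                       // startIndex
              c.IndexList.Count / 3);  // primitiveCount
      }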

    Read the article

  • Turn Based Event Algorithm

    - by GamersIncoming
    I'm currently working on a small roguelike in XNA, which sees the player in a randomly generated series of dungeons fending off creeps, as you might expect. As with most roguelikes, the player makes a move, and then each of the creeps currently active on screen will make a move in turn, until all creeps have updated and it returns to the player's go. On paper, the simple algorithm is straightforward:

      1. Player takes turn
      2. Turn number increments
      3. For each active creep, update position
      4. Once all active creeps have updated, allow player to take next turn

    However, when it comes to actually writing this in more detail, the concept becomes a bit more tricky for me. So my question comes to this: what is the best way to handle events taking turns to trigger, where the completion of each event triggers the next, when dealing with a large number of creeps (probably stored as an array of an enemy object)? And is there an easier way to create some kind of engine that just takes all objects that need updating and chains them together so their updates follow suit? I'm not asking for code, just algorithms and theory in the direction of objects triggering updates one after the other, in a turn-based manner. Thanks in advance. Edited: Here's the code I currently have, which is horrible :/

      if (player.getTurnOver() && updateWait == 0)
      {
          if (creep[creepToUpdate].getActive())
          {
              creep[creepToUpdate].moveObject(player, map1);
              updateWait = 10;
          }
          if (creepToUpdate < creep.Length - 1)
          {
              creepToUpdate++;
          }
          else
          {
              creepToUpdate = 0;
              player.setTurnOver(false);
          }
      }
      if (updateWait > 0)
      {
          updateWait--;
      }
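    A hedged sketch of one way to structure this: keep a queue of actors for the current round and only advance to the next actor when the current one reports that its move has finished. All of the names below (IActor, TurnManager, and so on) are illustrative, not taken from the question's code.

      using System.Collections.Generic;

      public interface IActor
      {
          void BeginTurn();            // start this actor's move (pathfinding, animation, ...)
          bool UpdateTurn(float dt);   // advance the move; return true once the turn is finished
      }

      public class TurnManager
      {
          private readonly Queue<IActor> queue = new Queue<IActor>();
          private IActor current;

          public bool RoundOver { get { return current == null && queue.Count == 0; } }

          public void StartRound(IEnumerable<IActor> actors)
          {
              foreach (IActor a in actors)
                  queue.Enqueue(a);        // player first, then each active creep
              Advance();
          }

          public void Update(float dt)
          {
              if (current != null && current.UpdateTurn(dt))
                  Advance();               // current actor finished, so trigger the next one
          }

          private void Advance()
          {
              current = queue.Count > 0 ? queue.Dequeue() : null;
              if (current != null)
                  current.BeginTurn();
          }
      }

    Game1.Update would then just call turnManager.Update(dt) every frame; when RoundOver becomes true, input is handed back to the player and a new round is queued up, which replaces the updateWait counter above.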

    Read the article

  • When mapping the surface of a sphere with tiles, how might you deal with polar distortion?

    - by clweeks
    It's easy to deal with the way locations interact on a clean Cartesian grid. It's just vanilla math. And you can kind of ignore the geometry of the sphere's surface for a bunch of it if you want to just truncate the poles or something. But I keep coming up with ideas for games where the polar space matters. Geo-coded ARGs and global roguelikes and stuff. I want square(ish?) locations -- reasonably representable by square tiles of the same size across the globe, anyway. This has to be a solved problem, right? What are the solutions? ETA: At the equator -- and assuming that your square locations are reasonably small, it's close enough to true that you can get away with having one square in the rows north and south of the most equatorial row. And you could probably get away with that by just hand-waving the difference up to like 45-degrees or so. But eventually, you need to have fewer squares in a pole-ward circumferential row. If I reduce the length of the row by one and offset the squares by 1/2 then they're just like hexes and it's relatively easy to do the coding to keep track of the connections. But as you get pole-ward, it gets more and more extreme. Projecting the surface of the world onto the surface of a cube is tempting. But I figured there must be more elegant solutions already in use. If I did the cube thing (not dissecting it further through geodesy) Are there any pros and cons related to placing the pole at the center of a face or at the vertex of three sides?
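    Since the cube projection comes up at the end of the question, here is a hedged sketch of that idea in C#: lay a regular square grid over each face of a unit cube and push every grid point onto the sphere by normalizing it. Tiles stay roughly square everywhere, with the poles landing at face centers or corners depending on the cube's orientation; Vector3 here is XNA's, but any 3-component float vector works.

      // Map a grid coordinate (u, v in [-1, 1]) on one of the six cube faces onto the unit sphere.
      static Vector3 CubeFaceToSphere(int face, float u, float v)
      {
          Vector3 p;
          switch (face)
          {
              case 0:  p = new Vector3( 1f,  u,  v); break;   // +X face
              case 1:  p = new Vector3(-1f,  u,  v); break;   // -X face
              case 2:  p = new Vector3( u,  1f,  v); break;   // +Y face
              case 3:  p = new Vector3( u, -1f,  v); break;   // -Y face
              case 4:  p = new Vector3( u,  v,  1f); break;   // +Z face
              default: p = new Vector3( u,  v, -1f); break;   // -Z face
          }
          return Vector3.Normalize(p);   // project the cube point onto the sphere's surface
      }

    The trade-off is that tiles near a face's corners come out somewhat smaller than tiles near its center, but there are no special rows and adjacency stays a plain 2D grid per face. Putting a pole at a face center keeps the polar neighbourhood inside one grid, while putting it at a vertex splits that neighbourhood across three faces.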

    Read the article

  • glsl demo suggestions ?

    - by brainydexter
    In a lot of places I interviewed recently, I have been asked many times if I have worked with shaders. Even though I have read about and understand the pipeline, the answer to that question has been no. Recently, one of the places asked me if I could send them a sample of 'something' that is "visually polished". So, I decided to take the plunge and wrote some simple shaders in GLSL (with OpenGL). I now have a basic setup where I can use VBOs with GLSL shaders. I have a very short window left to send something to them, and I was wondering if someone with experience could suggest an idea that is interesting enough to grab someone's attention. Thanks

    Read the article

  • Attaching two objects and changing their world matrices accordingly

    - by A-Type
    I'm having a hard time wrapping my head around the transformations required to bind two objects together in either a two-way or one-way relationship. I will need to implement both types. For the first case, I want to be able to 'couple' two ships together in space. The ships have different mass, of course. Forces applied to either ship will use combined mass and moment of inertia to calculate and move both ships. The trick is, being sure that the point at which they are coupled remains the same, and they don't move at all relative to each other. The second case is similar: I want a ship to be able to enter the atmosphere of a planet and move relative to the planet. The planet will be orbiting the sun, which is fixed at 0,0,0. Essentially, when the ship is sitting still outside of the atmosphere, the planet will move past it on its course-- but when the ship is sitting still inside the atmosphere, it moves and rotates with the planet, so that it is always relative to the horizon. Essentially, the vertices which make up the ship are now transformed just like the ones that make up the planet, except that the ship can move itself around relative to the planet. I get the feeling I can implement both of these with the same code. Essentially, I am thinking of giving each object (which I call Fixtures) a list of "slave" Fixtures onto which that Fixture's world matrix is imposed. So, this would be the planet imposing its world on any contained ships. In the case of coupling, I would simply make each ship a slave of the other, somehow. Obviously I can't just multiply the ship's world matrix by the planet's, or each ship by the others. What I'd like some help with is what calculations to make in order to get a nice, seamless relative world to the other object. I was thinking maybe I could just multiply the world of the slave by the inverse of the master, but then when you couple two ships you would lose all that world data. So, perhaps I need an intermediate "world" which is the absolute world, but use a secondary "final world" to actually transform the objects?
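    A hedged sketch of the "intermediate world" idea from the last paragraph, in XNA-style C#: at the moment of attachment, capture the slave's transform relative to the master (exactly the multiply-by-the-inverse step described above), keep that local offset fixed, and rebuild the slave's absolute world from the master's current world every frame. Fixture, World, LocalToMaster and Slaves are assumed names based on the question, not an existing API.

      public void Attach(Fixture master, Fixture slave)
      {
          // Fixed local offset: where the slave sits, expressed in the master's space.
          slave.LocalToMaster = slave.World * Matrix.Invert(master.World);
          master.Slaves.Add(slave);
      }

      public void UpdateSlaves(Fixture master)
      {
          foreach (Fixture slave in master.Slaves)
          {
              // Absolute world = fixed offset pushed through the master's current world.
              slave.World = slave.LocalToMaster * master.World;
          }
      }

    Nothing is lost at attach time because the slave's old world is folded into LocalToMaster, and detaching just keeps the last computed World and drops the link. For the two-ship coupling, one option is to treat the pair as a single rigid body: pick one ship (or an invisible combined Fixture with the combined mass and inertia) as the master that forces are applied to, and make both hulls slaves of it, rather than making each ship a slave of the other.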

    Read the article

  • How to improve Minecraft-esque voxel world performance?

    - by SomeXnaChump
    After playing Minecraft I marveled a bit at its large worlds but at the same time I found them extremely slow to navigate, even with a quad core and meaty graphics card. Now I assume Minecraft is fairly slow because: A) It's written in Java, and as most of the spatial partitioning and memory management activities happen in there, it would naturally be slower than a native C++ version. B) It doesn't partition its world very well. I could be wrong on both assumptions; however it got me thinking about the best way to manage large voxel worlds. As it is a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e BlockType.Empty = 0, BlockType.Dirt = 1 etc.) Now, I am assuming to make this sort of world perform well you would need to: A) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd would be the better option as you can just partition on a per cube level not a per triangle level. B) Use some algorithm to work out which blocks can currently be seen, as blocks closer to the user could obfuscate the blocks behind, making it pointless to render them. C) Keep the block object themselves lightweight, so it is quick to add and remove them from the trees. I guess there is no right answer to this, but I would be interested to see peoples' opinions on the subject. How would you improve performance in a large voxel-based world?
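    A hedged sketch of the storage side (the asker's point C) plus the cheapest part of the visibility question (point B), in C#: the world is split into fixed-size chunks, each chunk keeps its block types in one flat array (so blocks stay lightweight and indexing is cheap), and when a chunk's mesh is rebuilt only the faces that border an empty cell are emitted. The names are illustrative rather than taken from any particular engine.

      public class Chunk
      {
          public const int Size = 16;
          private readonly byte[] blocks = new byte[Size * Size * Size];   // 0 = BlockType.Empty

          private static int Index(int x, int y, int z)
          {
              return x + Size * (y + Size * z);   // flat 3D indexing, no per-block objects
          }

          public byte Get(int x, int y, int z)
          {
              return blocks[Index(x, y, z)];
          }

          public void Set(int x, int y, int z, byte type)
          {
              blocks[Index(x, y, z)] = type;      // mark the chunk dirty here so its mesh gets rebuilt
          }

          // A face is only worth emitting if the neighbouring cell is empty (or outside this chunk).
          public bool IsFaceVisible(int x, int y, int z, int dx, int dy, int dz)
          {
              int nx = x + dx, ny = y + dy, nz = z + dz;
              if (nx < 0 || ny < 0 || nz < 0 || nx >= Size || ny >= Size || nz >= Size)
                  return true;                    // chunk border: a neighbour-chunk lookup belongs here
              return Get(nx, ny, nz) == 0;
          }
      }

    Chunks, not individual blocks, then become the unit for frustum culling and for whatever tree or grid decides what to draw, and a chunk's mesh is rebuilt only when one of its blocks changes, which is usually where most of the win over a naive per-cube approach comes from.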

    Read the article

  • MCP 1.7.10 Java class navigation

    - by Elias Benevedes
    So, I'm new to the Minecraft modding community and trying to understand where to start. I've attempted to do it before, but dropped it due to the complexity of starting and the lack of a site like this to help. (Mind that I'm also semi-new to Java, but have worked extensively in JavaScript and Python; I understand how Java is different from the two.) I have downloaded MCP 9.08 (which decompiles 1.7.10) and decompiled Minecraft. I'm looking to mod the client, so I didn't supply it with a server jar. Everything seemed to work fine in the decompile (the only error was that it couldn't find the server jar). I can find my files in /mcp908/src/minecraft/net/minecraft. However, if I open up one of the classes in, say, block, I see a bunch of variables starting with p_ and ending with _. Is there any way to make these variables more decipherable, to understand what's going on so I can learn by example? Thank you.

    Read the article

  • How do I get the point coords of a rotated SFML shaperect?

    - by user15498
    I am trying to get collisions of bullets working, and am using SFML. I am currently using my own code to compute the positions of the rectangle's points for collisions; however, I think there's a way to do this without computing the points myself, by simply getting them from SFML, since the shape is a rectangle and the points are already stored that way. Is there a way to do that? Through a combination of getPoint() and getGlobalBounds(), maybe? While on this topic, is it better to use RectangleShapes or sprites? I used to only use sprites; however, with the addition of textures and more low-level stuff, I think it would be best to switch to using rectangles and setting their size.
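    For the first part, the combination guessed at above does exist in SFML 2.x: shape.getTransform().transformPoint(shape.getPoint(i)) should give corner i in world coordinates, already rotated and translated. As a cross-check, here is a hedged sketch of the same math in plain C# (no SFML binding assumed): subtract the origin from each local corner, rotate by the shape's angle, then add its position.

      using System;

      static class RotatedRect
      {
          // Fills cornersX/cornersY with the four world-space corners of a rotated rectangle.
          public static void GetCorners(float posX, float posY, float width, float height,
                                        float originX, float originY, float rotationDegrees,
                                        float[] cornersX, float[] cornersY)
          {
              float r = rotationDegrees * (float)Math.PI / 180f;
              float cos = (float)Math.Cos(r), sin = (float)Math.Sin(r);

              float[] localX = { 0f, width, width, 0f };
              float[] localY = { 0f, 0f, height, height };

              for (int i = 0; i < 4; i++)
              {
                  float x = localX[i] - originX;                 // move the pivot to the origin
                  float y = localY[i] - originY;
                  cornersX[i] = posX + x * cos - y * sin;        // standard 2D rotation, then translate
                  cornersY[i] = posY + x * sin + y * cos;
              }
          }
      }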

    Read the article

  • Subdividing a polygon into boxes of varying size

    - by Michael Trouw
    I would like to be pointed to information / resources for creating algorithms like the one illustrated on this blog, which is a subdivision of a polygon (in my case a Voronoi cell) into several boxes of varying size: http://procworld.blogspot.nl/2011/07/city-lots.html In the comments, a paper by (among others) the author of the blog can be found; however, the only formula listed is about candidate location suitability: http://www.groenewegen.de/delft/thesis-final/ProceduralCityLayoutGeneration-Preprint.pdf Any language will do, but if examples can be given, JavaScript is preferred (as it is the language I am currently working with). A similar question is this one: What is an efficient packing algorithm for packing rectangles into a polygon?

    Read the article

  • How does one specify raster operations in XNA?

    - by Corey Ogburn
    I'm looking for a way to add a sprite using a particular logic operation (like XOR). I can't find anything on Google and I'm not sure where to look in the documentation. I've looked into SpriteBatch.Begin(...) and its Draw method and several options in the GraphicsDevice class, but I'm not recognizing anything capable of this. I'm still pretty new to XNA so I may just not have recognized the terminology to do this.

    Read the article

  • Create a kind of Interface c++ [migrated]

    - by Liuka
    I'm writing a little 2D rendering framework with managers for input and for resources like textures and meshes (for 2D geometry models, like quads), and they are all contained in a class "engine" that interacts with them and with a DirectX class. So each class has some public methods like init or update. They are called by the engine class to render and create the resources, but a lot of them should not be called by the user:

      // in pseudo C++
      // the texture manager class
      class TManager {
      private:
          vector textures;
          ....
      public:
          init();
          update();
          renderTexture();   // called by the "engine" class
          loadtexture();
          gettexture();      // called by the user
      }

      class Engine {
      private:
          Tmanager texManager;
      public:
          Init() { /* initialize all the managers */ }
          Render(){...}
          Update(){...}
          Tmanager* GetTManager(){ return &texManager; }   // to get a pointer to the manager
                                                           // if I want to create or get textures
      }

    In this way the user, by calling Engine::GetTManager, will have access to all the public methods of Tmanager, including init, update and renderTexture, which must be called only by Engine inside its init, render and update functions. So, is it a good idea to implement a user interface in the following way?

      // in pseudo C++
      // the texture manager class
      class TManager {
      private:
          vector textures;
          ....
      public:
          init();
          update();
          renderTexture();   // called by the "engine" class
          friend class Tmanager_UserInterface;
          operator Tmanager_UserInterface*(){ return reinterpret_cast<Tmanager_UserInterface*>(this); }
      }

      class Tmanager_UserInterface : private Tmanager {
          // delete constructor
          // in this class there will be only methods like:
          loadtexture();
          gettexture();
      }

      class Engine {
      private:
          Tmanager texManager;
      public:
          Init()
          Render()
          Update()
          Tmanager_UserInterface* GetTManager(){ return texManager; }
      }

      // in the main function
      // I need to load a texture
      // I always have access to the Engine class
      engine->GetTManager()->LoadTexture(...)   // I can just access load and get texture

    In this way I can implement several interfaces for each object, keeping visible only the functions I (and the user) will need. Are there better ways to do the same? Or is it just useless (I don't hide the "framework private" functions, and the user will learn not to call them)? Before, I used this method:

      class manager {
      public:
          // engine functions
          userfunction();
      }

      class engine {
      private:
          manager m;
      public:
          init() { /* call manager init function */ }
          manageruserfunction() { /* call manager::userfunction() */ }
      }

    In this way I have no access to the manager class, but it's a bad approach, because if I add a new feature to the manager I need to add a new method to the engine class, and that takes a lot of time. Sorry for the bad English.

    Read the article

  • Moving objects colliding when using unalligned collision avoidance (steering)

    - by James Bedford
    I'm having trouble with unaligned collision avoidance for what I think is a rare case. I have set two objects to move towards each other but with a slight offset, so one of the objects is moving slightly upwards, and one of the objects is moving slightly downwards. In my unaligned collision avoidance steering algorithm I'm finding the points on the object's forward line and the other object's forward line where these two lines are the closest. If these closest points are within a collision avoidance distance, and if the distance between them is smaller than the two radii of the two object's bounding spheres, then the objects should steer away in the appropriate direction. The problem is that for my case, the closest points on the lines are calculated to be really far away from the actual collision point. This is because the two forward lines for each object are moving away from each other as the objects pass. The problem is that because of this, no steering takes place, and the two objects partially collide. Does anyone have any suggestions as to how I can correctly calculate the point of collision? Perhaps by somehow taking into account the size of the two objects?
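    A hedged sketch of an alternative predictor that avoids the diverging-lines problem: treat the two objects as spheres moving with roughly constant velocities and compute the time at which their centers are closest, using relative position and velocity. The math is the standard closest-point-of-approach formula; the Vector3 calls are XNA's, and the function names are illustrative.

      static bool PredictCollision(Vector3 posA, Vector3 velA, float radiusA,
                                   Vector3 posB, Vector3 velB, float radiusB,
                                   float lookaheadSeconds, out float timeOfClosestApproach)
      {
          Vector3 relPos = posB - posA;
          Vector3 relVel = velB - velA;
          float relSpeedSq = relVel.LengthSquared();

          // Moving (almost) in lockstep: the distance never changes, so closest approach is "now".
          timeOfClosestApproach = relSpeedSq > 1e-6f
              ? -Vector3.Dot(relPos, relVel) / relSpeedSq
              : 0f;

          if (timeOfClosestApproach < 0f || timeOfClosestApproach > lookaheadSeconds)
              return false;                               // closest approach is behind us or too far ahead

          Vector3 closest = relPos + relVel * timeOfClosestApproach;
          float combined = radiusA + radiusB;
          return closest.LengthSquared() <= combined * combined;   // will the bounding spheres overlap?
      }

    If the function returns true, the point posA + velA * timeOfClosestApproach is a much better estimate of where the near-miss happens than the closest points of the two infinite forward lines, and steering away from it behaves sensibly even when the paths cross at a shallow angle.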

    Read the article

  • Need help drawing planets in Java.

    - by d33j
    I am looking for help/links/notes/algorithms/URLs/examples on drawing/rendering spheres in pure Java (so that I can hopefully, one day, generate/render planets with various surfaces & atmospheres). So for the moment, I'd be pretty happy to be able to start off with just drawing a wireframed sphere or two. PS: I don't want to use external libraries like Java3D or JOGL, or aftermarket engines like JMonkeyEngine; I'd rather keep it as straight Java.

    Read the article

  • ssao implementation

    - by Irbis
    I'm trying to implement SSAO based on this tutorial: link. I use deferred rendering and world coordinates for the shading calculations. When filling the G-buffer, the vertex shader output looks like this:

      worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
      normal = normalize(normalModelMatrix * inNormal);
      gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculation I render the scene as a full-screen quad and save an occlusion parameter in a texture. (Vertex positions in world space: link. Normals in world space: link.) SSAO implementation:

      subroutine (RenderPassType)
      void ssao()
      {
          vec2 texCoord = CalcTexCoord();
          vec3 worldPos = texture(texture0, texCoord).xyz;
          vec3 normal = normalize(texture(texture1, texCoord).xyz);

          vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
          vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;
          vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
          vec3 bitangent = cross(normal, tangent);
          mat3 tbn = mat3(tangent, bitangent, normal);

          float occlusion = 0.0;
          float radius = 4.0;

          for (int i = 0; i < kernelSize; ++i)
          {
              vec3 pix = tbn * kernel[i];
              pix = pix * radius + worldPos;

              vec4 offset = vec4(pix, 1.0);
              offset = ProjectionMatrix * ViewMatrix * offset;
              offset.xy /= offset.w;
              offset.xy = offset.xy * 0.5 + 0.5;

              float sample_depth = texture(texture0, offset.xy).z;
              float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
              occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
          }

          outputColor = vec4(occlusion, occlusion, occlusion, 1);
      }

    That code gives the following results: camera looking towards -z world space: link; camera looking towards +z world space: link. I wonder if it is possible to use world coordinates in the above code? When I move the camera I get different results, because world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect the white areas in place of occlusion, so the ground should have white areas only near the object.

    Read the article

  • The right way to add images to Monogame/Windows

    - by ashes999
    I'm starting out with MonoGame. For now, I'm only targeting Windows (desktop -- not Windows 8 specifically). I've used a couple of XNA products in the past (raw XNA, FlatRedBall, SilverSprite), so I may have a misunderstanding about how I should add images to my content. How do I add images to my project? Currently, I created a new Monogame project, added a folder called "Content," and added images under there; the only caveat is that I need to set the Copy to Output Directory action to one of the Copy ones. It seems strange, because my "raw" XNA project just last week had a Content project in it (XNA Framework Content Pipeline, according to VS2010), which compiled my images to XNB (I think). It seems like Monogame doesn't use the same content pipeline, but I'm not sure. Edit: My question is not about "how do I get the XNA content pipeline to work with Monogame." My question is "why would I want to use the XNA content pipeline in Monogame?" Because there are (at least) two solutions (that I see today): Add the images to the Monogame project and set the Copy to Output Directory options to copy. Add a XNA content pipeline project and add my images to that instead; reference it from my MOnogame project. Which solution should I use, and why? I currently have a working version with the first option.
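    To make the trade-off in the edit concrete, here is a hedged sketch of the two loading paths in XNA 4.0 / MonoGame terms; the asset and file names are placeholders:

      // Option 1 from the question: raw file with "Copy to Output Directory" set, decoded at load time.
      Texture2D viaRawFile;
      using (var stream = TitleContainer.OpenStream("Content/player.png"))
      {
          viaRawFile = Texture2D.FromStream(GraphicsDevice, stream);
      }

      // Option 2 from the question: content pipeline. Requires a "player.xnb" built by a content project,
      // but yields processed textures (optional mipmaps, DXT compression, premultiplied alpha).
      Texture2D viaPipeline = Content.Load<Texture2D>("player");

    Both calls are real framework APIs. For a desktop-only project the first route is perfectly workable, which is why the working version described above behaves fine; what it gives up is build-time processing, and one known gotcha is that Texture2D.FromStream does not premultiply alpha, so translucent sprites may need extra handling.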

    Read the article

  • Directional and orientation problem

    - by Ahmed Saleh
    I have drawn 5 tentacles, which are shown in red. I drew those tentacles on a 2D circle and positioned them on 5 vertices of that circle. BTW, the circle is never drawn; I have used it to simplify the problem. Now I want to attach that circle, with the tentacles on it, underneath the jellyfish. There is a problem with the current code, but I don't know what it is. You can see that the circle is parallel to the base of the jellyfish. I want it to be shifted so that it is inside the jellyfish, but I don't know how. I tried to multiply the direction vector to extend it, but that didn't work.

      // One tentacle is constructed from nodes.
      // Get the direction from the first tentacle's node 0 to its node 39:
      Vec3f dir = m_tentacle[0]->geNodesPos()[0] - m_tentacle[0]->geNodesPos()[39];

      // Draw the circle with tentacles on it
      Vec3f pos = m_SpherePos;
      drawCircle(pos, dir, 30, m_tentacle.size());
      for (int i = 0; i < m_tentacle.size(); i++)
      {
          m_tentacle[i]->Draw();
      }

      // Draw the jellyfish, and orient it on the 2D circle
      gl::pushMatrices();
      Quatf q;
      // assign quaternion to rotate the jellyfish around the tentacles
      q.set(Vec3f(0, -1, 0), Vec3f(dir.x, dir.y, dir.z));
      // translate it to the position of the whole creature every frame
      gl::translate(m_SpherePos.x, m_SpherePos.y, m_SpherePos.z);
      gl::rotate(q);
      // draw the jellyfish at center 0,0,0
      drawHemiSphere(Vec3f(0, 0, 0), m_iRadius, 90);
      gl::popMatrices();

    Read the article

  • What is the difference between Constant Vertex Attributes and Uniforms?

    - by Samaursa
    According to the OpenGL ES 2.0 Programming Guide: A constant vertex attribute is the same for all vertices of a primitive, and therefore only one value needs to be specified for all the vertices of a primitive. For uniforms the book states: ...any parameter to a shader that is constant across either all vertices or fragments (but that is not known at compile time) should be passed in as a uniform. I've always used uniforms for data that is constant for a primitive but now it appears that attributes can also be used in the same way. Is there more to constant vertex attribute than simply 'they are the same as uniforms'?

    Read the article

  • Collision checking problem on a Tiled map

    - by nosferat
    I'm working on a Pacman-styled dungeon crawler, using the free Oryx sprites. I've created the map using Tiled, separating the floor, walls and treasure into three different layers. After importing the map into libGDX, it renders fine. I also added the player character; for now it just moves in one direction, the player cannot control it yet. I wanted to add collision, and I was planning to do this by checking if the player's new position is on a wall tile. Therefore, as you can see in the following code snippet, I get the tile type of the appropriate tile, and if it is not zero (since on that layer there is nothing except the wall tiles) it is a collision and the player cannot move further:

      final Vector2 newPos = charController.move(warrior.getX(), warrior.getY());
      if (!collided(newPos))
      {
          warrior.setPosition(newPos.x, newPos.y);
          warrior.flip(charController.flipX(), charController.flipY());
      }
      [..]
      private boolean collided(Vector2 newPos)
      {
          int row = (int) Math.floor((newPos.x / 32));
          int col = (int) Math.floor((newPos.y / 32));
          int tileType = tiledMap.layers.get(1).tiles[row][col];
          if (tileType == 0)
          {
              return false;
          }
          return true;
      }

    The character only moves one tile with this code. If I reduce the col value by two, it moves two more tiles. I think the problem will be around indexing, but I'm totally confused, because the zero of the coordinate system in libGDX is in the bottom left corner of the screen, and I don't know whether the tiles array's indexing is similar or not. The size of the map is 19x21 tiles and looks like the following (the starting position of the player is marked with blue).

    Read the article

  • How do I get my polygons to be lighted by either side?

    - by Molmasepic
    Okay, I am using Ogre3D and Gorilla (a 2D library for Ogre3D), and I am making Gorilla::ScreenRenderables in the open scene. The problem that I am having is that when I make a light and have my SR (ScreenRenderable) near it, it does not light up unless the face of the SR is facing the light... I am wondering if there is a way to maybe set the material, or the code (which would be harder), so the SR is lit up whether the vertices of the polygon are facing the light or not. I feel it is possible, but the main obstacle is how I would go about doing this.

    Read the article

  • How can I get accurate collision resolution on the corners of rectangles?

    - by ssb
    I have a working collision system implemented, and it's based on minimum translation vectors. This works fine in most cases, except when the minimum translation vector is not actually in the direction of the collision. For example: when a rectangle is on the far edge of any side of another rectangle, a force can be applied (in this example, down) that pushes one rectangle into the other, particularly a static object like a wall or a floor. As in the picture, the collision is coming from above, but because it's on the very edge, it translates to the left instead of back up. I've searched for a while to find an approach, but everything I can find deals with general corner collisions, whereas my problem is only in this one limited case. Any suggestions?
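    One common fix, offered as a hedged sketch rather than a definitive answer: resolve against static geometry per axis, and use where the moving rectangle was on the previous frame to decide which axis the contact really happened on, falling back to the smaller overlap only in the true corner case. The Rectangle fields follow XNA's Rectangle; everything else is illustrative.

      static Vector2 ResolveAgainstStatic(Rectangle previous, Rectangle current, Rectangle wall)
      {
          float overlapX = Math.Min(current.Right, wall.Right) - Math.Max(current.Left, wall.Left);
          float overlapY = Math.Min(current.Bottom, wall.Bottom) - Math.Max(current.Top, wall.Top);
          if (overlapX <= 0 || overlapY <= 0)
              return Vector2.Zero;                             // no collision

          bool wasBeside = previous.Right <= wall.Left || previous.Left >= wall.Right;
          bool wasAboveOrBelow = previous.Bottom <= wall.Top || previous.Top >= wall.Bottom;

          if (wasBeside && !wasAboveOrBelow)                   // contact came from the side
              return new Vector2(current.Center.X < wall.Center.X ? -overlapX : overlapX, 0);
          if (wasAboveOrBelow && !wasBeside)                   // contact came from above or below
              return new Vector2(0, current.Center.Y < wall.Center.Y ? -overlapY : overlapY);

          // Genuine corner case: fall back to the minimum translation.
          return overlapX < overlapY
              ? new Vector2(current.Center.X < wall.Center.X ? -overlapX : overlapX, 0)
              : new Vector2(0, current.Center.Y < wall.Center.Y ? -overlapY : overlapY);
      }

    In the pictured situation the box was above the wall on the previous frame, so the resolution is forced onto the Y axis and the box is pushed back up, even though the X overlap happens to be smaller.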

    Read the article

  • Creating a 2D Line Branch

    - by Danran
    I'm looking into creating a 2D line branch, something for a "lightning effect". I did ask a question before on creating a "lightning effect" (mainly, though, referring to the process of the glow & after-effects the lightning has, and to whether it was a good method to use or not): Methods of Creating a "Lightning" effect in 2D. However, I never did get around to getting it working. So I've been trying today to get a second attempt going, but I'm getting nowhere :/. To be clear on what I'm trying to do: in this article, http://drilian.com/2009/02/25/lightning-bolts/ , I'm trying to create the line segments seen in the images on the site. I'm confused mainly by this line in the pseudo-code:

      // Offset the midpoint by a random amount along the normal.
      midPoint += Perpendicular(Normalize(endPoint-startPoint))*RandomFloat(-offsetAmount,offsetAmount);

    If someone could explain this to me, I would be really grateful :).
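    A hedged reading of that line, with a small XNA-style C# sketch: Normalize(endPoint - startPoint) is the segment's direction, Perpendicular rotates it 90 degrees to get the segment's normal, and the midpoint is then nudged along that normal by a random distance between -offsetAmount and +offsetAmount. The helper names below mirror the article's pseudo-code, but the implementation itself is illustrative.

      // 90-degree rotation of a 2D vector: (x, y) -> (-y, x).
      static Vector2 Perpendicular(Vector2 v)
      {
          return new Vector2(-v.Y, v.X);
      }

      // Midpoint of the segment, pushed sideways by a random amount.
      static Vector2 DisplacedMidpoint(Vector2 start, Vector2 end, float offsetAmount, Random rng)
      {
          Vector2 mid = (start + end) * 0.5f;
          Vector2 normal = Perpendicular(Vector2.Normalize(end - start));
          float offset = (float)(rng.NextDouble() * 2.0 - 1.0) * offsetAmount;   // in [-offsetAmount, offsetAmount]
          return mid + normal * offset;
      }

    Each segment is then replaced by the two halves meeting at the displaced midpoint, and the subdivision is repeated with a smaller offsetAmount (the article halves it per generation), which is what produces the jagged bolt.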

    Read the article

  • How to attach two XNA models together?

    - by jeangil
    I'm coming back to an unsolved question I asked about attaching two models together; could you give me some help with this? For example, what if I want to attach together Model1 (the main model) & Model2? I have to get the transformation matrix of Model1, then get the bone index on Model1 where I want to attach Model2, and then apply some transformation to attach Model2 to Model1. I wrote some code below for this, but it does not work at all!! (The 6th line of my code seems to be wrong!?)

      Model1TransfoMatrix = new Matrix[Model1.Bones.Count];
      Index = Model1.Bones[x].Index;
      foreach (ModelMesh mesh in Model2.Meshes) {
          foreach (BasicEffect effect in mesh.Effects) {
              Matrix model2Transform = Matrix.CreateScale(0.1f) * Matrix.CreateFromYawPitchRoll(x, y, z);
              effect.World = model2Transform * Model1TransfoMatrix[Index];
              effect.View = camera.View;
              effect.Projection = camera.Projection;
          }
          mesh.Draw();
      }
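    A hedged sketch of the usual XNA pattern for parenting one model to a bone of another. The detail the snippet above skips is that Model1TransfoMatrix is never filled, and bone transforms are local to their parent bone, so what is needed is the bone's absolute transform (CopyAbsoluteBoneTransformsTo is the stock API for that), composed with Model1's own world matrix. The bone name and the model1World variable are placeholders:

      Matrix[] model1Absolute = new Matrix[Model1.Bones.Count];
      Model1.CopyAbsoluteBoneTransformsTo(model1Absolute);        // bone transforms relative to the model root

      int boneIndex = Model1.Bones["AttachPoint"].Index;          // "AttachPoint" is a hypothetical bone name

      Matrix attachOffset = Matrix.CreateScale(0.1f) *
                            Matrix.CreateFromYawPitchRoll(yaw, pitch, roll);

      // World matrix used for every mesh of Model2:
      Matrix model2World = attachOffset *
                           model1Absolute[boneIndex] *
                           model1World;                           // model1World = Model1's own world matrix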

    Read the article
