Search Results

Search found 16410 results on 657 pages for 'game component'.

Page 339/657

  • Examples of good Javascript/HTML5 based games

    - by Zuch
    Now that Flash is largely being replaced with HTML5 elements (video, audio, canvas, etc.), are there any good examples of web-based games built on completely open standards (meaning Javascript, HTML and CSS)? I see a lot of examples of pure HTML5 implementations of what was once only in Flash (like the stuff here: http://www.html5rocks.com/) but not many games, a domain which still seems dominated by Flash. I'm curious what's possible and what the limitations are.

    Read the article

  • Collision within a poly

    - by G1i1ch
    For an HTML5 engine I'm making, I'm using a path poly for speed. I'm having trouble finding ways to get collision with the walls of the poly. To keep it simple, I just have a vector for the object and an array of vectors for the poly. I'm using Cartesian vectors and they're 2D. Say poly = [[550,0],[169,523],[-444,323],[-444,-323],[169,-523]]; it's just a pentagon I generated. The object that will collide is object; object.pos is its position and object.vel is its velocity. They're both 2D vectors too. I've had some success getting it to find a collision, but it's just black-box code I ripped from a C++ example. It's very obscure inside, and all it does is return true/false; it doesn't return which vertices are collided or the collision point. I'd really like to be able to understand this and make my own so I can have more meaningful collision. I'll tackle that later though. Again, the question is just: how does one find a collision with the walls of a poly, given the poly vertices and the object's position + velocity? If more info is needed please let me know. And if all anyone can do is point me in the right direction, that's great.
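
    A minimal C++ sketch of one way to get the wall and the hit point: sweep the object along its velocity and test that segment against each polygon edge. The Vec2 type and function names here are illustrative, not from the post.

        #include <vector>

        struct Vec2 { float x, y; };

        // Returns true if segment p->p2 crosses segment q->q2, writing the hit point to 'out'.
        // Standard parametric segment/segment intersection.
        bool segmentsIntersect(Vec2 p, Vec2 p2, Vec2 q, Vec2 q2, Vec2& out) {
            float rx = p2.x - p.x, ry = p2.y - p.y;
            float sx = q2.x - q.x, sy = q2.y - q.y;
            float denom = rx * sy - ry * sx;
            if (denom == 0.0f) return false;                               // parallel segments
            float t = ((q.x - p.x) * sy - (q.y - p.y) * sx) / denom;
            float u = ((q.x - p.x) * ry - (q.y - p.y) * rx) / denom;
            if (t < 0 || t > 1 || u < 0 || u > 1) return false;
            out = { p.x + t * rx, p.y + t * ry };
            return true;
        }

        // Sweep the object from pos to pos+vel and test against every edge of the poly.
        bool hitsPolyWall(Vec2 pos, Vec2 vel, const std::vector<Vec2>& poly, Vec2& hit, int& edge) {
            Vec2 end = { pos.x + vel.x, pos.y + vel.y };
            for (size_t i = 0; i < poly.size(); ++i) {
                Vec2 a = poly[i], b = poly[(i + 1) % poly.size()];
                if (segmentsIntersect(pos, end, a, b, hit)) { edge = (int)i; return true; }
            }
            return false;
        }

    This also tells you which edge was hit, which is what you need later for a meaningful collision response.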

    Read the article

  • How to handle animations?

    - by Bane
    I am coding a simple 2D engine to be used with HTML5. I already have classes such as Picture, Scene, Camera and Renderer, but now I need to work on Animations. Picture is basically a wrapper for a normal image object, with its own draw method, but this is unrelated; I'm interested in how animation in 2D games is usually done. What I planned to do is to have the Animation class also act as a wrapper for a few image objects, and then have methods such as getCurrentImage, next and animate (which would use intervals to quickly change the current image). I meant to feed the animation a couple of PNGs at initialisation. Is quickly swapping PNG images acceptable for 2D animation? Are there some standard ways of doing this, or are there flaws in my approach?
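
    Swapping pre-loaded frames is the standard approach for simple 2D animation; a small time-based C++ sketch of the idea (names like frameTime and currentImage are illustrative, not from the post):

        #include <vector>

        struct Animation {
            std::vector<int> frames;   // image handles/ids, one per frame
            float frameTime = 0.1f;    // seconds each frame stays on screen
            float elapsed   = 0.0f;
            size_t current  = 0;

            void update(float dt) {
                if (frames.empty()) return;
                elapsed += dt;
                while (elapsed >= frameTime) {          // advance as many frames as dt covers
                    elapsed -= frameTime;
                    current = (current + 1) % frames.size();
                }
            }
            int currentImage() const { return frames[current]; }
        };

    Driving the frame index from elapsed time (rather than a fixed interval timer) keeps the animation speed independent of the frame rate.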

    Read the article

  • Scaling along an arbitrary axis (Dealing with non-uniform scale)

    - by Jon
    I'm trying to build my own little engine to get more familiar with the concepts of 3D programming. I have a transform class that, on each frame, creates a scaling matrix (S) and a rotation matrix from a quaternion (R) and concatenates them together (S*R). Once I have SR, I insert the translation values into the bottom row, so I end up with a transformation matrix that looks like:

        [SR SR SR 0]
        [SR SR SR 0]
        [SR SR SR 0]
        [tx ty tz 1]

    This works perfectly in all cases except when rotating an object that has a non-uniform scale. For example, a unit cube with ScaleX = 4, ScaleY = 2, ScaleZ = 1 will give me a rectangular box that is 4 times as wide as the depth and twice as high as the depth. If I then translate this around, the box stays the same and looks normal. The problem happens whenever I try to rotate this scaled box. The shape itself becomes distorted, and it appears as though the scale factors are affecting the object along the world X, Y, Z axes rather than the local X, Y, Z axes of the object. I've done some pretty extensive research through a variety of textbooks (Eberly, Moller/Hoffman, Pharr, etc.) and there isn't a ton there to go off of. Online, most of the answers say to avoid non-uniform scaling; I understand the desire to avoid it, but I'd still like to figure out how to support it. The only thing I can think of is that when constructing a scale matrix:

        [sx 0 0 0]
        [0 sy 0 0]
        [0 0 sz 0]
        [0 0 0 1]

    this is scaling along the world axes instead of the object's local direction, up and right vectors, or its local Z, Y, X axes. Does anyone have any tips or ideas on how to handle constructing a transformation matrix that allows for non-uniform scaling and rotation? Thanks!
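
    One standard tool for the title question (it appears in Eberly, among others) is a matrix that scales by a factor k along an arbitrary unit axis n, S = I + (k - 1) n nᵀ, which lets you scale along an object's local axes regardless of orientation. A hedged C++ sketch; the matrix and vector types are illustrative:

        // Builds the upper-left 3x3 of a matrix that scales by k along unit axis n:
        // S = I + (k - 1) * n * n^T. Vectors parallel to n get scaled by k,
        // vectors perpendicular to n are left unchanged.
        struct Vec3 { float x, y, z; };
        struct Mat3 { float m[3][3]; };

        Mat3 scaleAlongAxis(Vec3 n, float k) {
            float a = k - 1.0f;
            Mat3 s = {{{1 + a*n.x*n.x, a*n.x*n.y,     a*n.x*n.z},
                       {a*n.y*n.x,     1 + a*n.y*n.y, a*n.y*n.z},
                       {a*n.z*n.x,     a*n.z*n.y,     1 + a*n.z*n.z}}};
            return s;
        }

    Also worth checking: with the row-vector layout shown above, v · (S · R · T) applies the scale in object space before the rotation, so a result that looks like a world-axis scale usually means the scale is effectively being applied after the rotation somewhere in the pipeline.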

    Read the article

  • Permanently Sync a wiimote with a computer

    - by Adam Geisweit
    I have tried to look up many ways to sync my Wiimotes to my computer so that I can program games with them, but every time it only syncs them temporarily, or if it says it can permanently sync, it doesn't actually do it. It gets tiresome when I have to keep reconnecting every time I want to save battery life. How would I be able to sync my Wiimote to my computer so that if I turn the Wiimote off, I can just hit any button and it will automatically sync back up?

    Read the article

  • OpenGL setup on Windows

    - by kevin james
    I have been trying to use OpenGL for two days now: first on Mac, then on Windows. The problem with the Mac is that it doesn't support the newer versions of OpenGL. I ran a tutorial that actually did get some things working, but it only works in Xcode (i.e., I can't create a new file, paste in the same code, and get it to work). Because of these issues, I moved to Windows. My Windows 7 machine has OpenGL 4.3, which is the same version used in a lot of other tutorials. However, not one of these tutorials gives any instruction on how to set it up for the first time. I have come across some vague posts saying that some libraries need to be linked. But WHAT libraries, and HOW do I link them? Please help. I am pretty desperate to set this up, as this project is due for work soon. I have actually used OpenGL before at my university, but the computers already had everything set up. The project itself is very easy, but setting up OpenGL is not something I know how to do.
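
    For the linking part specifically, a minimal MSVC-style sketch of the linker side. Note that the system opengl32 import library only exposes functions up to OpenGL 1.1; anything newer has to be loaded at runtime through an extension loader such as GLEW or glad.

        // Link the system OpenGL import library (equivalently: add opengl32.lib under
        // Project Properties > Linker > Input > Additional Dependencies).
        #pragma comment(lib, "opengl32.lib")

        #include <Windows.h>   // must come before GL/gl.h on Windows
        #include <GL/gl.h>     // core GL 1.1 declarations shipped with the Windows SDK

    Frameworks like GLFW or freeglut bundle the window/context creation step, so most modern tutorials assume one of them plus GLEW or glad for the 3.x/4.x functions.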

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shading in my WebGL application. I know that there are more techniques (like using two hemispheres or camera-space shadow mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. Till now, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture; see the "glitchy shadow mapping" screenshot in the original post). So for now, this is how I understand the usage of cubemaps in shadow mapping:

    1. Set up a framebuffer (in the case of cubemaps, 6 framebuffers; 6 instead of 1 because every use of framebufferTexture2D slows down execution, as described in the article linked in the original post) and a cubemap texture. Also, depth components are not well supported in WebGL, so I need to render to RGBA first.

        this.texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        for (var face = 0; face < 6; face++)
            gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);

        this.framebuffer = [];
        for (face = 0; face < 6; face++) {
            this.framebuffer[face] = gl.createFramebuffer();
            gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]);
            gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0);
            gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer);
            var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors
            if (e !== gl.FRAMEBUFFER_COMPLETE)
                throw "Cubemap framebuffer object is incomplete: " + e.toString();
        }

    2. Set up the light and the camera (I'm not sure whether I should store all 6 view matrices and send them to the shaders later, or whether there is a way to do it with just one view matrix).

    3. Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z):

        for (var face = 0; face < 6; face++) {
            gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]);
            gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size);
            gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
            camera.lookAt( light.position.add( cubeMapDirections[face] ) );
            scene.draw(shadow.program);
        }

    4. In a second pass, calculate the projection of the current vertex using the light's projection and view matrices. I don't know if I should calculate 6 of them, because of the 6 faces of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region.

        vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;

    5. In the fragment shader, calculate the distance between the current vertex and the light position and check if it's deeper than the depth information read from the earlier rendered shadow map. I know how to do it with a 2D texture, but I have no idea how I should use the cubemap texture here. I have read that texture lookups into cubemaps are performed with a direction vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex?
    For now, my code for this part looks like this (not working yet):

        float shadow = 1.0;
        vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
        depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
        float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
        if (depth.z > shadowDepth)
            shadow = 0.5;

    Could you give me some hints or examples (preferably in WebGL code) of how I should build it?

    Read the article

  • Circle collision detection and Vector math: HELP?

    - by Griffin
    Hey, so I'm currently going through the wildbunny blog to learn about collision detection, but I'm a bit confused about how the vectors he's talking about come into play. Quoted from the blog:

        p = ||A-B|| - (r1+r2)

    "The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly it is not only just a vector that does this, it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little."

        N = (A-B) / ||A-B||
        P = N*p

    "Here we have calculated the normalised vector N between the two centres and the penetration vector P by multiplying our unit direction by the penetration distance."

    OK, so I understand that p is the distance by which each circle is penetrating the other, but I don't get what exactly N and P are. It seems to me N is just the coordinates of the third point of the right triangle formed by points A and B (A-B), divided by the hypotenuse of that triangle, or the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems to me like a movement that one of the circles would perform to get un-penetrated.
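
    In other words, N is just the unit direction from B's centre to A's centre, and P is that direction scaled by the overlap. A small C++ sketch of using them to separate two circles; the types and the 50/50 split are illustrative, not from the blog:

        #include <cmath>

        struct Vec2 { float x, y; };

        void resolveCirclePenetration(Vec2& a, Vec2& b, float r1, float r2) {
            float dx = a.x - b.x, dy = a.y - b.y;
            float dist = std::sqrt(dx*dx + dy*dy);          // ||A-B||
            float p = dist - (r1 + r2);                     // negative when the circles overlap
            if (p >= 0.0f || dist == 0.0f) return;          // not penetrating (or coincident centres)
            Vec2 n   = { dx / dist, dy / dist };            // N = (A-B) / ||A-B||
            Vec2 pen = { n.x * p, n.y * p };                // P = N * p, the minimum correction
            // Move each circle half the correction in opposite directions along N.
            a.x -= pen.x * 0.5f; a.y -= pen.y * 0.5f;
            b.x += pen.x * 0.5f; b.y += pen.y * 0.5f;
        }

    Because P is the shortest vector that makes the circles just touch, applying it (or splitting it between the two bodies) removes the overlap without introducing extra movement.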

    Read the article

  • Calculate the intersection depth between a rectangle and a right triangle

    - by Celarix
    Hi all. I'm working on a 2D platformer built in C#/XNA, and I'm having a lot of problems calculating the intersection depth between a standard rectangle (used for sprites) and a right triangle (used for sloping tiles). Ideally, the rectangle will collide with the solid edges of the triangle, and its bottom-center point will collide with the sloped edge. I've been fighting with this for a couple of days now, and I can't make sense of it. So far, the method detects intersections (somewhat), but it reports wildly wrong depths. How does one properly calculate the depth? Is there something I'm missing? Thanks!
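
    For the sloped edge specifically, a common platformer trick is to sample the hypotenuse's height at the rectangle's bottom-centre x and treat the intersection depth as how far the bottom has sunk past that line. A hedged C++ sketch; the parameter names are illustrative and it assumes XNA-style screen coordinates where y grows downward:

        // Vertical intersection depth of a rectangle's bottom-centre against the sloped edge
        // of a right triangle. yAtLeft/yAtRight are the slope's surface heights at the
        // triangle's left and right x bounds.
        float slopeDepth(float rectLeft, float rectRight, float rectBottom,
                         float triLeftX, float triRightX, float yAtLeft, float yAtRight) {
            float centreX = (rectLeft + rectRight) * 0.5f;
            if (centreX < triLeftX || centreX > triRightX) return 0.0f;   // not over the slope
            float t = (centreX - triLeftX) / (triRightX - triLeftX);
            float surfaceY = yAtLeft + t * (yAtRight - yAtLeft);          // slope height under the centre
            float depth = rectBottom - surfaceY;                          // > 0 means the rect has sunk in
            return depth > 0.0f ? depth : 0.0f;
        }

    The solid (axis-aligned) edges of the triangle can then be handled with the ordinary rectangle-vs-rectangle depth you already use for square tiles.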

    Read the article

  • How to implement explosion in OpenGL with a particle effect?

    - by Chan
    I'm relatively new to OpenGL and I'm clueless about how to implement an explosion, so could anyone give me some ideas on how to start? Suppose the explosion occurs at location (x, y, z); I'm thinking of randomly generating a collection of vectors with (x, y, z) as the origin, then drawing a particle (glutSolidCube) that moves along each vector for some period of time, say 1000 updates, and then disappears. Is this approach feasible? A minimal example would be greatly appreciated.
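
    The approach described is feasible and is essentially a minimal particle system. A hedged C++ sketch of the bookkeeping (spawning, integration, lifetime), independent of how each particle is actually drawn; all names are illustrative:

        #include <algorithm>
        #include <cstdlib>
        #include <vector>

        struct Particle { float px, py, pz, vx, vy, vz, life; };

        std::vector<Particle> spawnExplosion(float x, float y, float z, int count) {
            std::vector<Particle> out;
            for (int i = 0; i < count; ++i) {
                // Crude random direction in [-1, 1] per axis.
                float dx = (rand() / (float)RAND_MAX) * 2.0f - 1.0f;
                float dy = (rand() / (float)RAND_MAX) * 2.0f - 1.0f;
                float dz = (rand() / (float)RAND_MAX) * 2.0f - 1.0f;
                out.push_back({x, y, z, dx, dy, dz, 1.0f});   // 1 second lifetime, for example
            }
            return out;
        }

        void updateParticles(std::vector<Particle>& ps, float dt) {
            for (auto& p : ps) {
                p.px += p.vx * dt;  p.py += p.vy * dt;  p.pz += p.vz * dt;
                p.life -= dt;
            }
            // Drop dead particles.
            ps.erase(std::remove_if(ps.begin(), ps.end(),
                                    [](const Particle& p){ return p.life <= 0.0f; }), ps.end());
        }

    Each frame you would translate to a particle's position and draw whatever primitive you like (a glutSolidCube, a point sprite, a textured quad); fading or shrinking particles as life approaches zero makes the effect read better.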

    Read the article

  • Creating a voxel chunk with a VBO - How to translate the coordinates of each block and add it to the VBO chunk?

    - by sunsunsunsunsun
    Im trying to make a voxel engine similar to minecraft as a little learning experience and a way to learn some opengl. I have created a chunk class and I want to put all of the vertices for the whole chunk into a single VBO. I was previously only putting each block into a vbo and making a call to render each block. Anyways, I am a bit confused about how I can translate the coordinates of each block in the chunk when I'm putting all vertices into one vbo. This is what I have at the moment. public void putVertices(float tx, float ty, float tz) { float l_length = 1.0f; float l_height = 1.0f; float l_width = 1.0f; vertexPositionData.put(new float[]{ xOffset + l_length + tx, l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, l_height + ty, zOffset + l_width + tz, xOffset + l_length + tx, l_height + ty, zOffset + l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + l_width + tz, xOffset + -l_length + tx, -l_height + ty, zOffset + l_width + tz, xOffset + -l_length + tx, -l_height + ty, zOffset + -l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + -l_width + tz, xOffset + l_length + tx, l_height + ty, zOffset + l_width + tz, xOffset + -l_length + tx, l_height + ty,zOffset + l_width + tz, xOffset + -l_length + tx, -l_height + ty,zOffset + l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, -l_height + ty,zOffset + -l_width + tz, xOffset + -l_length + tx, l_height + ty, zOffset + -l_width + tz, xOffset + l_length + tx, l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, l_height + ty, zOffset + l_width + tz, xOffset + -l_length + tx, l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, -l_height + ty, zOffset + -l_width + tz, xOffset + -l_length + tx, -l_height + ty,zOffset + l_width + tz, xOffset + l_length + tx, l_height + ty,zOffset + -l_width + tz, xOffset + l_length + tx, l_height + ty, zOffset + l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + l_width + tz, xOffset + l_length + tx, -l_height + ty, zOffset + -l_width + tz }); } public void createChunk() { vertexPositionData = BufferUtils.createFloatBuffer((24*3)*activateBlocks); Random random = new Random(); for (int x = 0; x < CHUNK_SIZE; x++) { for (int y = 0; y < CHUNK_SIZE; y++) { for (int z = 0; z < CHUNK_SIZE; z++) { if(blocks[x][y][z].getActive()) { putVertices(x*2.0f, y*2.0f, z*2.0f); } } } } Whats any easy way to translate the vertices of each block into its correct position? I was previously using glTranslatef with each call to render block but this wont work now. What I am doing now also does not work, the blocks all render in stacks on top of each other and it looks like this: http://i.imgur.com/NyFtBTI.png Thanks

    Read the article

  • How change LOD in geometry?

    - by ChaosDev
    Im looking for simple algorithm of LOD, for change geometry vertexes and decrease frame time. Im created octree, but now I want model or terrain vertex modify algorithm,not for increase(looking on tessellation later) but for decrease. I want something like this Questions: Is same algorithm can apply either to model and terrain correctly? Indexes need to be modified ? I must use octree or simple check distance between camera and object for desired effect ? New value of indexcount for DrawIndexed function needed ? Code: //m_LOD == 10 in the beginning //m_RawVerts - array of 3d Vector filled with values from vertex buffer. void DecreaseLOD() { m_LOD--; if(m_LOD<1)m_LOD=1; RebuildGeometry(); } void IncreaseLOD() { m_LOD++; if(m_LOD>10)m_LOD=10; RebuildGeometry(); } void RebuildGeometry() { void* vertexRawData = new byte[m_VertexBufferSize]; void* indexRawData = new DWORD[m_IndexCount]; auto context = mp_D3D->mp_Context; D3D11_MAPPED_SUBRESOURCE data; ZeroMemory(&data,sizeof(D3D11_MAPPED_SUBRESOURCE)); context->Map(mp_VertexBuffer->mp_buffer,0,D3D11_MAP_READ,0,&data); memcpy(vertexRawData,data.pData,m_VertexBufferSize); context->Unmap(mp_VertexBuffer->mp_buffer,0); context->Map(mp_IndexBuffer->mp_buffer,0,D3D11_MAP_READ,0,&data); memcpy(indexRawData,data.pData,m_IndexBufferSize); context->Unmap(mp_IndexBuffer->mp_buffer,0); DWORD* dwI = (DWORD*)indexRawData; int sz = (m_VertexStride/sizeof(float));//size of vertex element //algorithm must be here. std::vector<Vector3d> vertices; int i = 0; for(int j = 0; j < m_VertexCount; j++) { float x1 = (((float*)vertexRawData)[0+i]); float y1 = (((float*)vertexRawData)[1+i]); float z1 = (((float*)vertexRawData)[2+i]); Vector3d lv = Vector3d(x1,y1,z1); //my useless attempts if(j+m_LOD+1<m_RawVerts.size()) { float v1 = VECTORHELPER::Distance(m_RawVerts[dwI[j]],m_RawVerts[dwI[j+m_LOD]]); float v2 = VECTORHELPER::Distance(m_RawVerts[dwI[j]],m_RawVerts[dwI[j+m_LOD+1]]); if(v1>v2) lv = m_RawVerts[dwI[j+1]]; else if(v2<v1) lv = m_RawVerts[dwI[j+2]]; } (((float*)vertexRawData)[0+i]) = lv.x; (((float*)vertexRawData)[1+i]) = lv.y; (((float*)vertexRawData)[2+i]) = lv.z; i+=sz;//pass others vertex format values without change } for(int j = 0; j < m_IndexCount; j++) { //indices ? } //set vertexes to device UpdateVertexes(vertexRawData,mp_VertexBuffer->getSize()); delete[] vertexRawData; delete[] indexRawData; }
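
    On the sub-question of whether to use the octree or a simple camera-distance check: picking the LOD level from distance is usually enough on its own, as in this small C++ sketch (the thresholds and level count are illustrative). Actually reducing the triangle count for each level is normally done with a mesh-simplification step such as edge collapse rather than by skipping vertices in place, and yes, the index buffer has to be rebuilt along with the vertices.

        // Choose an LOD level from the camera distance; 0 = full detail.
        int pickLOD(float distanceToCamera) {
            const float thresholds[] = { 20.0f, 50.0f, 120.0f };   // world units, assumed
            int lod = 0;
            for (float t : thresholds)
                if (distanceToCamera > t) ++lod;
            return lod;                                            // 0..3
        }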

    Read the article

  • How do I simulate the mouse and keyboard using C# or C++?

    - by Art
    I want to start developing for Kinect, but the hardest part is how to send keyboard and mouse input to any application. In a previous question I got advice to develop my own driver for these devices, but that would take a while. I imagine an application like a gateway that can translate SendMessages into system-wide input, or a driver application with an API to send these inputs. So I wonder, are there drivers or simulators that can interact with C# or C++? Small edit: SendMessage, PostMessage and keybd_event will only work on Windows applications with a common message loop. So I need a driver-style application that works at a low (kernel) level.
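
    One user-mode option that does produce system-wide input (unlike SendMessage/PostMessage, which target a specific window's message queue) is the Win32 SendInput API. A minimal C++ sketch:

        #include <Windows.h>

        // Injects a key press and release for the given virtual-key code into the
        // system input stream, as if the user had typed it.
        void tapKey(WORD vk) {
            INPUT in[2] = {};
            in[0].type = INPUT_KEYBOARD;
            in[0].ki.wVk = vk;                    // key down
            in[1].type = INPUT_KEYBOARD;
            in[1].ki.wVk = vk;
            in[1].ki.dwFlags = KEYEVENTF_KEYUP;   // key up
            SendInput(2, in, sizeof(INPUT));
        }

    Mouse movement and buttons go through the same call with INPUT_MOUSE entries; whether that is "low enough" for a particular game depends on how that game reads input.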

    Read the article

  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self education, I'm writing a 2D platformer engine in C++ using SDL / OpenGL. I initially began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode but I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates are a right-hand system with values from 0 to 1 -- flipping the texture vertically before rendering (well, flip the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice or they've done this themselves and can point out future pitfalls, that would be great, but really any thoughts would be appreciated.
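
    For the projection itself, the usual trick in the fixed-function style the post describes is to hand glOrtho a flipped top/bottom so the origin lands at the top-left, matching SDL. A short sketch; screenWidth and screenHeight are assumed variables:

        // Legacy-style orthographic setup with the origin at the top-left and y pointing down.
        void setupTopLeftOrtho(int screenWidth, int screenHeight) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0.0, screenWidth, screenHeight, 0.0, -1.0, 1.0);   // bottom/top swapped flips the y axis
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

    One pitfall to note: flipping the y axis reverses the screen-space winding of your triangles, so back-face culling (glFrontFace/glCullFace) may need to be adjusted, and texture coordinates follow whichever convention you pick when you write them, so flipping files on disk becomes unnecessary once the V coordinates are authored consistently.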

    Read the article

  • How to optimise mesh data

    - by Wardy
    So I have some procedurally generated mesh data and I want to reduce it down to its minimum number of verts. In case it matters, this is a Unity project. Working on the basis of a simple example, let's assume a typical flat surface of points, 2 by 3. The point / vertex at [1,1] is used in many triangles. I've generated mesh data for a voxel-type engine that adds verts to a list based on face visibility, and now I want to remove all the duplicates. Can anyone come up with an efficient way of doing this? Because what I have is so bad it's not even funny (and I don't even think it's logically correct):

        private void Optimize()
        {
            Vector3 v;
            Vector3 v2;
            for (int i = 0; i < Vertices.Count; i++)
            {
                v = Vertices[i];
                for (int j = i + 1; j < Vertices.Count; j++)
                {
                    v2 = Vertices[j];
                    if (v.x == v2.x && v.y == v2.y && v.z == v2.z)
                    {
                        for (int ind = 0; ind < Indices.Count; ind++)
                        {
                            if (Indices[ind] == j)
                            {
                                Indices[ind] = i;
                            }
                            else if (Indices[ind] > j && Indices[ind] > 0)
                                Indices[ind]--;
                        }
                        Vertices.RemoveAt(j);
                        Uvs.RemoveAt(j);
                        Normals.RemoveAt(j);
                    }
                }
            }
        }

    EDIT: OK, I managed to get this (code sample above updated) to render an "optimised" set of verts, but the UV data is all wrong now, which would make sense because I'm basically just removing any UV vector that represents a UV coord for a removed vert and not actually considering what I need to do to "fix the tri", so to speak. The code now seemingly does work, but it's quite time consuming; still looking to further optimise.
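
    A common way to make this pass fast is to weld vertices through a map from vertex to first-seen index and then remap the index buffer once, instead of the nested O(n²) scans above. A C++-flavoured sketch of the idea (Vector3 here is a stand-in for the engine's vertex type, and the key uses position only; include UV and normal in the key if those have to match too — that also sidesteps the broken-UV problem):

        #include <map>
        #include <tuple>
        #include <vector>

        struct Vector3 { float x, y, z; };   // stand-in for the engine's vector type

        void weld(std::vector<Vector3>& verts, std::vector<int>& indices) {
            std::map<std::tuple<float, float, float>, int> seen;  // vertex -> index in 'unique'
            std::vector<Vector3> unique;
            std::vector<int> remap(verts.size());
            for (size_t i = 0; i < verts.size(); ++i) {
                auto key = std::make_tuple(verts[i].x, verts[i].y, verts[i].z);
                auto it = seen.find(key);
                if (it == seen.end()) {
                    seen[key] = (int)unique.size();
                    remap[i]  = (int)unique.size();
                    unique.push_back(verts[i]);
                } else {
                    remap[i] = it->second;        // duplicate: reuse the earlier vertex
                }
            }
            for (auto& idx : indices) idx = remap[idx];   // indices now point into the welded list
            verts.swap(unique);
        }

    The same pattern in C# is a Dictionary from a vertex key to its new index, built in a single pass.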

    Read the article

  • Drawing 2D Grid in 3D View - Need help with method

    - by Deukalion
    I'm trying to draw a simple 2D grid for an editor, to able to navigate more clearly around the 3D space, but I can't render it: Grid2D class, creates a grid of a certain size at a location and should just draw lines. public class Grid2D : IShape { private VertexPositionColor[] _vertices; private Vector2 _size; private Vector3 _location; private int _faces; public Grid2D(Vector2 size, Vector3 location, Color color) { float x = 0, y = 0; if (size.X < 1f) { size.X = 1f; } if (size.Y < 1f) { size.Y = 1f; } _size = size; _location = location; List<VertexPositionColor> vertices = new List<VertexPositionColor>(); _faces = 0; for (y = -size.Y; y <= size.Y; y++) { vertices.Add(new VertexPositionColor(location + new Vector3(-size.X, y, 0), color)); vertices.Add(new VertexPositionColor(location + new Vector3(size.X, y, 0), color)); _faces++; } for (x = -size.X; x <= size.X; x++) { vertices.Add(new VertexPositionColor(location + new Vector3(x, -size.Y, 0), color)); vertices.Add(new VertexPositionColor(location + new Vector3(x, size.Y, 0), color)); _faces++; } _vertices = vertices.ToArray(); } public void Render(GraphicsDevice device) { device.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.LineList, _vertices, 0, _faces); } } Like this: +----+----+----+----+ | | | | | +----+----+----+----+ | | | | | +----+----+----+----+ | | | | | +----+----+----+----+ | | | | | +----+----+----+----+ Anyone knows what I'm doing wrong? If I add a Shape without texture, it's set automatically to VertexColorEnabled and TextureEnabled = false. This is how I render it: foreach (RenderObject render in _renderObjects) { render.Effect.Projection = projection; render.Effect.View = view; render.Effect.World = world; foreach (EffectPass pass in render.Effect.CurrentTechnique.Passes) { pass.Apply(); try { // Could be a Grid2D render.Shape.Render(_device); } catch { throw; } } } Exception is thrown: The current vertex shader declaration does not include all the elements required by the current Vertex Shader. Normal0 is missing. Simply put, I can't figure out how to draw a few lines. I want to draw them one at a time and I guess that's the problem I haven't figured out, and even when I tried rendering vertices[i], vertices[i+1] and primitiveCount = 1, vertices = 2, and so on it didn't work either. Any suggestions?

    Read the article

  • Calculate velocity of a bullet ricocheting on a circle

    - by SteveL
    I made a picture to demonstrate what I need: basically I have a bullet with a velocity and I want it to bounce at the correct angle after it hits a circle.

    Solved (see the accepted answer for the explanation):

        Vector.vector.set(bullet.vel);                  // -> v
        Vector.vector2.setDirection(pos, bullet.pos);   // -> n, normal from centre of circle to bullet
        float dot = Vector.vector.dot(Vector.vector2);  // -> dot product
        Vector.vector2.mul(dot).mul(2);
        Vector.vector.sub(Vector.vector2);
        Vector.vector.y = -Vector.vector.y;             // -> for some reason I had to invert the y
        bullet.vel.set(Vector.vector);
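
    For reference, the underlying formula is the standard reflection r = v - 2(v·n)n, with n the unit normal at the contact point (circle centre toward the bullet). A tiny C++ sketch with illustrative types; if the y component has to be flipped afterwards, that is usually a sign that the normal direction or a y-down screen convention got mixed in somewhere:

        struct Vec2 { float x, y; };

        // n must be normalised.
        Vec2 reflect(Vec2 v, Vec2 n) {
            float d = v.x * n.x + v.y * n.y;                       // v . n
            return { v.x - 2.0f * d * n.x, v.y - 2.0f * d * n.y };
        }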

    Read the article

  • Behaviour tree code example?

    - by jokoon
    http://altdevblogaday.org/2011/02/24/introduction-to-behavior-trees/ This is obviously the most interesting article I found on this website. What do you think about it? It lacks code examples; do you know of any? I also read that state machines are not very flexible compared to behaviour trees. On top of that, I'm not sure if there is a true link between state machines and the state pattern... is there?
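
    Since the question asks for a code example: a minimal, hedged C++ sketch of the core idea from the linked article, where every node ticks and reports a status and composite nodes (Sequence/Selector) just combine their children's results:

        #include <memory>
        #include <vector>

        enum class Status { Success, Failure, Running };

        struct Node {
            virtual ~Node() = default;
            virtual Status tick() = 0;
        };

        // Succeeds only if every child succeeds; stops at the first Failure or Running child.
        struct Sequence : Node {
            std::vector<std::unique_ptr<Node>> children;
            Status tick() override {
                for (auto& c : children) {
                    Status s = c->tick();
                    if (s != Status::Success) return s;
                }
                return Status::Success;
            }
        };

        // Succeeds on the first child that does not fail; fails only if every child fails.
        struct Selector : Node {
            std::vector<std::unique_ptr<Node>> children;
            Status tick() override {
                for (auto& c : children) {
                    Status s = c->tick();
                    if (s != Status::Failure) return s;
                }
                return Status::Failure;
            }
        };

    Leaf nodes (conditions and actions) derive from Node and do the actual work in tick(); the tree is just these composites nested to whatever depth the behaviour needs.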

    Read the article

  • Isometric algorithm [fixed]

    - by David
    So I've been toying with isometric and I just can't get the tiles to be in the right order. I'm probably missing something obvious and I just can't see it... but even at the risk of looking stupid, here's my code:

        for (int i = 0; i < Tile.MapSize; i++)
        {
            for (int j = 0; j < Tile.MapSize; j++)
            {
                spriteBatch.Draw(
                    Tile.TileSetTexture,
                    new Rectangle(
                        (-j * Tile.TileWidth / 2) + (i * Tile.TileWidth / 2),
                        (i * (Tile.TileHeight - 9) / 2) - (-j * (Tile.TileHeight - 9) / 2),
                        Tile.TileWidth,
                        Tile.TileHeight),
                    Tile.GetSourceRectangle(tileID),
                    Color.White,
                    0.0f,
                    new Vector2(-350, -60),
                    SpriteEffects.None,
                    1.0f);
            }
        }

    and here's what I end up with: a delicious messed-up map (screenshot in the original post). Yep, bit of an issue. So if anyone could help, I'd appreciate it.

    Edit: works now <_<

    Read the article

  • Which opcodes are faster at the CPU level?

    - by Geotarget
    In every programming language there are sets of opcodes that are recommended over others. I've tried to list them here, in order of speed:

        Bitwise
        Integer Addition / Subtraction
        Integer Multiplication / Division
        Comparison
        Control flow
        Float Addition / Subtraction
        Float Multiplication / Division

    Where you need high-performance code, C++ can be hand-optimized in assembly to use SIMD instructions or more efficient control flow, data types, etc. So I'm trying to understand whether the data type (int32 / float32 / float64) or the operation used (*, +, &) affects performance at the CPU level. Is a single multiply slower on the CPU than an addition? In MCU theory you learn that the speed of an opcode is determined by the number of CPU cycles it takes to execute. So does it mean that multiply takes 4 cycles and add takes 2? Exactly what are the speed characteristics of the basic math and control flow opcodes? If two opcodes take the same number of cycles to execute, can both be used interchangeably without any performance gain/loss? Any other technical details you can share regarding x86 CPU performance are appreciated.

    Read the article

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is in 2D quads except the window frame. Is there a way to avoid this happening, or any common causes of this?

    Creating the file object:

        int index = IndexAssigner(1, 1);
        // make a file object and store the list and the index of that list in a C string
        ifstream file (list[index].c_str());
        // Make another string
        // string line;
        points.push_back(Point());
        Point p;
        int face[4];

    Model rendering code:

        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);
        cout << "Size Of Point" << sizeof(Point) << endl;

        GLuint vertexbuffer;
        glGenVertexArrays(1, &vao[3]);
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
        glEnableClientState(GL_INDEX_ARRAY);
        glIndexPointer(GL_FLOAT, faces.size(), faces.data());
        glEnableVertexAttribArray(0);
        glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

    Read the article

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs and the extent. Simply transforming all occluded vertices to the depths as defined by the 'hull' is fairly effective, and has good performance, but it suffers the problem of not preserving the features of the mesh and requires extensive culling to avoid false-positives. I would like instead to generate from the depth deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need to have complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but do not know where to start doing this. Can anyone suggest a suitable filter or algorithm for generating deformers? Or to put it another way 'compressing' a depth map? (*Push because its fitting to a convex 'bulgy' humanoid so transforms are likely to be 'spherical' from the POV of the surface.)

    Read the article

  • Incorrect results for frustum cull

    - by DeadMG
    Previously, I had a problem with my frustum culling producing too optimistic results - that is, including many objects that were not in the view volume. Now I have refactored that code and produced a cull that should be accurate to the actual frustum, instead of an axis-aligned box approximation. The problem is that now it never returns anything to be in the view volume. As the mathematical support library I'm using does not provide plane support functions, I had to code much of this functionality myself, and I'm not really the mathematical type, so it's likely that I've made some silly error somewhere. As follows is the relevant code:

        class Plane {
        public:
            Plane() {
                r0 = Math::Vector(0,0,0);
                normal = Math::Vector(0,1,0);
            }
            Plane(Math::Vector p1, Math::Vector p2, Math::Vector p3) {
                r0 = p1;
                normal = Math::Cross((p2 - p1), (p3 - p1));
            }
            Math::Vector r0;
            Math::Vector normal;
        };

    This class represents one plane as a point and a normal vector.

        class Frustum {
        public:
            Frustum( const std::array<Math::Vector, 8>& points ) {
                planes[0] = Plane(points[0], points[1], points[2]);
                planes[1] = Plane(points[4], points[5], points[6]);
                planes[2] = Plane(points[0], points[1], points[4]);
                planes[3] = Plane(points[2], points[3], points[6]);
                planes[4] = Plane(points[0], points[2], points[4]);
                planes[5] = Plane(points[1], points[3], points[5]);
            }
            Plane planes[6];
        };

    The points are passed in order where (the inverse of) each bit of the index of each point indicates whether it's the left, top, and back of the frustum, respectively. As such, I just picked any three points where they all shared one bit in common to define the planes. My intersection test is as follows (based on this):

        bool Intersects(Math::AABB lhs, const Frustum& rhs) const {
            for(int i = 0; i < 6; i++) {
                Math::Vector pvertex = lhs.TopRightFurthest;
                Math::Vector nvertex = lhs.BottomLeftClosest;
                if (rhs.planes[i].normal.x <= -0.0f) {
                    std::swap(pvertex.x, nvertex.x);
                }
                if (rhs.planes[i].normal.y <= -0.0f) {
                    std::swap(pvertex.y, nvertex.y);
                }
                if (rhs.planes[i].normal.z <= -0.0f) {
                    std::swap(pvertex.z, nvertex.z);
                }
                if (Math::Dot(rhs.planes[i].r0, nvertex) < 0.0f) {
                    return false;
                }
            }
            return true;
        }

    Also of note is that because I'm using a left-handed co-ordinate system, I wrote my Cross function to return the negative of the formula given on Wikipedia. Any suggestions as to where I've made a mistake?
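
    Two things stand out in the test above: the plane check compares the box corner against the origin (Dot(r0, nvertex)) rather than against the plane itself, and the three-point constructor gives no guarantee that all six normals face into the frustum. A hedged sketch of the usual positive-vertex test, written against the post's own Plane and Math helpers and assuming the normals are made to point inward:

        // The box lies entirely outside a plane when even its most-positive corner along the
        // plane normal (pvertex) is behind that plane. Test against the plane point r0,
        // not against the origin.
        bool outsidePlane(const Plane& pl, const Math::Vector& pvertex) {
            return Math::Dot(pl.normal, pvertex - pl.r0) < 0.0f;
        }

    In the loop above, that would mean testing pvertex (the corner selected by the swaps) and returning false when outsidePlane reports it is behind the plane.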

    Read the article

  • Map format for 3d open world

    - by Pacha
    I am making an open world 3d platformer in Ogre3D, and I have no idea on what kind of 3d map file format I should use for it. I want to make low-polygon blocky-style objects. Probably rectangles and other geometrical figures that don't have circular edges. Some of those blocks will have properties, like climbable or they might move. I was wondering what would be the best thing to do to make the map (just one level, as it is open).

    Read the article

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0/WIN32?

    - by NoobScratcher
    I've been having issues trying to snap a 2D Quad to the mouse cursor position I'm able : 1.) To get values into posX, posY, posZ 2.) Translate with the values from those 3 variables But the quad positioning I'm not able to do correctly in such a way that the 2D Quad is near the mouse cursor using those values from those 3 variables eg."posX, posY, posZ" I need the mouse cursor in the center of the 2D Quad. I'm hoping someone can help me achieve this. I've tried searching around with no avail. Heres the function that is ment to do the snapping but instead creates weird flicker or shows nothing at all only the 3d models show up : void display() { glClearColor(0.0,0.0,0.0,1.0); glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); for(std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I) { glCallList(*I); } if(DrawArea == true) { glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ); cerr << winZ << endl; glGetDoublev(GL_MODELVIEW_MATRIX, modelview); glGetDoublev(GL_PROJECTION_MATRIX, projection); glGetIntegerv(GL_VIEWPORT, viewport); gluUnProject(winX, winY, winZ , modelview, projection, viewport, &posX, &posY, & posZ); glBindTexture(GL_TEXTURE_2D, DrawAreaTexture); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w, DrawAreaSurface->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, DrawAreaSurface->pixels); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, DrawAreaTexture); glTranslatef(posX , posY, posZ); glBegin(GL_QUADS); glTexCoord2f (0.0, 0.0); glVertex3f(0.5, 0.5, 0); glTexCoord2f (1.0, 0.0); glVertex3f(0, 0.5, 0); glTexCoord2f (1.0, 1.0); glVertex3f(0, 0, 0); glTexCoord2f (0.0, 1.0); glVertex3f(0.5, 0, 0); glEnd(); } SwapBuffers(hDC); } I'm using : OpenGL 3.0 WIN32 API C++ GLSL if you really want the full source here it is - http://pastebin.com/1Ncm9HNf , Its pretty messy.

    Read the article
