Search Results

Search found 21563 results on 863 pages for 'game testing'.


  • Any significant performance cost to using BlendState.Premultiplied?

    - by Donutz
    Normally you'd use BlendState.AlphaBlend, because when you load your textures through the content pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them yourself, which can take a long time if you've got a lot of textures to load. So it looks (I haven't tried it) like using BlendState.Premultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup cost. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?
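    For scale, the load-time work being avoided is one multiply per channel; a minimal, XNA-agnostic sketch of that pass (assuming tightly packed 8-bit RGBA data):

        #include <cstddef>
        #include <cstdint>

        // A minimal sketch of the load-time premultiply pass, assuming
        // tightly packed 8-bit RGBA pixels.
        void premultiply(std::uint8_t* rgba, std::size_t pixelCount) {
            for (std::size_t i = 0; i < pixelCount; ++i, rgba += 4) {
                unsigned a = rgba[3];
                rgba[0] = static_cast<std::uint8_t>(rgba[0] * a / 255);
                rgba[1] = static_cast<std::uint8_t>(rgba[1] * a / 255);
                rgba[2] = static_cast<std::uint8_t>(rgba[2] * a / 255);
            }
        }

    On the draw-time side, switching blend factors generally costs nothing measurable on the GPU - either way it is a single fixed-function blend equation per pixel.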

    Read the article

  • Check for bodies within a specific circle in Box2D

    - by ltjax
    I'm trying to find positions to insert new bodies into my world. For that, I'd like to have a "free" spot where this body wouldn't overlap with anything else. So my plan was to sample "random" positions and check whether they overlap with my "potential" new body. Since my bodies are always circular, I'd need to test within a given circle. So far, the only way to use Box2D for this seems to be b2World::QueryAABB around my circle, followed by manually doing an overlap test with all the fixtures it gives me (Box2D doesn't even seem to allow me to tap into its own overlap tests?!). It seems to me like Box2D should already provide such functionality - is there a way that lets me do this without reinventing most of the wheel again?
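    Box2D does in fact expose its overlap test as the free function b2TestOverlap, which can be combined with the AABB query; a minimal sketch (Box2D 2.x API; the helper names are mine):

        #include <Box2D/Box2D.h>

        // Query the AABB around the candidate circle, then use Box2D's own
        // b2TestOverlap for the exact circle-vs-fixture test.
        class CircleOverlapCallback : public b2QueryCallback {
        public:
            CircleOverlapCallback(const b2Vec2& center, float radius)
                : overlaps(false) {
                circle.m_p = center;        // m_p holds the world position,
                circle.m_radius = radius;   // so an identity transform works
                transform.SetIdentity();
            }

            bool ReportFixture(b2Fixture* fixture) override {
                if (b2TestOverlap(&circle, 0, fixture->GetShape(), 0,
                                  transform, fixture->GetBody()->GetTransform())) {
                    overlaps = true;
                    return false;           // found an overlap, stop the query
                }
                return true;                // keep looking
            }

            b2CircleShape circle;
            b2Transform transform;
            bool overlaps;
        };

        bool isSpotFree(b2World& world, const b2Vec2& center, float radius) {
            CircleOverlapCallback callback(center, radius);
            b2AABB aabb;
            aabb.lowerBound = center - b2Vec2(radius, radius);
            aabb.upperBound = center + b2Vec2(radius, radius);
            world.QueryAABB(&callback, aabb);
            return !callback.overlaps;
        }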

    Read the article

  • Quake 3 Bot Programming Example

    - by Manni
    I would like to implement an intelligent bot for Quake 3. I downloaded the source code and built it successfully under Linux. My problem is that I couldn't find any complete tutorial telling me how to build an agent, or which files to look at (as there are many files in the source code). Can you give me a website or a piece of source code telling me how to start? Or something like example source code for a bot?

    Read the article

  • LWJGL - Mixing 2D and 3D

    - by nathan
    I'm trying to mix 2D and 3D using LWJGL. I have written a little method that allows me to easily switch between 2D and 3D:

        protected static void make2D() {
            glEnable(GL_BLEND);
            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity();
            glOrtho(0.0f, SCREEN_WIDTH, SCREEN_HEIGHT, 0.0f, 0.0f, 1.0f);
            GL11.glMatrixMode(GL11.GL_MODELVIEW);
            GL11.glLoadIdentity();
        }

        protected static void make3D() {
            glDisable(GL_BLEND);
            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity(); // Reset the projection matrix
            // Calculate the aspect ratio of the window
            GLU.gluPerspective(45.0f, ((float) SCREEN_WIDTH / (float) SCREEN_HEIGHT), 0.1f, 100.0f);
            GL11.glMatrixMode(GL11.GL_MODELVIEW);
            glLoadIdentity();
        }

    Then in my rendering code I do something like:

        make2D();
        // draw 2D stuff here
        make3D();
        // draw 3D stuff here

    What I'm trying to do is draw a 3D shape (in my case a quad) and a 2D image. I found this example and took the code from TextureLoader, Texture and Sprite to load and render a 2D image. Here is how I load the image:

        TextureLoader loader = new TextureLoader();
        Sprite s = new Sprite(loader, "player.png");

    And here is how I render it:

        make2D();
        s.draw(0, 0);

    It works great. Here is how I render my quad:

        glTranslatef(0.0f, 0.0f, 30.0f);
        glScalef(12.0f, 9.0f, 1.0f);
        DrawUtils.drawQuad();

    Once again, no problem; the quad is properly rendered. DrawUtils is a simple class I wrote containing utility methods to draw primitive shapes. Now my problem is when I want to mix both of the above: loading/rendering the 2D image and rendering the quad. When I load my 2D image with the following, my quad is not rendered anymore (I'm not even trying to render the 2D image at this point):

        s = new Sprite(loader, "player.png");

    The mere act of creating the texture causes the issue. After looking a bit at the code of Sprite and TextureLoader, I found that the problem appears after the call to glTexImage2D. In the TextureLoader class:

        glTexImage2D(target, 0, dstPixelFormat,
                     get2Fold(bufferedImage.getWidth()),
                     get2Fold(bufferedImage.getHeight()),
                     0, srcPixelFormat, GL_UNSIGNED_BYTE, textureBuffer);

    Commenting this line makes the problem disappear. My question is then: why? Is there anything special to do after calling this function in order to do 3D? Does this function alter the render state or the projection matrix?
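    One guess worth testing (an assumption - the full TextureLoader code isn't shown here): the loader probably leaves GL_TEXTURE_2D enabled and its texture bound, so the untextured quad gets modulated by a stray texel instead of drawing its plain color. The LWJGL calls map 1:1 to the C ones in this sketch:

        #include <GL/gl.h>

        /* Hypothetical guard, assuming the texture load left state behind:
           reset texture state before drawing untextured geometry. */
        void prepareUntexturedDraw(void) {
            glBindTexture(GL_TEXTURE_2D, 0); /* unbind whatever the loader bound */
            glDisable(GL_TEXTURE_2D);        /* stop sampling during the 3D pass */
        }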

    Read the article

  • determine collision angle on a rotating body

    - by jorb
    Update: new diagram and updated description. I have a contact listener set up to try to determine the side a collision happened on, relative to the body's rotation. One way to solve this is to find the value of the yellow angle between the red and blue vectors drawn above. The angle can be found by taking the arc cosine of the dot product of the two vectors (Evan pointed this out). One of my points of confusion is the difference in domain between the atan2 function, HTML canvas coordinates, and the Box2D rotation information. I know I have to account for this somehow... Screenshot below. Questions: Does Box2D provide these angles more directly in the collision information? Am I even on the right track? If so, any hints? I have the following JavaScript so far:

        Ship.prototype.onCollide = function (other_ent, cx, cy) {
            var pos = this.body.GetPosition();
            // collision position relative to body
            var d_cx = pos.x - cx;
            var d_cy = pos.y - cy;
            // length of initial vector
            var len = Math.sqrt(Math.pow(pos.x - cx, 2) + Math.pow(pos.y - cy, 2));
            // body angle - can over-rotate, hence mod 2*Pi
            var ang = this.body.GetAngle() % (Math.PI * 2);
            // vector representing body's angle - same magnitude as the first
            var b_vx = len * Math.cos(ang);
            var b_vy = len * Math.sin(ang);
            // dot product of the two vectors
            var dot_prod = d_cx * b_vx + d_cy * b_vy;
            // new calculation of difference in angle - NOT WORKING!
            var d_ang = Math.acos(dot_prod);
            var side;
            if (Math.abs(d_ang) < Math.PI / 2)
                side = "front";
            else
                side = "back";
            console.log("length", len);
            console.log("pos:", pos.x, pos.y);
            console.log("offs:", d_cx, d_cy);
            console.log("body vec", b_vx, b_vy);
            console.log("body angle:", ang);
            console.log("dot product", dot_prod);
            console.log("result:", d_ang);
            console.log("side", side);
            console.log("------------------------");
        }
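    One concrete bug in the snippet above: Math.acos expects a cosine in [-1, 1], but dot_prod here has magnitude up to len squared, so the dot product must be divided by the product of the two vector lengths (or the vectors normalized first). A minimal sketch of both the unsigned and the signed variant (C++, but it translates directly):

        #include <cmath>

        // Unsigned angle between 2D vectors a and b, in [0, pi]: the dot
        // product must be normalized before acos() sees it.
        float angleBetween(float ax, float ay, float bx, float by) {
            float dot = ax * bx + ay * by;
            float lens = std::sqrt(ax * ax + ay * ay) *
                         std::sqrt(bx * bx + by * by);
            return std::acos(dot / lens);
        }

        // Signed angle in (-pi, pi]: the 2D cross product ("perp dot")
        // supplies the sign, positive when b is counter-clockwise from a.
        float signedAngle(float ax, float ay, float bx, float by) {
            return std::atan2(ax * by - ay * bx,   // cross
                              ax * bx + ay * by);  // dot
        }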

    Read the article

  • Generated 3d tree meshes

    - by Jari Komppa
    I did not find a question along these lines yet; correct me if I'm wrong. Trees (and flora in general) are common in games. Due to their nature, they are a good candidate for procedural generation. There's SpeedTree, of course, if you can afford it; as far as I can tell, it doesn't provide the possibility of generating your tree meshes at runtime. Then there's SnappyTree, an online WebGL-based tree generator built on proctree.js, which is some ~500 lines of JavaScript. One could use either of the above (or some other tree generator I haven't stumbled upon) to create a few dozen tree meshes beforehand - or model them from scratch in a 3D modeller - and then randomly mirror/scale them for a few more variants. But I'd rather have a free, linkable tree mesh generator. Possible solutions: port proctree.js to C++ and deal with the open-source license (it doesn't seem to be GPL, so that could be doable; the author may also be willing to co-operate to make the license even more free); roll my own based on L-systems (see the sketch below); don't bother and just use offline-generated trees; or use some other method I haven't found yet.
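    On the L-system option, the core rewriting step is only a few lines; a minimal sketch (the rule is a classic bushy-tree example, not taken from proctree.js) whose output string would then drive a turtle-style mesh builder ('F' = grow a branch segment, '[' / ']' = push/pop state, '+' / '-' = turn):

        #include <iostream>
        #include <map>
        #include <string>

        int main() {
            // One production rule; a real generator would randomize these.
            std::map<char, std::string> rules = {
                {'F', "FF+[+F-F-F]-[-F+F+F]"}
            };
            std::string axiom = "F";
            for (int generation = 0; generation < 3; ++generation) {
                std::string next;
                for (char c : axiom) {
                    auto it = rules.find(c);
                    next += (it != rules.end()) ? it->second : std::string(1, c);
                }
                axiom = next;                // the string grows each pass
            }
            std::cout << axiom << '\n';      // feed this to the mesh builder
        }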

    Read the article

  • Ray Intersecting Plane Formula in C++/DirectX

    - by user4585
    I'm developing a picking system that will use rays that intersect volumes, and I'm having trouble with ray intersection against a plane. I was able to figure out spheres fairly easily, but planes are giving me trouble. I've tried to understand various sources and got hung up on some of the variables used within their explanations. Here is a snippet of my code:

        bool Picking()
        {
            D3DXVECTOR3 vec;
            D3DXVECTOR3 vRayDir;
            D3DXVECTOR3 vRayOrig;
            D3DXVECTOR3 vROO, vROD; // vect ray obj orig, vec ray obj dir
            D3DXMATRIX m;
            D3DXMATRIX mInverse;
            D3DXMATRIX worldMat;

            // Obtain projection matrix
            D3DXMATRIX pMatProj = CDirectXRenderer::GetInstance()->Director()->Proj();

            // Obtain mouse position
            D3DXVECTOR3 pos = CGUIManager::GetInstance()->GUIObjectList.front().pos;

            // Get window width & height
            float w = CDirectXRenderer::GetInstance()->GetWidth();
            float h = CDirectXRenderer::GetInstance()->GetHeight();

            // Transform vector from screen to 3D space
            vec.x = (((2.0f * pos.x) / w) - 1.0f) / pMatProj._11;
            vec.y = -(((2.0f * pos.y) / h) - 1.0f) / pMatProj._22;
            vec.z = 1.0f;

            // Create a view inverse matrix
            D3DXMatrixInverse(&m, NULL, &CDirectXRenderer::GetInstance()->Director()->View());

            // Determine our ray's direction
            vRayDir.x = vec.x * m._11 + vec.y * m._21 + vec.z * m._31;
            vRayDir.y = vec.x * m._12 + vec.y * m._22 + vec.z * m._32;
            vRayDir.z = vec.x * m._13 + vec.y * m._23 + vec.z * m._33;

            // Determine our ray's origin
            vRayOrig.x = m._41;
            vRayOrig.y = m._42;
            vRayOrig.z = m._43;

            D3DXMatrixIdentity(&worldMat);
            //worldMat = aliveActors[0]->GetTrans();
            D3DXMatrixInverse(&mInverse, NULL, &worldMat);

            D3DXVec3TransformCoord(&vROO, &vRayOrig, &mInverse);
            D3DXVec3TransformNormal(&vROD, &vRayDir, &mInverse);
            D3DXVec3Normalize(&vROD, &vROD);

    When using this code I'm able to detect a ray intersection with a sphere, but I have questions when determining an intersection with a plane. First off, should I be using my vRayOrig and vRayDir variables for the plane intersection tests, or should I be using the new vectors that are created for use in object space? When looking at a site like this, for example: http://www.tar.hu/gamealgorithms/ch22lev1sec2.html I'm curious as to what D is in the equation AX + BY + CZ + D = 0 and how it factors into determining a plane intersection. Any help will be appreciated, thanks.
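    On the two sub-questions: D in AX + BY + CZ + D = 0 is the plane's offset from the origin - with N = (A, B, C) the plane normal, D = -dot(N, P) for any point P on the plane. And the ray should live in the same space as the plane: for a world-space plane use vRayOrig/vRayDir, and keep the object-space pair (vROO/vROD) for object-space geometry. A minimal sketch with plain structs (not D3DX types):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        float dot(const Vec3& a, const Vec3& b) {
            return a.x * b.x + a.y * b.y + a.z * b.z;
        }

        // Substituting the ray X = O + t*Dir into dot(N, X) + D = 0 gives
        // t = -(dot(N, O) + D) / dot(N, Dir).
        bool rayPlane(const Vec3& orig, const Vec3& dir,
                      const Vec3& n, float d, float& tOut) {
            float denom = dot(n, dir);
            if (std::fabs(denom) < 1e-6f) return false; // ray parallel to plane
            float t = -(dot(n, orig) + d) / denom;
            if (t < 0.0f) return false;                 // plane behind the ray
            tOut = t;                                   // hit at orig + t * dir
            return true;
        }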

    Read the article

  • Can I animate render targets or the swap chain?

    - by Eric F.
    I want to animate some synthetic video bits to fullscreen w/o tearing. Can I set up D3D 9/10/11 in exclusive mode, and have it present a series of buffers that I'm writing to? I know how to copy system memory bits into a texture, then draw that texture as a fullscreen quad, but it seems like overkill. Why should I use the triangle rasterizer when I want to do something so simple? All I want to do is set up a long (4-8 buffer) swapchain and set the bits of the back buffer that is about to be displayed. Or, I want to allocate 4-8 RenderTargets, and on each frame, copy the bits from system memory to the RenderTarget, then set it as the next thing to display. I've never seen or heard about anybody doing this, but it seems so dead simple!
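    For what it's worth, D3D11 offers no Map on the back buffer itself, but the rasterizer can still be skipped by copying into it; a minimal sketch (assuming 'frameTex' was created with the same size and format as the back buffer, and 'cpuBits'/'rowPitchBytes' hold the synthetic frame):

        // context: ID3D11DeviceContext*, swapChain: IDXGISwapChain*
        ID3D11Texture2D* backBuffer = nullptr;
        swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D),
                             reinterpret_cast<void**>(&backBuffer));

        // Upload this frame's bits, then blit straight to the back buffer.
        context->UpdateSubresource(frameTex, 0, nullptr,
                                   cpuBits, rowPitchBytes, 0);
        context->CopyResource(backBuffer, frameTex);

        backBuffer->Release();
        swapChain->Present(1, 0); // vsynced present avoids tearing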

    Read the article

  • Normal map lighting bug in bottom right quadrant

    - by Ryan Capote
    I am currently working on getting normal maps working in my project, and have run into a problem with lighting. As you can see in the screenshots, the normals in the bottom right quadrant of the lighting aren't calculating the correct direction to the light or something - best seen with the red light. If I use flat normals (z normal = 1.0), it seems to be working fine. Normals for the tile sheet: (image). Shader:

        #version 330

        uniform sampler2D uDiffuseTexture;
        uniform sampler2D uNormalsTexture;
        uniform sampler2D uSpecularTexture;
        uniform sampler2D uEmissiveTexture;
        uniform sampler2D uWorldNormals;
        uniform sampler2D uShadowMap;

        uniform vec4 uLightColor;
        uniform float uConstAtten;
        uniform float uLinearAtten;
        uniform float uQuadradicAtten;
        uniform float uColorIntensity;

        in vec2 TexCoords;
        in vec2 GeomSize;
        out vec4 FragColor;

        float sample(vec2 coord, float r) {
            return step(r, texture2D(uShadowMap, coord).r);
        }

        float occluded() {
            float PI = 3.14;
            vec2 normalized = TexCoords.st * 2.0 - 1.0;
            float theta = atan(normalized.y, normalized.x);
            float r = length(normalized);
            float coord = (theta + PI) / (2.0 * PI);
            vec2 tc = vec2(coord, 0.0);
            float center = sample(tc, r);
            float sum = 0.0;
            float blur = (1.0 / GeomSize.x) * smoothstep(0.0, 1.0, r);
            sum += sample(vec2(tc.x - 4.0*blur, tc.y), r) * 0.05;
            sum += sample(vec2(tc.x - 3.0*blur, tc.y), r) * 0.09;
            sum += sample(vec2(tc.x - 2.0*blur, tc.y), r) * 0.12;
            sum += sample(vec2(tc.x - 1.0*blur, tc.y), r) * 0.15;
            sum += center * 0.16;
            sum += sample(vec2(tc.x + 1.0*blur, tc.y), r) * 0.15;
            sum += sample(vec2(tc.x + 2.0*blur, tc.y), r) * 0.12;
            sum += sample(vec2(tc.x + 3.0*blur, tc.y), r) * 0.09;
            sum += sample(vec2(tc.x + 4.0*blur, tc.y), r) * 0.05;
            return sum * smoothstep(1.0, 0.0, r);
        }

        float calcAttenuation(float distance) {
            float linearAtten = uLinearAtten * distance;
            float quadAtten = uQuadradicAtten * distance * distance;
            float attenuation = 1.0 / (uConstAtten + linearAtten + quadAtten);
            return attenuation;
        }

        vec3 calcFragPosition(void) {
            return vec3(TexCoords*GeomSize, 0.0);
        }

        vec3 calcLightPosition(void) {
            return vec3(GeomSize/2.0, 0.0);
        }

        float calcDistance(vec3 fragPos, vec3 lightPos) {
            return length(fragPos - lightPos);
        }

        vec3 calcLightDirection(vec3 fragPos, vec3 lightPos) {
            return normalize(lightPos - fragPos);
        }

        vec4 calcFinalLight(vec2 worldUV, vec3 lightDir, float attenuation) {
            float diffuseFactor = dot(normalize(texture2D(uNormalsTexture, worldUV).rgb), lightDir);
            vec4 diffuse = vec4(0.0);
            vec4 lightColor = uLightColor * uColorIntensity;
            if (diffuseFactor > 0.0) {
                diffuse = vec4(texture2D(uDiffuseTexture, worldUV.xy).rgb, 1.0);
                diffuse *= diffuseFactor;
                lightColor *= diffuseFactor;
            } else {
                discard;
            }
            vec4 final = (diffuse + lightColor);
            if (texture2D(uWorldNormals, worldUV).g > 0.0) {
                return final * attenuation;
            } else {
                return final * occluded();
            }
        }

        void main(void) {
            vec3 fragPosition = calcFragPosition();
            vec3 lightPosition = calcLightPosition();
            float distance = calcDistance(fragPosition, lightPosition);
            float attenuation = calcAttenuation(distance);
            vec2 worldPos = gl_FragCoord.xy / vec2(1024, 768);
            vec3 lightDir = calcLightDirection(fragPosition, lightPosition);
            lightDir = (lightDir*0.5)+0.5;
            float atten = calcAttenuation(distance);
            vec4 emissive = texture2D(uEmissiveTexture, worldPos);
            FragColor = calcFinalLight(worldPos, lightDir, atten) + emissive;
        }

    Read the article

  • How to make my simple round sprite look right in XNA

    - by Joshua Perina
    OK, I'm very new to graphics programming (but not new to coding). I'm trying to load a simple image in XNA, which I can do fine. It is a simple round circle which I made in Photoshop. The problem is that the edges show up rough when I draw it on the screen, even at the exact original size; the anti-aliasing is missing. I'm sure I'm missing something very simple:

        GraphicsDevice.Clear(Color.Black);

        // TODO: Add your drawing code here
        spriteBatch.Begin();
        spriteBatch.Draw(circle, new Rectangle(10, 10, 10, 10), Color.White);
        spriteBatch.End();

    I couldn't post a picture because I'm a first-time poster, but my smooth PNG circle has rough edges. I found that if I add:

        spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.NonPremultiplied);

    I get a smooth image when it is drawn at the same size as the original PNG, but if I scale the image up or down the rough edges return. How do I get XNA to smoothly resize my simple round image to a smaller size without the rough edges?

    Read the article

  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:

        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range = tan(radians(fovy / valType(2))) * zNear;
            valType left = -range * aspect;
            valType right = range * aspect;
            valType bottom = -range;
            valType top = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = - (zFar + zNear) / (zFar - zNear);
            Result[2][3] = - valType(1);
            Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }

    There doesn't seem to be any error in the code. So I tried to invert the projection. From the matrix above, the z and w clip coordinates after projection are:

        z' = -((zFar + zNear) / (zFar - zNear)) * z - ((2 * zFar * zNear) / (zFar - zNear)) * w
        w' = -z

    and dividing z' by w' gives the post-projective depth d (which lies in the depth buffer), so I need to solve for z, which finally gives:

        z = (2 * zFar * zNear) / ((zFar - zNear) * d - (zFar + zNear))

    Now, the problem is that I don't get the correct position (I have compared the one reconstructed with a rendered position). I then tried using the respective formula I get by doing the same for this matrix. For some reason, using that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me please?
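    A minimal sketch of that inversion as code (assuming a depth buffer storing values in [0, 1], the GL default, which must first be expanded to the [-1, 1] NDC range; eye-space z comes out negative in front of the camera):

        float eyeZFromDepth(float depth01, float zNear, float zFar) {
            float dNdc = depth01 * 2.0f - 1.0f;  // [0,1] -> [-1,1]
            return (2.0f * zFar * zNear) /
                   ((zFar - zNear) * dNdc - (zFar + zNear));
        }

    The full position then follows by scaling the x/y NDC coordinates with -eyeZ divided by the [0][0] and [1][1] terms of the projection matrix.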

    Read the article

  • CW/CCW Rotation of a Vector

    - by user23132
    Consider that I have a vector A, and after an arbitrary rotation I get vector B. I want to apply this same rotation operation to other vectors as well, but I'm having problems doing that. My idea is to calculate the vector C perpendicular to the plane of A and B (by calculating A x B). This vector C is the axis I'll need to rotate around. To discover the angle I used the dot product between A and B; the acos of the dot product returns the smallest angle between A and B, the angle ang. The rotation I need to do is then: rotate ang degrees around the C axis. The problem is that I don't know whether this rotation is a CW or CCW rotation, since the cos from the dot product gives no information about the sign of the angle. There's a trick to discover that in 2D (A.x * B.y - A.y * B.x) that you can use to tell whether vector A is to the left/right of vector B, but I don't know how to do this in 3D space. Can anyone help me?
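    The 3D analogue of that 2D trick is already sitting in the cross product: if the axis is taken as A x B and the angle as atan2(|A x B|, A . B), the rotation from A to B around that axis always comes out counter-clockwise (right-handed), so no separate sign test is needed. A minimal sketch using Rodrigues' rotation formula:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 cross(Vec3 a, Vec3 b) {
            return { a.y * b.z - a.z * b.y,
                     a.z * b.x - a.x * b.z,
                     a.x * b.y - a.y * b.x };
        }
        float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
        float length(Vec3 a) { return std::sqrt(dot(a, a)); }

        // Rotate v by the rotation that takes A onto B.
        Vec3 rotateLikeAtoB(Vec3 v, Vec3 A, Vec3 B) {
            Vec3 c = cross(A, B);
            float l = length(c);
            if (l < 1e-6f) return v;  // A, B (anti)parallel: axis undefined
            Vec3 k = { c.x / l, c.y / l, c.z / l };   // unit axis
            float angle = std::atan2(l, dot(A, B));   // always >= 0
            float cosA = std::cos(angle), sinA = std::sin(angle);
            Vec3 kxv = cross(k, v);
            float kdv = dot(k, v);
            // Rodrigues: v' = v cos(t) + (k x v) sin(t) + k (k.v)(1 - cos(t))
            return { v.x * cosA + kxv.x * sinA + k.x * kdv * (1 - cosA),
                     v.y * cosA + kxv.y * sinA + k.y * kdv * (1 - cosA),
                     v.z * cosA + kxv.z * sinA + k.z * kdv * (1 - cosA) };
        }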

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. I'm creating a floating platform that I would like to travel from one designated point to another, then return to the first, passing between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform. I'm coding it to travel in multiples of whole tile values of world data, so if the platform is not stationary, it will travel at least one whole tile width or tile height. Within one tile length, I would like it to accelerate from a stop to a given max speed. Upon reaching one tile length's distance from its destination, I would like it to slow to a stop exactly at the given tile coordinate, and then repeat the process in reverse. The first two parts aren't too difficult; essentially I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate. Since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to a value storing the platform's current speed once it reaches one tile's length of distance from the target (assuming the platform is traveling more than one tile length, but to keep things simple, let's just assume it is). But then the question is: what would the correct value for that acceleration be to produce this effect? How would I find that value?
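    Constant-acceleration kinematics answers this directly: from v_f^2 = v_0^2 + 2*a*d with a final speed of zero, stopping from speed v over distance d needs a deceleration of magnitude v^2 / (2*d). A minimal sketch (assuming the platform is at max speed when it crosses the final tile boundary):

        // Deceleration magnitude that brings `speed` to exactly zero over
        // `distance`; derived from v_f^2 = v_0^2 + 2*a*d with v_f = 0.
        float stoppingAcceleration(float speed, float distance) {
            return (speed * speed) / (2.0f * distance);
        }

    Using the same magnitude for the speed-up phase makes the motion symmetric: the platform reaches max speed after exactly one tile and bleeds it off over the last one. In practice it is worth clamping the final position to the target tile coordinate to absorb floating-point drift.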

    Read the article

  • Smooth pixels while rotating sprite

    - by goodm
    I just started with AndEngine, so this may be a silly question: how do I make my sprites smoother while I rotate them? Or maybe it looks this way because this is a screenshot from a tablet? Thanks JohnEye, it works - I just needed to change my BitmapTextureAtlas from:

        BitmapTextureAtlas carAtlas =
            new BitmapTextureAtlas(this.getTextureManager(), 100, 63);

    to:

        BitmapTextureAtlas carAtlas =
            new BitmapTextureAtlas(this.getTextureManager(), 100, 63,
                                   TextureOptions.BILINEAR);

    Read the article

  • Set a drawing viewport while using camera

    - by Mariano
    I'm working with XNA. I already have a basic world made of tiles and a camera using a transform matrix. I have a character moving around and the camera follows. What I want to do now is draw the map only on a certain part of the screen as shown on the figure below. This way I can move the map to the left of the screen and have the other fixed parts shift to the right. Do I need to modify the camera matrix? Make a new viewport?

    Read the article

  • Shadowmap first phase and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadow map. This is my vertex shader:

        in vec3 position;
        uniform mat4 lightWVP;

        void main() {
            gl_Position = lightWVP * vec4(position, 1.0);
        }

    Now, do I even need a fragment shader in this pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, by default gl_FragCoord.z is written to the currently attached depth component (to which my cubemap texture is bound). Thus I shouldn't even need a fragment shader for this pass, and from what I understand, there is no other work to do in the fragment shader other than writing this value. Is this correct?

    Read the article

  • Drawing different per-pixel data on the screen

    - by Amir Eldor
    I want to draw different per-pixel data on the screen, where each pixel has a specific value according to my needs. An example may be a random noise pattern where each pixel is randomly generated. I'm not sure what is the correct and fastest way to do this. Locking a texture/surface and manipulating the raw pixel data? How is this done in modern graphics programming? I'm currently trying to do this in Pygame but realized I will face the same problem if I go for C/SDL or OpenGL/DirectX.
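    A minimal sketch of the lock-write-unlock approach in SDL2 (C API): a streaming texture is locked, its raw ARGB pixels are filled with random noise, and the result is presented. Error handling is omitted for brevity.

        #include <SDL.h>
        #include <stdlib.h>

        int main(int argc, char** argv) {
            SDL_Init(SDL_INIT_VIDEO);
            SDL_Window* win = SDL_CreateWindow("noise", 100, 100, 320, 240, 0);
            SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);
            SDL_Texture* tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                                 SDL_TEXTUREACCESS_STREAMING,
                                                 320, 240);
            for (int frame = 0; frame < 600; ++frame) {
                void* pixels; int pitch;
                SDL_LockTexture(tex, NULL, &pixels, &pitch);
                for (int y = 0; y < 240; ++y) {
                    Uint32* row = (Uint32*)((Uint8*)pixels + y * pitch);
                    for (int x = 0; x < 320; ++x)
                        row[x] = 0xFF000000u | (Uint32)(rand() & 0x00FFFFFF);
                }
                SDL_UnlockTexture(tex);   // uploads the dirty pixels
                SDL_RenderCopy(ren, tex, NULL, NULL);
                SDL_RenderPresent(ren);
            }
            SDL_Quit();
            return 0;
        }

    The same pattern (lock, write, unlock, draw) is roughly what Pygame surfaces and Direct3D dynamic textures amount to as well.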

    Read the article

  • SRV from UAV on the same texture in directx

    - by notabene
    I'm programming GPGPU raymarching (volumetric raytracing) in DirectX 11. I successfully run a compute shader and save the raymarched volume data to a texture. Then I want to use the same texture as an SRV in the normal graphics pipeline. But it doesn't work; the texture is not visible. The texture itself is OK: when I save it to a file it is what I expect. Texture rendering is OK too: when I render another SRV, it is fine. So the problem is only in the UAV-to-SRV transition. I also triple-checked that the pointers are OK. Please help, I'm getting mad about this. Here is some code:

        // before dispatch
        D3D11_TEXTURE2D_DESC textureDesc;
        ZeroMemory( &textureDesc, sizeof( textureDesc ) );
        textureDesc.Width = xr;
        textureDesc.Height = yr;
        textureDesc.MipLevels = 1;
        textureDesc.ArraySize = 1;
        textureDesc.SampleDesc.Count = 1;
        textureDesc.SampleDesc.Quality = 0;
        textureDesc.Usage = D3D11_USAGE_DEFAULT;
        textureDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
        textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        D3D->CreateTexture2D( &textureDesc, NULL, &pTexture );

        D3D11_UNORDERED_ACCESS_VIEW_DESC viewDescUAV;
        ZeroMemory( &viewDescUAV, sizeof( viewDescUAV ) );
        viewDescUAV.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        viewDescUAV.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
        viewDescUAV.Texture2D.MipSlice = 0;
        D3DD->CreateUnorderedAccessView( pTexture, &viewDescUAV, &pTextureUAV );

        // the getSRV function after dispatch
        D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
        ZeroMemory( &srvDesc, sizeof( srvDesc ) );
        srvDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        D3DD->CreateShaderResourceView( pTexture, &srvDesc, &pTextureSRV );
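    One common cause worth ruling out (an assumption - the binding code isn't shown): a resource cannot be bound for output (UAV) and input (SRV) at the same time, and with the debug layer enabled D3D11 warns about exactly this while the draw reads nothing. Unbinding the UAV from the compute stage after the dispatch looks like this:

        // After Dispatch(), release slot 0 of the compute stage so the
        // texture can be read as an SRV by the graphics pipeline.
        ID3D11UnorderedAccessView* nullUAV = nullptr;
        context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);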

    Read the article

  • Rendering order in an Entity System

    - by Daedalus
    Say I use a basic ES approach, and also inside Systems I hold lists of all entities that Systems are required to process. How do I maintain this list of entities in desired rendering order, i.e. for a dumb 2D RenderingSystem? I saw this discussion, and what they suggest is to do something like Z ordering - what I would probably do is just to store a "layer" int in DrawableComponent and then, inside RenderingSystem, just sort entities by mentioned "layer" whenever the entity list for RenderingSystem changes. They also say we could just delete and recreate the entity whenever we want it on the top, but it seems too inflexible to me. How is this problem usually solved?
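    The layer-sort itself is a one-liner worth making stable; a minimal sketch (the names are hypothetical):

        #include <algorithm>
        #include <vector>

        struct Drawable { int entityId; int layer; };

        // stable_sort keeps insertion order within a layer, so entities on
        // the same layer don't swap draw order between frames.
        void sortForRendering(std::vector<Drawable>& drawables) {
            std::stable_sort(drawables.begin(), drawables.end(),
                             [](const Drawable& a, const Drawable& b) {
                                 return a.layer < b.layer;
                             });
        }

    Re-sorting only when the RenderingSystem's entity list changes, as suggested above, keeps the cost negligible for a 2D scene.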

    Read the article

  • Interpolating between two networked states?

    - by Vaughan Hilts
    I have many entities on the client side that are simulated (their velocities are added to their positions on a per-frame basis) and I let them dead reckon themselves. They send updates about where they were last seen and their velocity changes. This works great, and other players see it work fine. However, after a while these players begin to desync because of latency. I'd like to know how I can interpolate between states so they appear to be in the correct position. I know where the player was LAST seen and their current velocity, but interpolating towards the last seen state causes the player to actually move -backwards-. I could avoid using velocity at all for other clients and simply 'lerp' them towards the appropriate position, but I feel this would cause jaggy movement. What are the alternatives?
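    One common alternative is error smoothing (sometimes called projective blending): keep dead reckoning, but when an update arrives, record the offset between the predicted and the reported position and bleed it off over a short window, so the entity never visibly jumps or moves backwards. A minimal sketch (field and function names are mine):

        struct NetEntity {
            float x, y;        // dead-reckoned position, simulated every frame
            float errX, errY;  // leftover visual error from the last update
        };

        void onServerUpdate(NetEntity& e, float srvX, float srvY) {
            e.errX = e.x - srvX;   // how far the prediction drifted
            e.errY = e.y - srvY;
            e.x = srvX;            // snap the simulation, hide it visually
            e.y = srvY;
        }

        void decayError(NetEntity& e, float dt) {
            float k = 1.0f - 5.0f * dt;   // decay rate is tunable
            if (k < 0.0f) k = 0.0f;
            e.errX *= k;
            e.errY *= k;
        }

        // Draw position = simulation + decaying error offset.
        void drawPosition(const NetEntity& e, float& outX, float& outY) {
            outX = e.x + e.errX;
            outY = e.y + e.errY;
        }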

    Read the article

  • Reacting to rectangle on rectangle collisions

    - by mcjohnalds45
    I don't know how to react to collisions between two axis-aligned rectangles that have x, y, width and height values (x and y are measured from the centre of the box) so that they simply don't overlap. I figured I'd just make them move away from each other depending on how far they intersect, in the opposite direction (left, right, up or down) of where they collided. If I check for collisions only on the x axis or only on the y axis it works fine, but when checking for both, crazy stuff happens. This code executes when the first box collides with the second. It's in Lua, but feel free to answer in anything that isn't too counter-intuitive.

        if box1.x < box2.x then
            box1.x = box1.x + box2.x - box1.x - (box1.width / 2) - (box2.width / 2)
        end
        if box1.x > box2.x then
            box1.x = box1.x - (box1.x - box2.x - (box1.width / 2) - (box2.width / 2))
        end
        if box1.y < box2.y then
            box1.y = box1.y + box2.y - box1.y - (box1.height / 2) - (box2.height / 2)
        end
        if box1.y > box2.y then
            box1.y = box1.y - (box1.y - box2.y - (box1.height / 2) - (box2.height / 2))
        end
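    The usual fix is minimum-penetration resolution: compute the overlap on each axis and push only along the axis with the smaller overlap; resolving both axes at once is exactly what produces the "crazy" corner behaviour. A minimal sketch with the same centre-based fields:

        #include <cmath>

        struct Box { float x, y, width, height; };

        void resolve(Box& box1, const Box& box2) {
            float dx = box2.x - box1.x;
            float dy = box2.y - box1.y;
            float overlapX = (box1.width  + box2.width)  / 2.0f - std::fabs(dx);
            float overlapY = (box1.height + box2.height) / 2.0f - std::fabs(dy);
            if (overlapX <= 0.0f || overlapY <= 0.0f) return;  // not touching
            if (overlapX < overlapY)
                box1.x -= (dx > 0.0f) ? overlapX : -overlapX;  // push out on x
            else
                box1.y -= (dy > 0.0f) ? overlapY : -overlapY;  // push out on y
        }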

    Read the article

  • Sprite Animation in Android with OpenGL ES

    - by lijo john
    How do I do a sprite animation in Android using OpenGL ES? What I have done: I am able to draw a rectangle and apply my texture (sprite sheet) to it. What I need to know: the rectangle currently shows the whole sprite sheet at once; how do I show a single frame from the sprite sheet at a time and make the animation? It would be very helpful if anyone could share ideas, links to tutorials, or suggestions. Thanks in advance to all.
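    The usual trick is to leave the sheet alone and change the quad's texture coordinates per frame; a minimal sketch of the UV math (assuming a grid of equally sized frames, advanced by a timer):

        struct FrameUV { float u0, v0, u1, v1; };

        // Map a frame index to its sub-rectangle on a cols x rows sheet;
        // feed these four values into the quad's texcoord buffer each frame.
        FrameUV frameUV(int frame, int cols, int rows) {
            float w = 1.0f / cols;
            float h = 1.0f / rows;
            int cx = frame % cols;
            int cy = frame / cols;
            return { cx * w, cy * h, cx * w + w, cy * h + h };
        }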

    Read the article

  • simple collision detection

    - by Rob
    Imagine 2 squares sitting side by side, both level with the ground: http://img19.imageshack.us/img19/8085/sqaures2.jpg A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if ALL of the following are NOT true:

    - The right square's left side is to the right of the left square's right side.
    - The right square's right side is to the left of the left square's left side.
    - The right square's bottom side is above the left square's top side.
    - The right square's top side is below the left square's bottom side.

    If any of those are true, the squares are not touching. If all of those are false, the squares are touching. But consider a case like this, where one square is at a 45 degree angle: http://img189.imageshack.us/img189/4236/squaresb.jpg Is there an equally simple way to determine if those squares are touching?
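    The axis-aligned test above in code form - a minimal sketch:

        struct AABB { float left, top, right, bottom; };

        // The boxes touch exactly when none of the four separating
        // conditions holds, i.e. they overlap on both axes.
        bool touching(const AABB& a, const AABB& b) {
            return !(b.left   > a.right  ||  // b entirely to the right of a
                     b.right  < a.left   ||  // b entirely to the left of a
                     b.top    > a.bottom ||  // b entirely below a (y grows down)
                     b.bottom < a.top);      // b entirely above a
        }

    For the rotated case, the same idea generalizes to the separating axis theorem: project both squares onto each edge normal and check every projection for overlap - not quite as simple, but still only a handful of axis tests for rectangles.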

    Read the article

  • Managing constant buffers without FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the sample browser, and I already checked that one. However, some questions arise. In the sample:

        D3DXMATRIXA16 mWorldViewProj;
        D3DXMATRIXA16 mWorld;
        D3DXMATRIXA16 mView;
        D3DXMATRIXA16 mProj;

        mWorld = g_World;
        mView = g_View;
        mProj = g_Projection;
        mWorldViewProj = mWorld * mView * mProj;

        VS_CONSTANT_BUFFER* pConstData;
        g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData );
        pConstData->mWorldViewProj = mWorldViewProj;
        pConstData->fTime = fBoundedTime;
        g_pConstantBuffer10->Unmap();

    They are copying their D3DXMATRIXes to D3DXMATRIXA16. Checked on MSDN: these matrices are 16-byte aligned and optimized for the Intel Pentium 4. So, as my first question: 1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if not, why don't we just use D3DXMATRIXA16 all the time? I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times:

        cbuffer cbNeverChanges
        {
            matrix View;
        };

        cbuffer cbChangeOnResize
        {
            matrix Projection;
        };

        cbuffer cbChangesEveryFrame
        {
            matrix World;
            float4 vMeshColor;
        };

    Then how would I set these buffers at different times?

        g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 );

    gives me the possibility to set multiple buffers, but that is within one call. 2) Is that okay even if my constant buffers are updated at different times? And I suppose I have to make sure the constant buffers are in the same position in the array as the order in which they appear in the shader?
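    A minimal sketch of the multi-buffer pattern (D3D10-style to match the snippet above; the buffer names are hypothetical): each cbuffer gets its own slot and is bound once, and afterwards each buffer's contents are refreshed at its own cadence via Map/Unmap, with no rebinding needed. Slots follow declaration order unless the HLSL pins them with register(b0), register(b1), and so on.

        ID3D10Buffer* buffers[3] = { g_pCBNeverChanges, g_pCBChangeOnResize,
                                     g_pCBChangesEveryFrame };
        g_pd3dDevice->VSSetConstantBuffers(0, 3, buffers); // bind slots 0-2 once

        // Per frame: only touch the per-frame buffer's contents.
        CBChangesEveryFrame* data = nullptr;
        g_pCBChangesEveryFrame->Map(D3D10_MAP_WRITE_DISCARD, 0, (void**)&data);
        data->World = world;
        data->vMeshColor = meshColor;
        g_pCBChangesEveryFrame->Unmap();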

    Read the article

  • Why is chunk size often a power of two?

    - by danijar
    There are many Minecraft clones out there and I am working on my own implementation. A principle of terrain rendering is tiling the whole world in fixed-size chunks to reduce the effort of localized changes. In Minecraft the chunk size is 16 x 16 x 256 as far as I know, and in clones I have also always seen chunk sizes that are a power of 2. Is there any reason for that, maybe performance- or memory-related? I know that powers of 2 play a special role in binary computers, but what does that have to do with the chunk size?
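    One concrete convenience: with a power-of-two chunk size, the divide/modulo needed to map world coordinates to chunks collapses into shifts and masks, and a block's index inside the chunk packs into bit fields. A minimal sketch for a 16 x 16 x 256 chunk:

        int chunkCoord(int worldX) { return worldX >> 4; }  // worldX / 16, floors
        int localCoord(int worldX) { return worldX & 15; }  // worldX % 16, stays
                                                            // positive for negatives
        // x, z in [0, 16), y in [0, 256): no multiplies needed.
        int blockIndex(int x, int y, int z) {
            return x | (z << 4) | (y << 8);
        }

    None of this is a big win on modern CPUs by itself, but it also keeps chunk memory sizes nicely aligned, which is presumably part of why the convention stuck.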

    Read the article
