Search Results

Search found 26043 results on 1042 pages for 'development trunk'.

Page 405/1042 | < Previous Page | 401 402 403 404 405 406 407 408 409 410 411 412  | Next Page >

  • Calculate an AABB for a bone-animated model

    - by Byte56
    I have a model whose initial bounding box is calculated by finding the maximum and minimum on the x, y and z axes, producing a correct result like so: The vertices are then stored in a VBO and only altered with matrices for rotation and bone animation. Currently the bounds are not updated when the model is altered, so the animated and rotated model has bounds like so: (Maybe it's hard to tell, but the bounds are the same as before and don't accurately represent the rotated/animated model.) So my question is: how can I calculate the bounding box using the armature matrices and the rotation/translation matrices for each model? Keep in mind the modified vertex data is not available, because those calculations are performed on the GPU in the shader. The end result I want is an accurate AABB that represents the animated model for picking/basic collision checks.
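
    One common CPU-side workaround (my sketch, not from the question; XNA-style Vector3/Matrix/BoundingBox types and a hypothetical bindPoseBoxes array are assumed) is to precompute a bind-pose AABB per bone, then each frame transform just those boxes' corners by the current bone and world matrices and merge the results. This never touches the GPU-skinned vertices and is conservative rather than exact:

        // bindPoseBoxes[b]: AABB of the vertices weighted to bone b, computed
        // once from the unanimated mesh (hypothetical precomputed data).
        Vector3 min = new Vector3(float.MaxValue);
        Vector3 max = new Vector3(float.MinValue);
        for (int b = 0; b < boneMatrices.Length; b++)
        {
            foreach (Vector3 corner in bindPoseBoxes[b].GetCorners())
            {
                // Replay the shader's transform on just 8 points per bone.
                Vector3 p = Vector3.Transform(corner, boneMatrices[b] * worldMatrix);
                min = Vector3.Min(min, p);
                max = Vector3.Max(max, p);
            }
        }
        BoundingBox animatedBounds = new BoundingBox(min, max);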

    Read the article

  • How do I access the "Deny" message from a Lidgren client?

    - by TJ Mott
    I'm using the Lidgren v3 networking library for a UDP client/server model. On the server end, I'm initializing a NetServer object with the NetIncomingMessage.ConnectionApproval message type enabled. The client is able to connect successfully, and the first packet it sends is a login packet containing a username and password supplied by the user. The server receives that and does some black magic to authenticate, and everything works up to that point. If the login fails, the server calls NetIncomingMessage.SenderConnection.Deny("Invalid Login Credentials"). I want to know how to properly receive this deny message on the client. I'm getting the message; it shows up with a message type of NetIncomingMessage.StatusChanged. If I call ReadString on that message, I get a corrupted version of the string I passed to the Deny method on the server. The type of corruption varies (I've seen odd characters in there), but in every case it's truncated and way shorter than the string I entered. Any ideas? The official documentation is sparse on this topic, so I could use pointers from anyone who has successfully used the Lidgren library with the Accept or Deny methods. Also, if I don't do any authentication and just Approve() the connection every time, everything works fine and I get reliable two-way UDP traffic. (And lastly, Stack Exchange said I don't have enough reputation to use the "Lidgren" tag...)
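
    For what it's worth, in Lidgren gen3, as far as I recall (worth verifying against your version), a StatusChanged message carries a NetConnectionStatus byte before the reason string, so calling ReadString first reads from the wrong offset, which would produce exactly this kind of truncated garbage. A hedged client-side sketch:

        NetIncomingMessage msg;
        while ((msg = client.ReadMessage()) != null)
        {
            if (msg.MessageType == NetIncomingMessageType.StatusChanged)
            {
                // Status byte first, then the string passed to Deny()/Disconnect().
                NetConnectionStatus status = (NetConnectionStatus)msg.ReadByte();
                string reason = msg.ReadString();
                if (status == NetConnectionStatus.Disconnected)
                    Console.WriteLine("Server said: " + reason);
            }
            client.Recycle(msg);
        }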

    Read the article

  • Playing repeated sound in Java

    - by Diogo Schneider
    I'm trying to play sounds in a Java game with the following code:

        AudioStream audioStream = new AudioStream(stream);
        AudioPlayer.player.start(audioStream);

    The stream variable is just an InputStream to the resource. The first time this code is called, the sound is played as expected, but the second time the program just hangs; not even an exception is thrown. I don't know what's going on or how to prevent this. If I try closing either stream or audioStream after the above code, the program doesn't hang, but no sound is ever played at all. Any tips are welcome, thanks.

    Read the article

  • Weird drawing behaviour in LibGdx

    - by Ryckes
    I am seeing strange behaviour while rendering TextureRegions in my game, but only when pausing it. I am making a game for Android, in Java with LibGdx. When I comment out the line drawLevelPaused(), everything seems to work fine, both running and paused. When it's not commented out, everything works fine until I pause the screen: it then draws those two rectangles, but sometimes the ships are not shown, and if I comment out drawShips() and drawTarget() (just trying things) one of the planets may disappear, or if I change the order, other things disappear and things that disappeared before are rendered again. I can't find a way to fix this behaviour, so I'd appreciate your help, and I hope it's my mistake and not a LibGdx issue. I use OpenGL ES 2.0, as stated in AndroidManifest.xml, if that is of any help. Thank you in advance. My Screen render method (the game loop) is as follows:

        @Override
        public void render(float delta) {
            Gdx.gl.glClearColor(0.1f, 0.1f, 0.1f, 1);
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            controller.update(delta);
            renderer.render();
        }

    When the world state is PAUSED, controller.update does nothing at all (there is a switch in it). And renderer.render() is as follows:

        public void render() {
            int worldState = this.world.getWorldState();
            updateCamera();
            spriteBatch.begin();
            drawPlanets();
            drawTarget();
            drawShips();
            if (worldState == World.PAUSED) {
                drawLevelPaused();
            } else if (worldState == World.LEVEL_WON) {
                drawLevelWin();
            }
            spriteBatch.end();
        }

    And those methods are:

        private void updateCamera() {
            this.offset = world.getCameraOffset();
        }

        private void drawPlanets() {
            for (Planet planet : this.world.getPlanets()) {
                this.spriteBatch.draw(this.textures.getTexture(planet.getTexture()),
                        (planet.getPosition().x - this.offset[0]) * ppuX,
                        (planet.getPosition().y - this.offset[1]) * ppuY);
            }
        }

        private void drawTarget() {
            Target target = this.world.getTarget();
            this.spriteBatch.draw(this.textures.getTexture(target.getTexture()),
                    (target.getPosition().x - this.offset[0]) * ppuX,
                    (target.getPosition().y - this.offset[1]) * ppuY);
        }

        private void drawShips() {
            for (Ship ship : this.world.getShips()) {
                this.spriteBatch.draw(this.textures.getTexture(ship.getTexture()),
                        (ship.getPosition().x - this.offset[0]) * ppuX,
                        (ship.getPosition().y - this.offset[1]) * ppuY,
                        ship.getBounds().width * ppuX / 2,
                        ship.getBounds().height * ppuY / 2,
                        ship.getBounds().width * ppuX,
                        ship.getBounds().height * ppuY,
                        1.0f, 1.0f, ship.getAngle() - 90.0f);
            }
            if (this.world.getStillShipVisibility()) {
                Ship ship = this.world.getStillShip();
                Arrow arrow = this.world.getArrow();
                this.spriteBatch.draw(this.textures.getTexture(ship.getTexture()),
                        (ship.getPosition().x - this.offset[0]) * ppuX,
                        (ship.getPosition().y - this.offset[1]) * ppuY,
                        ship.getBounds().width * ppuX / 2,
                        ship.getBounds().height * ppuY / 2,
                        ship.getBounds().width * ppuX,
                        ship.getBounds().height * ppuY,
                        1f, 1f, ship.getAngle() - 90f);
                this.spriteBatch.draw(this.textures.getTexture(arrow.getTexture()),
                        (ship.getCenter().x - this.offset[0] - arrow.getBounds().width / 2) * ppuX,
                        (ship.getCenter().y - this.offset[1]) * ppuY,
                        arrow.getBounds().width * ppuX / 2, 0,
                        arrow.getBounds().width * ppuX,
                        arrow.getBounds().height * ppuY,
                        1f, arrow.getRate(), ship.getAngle() - 90f);
            }
        }

        private void drawLevelPaused() {
            this.shapeRenderer.begin(ShapeType.FilledRectangle);
            this.shapeRenderer.setColor(0f, 0f, 0f, 0.8f);
            this.shapeRenderer.filledRect(0, 0,
                    this.width / this.ppuX, PAUSE_MARGIN_HEIGHT / this.ppuY);
            this.shapeRenderer.filledRect(0, (this.height - PAUSE_MARGIN_HEIGHT) / this.ppuY,
                    this.width / this.ppuX, PAUSE_MARGIN_HEIGHT / this.ppuY);
            this.shapeRenderer.end();
            for (Button button : this.world.getPauseButtons()) {
                this.spriteBatch.draw(this.textures.getTexture(button.getTexture()),
                        (button.getPosition().x - this.offset[0]) * this.ppuX,
                        (button.getPosition().y - this.offset[1]) * this.ppuY);
            }
        }

    Read the article

  • Direct3D - Code structure

    - by marcg11
    I'm learning DirectX in a master's degree course, and they taught us to have a GraphicsLayer class which is the one connecting to the Direct3D library. That way this class is completely independent from the other classes (my game classes), meaning changing the renderer to OpenGL wouldn't require much effort beyond changing the graphics layer. This class has its LoadAssets and Paint methods, but I have a question: they told us to load all the assets inside this class. This means calls like the following will all be in the LoadAssets method:

        D3DXCreateTextureFromFileEx(g_pD3DDevice, "tiles.png", 0, 0, 1, 0,
                D3DFMT_UNKNOWN, D3DPOOL_DEFAULT, D3DX_FILTER_NONE, D3DX_FILTER_NONE,
                NULL, NULL, NULL, &texTiles);
        // And more resources to load
        //...

    texTiles, as you see, is an LPDIRECT3DTEXTURE9 instance which is declared in graphicLayer.h. So my question is: how do you manage all the resources? Do I have to declare in the .h all my game textures even if I'm not using them? How would you load only those resources a scene needs and draw them in a well-structured way?
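
    On the resource-management part, one common pattern (my sketch, not the course's; shown in C# for consistency even though the question's code is C++, with TextureHandle and LoadFromDisk as hypothetical stand-ins) is a cache keyed by filename, so scenes request what they need and nothing has to be declared up front:

        using System.Collections.Generic;

        class GraphicsLayer
        {
            // Hypothetical handle type standing in for LPDIRECT3DTEXTURE9.
            private readonly Dictionary<string, TextureHandle> textures =
                new Dictionary<string, TextureHandle>();

            public TextureHandle GetTexture(string file)
            {
                TextureHandle tex;
                if (!textures.TryGetValue(file, out tex))
                {
                    tex = LoadFromDisk(file); // wraps D3DXCreateTextureFromFileEx
                    textures[file] = tex;
                }
                return tex;
            }

            public void UnloadAll() // call on scene change
            {
                foreach (TextureHandle tex in textures.Values) tex.Release();
                textures.Clear();
            }
        }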

    Read the article

  • DirectWrite Producing Strange Artifacts?

    - by smoth190
    I've written the basis of my UI system around Direct2D. I like it because it's fast and easy to use (even if I had to do some messy work to get it to work with DirectX 11). However, I notice that when using DirectWrite I'm getting strange problems with my text. As you can see, the e is a little screwed up, and overall it looks a little bumpy. This only happens with certain fonts at certain sizes, and with certain arrangements of letters. This particular example is Verdana at size 16. Can I fix this? It's pretty annoying to have to change all my words and fonts because of this problem.

    Read the article

  • Black textures in an XNA Windows Phone release build

    - by Lukasz Kajstura
    I just made a 3D game in XNA for Windows Phone 7. I built it in release mode in Visual Studio 2010, and suddenly when I run the game there are no textures on the models: two models are black and one is transparent. The models are in .X format, exported from 3ds Max, and have textures in .jpg that are also added to the game content. I set the build action to None and everything worked fine in debug mode. When I change to release mode: black textures. When I set the build action to Compile, it gives me this warning and still no textures:

        Asset was built 2 times with different settings: using TextureImporter and TextureProcessor using TextureImporter and TextureProcessor, referenced by...

    What can I do?

    Read the article

  • Trying to set up my first DirectX project (don't understand the error) [on hold]

    - by user1157885
    I've just started learning DirectX with the book "3D Game Programming with DirectX". I just finished setting up all the paths and adding the code to the project, which I think I did correctly, but I get this massive error which I don't really understand and which is hard to google. Could someone tell me what it means and how to fix it?

        Error 1 error TRK0002: Failed to execute command: ""C:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\Utilities\bin\x64\fxc.exe" /nologo /Emain /Fo "C:\Desktop\DirectX 11 Projects\box\Win32Project2\Debug\color.cso" /Od /Zi "....\Book Files\3DGameProg\DVD\Code\Chapter 6 Drawing in Direct3D\Box\FX\color.fx"". The handle is invalid.

    Read the article

  • Stuck on enemy movement

    - by Syed
    I am making a TD game in Unity. Initially I made all of my enemy movement frame-rate dependent. Say I had one grid point at -22.65 and another at -21.1; diagrammatically: (-22.65) _________ (-21.1) _______ (-21.1+1.55) ...... So the distance on the x axis between two points is 1.55, which I divided into 25 jumps, with each enemy moving 0.062 per frame. On reaching the next grid point, the enemy finds its path again. All went fine until I needed fast-forward and pause features. I used the timeScale property of Unity, but it won't work, as the movements are frame dependent rather than time dependent. I also tried doubling the enemy's speed when the fast-forward button is clicked, but that has issues: the enemy's jumps are now fewer and it fails to land exactly on the next grid point. Could someone suggest a solution to my problem? Do I need to change the enemy movement code to make it frame independent? I need the enemy to land exactly on the grid points, and later I also need to slow down a single enemy's speed when a tower fires on it. Thanks
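
    For reference, a minimal frame-rate-independent version (my sketch, assuming a speed in world units per second and a target set by your pathfinding) moves by speed * Time.deltaTime each frame. That also makes Time.timeScale work for pause and fast-forward, and a per-enemy slow becomes a temporary multiplier on speed:

        using UnityEngine;

        public class EnemyMover : MonoBehaviour
        {
            public float speed = 1.55f; // world units per second (assumed value)
            private Vector3 target;     // next grid point, set by the pathfinder

            void Update()
            {
                // MoveTowards never overshoots, so the enemy lands exactly on
                // the grid point regardless of frame rate or timeScale.
                transform.position = Vector3.MoveTowards(
                    transform.position, target, speed * Time.deltaTime);

                if (transform.position == target)
                {
                    // Reached the grid point: recalculate the path here.
                }
            }
        }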

    Read the article

  • Z Order in 2D with orthographic projection and texture atlas

    - by Carbon Crystal
    I am working on a 2D game in OpenGL ES and have a question about z-order together with a texture atlas. I am using an orthographic projection because I want pixel-perfect rendering of 2D sprites; however, from what I can determine, the draw order is really the only thing that determines which textures (sprites) appear above or below their neighbors. That is, the "z-index" is a function of the order in which the textures are drawn, as opposed to the z coordinate in the vertex array being drawn. So: I have a texture atlas to avoid binding multiple textures for each draw call, but this immediately creates a problem if there is more than one atlas in play. If I need to draw textures from more than one atlas (typically the case if I have too many sprites to fit in a single atlas of a reasonable size), then I can't maintain a "draw order" across atlases unless I bind/unbind the atlas textures more than once, which kind of defeats the purpose. Does anyone have any clues as to the best approach here? Currently I'm running under the assumption that I will have to declare different fixed "depths" (e.g. foreground, background etc.) in my 2D scene and assume that the z-order for sprites at a given depth is the same. Then I can have as many atlases as I need at each depth and simply draw the depths in order (along with their associated atlases). I'd love to hear what other people are doing.
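
    One way to make the layering idea from the last paragraph concrete, sketched here in C# (Sprite, BindAtlas and DrawSprite are hypothetical): sort by (layer, atlas) so that order is only guaranteed between layers, and atlas switches happen at most once per atlas per layer:

        sprites.Sort((a, b) =>
        {
            int byLayer = a.Layer.CompareTo(b.Layer); // back-to-front layers
            return byLayer != 0 ? byLayer : a.AtlasId.CompareTo(b.AtlasId);
        });

        int boundAtlas = -1;
        foreach (Sprite s in sprites)
        {
            if (s.AtlasId != boundAtlas)
            {
                BindAtlas(s.AtlasId); // hypothetical wrapper around glBindTexture
                boundAtlas = s.AtlasId;
            }
            DrawSprite(s); // hypothetical batched quad submission
        }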

    Read the article

  • Best practices with vertices in OpenGL

    - by Darestium
    What is the best practice in regards to storing vertex data in OpenGL? I.e.:

        struct VertexColored {
        public:
            GLfloat position[];
            GLfloat normal[];
            byte colours[];
        };

        class Terrian {
        private:
            GLuint vbo_vertices;
            GLuint vbo_normals;
            GLuint vbo_colors;
            GLuint ibo_elements;
            VertexColored vertices[];
        };

    or having them stored separately in the required class, like:

        class Terrian {
        private:
            GLfloat vertices[];
            GLfloat normals[];
            GLfloat colors[];
            GLuint vbo_vertices;
            GLuint vbo_normals;
            GLuint vbo_colors;
            GLuint ibo_elements;
        };

    Read the article

  • How do I make my rain effect look more like rain and less like snowfall?

    - by Nikhil Lamba
    I am making a game in which I want a rain effect, but I'm still a little way off from achieving it. I am creating the rain effect like below:

        particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1));
        particleSystem.addParticleInitializer(new AlphaInitializer(0));
        particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        particleSystem.addParticleInitializer(new VelocityInitializer(2, 2, 20, 10));
        particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 30.0f));
        particleSystem.addParticleModifier(new ScaleModifier(1.0f, 2.0f, 0, 150));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1, 1f, 1, 1, 1, 3));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1f, 1, 1, 1, 1, 6));
        particleSystem.addParticleModifier(new AlphaModifier(0, 1, 0, 3));
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 1, 125));
        particleSystem.addParticleModifier(new ExpireModifier(50, 50));
        scene.attachChild(particleSystem);

    But it looks like snowfall! What changes can I make so it looks more like rain? EDIT: Here is a screenshot:

    Read the article

  • Blur gets displaced compared to original image

    - by user1294203
    I have implemented SSAO and I'm using a blur step to smooth it out. The problem is that the blurred texture is slightly displaced compared to the original. I'm blurring using a 4x4 kernel, since that was my noise kernel in SSAO. The following is the blurring shader:

        float result = 0.0;
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j);
                result += texture(aoSampler, TexCoord + offset).r;
            }
        }
        out_AO = vec4(vec3(0.0), result * 0.0625);

    where TEXEL_SIZE is one over my window resolution. I was thinking this was an error based on how OpenGL counts the texel center, so I tried displacing the texture coordinate I was using by 0.5 * TEXEL_SIZE, but there was still a slight displacement. The texture input to my blur shader has wrap parameters:

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

    When I tell the blur shader to just output the value of the pixel, the result is not displaced, so it must have something to do with how neighboring pixels are sampled. Any thoughts?
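
    One detail worth checking (my observation, not something from the question): the loop samples offsets of 0, 1, 2 and 3 texels in each direction, so the kernel's average sampling offset is (0 + 1 + 2 + 3) / 4 = 1.5 texels toward +x/+y. Centering the kernel would mean starting at -1.5 texels, e.g. offset = TEXEL_SIZE * (i - 1.5), which is a larger correction than the half-texel shift already tried.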

    Read the article

  • Pictures rendered from above and below using an orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice. The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except mirrored over the y-axis). However, when I render using a very low resolution render target (25x25), the position (in pixels) of the 'T' is different when rendered from above as opposed to rendered from below. See figures 2 and 3. The pink blocks are not part of the original rendering, but I've added them so you can easily count/see the differences. Figure 2: the T rendered from above. Figure 3: the T rendered from below. This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my cameras, my bias only shows on the x-axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect, I am not able to get a pixel-by-pixel perfect copy from both cameras. Here I initialize the camera and compute what I believe to be half a pixel. boundsDimX and boundsDimZ is a slightly enlarged bounding box around the model, which I also use as the width and height of the view volume of the orthographic camera:

        Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);
        Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0,
                boundsDimY / (float)renderTarget.Height) * 0.5f;

    This is the code where I set the camera position and camera look-ats:

        // Position camera
        if (downwards)
        {
            float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X, // possibly adjust by half a pixel?
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z);
        }
        else
        {
            float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X,
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z);
        }

    Main question: now that you've seen all the problems and the code, how do I align both cameras so that they each render exactly the same image (mirrored along the Y axis)? Figure 1: the original model rendered in Blender.
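
    A hedged idea to try (an assumption on my part, not a verified fix): snap the shared camera X/Z onto a texel center of the render target, so both passes quantize against the same pixel grid instead of straddling a boundary:

        // World-space size of one texel of the render target (assumed naming).
        float texelX = boundsDimX / (float)renderTarget.Width;
        float texelZ = boundsDimZ / (float)renderTarget.Height;

        // Snap to the nearest texel center before positioning either camera.
        Vector3 snapped = new Vector3(
            ((float)Math.Floor(boundsCentre.X / texelX) + 0.5f) * texelX,
            cameraHeight,
            ((float)Math.Floor(boundsCentre.Z / texelZ) + 0.5f) * texelZ);
        camera.Position = snapped;
        camera.LookAt = new Vector3(snapped.X, cameraHeight + (downwards ? -1f : 1f), snapped.Z);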

    Read the article

  • Seeking a C/C++ OBJ geometry reader/writer that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation; i.e. read geometry, immediately write it, and a diff of the source OBJ and the one just written will be identical. Every OBJ writing utility I've been able to find online fails this test. I am writing small command line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web. Also, many of the popular libraries modify the geometry representation. For example, Nate Robins' GLUT library includes the GLM library, which both converts quads to triangles and reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And god forbid, do not "optimize" the geometry! The redundant vertices are there on purpose, to stay cache-friendly on our weird little low-memory mobile devices.
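
    The acceptance test in the first sentence is easy to automate; a minimal harness (mine, in C# for brevity; readThenWrite is a hypothetical adapter around whatever library is under test) that fails the moment a library rewrites anything:

        using System;
        using System.IO;

        static bool RoundTripsExactly(string objPath, Func<string, string> readThenWrite)
        {
            // readThenWrite loads the OBJ with the candidate library, writes it
            // back out unmodified, and returns the path of the written copy.
            string original = File.ReadAllText(objPath);
            string rewritten = File.ReadAllText(readThenWrite(objPath));
            return string.Equals(original, rewritten, StringComparison.Ordinal);
        }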

    Read the article

  • Drawing an isometric map in canvas / JavaScript

    - by Dave
    I have a problem with my map design for my tiles. I set the player position, which is meant to be the middle tile that the canvas is looking at. However, the calculation to put the tiles in x:y pixel locations is completely messed up for me, and I don't know how to fix it. This is what I tried:

        var offset_x = 0; // used for scrolling on x
        var offset_y = 0; // used for scrolling on y
        var prev_mousex = 0; // for movePos function
        var prev_mousey = 0; // for movePos function

        function movePos(e) {
            if (prev_mousex === 0 && prev_mousey === 0) {
                prev_mousex = e.pageX;
                prev_mousey = e.pageY;
            }
            offset_x = offset_x + (e.pageX - prev_mousex);
            offset_y = offset_y + (e.pageY - prev_mousey);
            prev_mousex = e.pageX;
            prev_mousey = e.pageY;
            run = true;
        }

        player_posx = 5;
        player_posy = 55;
        ct = 19;

        for (i = (player_posx - ct); i < (player_posx + ct); i++) { // horizontal
            for (j = (player_posy - ct); j < (player_posy + ct); j++) { // vertical
                // img[0] is 64by64 but the graphic is 64by32; the rest is alpha space
                var x = (i - j) * (img[0].height / 2) + (canvas.width / 2) - (img[0].width / 2);
                var y = (i + j) * (img[0].height / 4);
                var abposx = x - offset_x;
                var abposy = y - offset_y;
                ctx.drawImage(img[0], abposx, abposy);
            }
        }

    Now, based on these numbers, the first renderable tile is i = 0 and j = 36, as numbers in the negative are not in the array. But for i = 0 and j = 36 the position it calculates is -1120 : 592. Does anyone know how to center it in the canvas view properly?

    Read the article

  • Why do meshes show up as bones in the Model class?

    - by Itamar Marom
    Right now I'm working on a 3D game and I've come across something very weird. When I created the model in Blender, I added an armature named "MyBone" to the stage and attached a cube ("MyCube") to it, so that when I move the armature, the cube moves with it. I exported this as an FBX and loaded it as a Model object. What I expected to see was: But what I got was this: I'm really confused. Why is the mesh I created showing up in the bone list? And what's Root Node? Here are the .blend and .fbx files: here or here. Thanks.

    Read the article

  • AndEngine doesn't fill an image correctly on my device

    - by Guille
    I'm learning a little bit about AndEngine, trying to follow a tutorial, but I can't get the background image to fill the screen correctly; it just appears on one side of my screen. My device is a Galaxy Nexus (1270x768, I think...). The image is 800x480. The code is:

        public EngineOptions onCreateEngineOptions() {
            camera = new Camera(0, 0, 800, 480);
            EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED,
                    new FillResolutionPolicy(), this.camera);
            engineOptions.getAudioOptions().setNeedsMusic(true).setNeedsSound(true);
            engineOptions.getRenderOptions().setMultiSampling(true); //.getConfigChooserOptions().setRequestedMultiSampling(true);
            engineOptions.setWakeLockOptions(WakeLockOptions.SCREEN_ON);
            return engineOptions;
        }

    I have been trying several values in the camera, but it doesn't fill the whole screen. Why?

    Read the article

  • How to define a field of view covering the entire map for shadows?

    - by Mehdi Bugnard
    I recently added shadow mapping to my XNA game to include shadows. I followed the nice and famous tutorial from Riemers: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Shadow_map.php. The code works nicely and I can see my light source and shadows. But the problem is that my light source does not match the field of view that I created. I want the light to cover the entire map of my game. I don't know why, but the light only affects 2-3 cubes of my map. Screenshot: (the emission of light illuminates only 2-3 blocks and not the full map.) Here is the code where I create the field of view for the lightsViewProjection matrix:

        Vector3 lightDir = new Vector3(10, 52, 10);
        lightPos = new Vector3(10, 52, 10);
        Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(105, 50, 105), new Vector3(0, 1, 0));
        Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 20f, 1000f);
        lightsViewProjectionMatrix = lightsView * lightsProjection;

    As you can see, my near and far planes are set to 20f and 1000f, so I don't know why the light stops after 2 cubes. It should be bigger. Here I set the values for my custom HLSL effect in the shader file:

        /* SHADOW VALUE */
        effectWorld.Parameters["LightDirection"].SetValue(lightDir);
        effectWorld.Parameters["xLightsWorldViewProjection"].SetValue(Matrix.Identity * this.lightsViewProjectionMatrix);
        effectWorld.Parameters["xWorldViewProjection"].SetValue(Matrix.Identity * arcadia.camera.View * arcadia.camera.Projection);
        effectWorld.Parameters["xLightPower"].SetValue(1f);
        effectWorld.Parameters["xAmbient"].SetValue(0.3f);

    Here is my custom HLSL shader effect file "*.fx":

        // This sample uses a simple Lambert lighting model.
        float3 LightDirection = normalize(float3(-1, -1, -1));
        float3 DiffuseLight = 1.25;
        float3 AmbientLight = 0.25;

        uniform const float3 DiffuseColor = 1;
        uniform const float Alpha = 1;
        uniform const float3 EmissiveColor = 0;
        uniform const float3 SpecularColor = 1;
        uniform const float SpecularPower = 16;
        uniform const float3 EyePosition;

        // FOG attributes
        uniform const float FogEnabled;
        uniform const float FogStart;
        uniform const float FogEnd;
        uniform const float3 FogColor;

        float3 cameraPos : CAMERAPOS;

        texture Texture;
        sampler Sampler = sampler_state
        {
            Texture = (Texture);
            magfilter = LINEAR;
            minfilter = LINEAR;
            mipfilter = LINEAR;
            AddressU = mirror;
            AddressV = mirror;
        };

        texture xShadowMap;
        sampler ShadowMapSampler = sampler_state
        {
            Texture = <xShadowMap>;
            magfilter = LINEAR;
            minfilter = LINEAR;
            mipfilter = LINEAR;
            AddressU = clamp;
            AddressV = clamp;
        };

        /* *************** */
        /* SHADOW MAP CODE */
        /* *************** */

        struct SMapVertexToPixel
        {
            float4 Position : POSITION;
            float4 Position2D : TEXCOORD0;
        };

        struct SMapPixelToFrame
        {
            float4 Color : COLOR0;
        };

        struct SSceneVertexToPixel
        {
            float4 Position : POSITION;
            float4 Pos2DAsSeenByLight : TEXCOORD0;
            float2 TexCoords : TEXCOORD1;
            float3 Normal : TEXCOORD2;
            float4 Position3D : TEXCOORD3;
        };

        struct SScenePixelToFrame
        {
            float4 Color : COLOR0;
        };

        float DotProduct(float3 lightPos, float3 pos3D, float3 normal)
        {
            float3 lightDir = normalize(pos3D - lightPos);
            return dot(-lightDir, normal);
        }

        SSceneVertexToPixel ShadowedSceneVertexShader(float4 inPos : POSITION,
                float2 inTexCoords : TEXCOORD0, float3 inNormal : NORMAL)
        {
            SSceneVertexToPixel Output = (SSceneVertexToPixel)0;
            Output.Position = mul(inPos, xWorldViewProjection);
            Output.Pos2DAsSeenByLight = mul(inPos, xLightsWorldViewProjection);
            Output.Normal = normalize(mul(inNormal, (float3x3)World));
            Output.Position3D = mul(inPos, World);
            Output.TexCoords = inTexCoords;
            return Output;
        }

        SScenePixelToFrame ShadowedScenePixelShader(SSceneVertexToPixel PSIn)
        {
            SScenePixelToFrame Output = (SScenePixelToFrame)0;
            float2 ProjectedTexCoords;
            ProjectedTexCoords[0] = PSIn.Pos2DAsSeenByLight.x / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
            ProjectedTexCoords[1] = -PSIn.Pos2DAsSeenByLight.y / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f;
            float diffuseLightingFactor = 0;
            if ((saturate(ProjectedTexCoords).x == ProjectedTexCoords.x)
                    && (saturate(ProjectedTexCoords).y == ProjectedTexCoords.y))
            {
                float depthStoredInShadowMap = tex2D(ShadowMapSampler, ProjectedTexCoords).r;
                float realDistance = PSIn.Pos2DAsSeenByLight.z / PSIn.Pos2DAsSeenByLight.w;
                if ((realDistance - 1.0f / 100.0f) <= depthStoredInShadowMap)
                {
                    diffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal);
                    diffuseLightingFactor = saturate(diffuseLightingFactor);
                    diffuseLightingFactor *= xLightPower;
                }
            }
            float4 baseColor = tex2D(Sampler, PSIn.TexCoords);
            Output.Color = baseColor * (diffuseLightingFactor + xAmbient);
            return Output;
        }

        SMapVertexToPixel ShadowMapVertexShader(float4 inPos : POSITION)
        {
            SMapVertexToPixel Output = (SMapVertexToPixel)0;
            Output.Position = mul(inPos, xLightsWorldViewProjection);
            Output.Position2D = Output.Position;
            return Output;
        }

        SMapPixelToFrame ShadowMapPixelShader(SMapVertexToPixel PSIn)
        {
            SMapPixelToFrame Output = (SMapPixelToFrame)0;
            Output.Color = PSIn.Position2D.z / PSIn.Position2D.w;
            return Output;
        }

        /* ******************* */
        /* END SHADOW MAP CODE */
        /* ******************* */

        // For rendering without instancing.
        technique ShadowMap
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 ShadowMapVertexShader();
                PixelShader = compile ps_2_0 ShadowMapPixelShader();
            }
        }

        technique ShadowedScene
        {
            /*
            pass Pass0
            {
                VertexShader = compile vs_2_0 VSBasicTx();
                PixelShader = compile ps_2_0 PSBasicTx();
            }
            */
            pass Pass1
            {
                VertexShader = compile vs_2_0 ShadowedSceneVertexShader();
                PixelShader = compile ps_2_0 ShadowedScenePixelShader();
            }
        }

        technique SimpleFog
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 VSBasicTx();
                PixelShader = compile ps_2_0 PSBasicTx();
            }
        }

    I edited my fx file to show you only the information and functions related to the shadow ;-)
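
    For covering the whole map with one light, the usual alternative to the PiOver2 perspective above is to treat the light as directional and use an orthographic projection sized to the map; a hedged sketch (the map extents are assumed values, not from the question):

        // Assumed map extents in world units; size these to your cube grid.
        float mapWidth = 220f;
        float mapHeight = 220f;

        Matrix lightsView = Matrix.CreateLookAt(
            lightPos,                  // high above the map, as in the question
            new Vector3(105, 50, 105), // same look-at target as the question
            Vector3.Up);

        // With an orthographic volume, every shadow-map texel covers the same
        // world-space area, so the whole map fits regardless of distance.
        Matrix lightsProjection = Matrix.CreateOrthographic(mapWidth, mapHeight, 1f, 1000f);
        lightsViewProjectionMatrix = lightsView * lightsProjection;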

    Read the article

  • Coarse collision detection in a highly dynamic environment

    - by Millianz
    I'm currently working on a 3D space game with A LOT of dynamic objects that are all moving (there is pretty much no static environment). I have the collision detection and resolution working just fine, but I am now trying to optimize the collision detection (currently a brute-force O(N^2) pairwise check). I thought about multiple options: a bounding volume hierarchy, a binary space partitioning tree, an octree or a grid. However, I need some help deciding what's best for my situation. A grid seems unfeasible simply due to the space requirements and cache coherence problems. Since everything is so dynamic, however, it seems that trees aren't ideal either, since they would have to be completely rebuilt every frame. I must admit I never implemented a physics engine that required spatial partitioning: do I indeed need to rebuild the tree every frame (assuming that everything is constantly moving), or can I update the trees after integrating? Advice is much appreciated. To give some more background: you're flying a space ship in an asteroid field, and there are lots and lots of asteroids and some enemy ships, all of which shoot bullets. EDIT: I came across the "Sweep and Prune" algorithm, which seems like the right thing for my purposes. It appears to be the right mixture of fast building of the data structures involved and detailed enough partitioning. This is the best resource I can find: http://www.codercorner.com/SAP.pdf. If anyone has any suggestions on whether or not I'm going in the right direction, please let me know.
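
    For reference, a stripped-down one-axis sweep-and-prune broad phase (my sketch; the Body type and the narrow-phase callback are placeholders) looks like the following. Classic SAP keeps the list sorted with an insertion sort to exploit frame-to-frame coherence; List.Sort is used here only for brevity:

        using System;
        using System.Collections.Generic;

        class Body { public float MinX, MaxX; /* plus y/z bounds, velocity, ... */ }

        static void SweepAndPrune(List<Body> bodies, Action<Body, Body> testPair)
        {
            bodies.Sort((a, b) => a.MinX.CompareTo(b.MinX));

            for (int i = 0; i < bodies.Count; i++)
            {
                for (int j = i + 1; j < bodies.Count; j++)
                {
                    if (bodies[j].MinX > bodies[i].MaxX)
                        break; // no later body can overlap bodies[i] on x
                    testPair(bodies[i], bodies[j]); // narrow phase: y/z + exact shapes
                }
            }
        }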

    Read the article

  • Branch by abstraction: Are there "examples" of how it can be done?

    - by Philipp Keller
    Having read Martin Fowler's "Feature Branch" and Flickr's "Flipping Out" (http://www.liip.to/flippingout), I gather there are a few guys out there who do:

    - all (or most) development on Trunk
    - release Trunk regularly (assuming you are updating your web site)
    - not-yet-approved or not-yet-finished features should not be visible to/have no impact on the regular user

    I've got 2 questions:

    1. Granted, Flickr's article seems to work for "frontend code". But how is it cleaned up? Don't the ifs pile up?
    2. How does this work for the more "backend part"? Thinking of database changes, or model refactoring. Working with ifs doesn't seem to work, and copy-pasting classes for small adaptations also seems awkward.

    Are there any articles out there answering these 2 questions?
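
    A minimal flag-plus-abstraction sketch (mine, in C#; all names are invented) for the backend half of the question: the conditional lives in exactly one factory rather than piling up at call sites, and the legacy implementation is deleted once the new one has shipped:

        record Order(decimal Subtotal);
        class FeatureFlags { public bool UseNewPricing; } // read from config, not code

        interface IPriceCalculator // the abstraction seam
        {
            decimal PriceFor(Order order);
        }

        class LegacyPriceCalculator : IPriceCalculator
        {
            public decimal PriceFor(Order order) => order.Subtotal; // current behaviour
        }

        class RewrittenPriceCalculator : IPriceCalculator
        {
            public decimal PriceFor(Order order) => order.Subtotal * 0.9m; // in progress
        }

        static class PriceCalculatorFactory
        {
            // The only 'if' for this feature in the whole codebase.
            public static IPriceCalculator Create(FeatureFlags flags) =>
                flags.UseNewPricing ? new RewrittenPriceCalculator()
                                    : new LegacyPriceCalculator();
        }

    Database changes get the same treatment one level down: expand the schema additively so both implementations can work against it, then contract once the flag is removed.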

    Read the article

  • Matrix 4x4 position data

    - by freefallr
    I understand that a 4x4 matrix holds rotation and position data. The rotation data is held in the 3x3 sub-matrix at the top left of the matrix, and the position data is held in the last column of the matrix, e.g.:

        glm::vec3 vParentPos( mParent[3][0], mParent[3][1], mParent[3][2] );

    My question is: am I accessing the parent matrix correctly in the example above? I know that OpenGL uses a different matrix ordering than DirectX (row-major instead of column-major, or something), so should mParent be accessed as follows instead?

        glm::vec3 vParentPos( mParent[0][3], mParent[1][3], mParent[2][3] );

    Thanks!
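
    For a concrete check (my understanding of the conventions, worth verifying against the GLM docs): GLM stores matrices column-major and indexes them as m[column][row], so translation lives in column 3 and the first form in the question matches it. Laid flat as the 16 contiguous floats OpenGL consumes, those are elements 12, 13 and 14; in C# terms:

        // parentMatrixAsArray: hypothetical column-major, 16-float layout of mParent.
        float[] m = parentMatrixAsArray;

        // Column 3 holds translation: flat indices 12, 13, 14.
        float posX = m[12], posY = m[13], posZ = m[14];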

    Read the article

  • Shadow mapping: what is the light looking at?

    - by PgrAm
    I'm all set to implement shadow mapping in my 3D engine, but there is one thing I am struggling to understand. The scene needs to be rendered from the light's point of view, so I simply first move my camera to the light's position, but then I need to find out which direction the light is looking. Since it's a point light, it's not shining in any particular direction. How do I figure out the orientation for the light's point of view?
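
    If the light really is omnidirectional, the usual answer is that there is no single direction: you render six times into a cube map, once per axis-aligned face, each with a 90-degree field of view. A sketch of the direction table (generic C# with XNA-style vectors; UpFor is a hypothetical helper returning an up vector not parallel to the face direction):

        Vector3[] faceDirections =
        {
            Vector3.Right, Vector3.Left,      // +X, -X
            Vector3.Up, Vector3.Down,         // +Y, -Y
            Vector3.Backward, Vector3.Forward // +Z, -Z
        };

        foreach (Vector3 dir in faceDirections)
        {
            Matrix view = Matrix.CreateLookAt(lightPosition, lightPosition + dir, UpFor(dir));
            // ... render depth into the matching cube-map face with a
            // MathHelper.PiOver2 (90-degree) perspective projection.
        }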

    Read the article

  • Can somebody guide me as to how I can make a game for playing cards [closed]

    - by user2558
    In college, my friends and I used to play cards all the time. I want to make a game for that. It's quite similar to Hearts; a kind of modified Hearts which we made up. I want to make a multiplayer game which can be played over the internet, and there should also be an option for the computer to play if fewer players are available at the time. I don't want to make an exe; I want it to play in the browser. How should I go about it?

    Read the article

  • Determining explosion radius damage - Circle to Rectangle 2D

    - by Paul Renton
    One of the Cocos2D games I am working on has circular explosion effects. These explosion effects need to deal a percentage of their set maximum damage to all game characters (represented by rectangular bounding boxes, as the objects in question are tanks) within the explosion radius. So this boils down to circle-to-rectangle collision and how far the circle's center is from the closest rectangle edge. I took a stab at figuring this out last night, but I believe there may be a better way. In particular, I don't know the best way to determine what percentage of damage to apply based on the calculated distance. Note: all tank objects have an anchor point of (0,0), so position is according to the bottom left corner of the bounding box; the explosion point is the center point of the circular explosion.

        TankObject * tank = (TankObject*) gameSprite;
        float distanceFromExplosionCenter;
        // IMPORTANT :: All GameCharacter have an assumed (0,0) anchor
        if (explosionPoint.x < tank.position.x) {
            // Explosion to WEST of tank
            if (explosionPoint.y <= tank.position.y) {
                // Explosion SOUTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, tank.position);
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint,
                        ccp(tank.position.x, tank.position.y + tank.contentSize.height));
            } else {
                // Exp center's y is between bottom and top corner of rect
                distanceFromExplosionCenter = tank.position.x - explosionPoint.x;
            } // end if
        } else if (explosionPoint.x > (tank.position.x + tank.contentSize.width)) {
            // Explosion to EAST of tank
            if (explosionPoint.y <= tank.position.y) {
                // Explosion SOUTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint,
                        ccp(tank.position.x + tank.contentSize.width, tank.position.y));
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint,
                        ccp(tank.position.x + tank.contentSize.width, tank.position.y + tank.contentSize.height));
            } else {
                // Exp center's y is between bottom and top corner of rect
                distanceFromExplosionCenter = explosionPoint.x - (tank.position.x + tank.contentSize.width);
            } // end if
        } else {
            // Tank is either north or south and is in between left and right corner of rect
            if (explosionPoint.y < tank.position.y) {
                // Explosion is South
                distanceFromExplosionCenter = tank.position.y - explosionPoint.y;
            } else {
                // Explosion is North
                distanceFromExplosionCenter = explosionPoint.y - (tank.position.y + tank.contentSize.height);
            } // end if
        } // end outer if

        if (distanceFromExplosionCenter < explosionRadius) {
            /* Collision :: Smaller distance, larger the damage */
            int damageToApply;
            if (self.directHit) {
                damageToApply = self.explosionMaxDamage + self.directHitBonusDamage;
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explsoion-> DIRECT HIT with total damage %d", damageToApply);
            } else {
                // TODO adjust this... turning out negative for some reason...
                damageToApply = (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage);
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explosion-> Non direct hit collision with tank");
                CCLOG(@"Damage to apply is %d", damageToApply);
            } // end if
        } else {
            CCLOG(@"Explosion-> Explosion distance is larger than explosion radius");
        } // end if

    Questions: 1) Can this circle-to-rect collision algorithm be done better? Do I have too many checks? 2) How do I calculate the percentage-based damage? My current method occasionally generates negative numbers and I don't understand why (maybe I need more sleep!). But in my if statement I ask if distance < explosion radius; when control goes through, distance/radius must be < 1, right? So 1 minus that intermediate calculation should not be negative. Appreciate any help/advice!
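
    One thing worth checking (my reading of the snippet, not confirmed): in the non-direct-hit branch the parentheses bind as 1 - ((d / r) * maxDamage), so the result goes negative whenever (d / r) * maxDamage exceeds 1, even though d / r itself is below 1. The intended falloff is probably:

        // distance < radius is already guaranteed here, so 0 < falloff <= 1.
        float falloff = 1.0f - (distanceFromExplosionCenter / explosionRadius);
        int damageToApply = (int)(falloff * explosionMaxDamage);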

    Read the article
