Search Results

Search found 43935 results on 1758 pages for 'development process'.

  • How would I be able to get a game over screen using the pause function?

    - by Joachim Velzel
    I am having problems with my snake game: when the snake collides with itself it draws a "game over" image in the background, but only while it is colliding with itself. I want it to behave like the pause function, so that as soon as the snake collides with itself it draws an image on the screen and stops the gameplay. And how would you then restart or quit the game? This is all I have for the detection at the moment:

        if (snakeHeadRectangle.Intersects(snakeBodyRectangleArray[bodyNumber]))
        {
            spriteBatch.Draw(textureGameOver, gameOverPosition, Color.White);
        }

    Thanks
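
    The usual approach is a small state machine: the collision flips a state flag, and Update/Draw branch on it. A minimal XNA-style sketch, assuming hypothetical GameState/currentState names inside your Game subclass (not code from the original question):

        // Hedged sketch: names are illustrative. Assumes using Microsoft.Xna.Framework.Input;
        enum GameState { Playing, GameOver }
        GameState currentState = GameState.Playing;

        protected override void Update(GameTime gameTime)
        {
            if (currentState == GameState.Playing)
            {
                // normal snake movement and input handling here
                if (snakeHeadRectangle.Intersects(snakeBodyRectangleArray[bodyNumber]))
                    currentState = GameState.GameOver; // freezes gameplay until restart
            }
            else // GameOver: only listen for restart/quit input
            {
                KeyboardState ks = Keyboard.GetState();
                if (ks.IsKeyDown(Keys.Enter)) { /* reset snake, score, etc. */ currentState = GameState.Playing; }
                if (ks.IsKeyDown(Keys.Escape)) Exit();
            }
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            // draw the scene as usual (inside spriteBatch.Begin()/End()),
            // then overlay the game-over image on top
            if (currentState == GameState.GameOver)
                spriteBatch.Draw(textureGameOver, gameOverPosition, Color.White);
        }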

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly, some info: I'm using DirectX 11 and C++, and I'm a fairly good programmer but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have run into a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. When I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high-detail model, but the textured result is very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull shader into the domain shader, and finally into the pixel shader for rendering. My only thought is that my hull shader passes the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as the texturing. Below is a copy of my hull shader, which I think does not work correctly because of the way I pass the data through. If anyone can help me out, either by suggesting an alternative way to get the required data into the pixel shader or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!

        cbuffer TessellationBuffer
        {
            float tessellationAmount;
            float3 padding;
        };

        struct HullInputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
        };

        struct ConstantOutputType
        {
            float edges[3] : SV_TessFactor;
            float inside : SV_InsideTessFactor;
        };

        struct HullOutputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
            float4 depthPosition : TEXCOORD2;
        };

        ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
        {
            ConstantOutputType output;
            output.edges[0] = tessellationAmount;
            output.edges[1] = tessellationAmount;
            output.edges[2] = tessellationAmount;
            output.inside = tessellationAmount;
            return output;
        }

        [domain("tri")]
        [partitioning("integer")]
        [outputtopology("triangle_cw")]
        [outputcontrolpoints(3)]
        [patchconstantfunc("ColorPatchConstantFunction")]
        HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
        {
            HullOutputType output;
            output.position = patch[pointId].position;
            output.tex = patch[pointId].tex;
            output.tex2 = patch[pointId].tex2;
            output.normal = patch[pointId].normal;
            output.tangent = patch[pointId].tangent;
            output.binormal = patch[pointId].binormal;
            return output;
        }

    Edited to include the domain shader:

        [domain("tri")]
        PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
        {
            float3 vertexPosition;
            PixelInputType output;

            // Determine the position of the new vertex.
            vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position;

            output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);
            output.depthPosition = output.position;

            output.tex = patch[0].tex;
            output.tex2 = patch[0].tex2;
            output.normal = patch[0].normal;
            output.tangent = patch[0].tangent;
            output.binormal = patch[0].binormal;

            return output;
        }
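
    The solid-colour patches very likely come from the last block of that domain shader: the position is correctly interpolated with the barycentric uvwCoord, but every other attribute is copied from control point 0, so all vertices generated inside a patch share one texture coordinate (and one normal, which matches the lighting being slightly off). A hedged sketch of the likely fix, interpolating each attribute the same way as the position (written against the structs above, in the same HLSL):

        // Sketch only: interpolate all per-vertex attributes barycentrically,
        // not just the position. Renormalize the basis vectors afterwards.
        output.tex      = uvwCoord.x * patch[0].tex      + uvwCoord.y * patch[1].tex      + uvwCoord.z * patch[2].tex;
        output.tex2     = uvwCoord.x * patch[0].tex2     + uvwCoord.y * patch[1].tex2     + uvwCoord.z * patch[2].tex2;
        output.normal   = normalize(uvwCoord.x * patch[0].normal   + uvwCoord.y * patch[1].normal   + uvwCoord.z * patch[2].normal);
        output.tangent  = normalize(uvwCoord.x * patch[0].tangent  + uvwCoord.y * patch[1].tangent  + uvwCoord.z * patch[2].tangent);
        output.binormal = normalize(uvwCoord.x * patch[0].binormal + uvwCoord.y * patch[1].binormal + uvwCoord.z * patch[2].binormal);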

  • DirectWrite Producing Strange Artifacts?

    - by smoth190
    I've written the basis of my UI system around Direct2D. I like it because it's fast and easy to use (even if I had to do some messy work to get it to work with DirectX 11). However, I notice that when using DirectWrite I get strange problems with my text. As you can see, the e is a little screwed up, and the text overall looks a little bumpy. This only happens with certain fonts at certain sizes, and with certain arrangements of letters. This particular example is Verdana at size 16.0. Can I fix this? It's pretty annoying to have to change all my words and fonts because of this problem.
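
    Artifacts like this are often a text-rendering-mode issue rather than a font issue, especially when text is drawn into an intermediate render target that is later composited; forcing a non-ClearType antialiasing mode is a common first experiment. A minimal sketch, assuming SharpDX-style C# bindings over Direct2D (the property and enum names are from SharpDX, not from the original post):

        // Hedged sketch: try grayscale antialiasing instead of the default
        // ClearType path, which can look lumpy on intermediate surfaces.
        renderTarget.TextAntialiasMode = SharpDX.Direct2D1.TextAntialiasMode.Grayscale;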

  • Calculate an AABB for bone animated model

    - by Byte56
    I have a model whose initial bounding box is calculated by finding the maximum and minimum on the x, y and z axes, producing a correct result like so: The vertices are then stored in a VBO and only altered with matrices for rotation and bone animation. Currently the bounds are not updated when the model is altered, so the animated and rotated model has bounds like so: (Maybe it's hard to tell, but the bounds are the same as before, and don't accurately represent the rotated/animated model.) So my question is: how can I calculate the bounding box using the armature matrices and rotation/translation matrices for each model? Keep in mind that the modified vertex data is not available, because those calculations are performed on the GPU in the shader. The end result I want is an accurate AABB that represents the animated model for picking/basic collision checks.
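
    One common CPU-side approach skips the vertices entirely: precompute a small AABB per bone in bind pose, transform each box's eight corners by that bone's current animation matrix (and the model's world matrix), and take the union. A hedged XNA-style C# sketch; boneBindBoxes and boneMatrices are assumed inputs, not names from the original post:

        // Sketch: union of per-bone bind-pose boxes transformed by the current
        // animation matrices. Conservative but cheap, and entirely CPU-side.
        BoundingBox ComputeAnimatedBounds(BoundingBox[] boneBindBoxes, Matrix[] boneMatrices, Matrix world)
        {
            Vector3 min = new Vector3(float.MaxValue);
            Vector3 max = new Vector3(float.MinValue);
            for (int b = 0; b < boneBindBoxes.Length; b++)
            {
                Matrix m = boneMatrices[b] * world;
                foreach (Vector3 corner in boneBindBoxes[b].GetCorners())
                {
                    Vector3 p = Vector3.Transform(corner, m);
                    min = Vector3.Min(min, p);
                    max = Vector3.Max(max, p);
                }
            }
            return new BoundingBox(min, max);
        }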

  • XNA windows phone release black textures

    - by Lukasz Kajstura
    I just made a 3D game in XNA for Windows Phone 7. I built it in release mode in Visual Studio 2010, and suddenly when I run the game there are no textures on the models: two models are black and one is transparent. The models are in .X format, exported from 3ds Max, and their .jpg textures are also added to the game content. With the build action set to None, everything worked fine in debug mode; when I change to release mode, black textures. When I set the build action to Compile it gives me this warning, and there are still no textures:

        Asset was built 2 times with different settings: using TextureImporter and TextureProcessor
        using TextureImporter and TextureProcessor, referenced by...

    What can I do?

  • GetData() error creating framebuffer

    - by Lelezeus
    I'm currently porting a game written in C# with the XNA library to Android with MonoGame. I have a Texture2D, and I'm trying to get an array of uint this way:

        Texture2D textureDeform = game.Content.Load<Texture2D>("Texture/terrain");
        uint[] pixelDeformData = new uint[textureDeform.Width * textureDeform.Height];
        textureDeform.GetData(pixelDeformData, 0, textureDeform.Width * textureDeform.Height);

    I get the following exception:

        System.Exception: Error creating framebuffer: Zero
          at Microsoft.Xna.Framework.Graphics.Texture2D.GetTextureData (Int32 ThreadPriorityLevel) [0x00000] in :0

    I found that the problem is in private byte[] GetTextureData(int ThreadPriorityLevel), when creating the framebuffer:

        private byte[] GetTextureData(int ThreadPriorityLevel)
        {
            int framebufferId = -1;
            int renderBufferID = -1;
            GL.GenFramebuffers(1, ref framebufferId); // framebufferId is still -1; why can't it be created?
            GraphicsExtensions.CheckGLError();
            GL.BindFramebuffer(All.Framebuffer, framebufferId);
            GraphicsExtensions.CheckGLError();
            //renderBufferIDs = new int[currentRenderTargets];
            GL.GenRenderbuffers(1, ref renderBufferID);
            GraphicsExtensions.CheckGLError();
            // attach the texture to the FBO color attachment point
            GL.FramebufferTexture2D(All.Framebuffer, All.ColorAttachment0, All.Texture2D, this.glTexture, 0);
            GraphicsExtensions.CheckGLError();
            // create a renderbuffer object to store depth info
            GL.BindRenderbuffer(All.Renderbuffer, renderBufferID);
            GraphicsExtensions.CheckGLError();
            GL.RenderbufferStorage(All.Renderbuffer, All.DepthComponent24Oes, Width, Height);
            GraphicsExtensions.CheckGLError();
            // attach the renderbuffer to the depth attachment point
            GL.FramebufferRenderbuffer(All.Framebuffer, All.DepthAttachment, All.Renderbuffer, renderBufferID);
            GraphicsExtensions.CheckGLError();
            All status = GL.CheckFramebufferStatus(All.Framebuffer);
            if (status != All.FramebufferComplete)
                throw new Exception("Error creating framebuffer: " + status);
            // ...
        }

    framebufferId is still -1 after GL.GenFramebuffers, so it seems the framebuffer could not be generated, and I don't know why. Any help would be appreciated; thank you in advance.
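
    GL.GenFramebuffers silently writing nothing usually means no OpenGL context is current on the calling thread, which happens when GetData is invoked from a loading or background thread. A hedged workaround sketch, assuming the load happens off the main thread: queue the readback and run it on the game's main thread, where the GL context lives (class and member names are illustrative).

        // Sketch: marshal GPU readbacks onto the thread that owns the GL context.
        using System;
        using System.Collections.Concurrent;
        using Microsoft.Xna.Framework;

        public class DeferredReadbackGame : Game
        {
            // Work queued from background threads, executed on the main thread.
            readonly ConcurrentQueue<Action> deferredWork = new ConcurrentQueue<Action>();

            // Call this from the loading thread instead of calling GetData directly.
            public void RunOnMainThread(Action action) { deferredWork.Enqueue(action); }

            protected override void Update(GameTime gameTime)
            {
                Action work;
                while (deferredWork.TryDequeue(out work)) work(); // GL context is current here
                base.Update(gameTime);
            }
        }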

  • Why do meshes show up as bones in the Model class?

    - by Itamar Marom
    Right now I'm working on a 3D game and I've come across something very weird. When I created the model in Blender, I added an armature named "MyBone" to the stage and attached a cube ("MyCube") to it, so that when I move the armature, the cube moves with it. I exported this as an FBX and loaded it as a Model object. What I expected to see was: But what I got was this: I'm really confused. Why is the mesh I created showing up in the bone list? And what's Root Node? Here are the .blend and .fbx files: here or here. Thanks.

  • Seeking a C/C++ OBJ geometry read/write that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation; i.e. read geometry, immediately write it, and a diff of the source OBJ and the one just written will be identical. Every OBJ-writing utility I've been able to find online fails this test. I am writing small command-line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web. Also, many of the popular libraries modify the geometry representation. For example, Nate Robins's GLUT distribution includes the GLM library, which both converts quads to triangles and reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And god forbid, do not "optimize" the geometry! The redundant vertices are there on purpose, to stay resident in cache on our weird little low-memory mobile devices.
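
    Given how simple OBJ's line-oriented grammar is, the safest route may be a pass-through layer: keep every source line verbatim, parse values read-only, and emit the untouched lines on write so a round trip diffs clean. A hedged sketch of the idea, shown in C# for brevity; the same structure ports directly to C++ with a std::vector<std::string> (all names here are illustrative):

        // Sketch: lossless OBJ round trip. The raw lines are the source of
        // truth, so Read -> Write reproduces the input (modulo newline style).
        using System.Collections.Generic;
        using System.Globalization;
        using System.IO;

        class LosslessObj
        {
            public List<string> Lines = new List<string>();

            public static LosslessObj Read(string path)
            {
                var obj = new LosslessObj();
                obj.Lines.AddRange(File.ReadAllLines(path));
                return obj;
            }

            // Writes the lines back untouched, so unedited files diff clean.
            public void Write(string path)
            {
                File.WriteAllLines(path, Lines);
            }

            // Edits replace whole lines in place, preserving the order of
            // every line, comment and vertex we never touched.
            public void ReplaceVertex(int lineIndex, float x, float y, float z)
            {
                Lines[lineIndex] = string.Format(CultureInfo.InvariantCulture, "v {0} {1} {2}", x, y, z);
            }
        }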

  • Blur gets displaced compared to original image

    - by user1294203
    I have implemented SSAO and I'm using a blur step to smooth it out. The problem is that the blurred texture is slightly displaced compared to the original. I'm blurring using a 4x4 kernel, since that was my noise kernel in SSAO. The following is the blurring shader:

        float result = 0.0;
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j);
                result += texture(aoSampler, TexCoord + offset).r;
            }
        }
        out_AO = vec4(vec3(0.0), result * 0.0625);

    where TEXEL_SIZE is one over my window resolution. I was thinking this was an error in how OpenGL counts the texel center, so I tried displacing the texture coordinate I was using by 0.5 * TEXEL_SIZE, but there was still a slight displacement. The texture input to my blur shader has these wrap parameters:

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

    When I tell the blur shader to just output the value of the pixel, the result is not displaced, so it must have something to do with how neighboring pixels are sampled. Any thoughts?
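
    Note that the loop only samples offsets of 0 to 3 texels to one side of TexCoord, so the average is centered 1.5 texels away from the pixel being shaded; that asymmetry is exactly a constant displacement, and it is larger than the half-texel shift already tried. A hedged sketch of a centered version of the same kernel, in the same GLSL:

        // Sketch: shift the 4x4 window so it is centered on TexCoord.
        // Offsets now run from -1.5 to +1.5 texels on each axis.
        float result = 0.0;
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                vec2 offset = vec2(TEXEL_SIZE.x * (float(i) - 1.5), TEXEL_SIZE.y * (float(j) - 1.5));
                result += texture(aoSampler, TexCoord + offset).r;
            }
        }
        out_AO = vec4(vec3(0.0), result * 0.0625);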

  • Picture rendered from above and below using an Orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice. The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except for mirroring over the y-axis). However, when I render using a very low-resolution render target (25x25), the position (in pixels) of the 'T' is different when rendered from above than when rendered from below. See figures 2 and 3. The pink blocks are not part of the original rendering; I've added them so you can easily count/see the differences. Figure 2: the T rendered from above. Figure 3: the T rendered from below. This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my cameras, the bias only shows on the x-axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect, I am not able to get a pixel-by-pixel perfect copy from both cameras. Here I initialize the camera and compute what I believe to be half a pixel. boundsDimX and boundsDimZ form a slightly enlarged bounding box around the model, which I also use as the width and height of the view volume of the orthographic camera.

        Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);
        Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0,
                                        boundsDimY / (float)renderTarget.Height) * 0.5f;

    This is the code where I set the camera position and camera look-ats:

        // Position camera
        if (downwards)
        {
            float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X, // possibly adjust by half a pixel?
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z);
        }
        else
        {
            float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X,
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z);
        }

    Main question: now that you've seen all the problems and code, you can guess it. How do I align both cameras so that they each render exactly the same image (mirrored along the Y axis)? Figure 1: the original model rendered in Blender.
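
    One way to reason about this: rather than nudging the eye position, bake the half-texel shift into the projection itself, so both cameras sample the same world-space grid regardless of which way they look. A hedged XNA sketch under the assumptions above, reusing the same symbols; whether the shift is positive or negative depends on which way each camera's viewport x-axis maps to world x, so it may need flipping for the upward camera:

        // Sketch: orthographic volume shifted by half a texel so texel centers
        // line up with the same world positions for both cameras.
        float halfTexelX = 0.5f * boundsDimX / (float)renderTarget.Width;
        Matrix projection = Matrix.CreateOrthographicOffCenter(
            -boundsDimX / 2f + halfTexelX, boundsDimX / 2f + halfTexelX, // left, right
            -boundsDimZ / 2f, boundsDimZ / 2f,                           // bottom, top
            0.5f, sliceHeight + 0.5f);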

  • How to implement physical and perspective effects on Android

    - by asedra_le
    I'm researching 2D games for Android in order to implement an Android game project. My project looks nearly like Paper Toss, but instead of throwing a paper ball, my game will throw a coin. Suppose I have a coin in three dimensions with coordinates A(x,y,z). I throw that coin forward; after 1/100 second, the coin moves from A(x,y,z) to A'(x',y',z'). This gives me two problems to solve. First, determine the formulas that can be used to compute the coordinates of the coin at time t. This problem is still under research; I have no idea how to solve it. Second, map the three-dimensional points to two dimensions and use those new (two-dimensional) coordinates to draw the coin on screen. I have found two solutions for this problem: orthographic projection and perspective projection. However, an old friend of mine said that OpenGL supports solving problems like these. Does anybody have experience with problems like mine? Please help me :) Thanks for reading my question.
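
    For the first problem, simple ballistic motion is enough: position integrates velocity, and velocity integrates gravity, which has the closed form p(t) = p0 + v0*t + 0.5*g*t^2. A minimal C# sketch (all names are illustrative):

        // Sketch: closed-form projectile motion for the coin.
        struct Vec3
        {
            public float X, Y, Z;
            public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
        }

        static Vec3 PositionAtTime(Vec3 p0, Vec3 v0, float t)
        {
            const float g = -9.81f; // gravity along the y axis, m/s^2
            return new Vec3(
                p0.X + v0.X * t,
                p0.Y + v0.Y * t + 0.5f * g * t * t,
                p0.Z + v0.Z * t);
        }

        // Example: where is the coin 1/100 s after launch?
        // Vec3 aPrime = PositionAtTime(a, launchVelocity, 0.01f);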

  • AndEngine doesn't fill correctly an image on my device

    - by Guille
    I'm learning a little about AndEngine and trying to follow a tutorial, but I can't get the background image to fill the screen correctly; it just appears on one side of my screen. My device is a Galaxy Nexus (1270x768, I think...). The image is 800x480. The code is:

        public EngineOptions onCreateEngineOptions()
        {
            camera = new Camera(0, 0, 800, 480);
            EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED, new FillResolutionPolicy(), this.camera);
            engineOptions.getAudioOptions().setNeedsMusic(true).setNeedsSound(true);
            engineOptions.getRenderOptions().setMultiSampling(true); //.getConfigChooserOptions().setRequestedMultiSampling(true);
            engineOptions.setWakeLockOptions(WakeLockOptions.SCREEN_ON);
            return engineOptions;
        }

    I have been trying several values in the camera, but it doesn't fill the whole screen. Why?

  • Z Order in 2D with orthographic projection and texture atlas

    - by Carbon Crystal
    I am working on a 2D game in OpenGL ES and have a question about z-order together with a texture atlas. I am using an orthographic projection because I want pixel-perfect rendering of 2D sprites; however, from what I can determine, the draw order is really the only thing that decides which textures (sprites) appear above or below their neighbours. That is, the "z-index" is a function of the order in which the textures are drawn, as opposed to the z coordinate of the vertex array being drawn. So: I have a texture atlas to save binding multiple textures for each draw call, but this immediately creates a problem if there is more than one atlas in play. If I need to draw textures from more than one atlas (typically the case if I have too many sprites to fit in a single atlas of a reasonable size), then I can't maintain a "draw order" across atlases unless I want to bind/unbind the atlas textures more than once, which kind of defeats the purpose. Does anyone have any clues as to the best approach here? Currently I'm running under the assumption that I will have to declare different fixed "depths" (e.g. foreground, background, etc.) in my 2D scene and assume that the z-order for sprites at a given depth is the same. Then I can have as many atlases as I need at each depth and simply draw the depths in order (along with their associated atlases). I'd love to hear what other people are doing.
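
    That layered approach is the standard one; within a depth layer, sorting by atlas keeps binds down to one per atlas per layer. A hedged C#-style sketch of the draw loop (Sprite, Layer, AtlasId, BindTexture and DrawQuad are all illustrative placeholders, and the LINQ calls assume using System.Linq):

        // Sketch: sort by (layer, atlas) so draw order preserves z-ordering
        // across layers while batching texture binds within each layer.
        var ordered = sprites.OrderBy(s => s.Layer).ThenBy(s => s.AtlasId);
        int boundAtlas = -1;
        foreach (var s in ordered)
        {
            if (s.AtlasId != boundAtlas)
            {
                BindTexture(atlases[s.AtlasId]); // one bind per atlas per layer
                boundAtlas = s.AtlasId;
            }
            DrawQuad(s);
        }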

  • Best practices with Vertices in Open GL

    - by Darestium
    What is the best practice for storing vertex data in OpenGL? I.e., interleaved in one struct:

        struct VertexColored
        {
        public:
            GLfloat position[3];
            GLfloat normal[3];
            GLubyte colours[4];
        };

        class Terrian
        {
        private:
            GLuint vbo_vertices;
            GLuint vbo_normals;
            GLuint vbo_colors;
            GLuint ibo_elements;
            std::vector<VertexColored> vertices;
        };

    or having them stored separately in the required class, like:

        class Terrian
        {
        private:
            std::vector<GLfloat> vertices;
            std::vector<GLfloat> normals;
            std::vector<GLfloat> colors;
            GLuint vbo_vertices;
            GLuint vbo_normals;
            GLuint vbo_colors;
            GLuint ibo_elements;
        };

  • Stuck at enemy movement

    - by Syed
    I am making a tower-defense game in Unity. Initially I made all of my enemy movements frame-rate dependent. Say I have grid point 1 at -22.65 and the next at -21.1; diagrammatically: (-22.65) _________(-21.1)_______(-21.1+1.55) ...... So the distance on the x axis between two points is 1.55, which I divided into 25 jumps, each enemy jumping 0.062 per frame. On reaching the next grid point, the enemy finds its path again. All went fine until I needed fast-forward and pause features. I used Unity's timeScale property, but it won't work, as the jumps are frame dependent. I also tried doubling the enemy's speed when the fast-forward button is clicked; the issue is that the enemy's jumps are then larger and it fails to land exactly on the next grid point. Could someone suggest a solution to my problem? Do I need to change the enemy movement code to make it frame independent? I need the enemy to land on specific grid points, and later I also need to slow down a single enemy's speed when a tower fires on it. Thanks
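
    Making the motion time-based solves all three requirements at once: Time.deltaTime already scales with Time.timeScale (so pause and fast-forward come for free), Vector3.MoveTowards never overshoots the grid point, and a per-enemy multiplier handles slow effects. A minimal Unity C# sketch (speed and slowFactor are illustrative fields, not from the original question):

        using UnityEngine;

        public class EnemyMover : MonoBehaviour
        {
            public Vector3 target;        // next grid point
            public float speed = 1.55f;   // world units per second
            public float slowFactor = 1f; // set < 1 while a tower slows this enemy

            void Update()
            {
                // deltaTime respects Time.timeScale: 0 pauses, 2 fast-forwards.
                transform.position = Vector3.MoveTowards(
                    transform.position, target, speed * slowFactor * Time.deltaTime);

                if (transform.position == target)
                {
                    // arrived exactly on the grid point: re-run pathfinding here
                }
            }
        }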

  • Coarse Collision Detection in highly dynamic environment

    - by Millianz
    I'm currently working on a 3D space game with A LOT of dynamic objects that are all moving (there is pretty much no static environment). I have collision detection and resolution working just fine, but I am now trying to optimize the collision detection, which is currently O(N^2): brute-force pairwise checks. I thought about multiple options: a bounding volume hierarchy, a binary space partitioning tree, an octree or a grid. However, I need some help deciding what's best for my situation. A grid seems unfeasible simply due to the space requirements and cache-coherence problems. Since everything is so dynamic, though, it seems that trees aren't ideal either, since they would have to be completely rebuilt every frame. I must admit I've never implemented a physics engine that required spatial partitioning: do I indeed need to rebuild the tree every frame (assuming that everything is constantly moving), or can I update the trees after integrating? Advice is much appreciated. To give some more background: you're flying a spaceship in an asteroid field, and there are lots and lots of asteroids and some enemy ships, all of which shoot bullets. EDIT: I came across the "Sweep and Prune" algorithm, which seems like the right thing for my purposes. It appears to be the right mixture of fast construction of the data structures involved and detailed-enough partitioning. This is the best resource I can find: http://www.codercorner.com/SAP.pdf If anyone has any suggestions on whether or not I'm going in the right direction, please let me know.
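
    Sweep and Prune is indeed a good fit for mostly-coherent motion: interval endpoints stay nearly sorted between frames, so the per-frame re-sort is close to linear. A tiny C# sketch of the 1D core only (Body, MinX and MaxX are illustrative; a full SAP, as in the linked paper, tracks endpoint swaps to update the overlap set incrementally instead of rebuilding the pair list):

        // Sketch: sort by min-x, then only test pairs whose x-intervals overlap.
        using System.Collections.Generic;

        struct Body { public float MinX, MaxX; public int Id; }

        static List<(int, int)> CandidatePairs(List<Body> bodies)
        {
            // Nearly sorted each frame thanks to frame coherence.
            bodies.Sort((a, b) => a.MinX.CompareTo(b.MinX));
            var pairs = new List<(int, int)>();
            for (int i = 0; i < bodies.Count; i++)
                for (int j = i + 1; j < bodies.Count && bodies[j].MinX <= bodies[i].MaxX; j++)
                    pairs.Add((bodies[i].Id, bodies[j].Id)); // x-overlap; narrow-phase next
            return pairs;
        }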

  • How to deal with large open worlds?

    - by Mr. Beast
    In most games the whole world is small enough to fit into memory, but there are games where this is not the case. How is this achieved? How can the game still run fluidly even though the world is so big, and maybe even dynamic? How does the world change in memory while the player moves? Examples of this include the TES games (Skyrim, Oblivion, Morrowind), MMORPGs (World of Warcraft), Diablo, Titan Quest, Dwarf Fortress and Far Cry.
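
    The usual answer is chunk streaming: partition the world into a grid of chunks, keep only those within some radius of the player resident, and load/unload as the player crosses chunk boundaries (typically with the I/O on a background thread). A hedged C# sketch of the bookkeeping; Chunk, LoadChunk and UnloadChunk are placeholders for the game's own data and disk/network I/O:

        using System;
        using System.Collections.Generic;

        class ChunkStreamer
        {
            const int ChunkSize = 256; // world units per chunk (assumed)
            const int Radius = 2;      // keep a (2*Radius+1)^2 block resident
            readonly Dictionary<(int, int), Chunk> resident = new Dictionary<(int, int), Chunk>();

            public void Update(float playerX, float playerZ)
            {
                int cx = (int)Math.Floor(playerX / ChunkSize);
                int cz = (int)Math.Floor(playerZ / ChunkSize);

                // Load anything newly in range.
                for (int x = cx - Radius; x <= cx + Radius; x++)
                    for (int z = cz - Radius; z <= cz + Radius; z++)
                        if (!resident.ContainsKey((x, z)))
                            resident[(x, z)] = LoadChunk(x, z);

                // Unload anything that fell out of range.
                foreach (var key in new List<(int, int)>(resident.Keys))
                    if (Math.Abs(key.Item1 - cx) > Radius || Math.Abs(key.Item2 - cz) > Radius)
                    {
                        UnloadChunk(resident[key]);
                        resident.Remove(key);
                    }
            }

            Chunk LoadChunk(int x, int z) { return new Chunk(); } // stand-in for disk I/O
            void UnloadChunk(Chunk c) { }                          // stand-in for teardown
        }
        class Chunk { }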

  • Ruby: implementing alpha-beta pruning for tic-tac-toe

    - by DerNalia
    So, alpha-beta pruning seems to be the most efficient algorithm out there for tic-tac-toe, aside from hard-coding. However, I'm having problems converting the algorithm from the C++ example given at the link http://www.webkinesia.com/games/gametree.php :

        # based off http://www.webkinesia.com/games/gametree.php
        # (converted from the C++ code in the alpha-beta pruning section)
        LOSS = -1
        DRAW = 0   # returned if the game is a draw
        WIN = 1
        @next_move = 0

        def calculate_ai_next_move
          score = self.get_best_move(COMPUTER, WIN, LOSS)
          return @next_move
        end

        def get_best_move(player, alpha, beta)
          best_score = nil
          score = nil
          if not self.has_available_moves?
            return DRAW
          elsif self.has_this_player_won?(player)
            return WIN
          elsif self.has_this_player_won?(1 - player)
            return LOSS
          else
            best_score = alpha
            NUM_SQUARES.times do |square|
              break if best_score >= beta
              if self.state[square].nil?
                self.make_move_with_index(square, player)
                # set to the negative of the opponent's best move; we only need
                # the returned score, the returned move is irrelevant.
                score = -get_best_move(1 - player, -beta, -alpha)
                if score > best_score
                  @next_move = square
                  best_score = score
                end
                undo_move(square)
              end
            end
          end
          return best_score
        end

    The problem is that this is returning nil. Some support methods used above:

        WAYS_TO_WIN = [[0, 1, 2], [3, 4, 5], [6, 7, 8],
                       [0, 3, 6], [1, 4, 7], [2, 5, 8],
                       [0, 4, 8], [2, 4, 6]]

        def has_this_player_won?(player)
          result = false
          WAYS_TO_WIN.each { |solution| result = self.state[solution[0]] if contains_win?(solution) }
          return (result == player)
        end

        def contains_win?(ttt_win_state)
          ttt_win_state.each do |pos|
            return false if self.state[pos] != self.state[ttt_win_state[0]] or self.state[pos].nil?
          end
          return true
        end

        def make_move(x, y, player)
          self.set_square(x, y, player)
        end

  • Why does Unity3D crash in VirtualBox?

    - by FakeRainBrigand
    I'm running Unity3D in a virtual instance of Windows, using the VirtualBox software on Linux. I have the Guest Additions installed with DirectX support. I've tried using Windows XP SP3 32-bit and Windows 7 64-bit; my host is Ubuntu 12.04 64-bit. I installed and registered Unity on both. It loads up fine and then crashes my entire VirtualBox instance (the equivalent of a computer shutting off with no warning).

  • problem adding bumpmap to textured gluSphere in JOGL

    - by ChocoMan
    I currently have one texture on a gluSphere representing the Earth displaying perfectly, but I'm having trouble figuring out how to implement a bumpmap as well. The bumpmap resides in "res/planet/earth/earthbump1k.jpg". Here is the code I have for the regular texture:

        gl.glTranslatef(xPath, 0, yPath + zPos);
        gl.glColor3f(1.0f, 1.0f, 1.0f); // base color for earth
        earthGluSphere = glu.gluNewQuadric();
        colorTexture.enable(); // enable texture
        colorTexture.bind();   // bind texture
        // draw sphere...
        glu.gluDeleteQuadric(earthGluSphere);
        colorTexture.disable(); // texturing

        public void loadPlanetTexture(GL2 gl)
        {
            InputStream colorMap = null;
            try
            {
                colorMap = new FileInputStream("res/planet/earth/earthmap1k.jpg");
                TextureData data = TextureIO.newTextureData(colorMap, false, null);
                colorTexture = TextureIO.newTexture(data);
                colorTexture.getImageTexCoords();
                colorTexture.setTexParameteri(GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
                colorTexture.setTexParameteri(GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
                colorMap.close();
            }
            catch (IOException e)
            {
                e.printStackTrace();
                System.exit(1);
            }

            // Set material properties
            gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
            gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
            colorTexture.setTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_S);
            colorTexture.setTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_T);
        }

    How would I add the bumpmap as well to the same gluSphere?

  • Determining explosion radius damage - Circle to Rectangle 2D

    - by Paul Renton
    One of the Cocos2D games I am working on has circular explosion effects. These explosions need to deal a percentage of their maximum damage to all game characters (represented by rectangular bounding boxes, as the objects in question are tanks) within the explosion radius. So this boils down to circle-to-rectangle collision, and how far the closest rectangle edge is from the circle's center. I took a stab at figuring this out last night, but I believe there may be a better way. In particular, I don't know the best way to determine what percentage of damage to apply based on the calculated distance. Note: all tank objects have an anchor point of (0,0), so position refers to the bottom-left corner of the bounding box. The explosion point is the center point of the circular explosion.

        TankObject * tank = (TankObject*) gameSprite;
        float distanceFromExplosionCenter;
        // IMPORTANT :: All GameCharacters have an assumed (0,0) anchor
        if (explosionPoint.x < tank.position.x) {
            // Explosion to WEST of tank
            if (explosionPoint.y <= tank.position.y) {
                // Explosion SOUTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, tank.position);
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x, tank.position.y + tank.contentSize.height));
            } else {
                // Explosion center's y is between the bottom and top corners of the rect
                distanceFromExplosionCenter = tank.position.x - explosionPoint.x;
            } // end if
        } else if (explosionPoint.x > (tank.position.x + tank.contentSize.width)) {
            // Explosion to EAST of tank
            if (explosionPoint.y <= tank.position.y) {
                // Explosion SOUTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y));
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y + tank.contentSize.height));
            } else {
                // Explosion center's y is between the bottom and top corners of the rect
                distanceFromExplosionCenter = explosionPoint.x - (tank.position.x + tank.contentSize.width);
            } // end if
        } else {
            // Explosion is either north or south, between the left and right corners of the rect
            if (explosionPoint.y < tank.position.y) {
                // Explosion is South
                distanceFromExplosionCenter = tank.position.y - explosionPoint.y;
            } else {
                // Explosion is North
                distanceFromExplosionCenter = explosionPoint.y - (tank.position.y + tank.contentSize.height);
            } // end if
        } // end outer if

        if (distanceFromExplosionCenter < explosionRadius) {
            /* Collision :: the smaller the distance, the larger the damage */
            int damageToApply;
            if (self.directHit) {
                damageToApply = self.explosionMaxDamage + self.directHitBonusDamage;
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explosion-> DIRECT HIT with total damage %d", damageToApply);
            } else {
                // TODO adjust this... turning out negative for some reason...
                damageToApply = (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage);
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explosion-> Non direct hit collision with tank");
                CCLOG(@"Damage to apply is %d", damageToApply);
            } // end if
        } else {
            CCLOG(@"Explosion-> Explosion distance is larger than explosion radius");
        } // end if

    Questions: 1) Can this circle-to-rect collision algorithm be done better? Do I have too many checks? 2) How do I calculate the percentage-based damage? My current method occasionally generates negative numbers and I don't understand why (maybe I need more sleep!). But in my if statement I ask whether distance < explosion radius, so when control gets inside, distance/radius must be < 1, right? So 1 minus that intermediate calculation should not be negative. I appreciate any help/advice!
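
    The negative numbers are almost certainly operator precedence: in the non-direct-hit branch, the multiplication binds tighter than the subtraction, so the expression computes 1 - (ratio * maxDamage) instead of (1 - ratio) * maxDamage. A hedged sketch of the fix, plus a way to collapse the nine-way distance branch, in the same Objective-C/cocos2d style as the code above:

        // Sketch: parenthesize so the falloff factor is computed first.
        damageToApply = (int)((1 - (distanceFromExplosionCenter / explosionRadius)) * explosionMaxDamage);

        // The directional branching can also collapse to a clamp: the closest
        // point on the rect is the explosion point clamped to the rect's extents.
        float cx = MAX(tank.position.x, MIN(explosionPoint.x, tank.position.x + tank.contentSize.width));
        float cy = MAX(tank.position.y, MIN(explosionPoint.y, tank.position.y + tank.contentSize.height));
        distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(cx, cy));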

  • Interesting/Innovative Open Source tools for indie games [closed]

    - by Gastón
    Just out of curiosity, I want to know open source tools or projects that can add some interesting features to indie games, preferably features that could otherwise only be found in big-budget games. EDIT: As suggested by The Communist Duck and Joe Wreschnig, I'm putting the examples as answers. EDIT 2: Please do not post tools like PyGame, Inkscape, Gimp, Audacity, Slick2D, Phys2D, Blender (except for interesting plugins) and the like. I know they are great tools/libraries, and some would argue they are essential to developing good games, but I'm looking for rarer projects. It could be something really specific or niche, like generating realistic trees and plants, or realistic AI for animals.

  • 3d vertex translated onto 2d viewport

    - by Dan Leidal
    I have a spherical world defined by simple trigonometric functions that create triangles which are relatively similar in size and shape throughout. What I want to be able to do is use mouse input to target a range of vertices in the area around the mouse click, in order to manipulate those vertices in real time. I read a post on this forum about translating 3D world coordinates into the 2D viewport. It recommended multiplying the world vector coordinates by the view and then the projection, but it didn't include any code examples, and suffice to say I couldn't get any good results. Further information: I am using a look-at method for the view matrix. Does this cause a problem, and if so, is there a solution? If this isn't the problem, does anyone have a simple code example illustrating the translation of one vertex in a 3D world into 2D view space? I am using XNA.
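
    XNA has this built in: Viewport.Project runs a world-space point through the world, view and projection matrices and returns screen coordinates, with depth in Z, so a look-at view matrix is no problem at all. A minimal sketch, assuming your camera's existing view/projection matrices and illustrative mouseX/mouseY values:

        // Sketch: world-space point -> pixel coordinates on the 2D viewport.
        Vector3 worldPoint = new Vector3(1f, 2f, 3f); // any vertex position
        Vector3 screen = GraphicsDevice.Viewport.Project(
            worldPoint, projection, view, Matrix.Identity);
        // screen.X, screen.Y are pixels; screen.Z is depth in [0, 1].
        // Compare against the mouse position to find vertices near the click:
        bool nearClick = Vector2.Distance(
            new Vector2(screen.X, screen.Y), new Vector2(mouseX, mouseY)) < 10f;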

  • How to define a field of view for the entire map for shadows?

    - by Mehdi Bugnard
    I recently added "Shadow Mapping" in my XNA games to include shadows. I followed the nice and famous tutorial from "Riemers" : http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Shadow_map.php . This code work nice and I can see my source of light and shadow. But the problem is that my light source does not match the field of view that I created. I want the light covers the entire map of my game. I don't know why , but the light only affect 2-3 cubes of my map. ScreenShot: (the emission of light illuminates only 2-3 blocks and not the full map) Here is my code i create the fieldOfView for LightviewProjection Matrix: Vector3 lightDir = new Vector3(10, 52, 10); lightPos = new Vector3(10, 52, 10); Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(105, 50, 105), new Vector3(0, 1, 0)); Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 20f, 1000f); lightsViewProjectionMatrix = lightsView * lightsProjection; As you can see , my nearPlane and FarPlane are set to 20f and 100f . So i don't know why the light stop after 2 cubes. it's should be bigger Here is set the value to my custom effect HLSL in the shader file /* SHADOW VALUE */ effectWorld.Parameters["LightDirection"].SetValue(lightDir); effectWorld.Parameters["xLightsWorldViewProjection"].SetValue(Matrix.Identity * .lightsViewProjectionMatrix); effectWorld.Parameters["xWorldViewProjection"].SetValue(Matrix.Identity * arcadia.camera.View * arcadia.camera.Projection); effectWorld.Parameters["xLightPower"].SetValue(1f); effectWorld.Parameters["xAmbient"].SetValue(0.3f); Here is my custom HLSL shader effect file "*.fx" // This sample uses a simple Lambert lighting model. float3 LightDirection = normalize(float3(-1, -1, -1)); float3 DiffuseLight = 1.25; float3 AmbientLight = 0.25; uniform const float3 DiffuseColor = 1; uniform const float Alpha = 1; uniform const float3 EmissiveColor = 0; uniform const float3 SpecularColor = 1; uniform const float SpecularPower = 16; uniform const float3 EyePosition; // FOG attribut uniform const float FogEnabled ; uniform const float FogStart ; uniform const float FogEnd ; uniform const float3 FogColor ; float3 cameraPos : CAMERAPOS; texture Texture; sampler Sampler = sampler_state { Texture = (Texture); magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; }; texture xShadowMap; sampler ShadowMapSampler = sampler_state { Texture = <xShadowMap>; magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = clamp; AddressV = clamp; }; /* *************** */ /* SHADOW MAP CODE */ /* *************** */ struct SMapVertexToPixel { float4 Position : POSITION; float4 Position2D : TEXCOORD0; }; struct SMapPixelToFrame { float4 Color : COLOR0; }; struct SSceneVertexToPixel { float4 Position : POSITION; float4 Pos2DAsSeenByLight : TEXCOORD0; float2 TexCoords : TEXCOORD1; float3 Normal : TEXCOORD2; float4 Position3D : TEXCOORD3; }; struct SScenePixelToFrame { float4 Color : COLOR0; }; float DotProduct(float3 lightPos, float3 pos3D, float3 normal) { float3 lightDir = normalize(pos3D - lightPos); return dot(-lightDir, normal); } SSceneVertexToPixel ShadowedSceneVertexShader(float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0, float3 inNormal : NORMAL) { SSceneVertexToPixel Output = (SSceneVertexToPixel)0; Output.Position = mul(inPos, xWorldViewProjection); Output.Pos2DAsSeenByLight = mul(inPos, xLightsWorldViewProjection); Output.Normal = normalize(mul(inNormal, (float3x3)World)); Output.Position3D = mul(inPos, 
World); Output.TexCoords = inTexCoords; return Output; } SScenePixelToFrame ShadowedScenePixelShader(SSceneVertexToPixel PSIn) { SScenePixelToFrame Output = (SScenePixelToFrame)0; float2 ProjectedTexCoords; ProjectedTexCoords[0] = PSIn.Pos2DAsSeenByLight.x / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f; ProjectedTexCoords[1] = -PSIn.Pos2DAsSeenByLight.y / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f; float diffuseLightingFactor = 0; if ((saturate(ProjectedTexCoords).x == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords).y == ProjectedTexCoords.y)) { float depthStoredInShadowMap = tex2D(ShadowMapSampler, ProjectedTexCoords).r; float realDistance = PSIn.Pos2DAsSeenByLight.z / PSIn.Pos2DAsSeenByLight.w; if ((realDistance - 1.0f / 100.0f) <= depthStoredInShadowMap) { diffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal); diffuseLightingFactor = saturate(diffuseLightingFactor); diffuseLightingFactor *= xLightPower; } } float4 baseColor = tex2D(Sampler, PSIn.TexCoords); Output.Color = baseColor*(diffuseLightingFactor + xAmbient); return Output; } SMapVertexToPixel ShadowMapVertexShader(float4 inPos : POSITION) { SMapVertexToPixel Output = (SMapVertexToPixel)0; Output.Position = mul(inPos, xLightsWorldViewProjection); Output.Position2D = Output.Position; return Output; } SMapPixelToFrame ShadowMapPixelShader(SMapVertexToPixel PSIn) { SMapPixelToFrame Output = (SMapPixelToFrame)0; Output.Color = PSIn.Position2D.z / PSIn.Position2D.w; return Output; } /* ******************* */ /* END SHADOW MAP CODE */ /* ******************* */ / For rendering without instancing. technique ShadowMap { pass Pass0 { VertexShader = compile vs_2_0 ShadowMapVertexShader(); PixelShader = compile ps_2_0 ShadowMapPixelShader(); } } technique ShadowedScene { /* pass Pass0 { VertexShader = compile vs_2_0 VSBasicTx(); PixelShader = compile ps_2_0 PSBasicTx(); } */ pass Pass1 { VertexShader = compile vs_2_0 ShadowedSceneVertexShader(); PixelShader = compile ps_2_0 ShadowedScenePixelShader(); } } technique SimpleFog { pass Pass0 { VertexShader = compile vs_2_0 VSBasicTx(); PixelShader = compile ps_2_0 PSBasicTx(); } } I edited my fx file , for show you only information and functions about the shadow ;-)

  • Drawing isometric map in canvas / javascript

    - by Dave
    I have a problem with my map design for my tiles. I set a player position, which is meant to be the middle tile that the canvas is looking at; however, the calculation to put the tiles at x:y pixel locations is completely messed up for me, and I don't know how to fix it. This is what I tried:

        var offset_x = 0;    // used for scrolling on x
        var offset_y = 0;    // used for scrolling on y
        var prev_mousex = 0; // for movePos function
        var prev_mousey = 0; // for movePos function

        function movePos(e) {
            if (prev_mousex === 0 && prev_mousey === 0) {
                prev_mousex = e.pageX;
                prev_mousey = e.pageY;
            }
            offset_x = offset_x + (e.pageX - prev_mousex);
            offset_y = offset_y + (e.pageY - prev_mousey);
            prev_mousex = e.pageX;
            prev_mousey = e.pageY;
            run = true;
        }

        player_posx = 5;
        player_posy = 55;
        ct = 19;

        for (i = (player_posx - ct); i < (player_posx + ct); i++) {     // horizontal
            for (j = (player_posy - ct); j < (player_posy + ct); j++) { // vertical
                // img[0] is 64x64, but the graphic is 64x32; the rest is alpha space
                var x = (i - j) * (img[0].height / 2) + (canvas.width / 2) - (img[0].width / 2);
                var y = (i + j) * (img[0].height / 4);
                var abposx = x - offset_x;
                var abposy = y - offset_y;
                ctx.drawImage(img[0], abposx, abposy);
            }
        }

    Based on these numbers, the first renderable tile is i = 0 and j = 36, as negative indices are not in the array. But for i = 0 and j = 36 the position it calculates is -1120 : 592. Does anyone know how to center it on the canvas view properly?
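
    The formula above centers tile (0,0), not the player's tile: the (canvas.width / 2) term is constant, so moving player_posx/player_posy never shifts the projection. One way to fix it is to project the player's own tile once and subtract that from every tile, so the player lands at the canvas centre. A hedged sketch in the same JavaScript style (only the x/y lines change):

        // Sketch: project the player's tile, then offset all tiles by it.
        var px = (player_posx - player_posy) * (img[0].height / 2);
        var py = (player_posx + player_posy) * (img[0].height / 4);
        var x = (i - j) * (img[0].height / 2) - px + (canvas.width / 2) - (img[0].width / 2);
        var y = (i + j) * (img[0].height / 4) - py + (canvas.height / 2);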
