Search Results

Search found 31839 results on 1274 pages for 'plugin development'.

Page 543/1274

  • Collision Detection with SAT: False Collision for Diagonal Movement Towards Vertical Tile-Walls?

    - by Macks
    Edit: Problem solved! Big thanks to Jonathan, who pointed me in the right direction. Sean describes the method I used in a different thread; big thanks to him as well! :) Here is how I solved my problem: if a collision is registered by my SAT method, only fire the collision event on my character if there are no neighbouring solid tiles in the direction of the returned minimum translation vector. I'm developing my first tile-based 2D game with JavaScript. To learn the basics, I decided to write my own "game engine". I have successfully implemented collision detection using the separating axis theorem, but I've run into a problem that I can't quite wrap my head around. If I press the [up] and [left] arrow keys simultaneously, my character moves diagonally towards the upper left. If he hits a horizontal wall, he'll just keep moving in the x-direction. The same goes for [up] and [right] as well as for downward-diagonal movements; it works as intended: http://i.stack.imgur.com/aiZjI.png Diagonal movement works fine for horizontal walls, for both left and right movement. However, this does not work for vertical walls. Instead of keeping the movement in the y-direction, he'll just stop as soon as he "enters" a new tile on the y-axis. So for some reason SAT thinks my character is colliding vertically with tiles from vertical walls: http://i.stack.imgur.com/XBEKR.png My character stops because he thinks that he is colliding vertically with tiles from the wall on the right. This only occurs when moving in the top-right direction towards the right wall, or in the top-left direction towards the left wall. Bottom-right and bottom-left movement work: the character keeps moving in the y-direction as intended. Is this inherent to the way SAT works, or is there a problem with my implementation? What can I do to solve my problem? Oh yeah, my character is displayed as a circle, but he's actually a rectangular polygon for the collision detection. Thank you very much for your help.
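
    Since the fix is described above only in words, here is a hedged sketch of that neighbour-tile check, written in C# purely for illustration (the question's engine is JavaScript; the isSolid callback, tile coordinates and MTV components are stand-ins for the engine's own data):

        using System;

        static class TileCollisionFilter
        {
            // Accept a SAT hit against a tile only if there is no solid neighbour in the
            // direction of the minimum translation vector (MTV). This discards the false
            // vertical hits produced by the internal edges of a vertical wall of tiles.
            public static bool ShouldCollide(int tileX, int tileY, float mtvX, float mtvY,
                                             Func<int, int, bool> isSolid)
            {
                int stepX = Math.Sign(mtvX);   // direction the MTV would push the character
                int stepY = Math.Sign(mtvY);

                if (stepX != 0 && isSolid(tileX + stepX, tileY))
                    return false;              // the "hit" edge is shared with another solid tile
                if (stepY != 0 && isSolid(tileX, tileY + stepY))
                    return false;

                return true;
            }
        }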

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • Sprite animation system height recalculation has some issues

    - by Nicolas Martel
    Basically, the way it works is that it updates the frame to show every, let's say, 24 ticks, and every time the frame updates, it recalculates the height and width of the new sprite to render so that my gravity logic and related systems work correctly. But the problem I am having now is a bit hard to explain in words only, so I will use this picture to assist me: The picture. So what I need, basically, is that if, let's say, I froze the sprite at the first frame, then unfroze it and froze it at the second frame, the second frame's sprite (let's say it's a prone move) would simply stand on the foothold without gravity kicking in, and when switching back, the first sprite would go back onto the foothold as normal without being under the foothold. I had two ideas on how to do this, but I'm not sure they are the most efficient ways to do it, so I want to hear your input.

    Read the article

  • Low-level game engine renderer design

    - by Mark Ingram
    I'm piecing together the beginnings of an extremely basic engine which will let me draw arbitrary objects (SceneObject). I've got to the point where I'm creating a few sensible-sounding classes, but as this is my first outing into game engines, I've got the feeling I'm overlooking things. I'm familiar with compartmentalising larger portions of the code so that individual sub-systems don't overly interact with each other, but I'm thinking more of the low-level stuff, starting from vertices and working up. So if I have a Vertex class, I can combine that with a list of indices to make a Mesh class. How does the engine determine identical meshes for objects? Or is that left to the level designer? Once we have a Mesh, that can be contained in the SceneObject class. And a list of SceneObjects can be placed into the Scene to be drawn. Right now I'm only using OpenGL, but I'm aware that I don't want to be tying OpenGL calls right into base classes (for example, when updating the vertices in the Mesh, I don't want to be calling glBufferData, etc.). Are there any good resources that discuss these issues? Are there any "common" hierarchies which should be used?
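
    On keeping glBufferData and friends out of the base classes, here is a hedged sketch of one common split, with entirely hypothetical names: Mesh and SceneObject hold pure data, and only implementations of a small renderer interface ever touch the graphics API (shown in C# for brevity; the shape is the same in C++):

        using System.Collections.Generic;

        public struct Vertex
        {
            public float X, Y, Z;    // position
            public float U, V;       // texture coordinates
        }

        // Pure data: no OpenGL calls live here.
        public class Mesh
        {
            public Vertex[] Vertices;
            public int[] Indices;
        }

        public class SceneObject
        {
            public Mesh Mesh;                       // shared meshes reference the same instance
            public float[] World = new float[16];   // 4x4 transform
        }

        // The only layer allowed to call glBufferData, glDrawElements, etc.
        public interface IRenderer
        {
            void Upload(Mesh mesh);       // (re)create GPU buffers when mesh data changes
            void Draw(SceneObject obj);   // bind buffers and issue the draw call
        }

        public class Scene
        {
            public readonly List<SceneObject> Objects = new List<SceneObject>();

            public void Draw(IRenderer renderer)
            {
                foreach (var obj in Objects)
                    renderer.Draw(obj);
            }
        }

    Mesh sharing then becomes an asset-loading question rather than an engine one: the loader hands the same Mesh instance to every SceneObject that uses it, instead of the engine trying to detect identical meshes after the fact.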

    Read the article

  • glColor3f Setting colour

    - by Aaron
    This draws a white vertical line from 640 to 768 at x512: glDisable(GL_TEXTURE_2D); glBegin(GL_LINES); glColor3f((double)R/255,(double)G/255,(double)B/255); glVertex3f(SX, -SPosY, 0); // origin of the line glVertex3f(SX, -EPosY, 0); // ending point of the line glEnd(); glEnable(GL_TEXTURE_2D); This works, but after having a problem where it wouldn't draw the line white (or in any colour passed), I discovered that disabling GL_TEXTURE_2D before drawing the line, and then re-enabling it afterwards for other things, fixed it. I want to know: is this a normal step a programmer might take, or is it highly inefficient? I don't want to be causing any slowdowns due to a mistake =) Thanks

    Read the article

  • How is constant buffer allocation handled in DX11?

    - by Marek
    I'm starting with DX11 and I'm not sure if I'm doing things right. I want to have both the pixel and vertex shader programs in one file. Both use some shared and some different constant buffers. So it looks like this: Shader.fx cbuffer ForVS : register(b0) { float4x4 wvp; }; cbuffer ForVSandPS : register(b1) { float4 stuff; float4 stuff2; }; cbuffer ForVS2 : register(b2) { float4 stuff; float4 stuff2; }; cbuffer ForPS : register(b3) { float4 stuff; float4 stuff2; }; .... And in code I use mContext->VSSetConstantBuffers( 0, 1, bufferVS); mContext->VSSetConstantBuffers( 1, 1, bufferVS_PS); mContext->VSSetConstantBuffers( 2, 1, bufferVS2); mContext->PSSetConstantBuffers( 1, 1, bufferVS_PS); mContext->PSSetConstantBuffers( 3, 1, bufferPS); The numbering of the buffers in the PS is what bugs me: is it all right to bind arbitrary slots to shaders (in this example 1 and 3)? Does that mean it still uses just two buffers, or does it initialize the 0 and 2 buffer pointers to empty? Thank you.

    Read the article

  • Checking whether a specific key was pressed in enchantJS

    - by MxyL
    I am using enchantJS and would like to use the letters and numbers as well as the numpad on a keyboard to do different things (e.g. hotkeys). From this page http://users.csc.calpoly.edu/~foaad/enchant/guide/playerInput.html: By default, enchant.js provides input listeners for six buttons: UP, DOWN, LEFT, RIGHT, A, and B. By default, the directions are bound to the arrow keys. Any of the six buttons may also be bound to any key with an ASCII value. We’ll address that later. So enchant provides the ability to bind keys to different inputs such as up, down, left, right... but how can I simply check whether the D or X key was pressed, and if so, perform certain actions based on that event?

    Read the article

  • Simple collision detection in Unity 2D

    - by N1ghtshade3
    I realise other posts exist with this topic yet none have gone into enough detail for me. I am attempting to create a 2D game in Unity using C# as my scripting language. Basically I have two objects, player and bomb. Both were created simply by dragging the respective PNG to the stage. I have set up touch controls to move player left and right; gravity of any kind is not needed as I only require it to move x units when I tap either the left or right side of the screen. This movement is stored in a script called playerController.cs and works just fine. I also have a variable health = 3 for player, which is stored in healthScript.cs. I am now at a point where I am stuck. I would like it so that when player collides with bomb, health decreases by one and the bomb object is destroyed. So what I tried doing is using a new script called playerPhysics.cs, I added the following: void OnCollisionEnter2D(Collision2D coll){ if(coll.gameObject.name=="bomb") GameObject.Destroy("bomb"); healthScript.health -= 1; } While I'm fairly sure I don't know the proper way to reference a variable in another script and that's why the health didn't decrease when I collided, bomb never disappeared from the stage so I'm thinking there's also a problem with my collision. Initially, I had simply attached playerPhysics.cs to player. After searching around though, it appeared as though player also needed a rigidBody attached to it, so I did that. Still no luck. I tried using a circleCollider (player is a circle), using a rigidBody2D, and using all manner of colliders on one and/or both of the objects. If you could please explain what colliders (if any) should be attached to which objects and whether I need to change my script(s), that would be much more helpful than pointing me to one of the generic documentation examples I've already read. Also, if it would be simple to fix the health thing not working that would be an added bonus but not exactly the focus of this question. Bear in mind that this game is 2D; I'm not sure if that changes anything. Thanks!
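
    A hedged sketch of how the handler could look, assuming playerPhysics.cs sits on the player, the bomb GameObject is literally named "bomb", and the health value is a public field on a component here called HealthScript (that class name is an assumption). The two likely problems in the snippet above are that Destroy is handed the string "bomb" instead of the collided GameObject, and that the missing braces make the health decrement run on every collision:

        using UnityEngine;

        public class PlayerPhysics : MonoBehaviour
        {
            void OnCollisionEnter2D(Collision2D coll)
            {
                if (coll.gameObject.name == "bomb")
                {
                    // Destroy the actual GameObject we collided with, not a string.
                    Destroy(coll.gameObject);

                    // Fetch the health component on this player and decrement it.
                    HealthScript health = GetComponent<HealthScript>();
                    if (health != null)
                        health.health -= 1;
                }
            }
        }

    For OnCollisionEnter2D to fire at all, both objects need a 2D collider (for example a CircleCollider2D on the player and a BoxCollider2D on the bomb) and at least one of them, typically the moving player, needs a Rigidbody2D. If the player is moved by setting its position directly, making that Rigidbody2D kinematic and using trigger colliders with OnTriggerEnter2D instead is a common alternative.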

    Read the article

  • Rotating multiple points at once in 2D

    - by Deukalion
    I currently have an editor that creates shapes out of (X, Y) coordinates and then triangulates them to make up a shape of those points. What will I have to do to rotate all of those points simultaneously? Say I click the screen in my editor: it locates the point where I've clicked, and if I move the mouse up or down from that point it calculates rotation on the X and Y axes depending on the new position relative to the first position; say I move up 10 on the Y axis, it rotates that way, and the same for X. Or, more simply, some way to enter a rotation in degrees: 90, 180, 270, 360, for example. I use VertexPositionColor at the moment. What are the best algorithms or methods that I can look at to rotate multiple points in 2D at once? Also: since this is an editor I do not want to rotate it on the Matrix, so if I want to rotate the whole shape 180 degrees, that's the new "position" of all the points, so that's the new rotation = 0, for example. Later on I probably will use World Matrix rotation for this, but not now.
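
    A hedged XNA-flavoured sketch of rotating all points at once around a chosen pivot (the clicked point, or the shape's centre); it rewrites the stored positions, which matches the requirement that the result becomes the new rotation = 0 state:

        using System;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class ShapeEditor
        {
            // Rotates every point of the shape around 'pivot' by 'radians', in place.
            public static void RotatePoints(VertexPositionColor[] points, Vector2 pivot, float radians)
            {
                float cos = (float)Math.Cos(radians);
                float sin = (float)Math.Sin(radians);

                for (int i = 0; i < points.Length; i++)
                {
                    // Translate so the pivot is the origin...
                    float x = points[i].Position.X - pivot.X;
                    float y = points[i].Position.Y - pivot.Y;

                    // ...apply the standard 2D rotation, then translate back.
                    points[i].Position.X = pivot.X + x * cos - y * sin;
                    points[i].Position.Y = pivot.Y + x * sin + y * cos;
                }
            }
        }

    For fixed steps, pass MathHelper.ToRadians(90), MathHelper.ToRadians(180) and so on; the pivot can be the clicked point or the average of all points.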

    Read the article

  • Orthographic Projection with variable FOV

    - by cubrman
    We are building a game with an orthographic view. The problem we face is the fact that with different resolutions you can see a different area of the game world, e.g. if you have a higher resolution you can see more around you. To solve this we currently use a common scale factor that every model is scaled by, depending on resolution. But this has drawbacks when drawing shadows - I cannot set a higher view angle for the orthographic shadow camera, while when using the perspective shadow camera I get significantly worse shadow quality. So the question is: is there any way to control FOV when using orthographic projection, or, more specifically, what is the easiest way to scale the world uniformly up or down with an orthographic projection matrix? I saw that in 3ds Max you can control FOV for an orthographic camera; I wonder how they implemented it.
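
    An orthographic projection has no FOV angle; the equivalent knob is the width and height of the view volume. Here is a hedged XNA-style sketch of scaling the world uniformly by dividing those extents by a zoom factor (zoom > 1 magnifies, zoom < 1 shows more of the world); the same matrix can be used for the shadow camera so it always covers the same area:

        using Microsoft.Xna.Framework;

        static class OrthoCamera
        {
            // viewWidth/viewHeight are the visible extents in world units; zoom scales them uniformly.
            public static Matrix CreateZoomedOrthographic(float viewWidth, float viewHeight,
                                                          float zoom, float nearPlane, float farPlane)
            {
                // Dividing the extents by zoom is equivalent to scaling the whole world uniformly:
                // a larger zoom shows a smaller slice of the world, magnified.
                return Matrix.CreateOrthographic(viewWidth / zoom, viewHeight / zoom,
                                                 nearPlane, farPlane);
            }
        }

    Keeping viewWidth and viewHeight fixed in world units, rather than deriving them from the backbuffer resolution, also removes the original problem of higher resolutions seeing more of the map.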

    Read the article

  • Code Coverage for Maven Integrated in NetBeans IDE 7.2

    - by Geertjan
    In NetBeans IDE 7.2, JaCoCo is supported natively, i.e., out of the box, as a code coverage engine for Maven projects, since Cobertura does not work with JDK 7 language constructs. (Although, note that Cobertura is supported as well in NetBeans IDE 7.2.) It isn't part of NetBeans IDE 7.2 Beta, so don't even try there; you need some development build from after that. I downloaded the latest development build today. To enable JaCoCo features in NetBeans IDE, you need do no different to what you'd do when enabling JaCoCo in Maven itself, which is rather wonderful. In both cases, all you need to do is add this to the "plugins" section of your POM: <plugin> <groupId>org.jacoco</groupId> <artifactId>jacoco-maven-plugin</artifactId> <version>0.5.7.201204190339</version> <executions> <execution> <goals> <goal>prepare-agent</goal> </goals> </execution> <execution> <id>report</id> <phase>prepare-package</phase> <goals> <goal>report</goal> </goals> </execution> </executions> </plugin> Now you're done and ready to examine the code coverage of your tests, whether they are JUnit or TestNG. At this point, i.e., for no other reason than that you added the above snippet into your POM, you will have a new Code Coverage menu when you right-click on the project node: If you click Show Report above, the Code Coverage Report window opens. Here, once you've run your tests, you can actually see how many classes have been covered by your tests, which is pretty useful since 100% tests passing doesn't mean much when you've only tested one class, as you can see very graphically below: Then, when you click the bars in the Code Coverage Report window, the class under test is shown, with the methods for which tests exist highlighted in green and those that haven't been covered in red: (Note: Of course, striving for 100% code coverage is a bit nonsensical. For example, writing tests for your getters and setters may not be the most useful way to spend one's time. But being able to measure, and visualize, code coverage is certainly useful regardless of the percentage you're striving to achieve.) Best of all about all this is that everything you see above is available out of the box in NetBeans IDE 7.2. Take a look at what else NetBeans IDE 7.2 brings for the first time to the world of Maven: http://wiki.netbeans.org/NewAndNoteworthyNB72#Maven

    Read the article

  • Deterministic Multiplayer RTS game questions?

    - by Martin K
    I am working on a cross-platform multiplayer RTS game where the different clients and a server (Flash and C#) all need to stay deterministically synchronised. To deal with floating-point inconsistencies, I've come across this method: http://joshblog.net/2007/01/30/flash-floating-point-number-errors/#comment-49912 which basically truncates off the nondeterministic part: return Math.round(1000 * float) / 1000; However, my concern is that every time there is a division, there is a further chance of creating additional floating-point errors, in essence making it worse. So it occurred to me, how about something like this: function floatSafe(number:Number) : Number { return Math.round(number * 1024) / 1024; } i.e. dividing only by powers of 2? What do you think? Ironically, with the above method I got less accurate results: trace( floatSafe(3/5) ) // 0.599609375 whereas with the other method (dividing by 1000), or just tracing the raw value, I am getting 3/5 = 0.6. Or maybe that's what 3/5 actually is, i.e. 0.6 cannot really be represented with a floating-point datatype, and this would be more consistent across different platforms?
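
    A different route that sidesteps the rounding question entirely is to keep every simulation value in integer fixed point and convert to floats only for rendering; integer maths is bit-exact on every platform, so the Flash and C# sides stay in lockstep without truncation tricks. A hedged C# sketch (the 10-bit fraction is an arbitrary choice):

        // Simple fixed point: values are stored as ints, with the low 10 bits as the fraction.
        public static class Fixed
        {
            public const int Shift = 10;
            public const int One = 1 << Shift;   // 1.0 in fixed point

            public static int FromFloat(float v) { return (int)(v * One); }   // input/rendering only
            public static float ToFloat(int v) { return v / (float)One; }

            public static int Mul(int a, int b) { return (int)(((long)a * b) >> Shift); }
            public static int Div(int a, int b) { return (int)(((long)a << Shift) / b); }
        }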

    Read the article

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling - I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. This is not related to the texture's image format or OpenGL color buffers, it seems like it's a precision error. I don't know what specifically causes the precision to fail - it doesn't seem like it's just the size of geometry that causes this distortion, because simply scaling vertex position passed to the the vertex shader does not solve the issue. Here are some examples of the texture distortion: Distorted Texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png The texture issue is limited to small scale geometry on OpenGL ES 2.0, otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin of XYZ(0,0,0) These texture issues do not occur on desktop OpenGL (works fine under Windows XP, Windows 7, and Mac OS X) I've only seen the problem occur on Android, iPhone, or WebGL(which is similar to OpenGL ES 2.0) All textures are power of 2 but the problem still occurs Scaling the vertex data - The values of a vertex's X Y Z location are in the range of: -65536 to +65536 floating point I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating point precision, but this didn't fix or lessen the texture distortion issue Scaling the modelview or scaling the projection matrix does not help Changing texture filtering options does not help Disabling mipmapping, or using GL_NEAREST/GL_LINEAR does nothing Enabling/disabling anisotropic does nothing The banding effect still occurs even when using GL_CLAMP Dividing the texture coords passed to the vertex shader and then multiplying them back to the correct values in the fragment shader, also does not work precision highp sampler2D, highp float, highp int - in the fragment or the vertex shader didn't change anything (lowp/mediump did not work either) I'm thinking this problem has to have been solved at one point - Seeing that OpenGL ES 2.0 -based games have been able to render large-scale, highly detailed geometry

    Read the article

  • Rotate 3D Model from a custom position

    - by Nipuna Silva
    I have a 3D model like the one above, which I want to rotate around a given location (pointed in red), but I can only rotate it around the middle. How can I rotate it around a custom point? Edit: I was able to rotate the model around the below position by getting the radius of the model and applying it to the world matrix: Vector3 point = new Vector3(-radius, 0, 0); world = Matrix.CreateTranslation(-radius, 0, 0); But now I cannot change the position of the object, and it is always centered in the middle of the screen. I think that's because I applied the above code. How can I place it anywhere I want?
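
    A hedged XNA-style sketch of the usual matrix order for this: move the chosen pivot to the origin, rotate, move it back, and only then translate to wherever the model should sit, so the rotation point and the on-screen position stay independent:

        using Microsoft.Xna.Framework;

        static class PivotRotation
        {
            // pivot: the point (in model space) to rotate around, e.g. new Vector3(-radius, 0, 0)
            // position: where the model should sit in the world
            // angle: rotation around the Y axis, in radians
            public static Matrix BuildWorld(Vector3 pivot, Vector3 position, float angle)
            {
                return Matrix.CreateTranslation(-pivot)      // bring the pivot to the origin
                     * Matrix.CreateRotationY(angle)         // rotate around it
                     * Matrix.CreateTranslation(pivot)       // put the pivot back where it was
                     * Matrix.CreateTranslation(position);   // then place the model in the world
            }
        }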

    Read the article

  • Are there any preexisting maps for a Minecraft-like level I could use in my engine?

    - by Rishav Sharan
    I am working on a tiny cube-based engine like Minecraft. I was wondering if there is a way for me to get large blocky terrain in a text format that I can use for rendering in my engine. I don't want to start on procedural generation now; I just want a resource where I can get the coordinate list for a pretty-looking terrain. Alternatively, is it possible for me to parse Minecraft world files and use that data to generate terrain/buildings in my code?

    Read the article

  • efficient collision detection - tile based html5/javascript game

    - by Tom Burman
    I'm building a basic RPG game and am onto collisions/pickups etc. now. It's tile-based and I'm using HTML5 and JavaScript. I use a 2D array to create my tilemap. I'm currently using a switch statement on whatever key has been pressed to move the player. Inside the switch statement I have if statements to stop the player going off the edge of the map and viewport, and also, if the player is about to land on a tile with tileID 3, the player stops. Here is the statement: canvas.addEventListener('keydown', function(e) { console.log(e); var key = null; switch (e.which) { case 37: // Left if (playerX > 0) { playerX--; } if(board[playerX][playerY] == 3){ playerX++; } break; case 38: // Up if (playerY > 0) playerY--; if(board[playerX][playerY] == 3){ playerY++; } break; case 39: // Right if (playerX < worldWidth) { playerX++; } if(board[playerX][playerY] == 3){ playerX--; } break; case 40: // Down if (playerY < worldHeight) playerY++; if(board[playerX][playerY] == 3){ playerY--; } break; } viewX = playerX - Math.floor(0.5 * viewWidth); if (viewX < 0) viewX = 0; if (viewX+viewWidth > worldWidth) viewX = worldWidth - viewWidth; viewY = playerY - Math.floor(0.5 * viewHeight); if (viewY < 0) viewY = 0; if (viewY+viewHeight > worldHeight) viewY = worldHeight - viewHeight; }, false); My question is: is there a more efficient way of handling collisions than loads of if statements for each key? The reason I ask is because I plan on having many items that the player will need to be able to pick up or not walk through, like walls, cliffs, etc. Thanks for your time and help. Tom
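
    One way to cut down the per-key if blocks is to reduce every key to a direction and run a single walkability test; a hedged sketch of the idea, written in C# for illustration although the game itself is JavaScript (the board array and the tile ID 3 mirror the question's):

        using System.Collections.Generic;

        static class TileMovement
        {
            // Every tile ID the player may not enter (walls, cliffs, ...).
            static readonly HashSet<int> SolidTiles = new HashSet<int> { 3 };

            // One bounds check plus one walkability test replaces the per-key if blocks.
            public static bool TryMove(int[,] board, ref int playerX, ref int playerY, int dx, int dy)
            {
                int nx = playerX + dx;
                int ny = playerY + dy;

                // Stay inside the map.
                if (nx < 0 || ny < 0 || nx >= board.GetLength(0) || ny >= board.GetLength(1))
                    return false;

                // Blocked by a solid tile?
                if (SolidTiles.Contains(board[nx, ny]))
                    return false;

                playerX = nx;
                playerY = ny;
                return true;
            }
        }

    Each key then maps to a (dx, dy) pair, and pickups can be handled by inspecting the destination tile's ID once after a successful move instead of adding another if per key.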

    Read the article

  • Resources on expected behaviour when manipulating 3D objects with the mouse

    - by sebf
    Hello, In my animation editor, I have a 3D gizmo that sits on the origin of a bone; the user drags the mesh around to rotate the bone. I've found that translating the 2D movements of the mouse into sensible 3D transforms is not nearly as simple as I'd hoped. For example, what is intuitively 'up' or 'down'? How should the magnitude of rotations change with respect to dX/dY? How to implement this? What happens when the gizmo changes position or orientation with respect to the camera? etc. So far, with trial and error, I've written something (very) simple that works 70% of the time. I could probably continue to hack at it until I made something that works 99% of the time, but there must be someone who needed the same thing, and spent the time coming up with a much more elegant solution. Does anyone know of one?

    Read the article

  • SWF file not playing after being published

    - by rsquare
    I'm trying to run the "connector" example that comes bundled with the SmartFoxServer 2X downloads. There it connects to the server and loads the correct configuration file. When I run it in Adobe Flash Professional 5, it runs correctly and connects to the server, but after being published as an SWF movie, it doesn't work. It loads the configuration file but can't connect, and gives a connection failure error: ERROR 2048. This is the example I'm talking about.

    Read the article

  • Calculating an orbit and approach velocities

    - by Mob
    I have drones in my game that need to approach and orbit a node and shoot at it. The problem is I want to stay away from a real physics simulation, meaning I don't want to give the node and drone a mass and the drone's thrusters a force. I just want to find the best way to approach and then enter orbit. There was a pretty good answer about using Bézier curves and doing it that way, but that is essentially a tween between two fixed points, and the nodes are also moving as the drones enter orbit.
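
    A hedged, purely kinematic sketch (no masses or forces) of one way to get approach-then-orbit behaviour: steer toward the orbit ring while far from it and blend into the tangent as the drone gets close; because the result is recomputed every frame from the node's current position, it copes with a moving node:

        using System;
        using Microsoft.Xna.Framework;

        static class DroneSteering
        {
            // Returns a velocity for this frame; 'speed' is the drone's constant travel speed.
            public static Vector2 OrbitVelocity(Vector2 dronePos, Vector2 nodePos,
                                                float orbitRadius, float speed)
            {
                Vector2 toNode = nodePos - dronePos;
                float dist = toNode.Length();
                if (dist < 0.001f)
                    return Vector2.Zero;                          // sitting on the node; nothing sensible to do

                Vector2 radial = toNode / dist;                   // points at the node
                Vector2 tangent = new Vector2(-radial.Y, radial.X); // orbit direction (counter-clockwise)

                // How far off the desired ring we are, as a 0..1 blend weight.
                float error = dist - orbitRadius;
                float w = MathHelper.Clamp(Math.Abs(error) / orbitRadius, 0f, 1f);

                // Far away: mostly fly toward (or away from) the ring. On the ring: mostly orbit.
                Vector2 dir = Vector2.Normalize(tangent * (1f - w) + radial * (Math.Sign(error) * w));
                return dir * speed;
            }
        }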

    Read the article

  • Sending changes to a terrain heightmap over UDP

    - by Floomi
    This is a more conceptual, thinking-out-loud question than a technical one. I have a 3D heightmapped terrain as part of a multiplayer RTS that I would like to terraform over a network. The terraforming will be done by units within the gameworld; the player will paint a "target heightmap" that they'd like the current terrain to resemble and units will deform towards that on their own (a la Perimeter). Given my terrain is 257x257 vertices, the naive approach of sending heights when they change will flood the bandwidth very quickly - updating a quarter of the terrain every second will hit ~66kB/s. This is clearly way too much. My next thought was to move to a brush-based system, where you send e.g. the centre of a circle, its radius, and some function defining the influence of the brush from the centre going outwards. But even with reliable UDP the "start" and "stop" messages could still be delayed. I guess I could compare timestamps and compensate for this, although it'd likely mean that clients would deform verts too much on their local simulations and then have to smooth them back to the correct heights. I could also send absolute vert heights in the "start" and "stop" messages to guarantee correct data on the clients. Alternatively I could treat brushes in a similar way to units, and do the standard position + velocity + client-side prediction jazz on them, with the added stipulation that they deform terrain within a certain radius around them. The server could then intermittently do a pass and send (a subset of) recently updated verts to clients as and when there's bandwidth to spare. Any other suggestions, or indications that I'm on the right (or wrong!) track with any of these ideas would be greatly appreciated.
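
    To make the brush idea concrete, here is a hedged sketch of the kind of compact message a client or server might relay instead of raw heights (every field is illustrative, not an existing protocol):

        // Sent whenever a terraforming brush moves or changes. Clients apply the same
        // deterministic falloff locally, so only a handful of bytes cross the wire per update
        // instead of per-vertex heights; an occasional server pass with absolute heights for
        // recently touched vertices then acts as the correction layer.
        public struct BrushUpdate
        {
            public ushort SequenceId;    // orders updates and lets clients drop stale ones
            public ushort CentreX;       // brush centre on the 257x257 vertex grid
            public ushort CentreY;
            public byte Radius;          // in vertices
            public short HeightDelta;    // signed change applied at the centre this tick
            public byte FalloffCurve;    // index into a shared table of falloff functions
        }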

    Read the article

  • Best practices of texture size

    - by psal
    I wanted to know: how should I determine a good texture size? Currently, I always create UV textures that are 1024x1024 px, but if I create, for example, a big house with a 1024px texture size, it will look pretty bad. So, should I create different texture sizes (512, 1024, ...) for different mesh sizes, like this? Or is it better to always make a high-resolution texture and then reduce it in the software (i.e. increase the LODBias setting in UDK to reduce the size of the texture)? Thanks for your answer. PS: sorry for my English!

    Read the article

  • Making more complicated systems (entity-component-system model question)

    - by winch
    I'm using a model where entities are collections of components and components are just data. All the logic goes into systems which operate on components. Making basic systems (for rendering and handling collision) was easy. But how do I do more complicated systems? For example, in a CollisionSystem I can check if entity A collides with entity B. I have this code in CollisionSystem for checking if B damages A: if(collides(a, b)) { HealthComponent* hc = a->get<HealthComponent>(); hc->reduceHealth(b->get<DamageComponent>()->getDamage()); } But I feel that this code shouldn't belong in the CollisionSystem. Where should code like this be, and which additional systems should I create to make this code generic?
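
    One common answer is to keep the CollisionSystem purely about detection and let a separate DamageSystem own the rule "B damages A on contact", with the two connected by collision events. A hedged C# sketch for illustration (the question's code is C++-style; Entity.Get<T>() stands in for a->get<...>() and the component types are the ones from the question):

        using System.Collections.Generic;

        // The CollisionSystem only records what touched what; it knows nothing about damage.
        public struct CollisionEvent
        {
            public Entity A;   // the entity being hit
            public Entity B;   // the entity hitting it
        }

        // The DamageSystem owns the game rule and consumes the events the CollisionSystem produced.
        public class DamageSystem
        {
            public void Update(List<CollisionEvent> collisions)
            {
                foreach (var c in collisions)
                {
                    var health = c.A.Get<HealthComponent>();
                    var damage = c.B.Get<DamageComponent>();

                    // Only apply damage when both sides actually carry the relevant components.
                    if (health != null && damage != null)
                        health.ReduceHealth(damage.GetDamage());
                }
            }
        }

    Other rules (knockback, sound, pickups) can then be added as further systems that read the same event list, without the CollisionSystem ever growing game logic.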

    Read the article

  • Problems with moving 2D circle/box collision detection

    - by dario3004
    This is my first game ever and I'm a newbie in computer physics. I've got this code for the collision detection and it works fine for BOTTOM and TOP collision.It miss the collision detection with the paddle's edge and angles so I've (roughly) tried to implement it. Main method that is called for bouncing, it checks if it bounce with wall, or with top (+ right/left side) or with bottom (+ right/left side): protected void handleBounces(float px, float py) { handleWallBounce(px, py); if(mBall.y < getHeight()/4){ if (handleRedFastBounce(mRed, px, py)) return; if (handleRightSideBounce(mRed,px,py)) return; if (handleLeftSideBounce(mRed,px,py)) return; } if(mBall.y > getHeight()/4 * 3){ if (handleBlueFastBounce(mBlue, px, py)) return; if (handleRightSideBounce(mBlue,px,py)) return; if (handleLeftSideBounce(mBlue,px,py)) return; } } This is the code for the BOTTOM bounce: protected boolean handleRedFastBounce(Paddle paddle, float px, float py) { if (mBall.goingUp() == false) return false; // next position tx = mBall.x; ty = mBall.y - mBall.getRadius(); // actual position ptx = px; pty = py - mBall.getRadius(); dyp = ty - paddle.getBottom(); xc = tx + (tx - ptx) * dyp / (ty - pty); if ((ty < paddle.getBottom() && pty > paddle.getBottom() && xc > paddle.getLeft() && xc < paddle.getRight())) { mBall.x = xc; mBall.y = paddle.getBottom() + mBall.getRadius(); mBall.bouncePaddle(paddle); playSound(mPaddleSFX); increaseDifficulty(); return true; } else return false; } As long as I understood it should be something like this: So I tried to make the "left side" and "right side" bounce method: protected boolean handleLeftSideBounce(Paddle paddle, float px, float py){ // next position tx = mBall.x + mBall.getRadius(); ty = mBall.y; // actual position ptx = px + mBall.getRadius(); pty = py; dyp = tx - paddle.getLeft(); yc = ty + (pty - ty) * dyp / (ptx - tx); if (ptx < paddle.getLeft() && tx > paddle.getLeft()){ System.out.println("left side bounce1"); System.out.println("yc: " + yc + "top: " + paddle.getTop() + " bottom: " + paddle.getBottom()); if (yc > paddle.getTop() && yc < paddle.getBottom()){ System.out.println("left side bounce2"); mBall.y = yc; mBall.x = paddle.getLeft() - mBall.getRadius(); mBall.bouncePaddle(paddle); playSound(mPaddleSFX); increaseDifficulty(); return true; } } return false; } I think I'm quite near to the solution but I'm having big troubles with the new "yc" formula. I tried so many versions of it but since I don't know the theory behind it I can't adjust for the Y axis. Since the Y axis is inverted I even tried this: yc = ty - (pty - ty) * dyp / (ptx - tx); I tried Googling it but I can't seem to find a solution for it. Also this method fails when ball touches the angle and I don't think is a nice way because it just test "one" point of the ball and probably there will be many cases in which the ball won't bounce.

    Read the article

  • Initializing entities vs having a constructor parameter

    - by Vee
    I'm working on a turn-based tile-based puzzle game, and to create new entities, I use this code: Field.CreateEntity(10, 5, Factory.Player()); This creates a new Player at [10; 5]. I'm using a factory-like class to create entities via composition. This is what the CreateEntity method looks like: public void CreateEntity(int mX, int mY, Entity mEntity) { mEntity.Field = this; TileManager.AddEntity(mEntity, true); GetTile(mX, mY).AddEntity(mEntity); mEntity.Initialize(); InvokeOnEntityCreated(mEntity); } Since many of the components (and much of the logic) of the entities need to know what tile they're in, or what field they belong to, I need the mEntity.Initialize(); call so that the entity's setup only runs once it knows its own field and tile. The Initialize(); method contains a call to an event handler, so that I can do stuff like this in the factory class: result.OnInitialize += () => result.AddTags(TDLibConstants.GroundWalkableTag, TDLibConstants.TrapdoorTag); result.OnInitialize += () => result.AddComponents(new RenderComponent(), new ElementComponent(), new DirectionComponent()); This works so far, but it is not elegant and it's very open to bugs. I'm also using the same idea with components: they have a parameterless constructor, and when you call the AddComponent(mComponent); method on an entity, it is the entity's job to set the component's entity to itself. The alternative would be having Field, int, int parameters in the factory class, to do stuff like: new Entity(Field, 10, 5); But I also don't like the fact that I have to create new entities like this. I would prefer creating entities via the Field object itself. How can I make entity/component creation more elegant and less prone to bugs?
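
    A hedged sketch of one way to remove the two-phase Initialize step: let Field.CreateEntity accept a factory delegate, so the entity receives its Field and Tile in the constructor and components can rely on them from the very start. The member names mirror the question's types, but the signatures are illustrative:

        using System;

        public partial class Field
        {
            public Entity CreateEntity(int mX, int mY, Func<Field, Tile, Entity> factory)
            {
                Tile tile = GetTile(mX, mY);

                // The entity is constructed already knowing its field and tile,
                // so there is no separate Initialize pass to forget or misorder.
                Entity entity = factory(this, tile);

                TileManager.AddEntity(entity, true);
                tile.AddEntity(entity);
                InvokeOnEntityCreated(entity);
                return entity;
            }
        }

        // Usage: the factory class exposes delegates instead of pre-built instances, e.g.
        // field.CreateEntity(10, 5, (f, t) => Factory.Player(f, t));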

    Read the article

  • Custom extensible file format for 2d tiled maps

    - by Christian Ivicevic
    I have implemented much of my game logic by now, but still create my maps with nasty for-loops on the fly just to have something to work with. Now I want to move on and do some research on how to (un)serialize this data. (I am not looking for a map editor - I am speaking of the map file itself.) For now I am looking for suggestions and resources on how to implement a custom file format for my maps which should provide the following functionality (based on the MoSCoW method): Must have: extensibility and backward compatibility; handling of different layers; metadata on whether a tile is solid or can be passed through; special serialization of entities/triggers with associated properties/metadata. Could have: some kind of inclusion of the tileset to prevent having scattered files/tilesets. I am developing with C++ (using SDL) and targeting only Windows. Any useful help, tips, suggestions, ... would be appreciated!
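
    For the must-have of extensibility and backward compatibility, a common choice is a chunk-based binary layout: every section is written as id + length + payload, so older readers can skip chunks they do not understand and new chunk types (extra layers, triggers, an embedded tileset) can be added later. A hedged sketch, in C# for brevity even though the target engine is C++; the layout itself is language-neutral:

        using System.IO;
        using System.Text;

        static class MapChunks
        {
            // Writes one self-describing chunk: [4-char id][int32 payload length][payload bytes].
            public static void WriteChunk(BinaryWriter writer, string fourCharId, byte[] payload)
            {
                writer.Write(Encoding.ASCII.GetBytes(fourCharId));   // e.g. "META", "LAYR", "ENTS"
                writer.Write(payload.Length);
                writer.Write(payload);
            }

            // A reader that meets an unknown id can skip the chunk and keep going.
            public static void SkipChunk(BinaryReader reader, int payloadLength)
            {
                reader.BaseStream.Seek(payloadLength, SeekOrigin.Current);
            }
        }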

    Read the article
