Search Results

Search found 28031 results on 1122 pages for 'personal development'.


  • Is there any way to enable the HiDef graphics profile property on a Silverlight 5 3d Web App?

    - by Daniel
    I have an XNA Windows game that uses the HiDef profile to load complex .fbx and .obj files. Trying to move it over to a Silverlight 5 3D web app, Silverlight seems to want to use only the Reach profile, and I get an error that the Reach profile does not support a sufficient number of primitive draws per call. Is there any way to change to HiDef in Silverlight 5? It is not in the project properties, and attempting to change it in MainPage.xaml.cs only gives me the option of setting it to Reach.
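    For reference, a minimal sketch of how the profile is selected in a plain XNA 4.0 Windows project (standard XNA API; as far as I can tell, Silverlight 5's 3D pipeline exposes only Reach, so there may simply be no Silverlight equivalent of this switch):

        // In the Game constructor, before the device is created:
        graphics = new GraphicsDeviceManager(this);
        if (GraphicsAdapter.DefaultAdapter.IsProfileSupported(GraphicsProfile.HiDef))
            graphics.GraphicsProfile = GraphicsProfile.HiDef;
        else
            graphics.GraphicsProfile = GraphicsProfile.Reach; // fall back instead of crashing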

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    I'm following the tutorials in 3D Graphics with XNA Game Studio 4.0, and I get a horrible effect when I try to implement the light map: http://i.stack.imgur.com/BUWvU.jpg The effect shows up when I look towards the center of the house (and it moves with me). It has this shape because I'm using a sphere to represent the light; other light shapes give different results. I'm using this PrelightingRenderer class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;
        using Dhpoware;
        using Microsoft.Xna.Framework.Content;

        namespace XNAFirstPersonCamera
        {
            public class PrelightingRenderer
            {
                // Normal, depth, and light map render targets
                RenderTarget2D depthTarg;
                RenderTarget2D normalTarg;
                RenderTarget2D lightTarg;

                // Depth/normal effect and light mapping effect
                Effect depthNormalEffect;
                Effect lightingEffect;

                // Point light (sphere) mesh
                Model lightMesh;

                // List of models, lights, and the camera
                public List<CModel> Models { get; set; }
                public List<PPPointLight> Lights { get; set; }
                public FirstPersonCamera Camera { get; set; }

                GraphicsDevice graphicsDevice;
                int viewWidth = 0, viewHeight = 0;

                public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content)
                {
                    viewWidth = GraphicsDevice.Viewport.Width;
                    viewHeight = GraphicsDevice.Viewport.Height;

                    // Create the three render targets
                    depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24);
                    normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24);
                    lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24);

                    // Load effects
                    depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal");
                    lightingEffect = Content.Load<Effect>(@"Effects\PPLight");

                    // Set effect parameters to light mapping effect
                    lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth);
                    lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight);

                    // Load point light mesh and set light mapping effect to it
                    lightMesh = Content.Load<Model>(@"Models\PPLightMesh");
                    lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect;

                    this.graphicsDevice = GraphicsDevice;
                }

                public void Draw()
                {
                    drawDepthNormalMap();
                    drawLightMap();
                    prepareMainPass();
                }

                void drawDepthNormalMap()
                {
                    // Set the render targets to 'slots' 1 and 2
                    graphicsDevice.SetRenderTargets(normalTarg, depthTarg);

                    // Clear the render target to 1 (infinite depth)
                    graphicsDevice.Clear(Color.White);

                    // Draw each model with the PPDepthNormal effect
                    foreach (CModel model in Models)
                    {
                        model.CacheEffects();
                        model.SetModelEffect(depthNormalEffect, false);
                        model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position);
                        model.RestoreEffects();
                    }

                    // Un-set the render targets
                    graphicsDevice.SetRenderTargets(null);
                }

                void drawLightMap()
                {
                    // Set the depth and normal map info to the effect
                    lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg);
                    lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg);

                    // Calculate the view * projection matrix
                    Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix;

                    // Set the inverse of the view * projection matrix to the effect
                    Matrix invViewProjection = Matrix.Invert(viewProjection);
                    lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection);

                    // Set the render target to the graphics device
                    graphicsDevice.SetRenderTarget(lightTarg);

                    // Clear the render target to black (no light)
                    graphicsDevice.Clear(Color.Black);

                    // Set render states to additive (lights will add their influences)
                    graphicsDevice.BlendState = BlendState.Additive;
                    graphicsDevice.DepthStencilState = DepthStencilState.None;

                    foreach (PPPointLight light in Lights)
                    {
                        // Set the light's parameters to the effect
                        light.SetEffectParameters(lightingEffect);

                        // Calculate the world * view * projection matrix and set it to the effect
                        Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection;
                        lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp);

                        // Determine the distance between the light and camera
                        float dist = Vector3.Distance(Camera.Position, light.Position);

                        // If the camera is inside the light-sphere, invert the cull mode
                        // to draw the inside of the sphere instead of the outside
                        if (dist < light.Attenuation)
                            graphicsDevice.RasterizerState = RasterizerState.CullClockwise;

                        // Draw the point-light-sphere
                        lightMesh.Meshes[0].Draw();

                        // Revert the cull mode
                        graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
                    }

                    // Revert the blending and depth render states
                    graphicsDevice.BlendState = BlendState.Opaque;
                    graphicsDevice.DepthStencilState = DepthStencilState.Default;

                    // Un-set the render target
                    graphicsDevice.SetRenderTarget(null);
                }

                void prepareMainPass()
                {
                    foreach (CModel model in Models)
                        foreach (ModelMesh mesh in model.Model.Meshes)
                            foreach (ModelMeshPart part in mesh.MeshParts)
                            {
                                // Set the light map and viewport parameters to each model's effect
                                if (part.Effect.Parameters["LightTexture"] != null)
                                    part.Effect.Parameters["LightTexture"].SetValue(lightTarg);
                                if (part.Effect.Parameters["viewportWidth"] != null)
                                    part.Effect.Parameters["viewportWidth"].SetValue(viewWidth);
                                if (part.Effect.Parameters["viewportHeight"] != null)
                                    part.Effect.Parameters["viewportHeight"].SetValue(viewHeight);
                            }
                }
            }
        }

    It uses three effects. PPDepthNormal.fx:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float3 Normal : NORMAL0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 Depth : TEXCOORD0;
            float3 Normal : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4x4 viewProjection = mul(View, Projection);
            float4x4 worldViewProjection = mul(World, viewProjection);
            output.Position = mul(input.Position, worldViewProjection);
            output.Normal = mul(input.Normal, World);
            // Position's z and w components correspond to the distance
            // from camera and distance of the far plane respectively
            output.Depth.xy = output.Position.zw;
            return output;
        }

        // We render to two targets simultaneously, so we can't
        // simply return a float4 from the pixel shader
        struct PixelShaderOutput
        {
            float4 Normal : COLOR0;
            float4 Depth : COLOR1;
        };

        PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
        {
            PixelShaderOutput output;
            // Depth is stored as distance from camera / far plane distance
            // to get value between 0 and 1
            output.Depth = input.Depth.x / input.Depth.y;
            // Normal map simply stores X, Y and Z components of normal
            // shifted from (-1 to 1) range to (0 to 1) range
            output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5;
            // Other components must be initialized to compile
            output.Depth.a = 1;
            output.Normal.a = 1;
            return output;
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    PPLight.fx:

        float4x4 WorldViewProjection;
        float4x4 InvViewProjection;

        texture2D DepthTexture;
        texture2D NormalTexture;

        sampler2D depthSampler = sampler_state
        {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state
        {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        float3 LightColor;
        float3 LightPosition;
        float LightAttenuation;

        // Include shared functions
        #include "PPShared.vsi"

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float4 LightPosition : TEXCOORD0;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = mul(input.Position, WorldViewProjection);
            output.LightPosition = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            // Find the pixel coordinates of the input position in the depth
            // and normal textures
            float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // Extract the normal from the normal map and move from
            // 0 to 1 range to -1 to 1 range
            float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2;

            // Perform the lighting calculations for a point light
            float3 lightDirection = normalize(LightPosition - position);
            float lighting = clamp(dot(normal, lightDirection), 0, 1);

            // Attenuate the light to simulate a point light
            float d = distance(LightPosition, position);
            float att = 1 - pow(d / LightAttenuation, 6);

            return float4(LightColor * lighting * att, 1);
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    PPShared.vsi has some common functions:

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position)
        {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel()
        {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

    Finally, from the Game class I set everything up in LoadContent with:

        effect = Content.Load<Effect>(@"Effects\PPModel");

        models[0] = new CModel(Content.Load<Model>(@"Models\teapot"),
            new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f,
            Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);

        house = new CModel(Content.Load<Model>(@"Models\house"),
            new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f,
            Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);

        models[0].SetModelEffect(effect, true);
        house.SetModelEffect(effect, true);

        renderer = new PrelightingRenderer(GraphicsDevice, Content);
        renderer.Models = new List<CModel>();
        renderer.Models.Add(house);
        renderer.Models.Add(models[0]);
        renderer.Lights = new List<PPPointLight>()
        {
            new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000)
        };

    where PPModel.fx is:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;

        texture2D BasicTexture;
        sampler2D basicTextureSampler = sampler_state
        {
            texture = <BasicTexture>;
            addressU = wrap;
            addressV = wrap;
            minfilter = anisotropic;
            magfilter = anisotropic;
            mipfilter = linear;
        };

        bool TextureEnabled = true;

        texture2D LightTexture;
        sampler2D lightSampler = sampler_state
        {
            texture = <LightTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        float3 AmbientColor = float3(0.15, 0.15, 0.15);
        float3 DiffuseColor;

        #include "PPShared.vsi"

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float2 UV : TEXCOORD0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 UV : TEXCOORD0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4x4 worldViewProjection = mul(World, mul(View, Projection));
            output.Position = mul(input.Position, worldViewProjection);
            output.PositionCopy = output.Position;
            output.UV = input.UV;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            // Sample model's texture
            float3 basicTexture = tex2D(basicTextureSampler, input.UV);
            if (!TextureEnabled)
                basicTexture = float4(1, 1, 1, 1);

            // Extract lighting value from light map
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();
            float3 light = tex2D(lightSampler, texCoord);
            light += AmbientColor;

            return float4(basicTexture * DiffuseColor * light, 1);
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    I have no idea what's wrong... Googling around I found that this tutorial may have a bug, but I don't know whether the fault is in the light model (the sphere), in a shader, or in the PrelightingRenderer class. Any help is very appreciated, thank you for reading!
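    One way to narrow a bug like this down (a debugging sketch, not a fix) is to draw the three intermediate targets on screen each frame and see which stage first looks wrong; this assumes the targets are exposed from PrelightingRenderer, e.g. via public properties, and that a SpriteBatch named spriteBatch exists:

        // After renderer.Draw(): blit the intermediate maps into a corner of the screen.
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointClamp, null, null);
        spriteBatch.Draw(renderer.DepthTarg, new Rectangle(0, 0, 256, 256), Color.White);    // depth map
        spriteBatch.Draw(renderer.NormalTarg, new Rectangle(256, 0, 256, 256), Color.White); // normal map
        spriteBatch.Draw(renderer.LightTarg, new Rectangle(512, 0, 256, 256), Color.White);  // light map
        spriteBatch.End();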

    Read the article

  • libgdx arrays onTouch() method and delays for objects

    - by johnny-b
    I am trying to create random bullets, but it is not working for some reason. Also, how can I add a delay so the bullets spawn every 30 seconds or every minute? The onTouch() method does not work either: it is not removing the bullet. Should I put the array in the GameRender class? Thanks.

        public class GameWorld {
            public static Ball ball;
            private Bullet bullet1;
            private ScrollHandler scroller;
            private Array<Bullet> bullets = new Array<Bullet>();

            public GameWorld() {
                ball = new Ball(280, 273, 32, 32);
                bullet = new Bullet(-300, 200);
                scroller = new ScrollHandler(0);
                bullets.add(new Bullet(bullet.getX(), bullet.getY()));
                bullets = new Array<Bullet>();
                Bullet bullet = null;
                float bulletX = 0.0f;
                float bulletY = 0.0f;
                for (int i = 0; i < 10; i++) {
                    bulletX = MathUtils.random(-10, 10);
                    bulletY = MathUtils.random(-10, 10);
                    bullet = new Bullet(bulletX, bulletY);
                    bullets.add(bullet);
                }
            }

            public void update(float delta) {
                ball.update(delta);
                bullet.update(delta);
                scroller.update(delta);
            }

            public static Ball getBall() { return ball; }
            public ScrollHandler getScroller() { return scroller; }
            public Bullet getBullet1() { return bullet1; }
        }

    I also tried this (used in the GameRender class), and it is not working:

        Array<Bullet> enemies = new Array<Bullet>();

        // in the constructor of the class
        enemies.add(new Bullet(bullet.getX(), bullet.getY())); // this throws an exception for some reason

        // this is in the render method
        for (int i = 0; i < bullet.size; i++)
            bullet.get(i).draw(batcher);

        // this I am using in any method that will allow me, from the constructor to update to render
        for (int i = 0; i < bullet.size; i++)
            bullet.get(i).update(delta);

    This is not taking the bullet out:

        @Override
        public boolean touchDown(int screenX, int screenY, int pointer, int button) {
            for (int i = 0; i < bullet.size; i++)
                if (bullet.get(i).getBounds().contains(screenX, screenY))
                    bullet.removeIndex(i--);
            return false;
        }

    Thanks for the help, anyone.
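    On the delay question: the usual pattern is an accumulator that counts up by the frame delta and fires when it crosses the interval. A minimal sketch (written in C# like most examples on this page, but it drops straight into libgdx's update(float delta); SpawnBullet and the field names are hypothetical):

        float spawnTimer = 0f;
        const float SpawnInterval = 30f; // seconds between spawns

        public void Update(float delta)
        {
            spawnTimer += delta;
            if (spawnTimer >= SpawnInterval)
            {
                spawnTimer -= SpawnInterval; // keep the remainder so the timing doesn't drift
                SpawnBullet();               // hypothetical: add one bullet to the array
            }
        }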

    Read the article

  • Using Behavior Trees and Events together

    - by weichsem
    I am beginning to work with behavior trees and am unsure how events should be handled within the tree. Let's say we have a space game where the player is dogfighting with a handful of other ships, some friendly, some not. The player destroys a ship, and the rest of the hostile ships should then start to retreat. How should the shipWasDestroyed event affect the other ships' behavior trees so that they start running the retreat behavior? One way I could think of doing this is to make every condition I care about a high-level node that effectively changes the ship's state. This would mean I'd have to check all of these state-change conditions on every frame the behavior tree runs, even if they are very rare occurrences; I'd prefer not to do that, for performance and complexity reasons. From looking at the Halo papers on behavior trees, it seems they handled this by dynamically placing nodes into the tree when the event occurred. It seems like calculating where the new node should go could be problematic depending on the current state of the running behavior. How is this normally handled?
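    A common middle ground (sketched below with hypothetical names) is to let the event write a flag onto each ship's blackboard and give the tree one cheap, high-priority condition node that reads it: the tree's structure never changes at runtime, and the per-frame cost is a single boolean check rather than re-evaluating every rare condition.

        class Blackboard
        {
            public bool AllyDestroyed; // written by events, read by the tree
        }

        class Ship
        {
            public Blackboard Blackboard = new Blackboard();
        }

        // Raised once when a ship dies; fans the flag out to the hostiles.
        static void OnShipWasDestroyed(IEnumerable<Ship> hostiles)
        {
            foreach (Ship ship in hostiles)
                ship.Blackboard.AllyDestroyed = true;
        }

        // The high-priority condition node's check: "retreat if the flag is set".
        static bool ShouldRetreat(Blackboard bb) => bb.AllyDestroyed;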

    Read the article

  • How do GameEngines stop Pixel Seams appearing in adjacent mesh boundaries due to FP imprecision?

    - by ufomorace
    Graphics cards are mathematically imprecise: rasterization works in floating point, so when meshes are joined only along their borders, the card can decide that some pixels at the seam belong to neither object, and unwanted pixels appear. It's natural behaviour on all graphics cards. How are such worries avoided in pro games? Batching? Shaders? Different tangent vectors? Merging? Overlapping seams? Dark backgrounds? Extra vertices at borders? Z precision? Camera distance tweaks? Screencap of a fix that ended up not working: (image not preserved)
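    For background: rasterizers do guarantee that two triangles sharing an edge with bitwise-identical endpoint positions produce neither gaps nor double-covered pixels, so the usual fix is to make adjacent meshes emit exactly the same border vertices, e.g. by snapping them to a grid at build/export time. A hypothetical sketch of that idea:

        using System;
        using System.Numerics;

        static class VertexWeld
        {
            // Snap border vertices to a fixed grid so meshes built separately
            // still produce bit-identical positions along shared edges.
            public static Vector3 Snap(Vector3 v, float cell = 1f / 1024f)
            {
                return new Vector3(
                    MathF.Round(v.X / cell) * cell,
                    MathF.Round(v.Y / cell) * cell,
                    MathF.Round(v.Z / cell) * cell);
            }
        }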

    Read the article

  • Starting with text based MUD/MUCK game

    - by Scott Ivie
    I’ve had this idea for a video game in my head for a long time, but I’ve never had the knowledge or time to get it done. I still don’t really, but I am willing to dedicate a chunk of my time to this before it’s too late. Recently I started studying Lua script for a program called “MUSH Client”, which works with MU* telnet-style text games. I want to use the GUI capabilities of MUSH Client with a MU* server to create a basic game, but here is my dilemma: I figured this could be a suitable starting place for me, BUT because I’m not very programmer-savvy yet, I don’t know how to download, install, or use the MU* server software. I was originally considering ProtoMUCK, because a few of the MU*s I was most impressed with began there: http://www.protomuck.org/ I downloaded it, but I guess I'm too used to GUI-style programs, so I'm having great difficulty figuring out what to do next. Does anyone have any suggestions? Does anyone even know what I'm talking about? heh..

    Read the article

  • Designing a spawning system

    - by Vlad
    I played this game recently, http://www.kongregate.com/games/JuicyBeast/knightmare-tower, and I am amazed by the way different monsters are being spawned. I developed my own shooter game and added a time-based but also count-based spawning system. By count-based I mean: when there are 5 enemies on stage, stop spawning. But that's just one example. My question is how these spawning mechanisms are built: is there some pattern or theory behind them? Are there online materials/pages where I can improve my knowledge? To summarize, let's say we have 6 types of monsters. I start the game and kill monsters of types 1, 2 and 3 all the time. Once I pass the first ceiling, like in the game above, monster type 4 appears. And so on. As I progress through the game, the same system of 6 monster types stays, but they become more and more resilient and dangerous, so I must also improve to be able to destroy the same monsters, now stronger. My question is simple: are there theories or writings about developing this type of intelligent system? Note: this is a general question, not tied to a specific game or to how exactly the game should work. I am capable of programming my own mechanisms, but I think I need some help. Thanks.
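    There's no single named theory for this, but most implementations reduce to a data-driven spawn table: each entry says when a monster type unlocks, how likely it is once unlocked, and how its stats scale with progress. A hypothetical sketch of that shape (all names and the tuning constant are assumptions):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class SpawnRule
        {
            public string MonsterType;
            public float UnlockProgress; // progress (e.g. tower height) where this type first appears
            public float Weight;         // relative spawn probability once unlocked
        }

        static class Spawner
        {
            // Filter by progress, then make a weighted random pick.
            // Assumes at least one rule is unlocked at progress 0.
            public static string PickMonster(List<SpawnRule> rules, float progress, Random rng)
            {
                var unlocked = rules.Where(r => progress >= r.UnlockProgress).ToList();
                float roll = (float)rng.NextDouble() * unlocked.Sum(r => r.Weight);
                foreach (var r in unlocked)
                {
                    roll -= r.Weight;
                    if (roll <= 0f) return r.MonsterType;
                }
                return unlocked[unlocked.Count - 1].MonsterType;
            }

            // Same monsters, scaled stats; the 0.05 tuning constant is assumed.
            public static float ScaleHp(float baseHp, float progress) => baseHp * (1f + 0.05f * progress);
        }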

    Read the article

  • SDL2 with OpenGL -- weird results, what's wrong?

    - by ber4444
    I'm porting an app to iOS, and therefore need to upgrade it from SDL 1.2 to SDL2 (so far I'm testing it as an OS X desktop app only). However, when running the code with SDL2, I get weird results, as shown in the second image below (the first image is how it looks, correctly, with SDL 1.2). The single changeset that causes this is this one; do you see something obviously wrong there, or does SDL2 have some OpenGL nuances I'm unaware of? My SDL is built from changeset dd7e57847ea9 from HG (since then there is one "Allow specifying of OpenGL 3.2 Core Profile on Mac OS X" commit, not sure if that would help).

    Read the article

  • Handling window resize with arbitrary aspect ratios

    - by DormoTheNord
    I'm currently making a 2D game using SFML. I want the aspect ratio to be maintained when the user resizes the window, and I also want the game to work with any arbitrary aspect ratio (like any media player would). Here is the code I have so far; the part I was missing was the logic inside the two aspect-ratio branches, and the letterbox/pillarbox fractions filled in below are one plausible completion:

        void os::GameEngine::setCameraViewport()
        {
            sf::FloatRect tempViewport;
            float viewAspectRatio = (float)aspectRatio.x / aspectRatio.y;
            float screenAspectRatio = (float)gameWindow.getSize().x / gameWindow.getSize().y;

            if (viewAspectRatio > screenAspectRatio)
            {
                // Viewport is wider than screen, fit on X:
                // full width, letterbox bars on top and bottom
                tempViewport.width = 1;
                tempViewport.height = screenAspectRatio / viewAspectRatio;
                tempViewport.left = 0;
                tempViewport.top = (1 - tempViewport.height) / 2;
            }
            else if (viewAspectRatio < screenAspectRatio)
            {
                // Screen is wider than viewport, fit on Y:
                // full height, pillarbox bars on left and right
                tempViewport.height = 1;
                tempViewport.width = viewAspectRatio / screenAspectRatio;
                tempViewport.top = 0;
                tempViewport.left = (1 - tempViewport.width) / 2;
            }
            else // window aspect ratio matches view aspect ratio
            {
                tempViewport.height = 1;
                tempViewport.width = 1;
                tempViewport.left = 0;
                tempViewport.top = 0;
            }

            viewport = tempViewport;
            camera.setViewport(viewport);
            gameWindow.setView(camera);
        }

    Read the article

  • How would you code an AI engine to allow communication in any programming language?

    - by Tokyo Dan
    I developed a two-player iPhone board game. Computer players (AI) can either be local (in the game code) or remote, running on a server. In the second case, both client and server code are written in Lua. On the server, the actual AI code is separate from the TCP socket code and coroutine code (which spawns a separate instance of the AI for each connecting client). I want to further isolate the AI code so that it can be a module written by anyone in their language of choice. How can I do this? What techniques or technology would enable communication between the Lua TCP socket/coroutine code and the AI module?
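    One sketch of the usual answer (an assumption, not the only option): keep the AI module in a separate process and define a tiny wire protocol over the TCP connection you already have, e.g. newline-delimited JSON, so a module can be written in any language that can read and write sockets. From a C# module it might look like this (host, port, and message shape are all hypothetical):

        using System.IO;
        using System.Net.Sockets;

        var client = new TcpClient("localhost", 5000); // hypothetical host/port
        using NetworkStream stream = client.GetStream();
        using var reader = new StreamReader(stream);
        using var writer = new StreamWriter(stream) { AutoFlush = true };

        // One request per line; the Lua side reads a line, runs the AI, answers with a line.
        writer.WriteLine("{\"cmd\":\"getMove\",\"state\":\"<serialized board>\"}");
        string reply = reader.ReadLine(); // e.g. {"move":"7-12"}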

    Read the article

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because: each particle is a point sprite in a single mesh, so I can't use the scene graph's ability to sort transparent nodes (the system's node should still be properly sorted, though); particle position is computed in the shader from initial velocity, acceleration and time, and in order to sort the system I would have to perform all these computations on the CPU, which is something I want to avoid; and sorting hundreds of particles against the camera position and uploading them to the GPU each frame seems to be quite a heavy operation. Alpha testing is fast enough on GLES 2.0 and works fine for non-transparent, "masked" textures; still, it's not enough for semi-transparent particles. How would you handle this?

    Read the article

  • Is a Single Texture Cube Map Possible?

    - by smoth190
    I'm currently developing a test project to explore OpenGL 3 texturing abilities. I have a simple cube, made of 8 vertices and 36 indices. I want each of the cube's faces to have a different texture, so I devised this texture (image not preserved). I made it obvious which sections I want visible (I hope...). In Direct3D I once made a skybox, and I used a cubemap; however, I had to split it into 6 different textures. This is annoying and hard to manage; it would be nice to have just one texture. Is this even possible? I read somewhere that I could do this by duplicating vertices - is that a good idea? Someone else said I could do it in the shader, but that also baffles me...
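    It is possible with a single texture, but not with 8 shared vertices: each vertex carries one UV, so per-face texturing needs 4 vertices per face (24 total). A sketch of the idea, assuming the atlas is a 3x2 grid of face cells (XNA's VertexPositionTexture used for brevity; the layout is the same in an OpenGL vertex buffer):

        // Map a face-local (u, v) into cell (cellX, cellY) of a 3x2 atlas.
        Vector2 CellUV(int cellX, int cellY, float u, float v)
        {
            return new Vector2((cellX + u) / 3f, (cellY + v) / 2f);
        }

        // Front face in cell (0, 0); the other five faces repeat this with their own cell.
        var front = new[]
        {
            new VertexPositionTexture(new Vector3(-1,  1, 1), CellUV(0, 0, 0, 0)),
            new VertexPositionTexture(new Vector3( 1,  1, 1), CellUV(0, 0, 1, 0)),
            new VertexPositionTexture(new Vector3( 1, -1, 1), CellUV(0, 0, 1, 1)),
            new VertexPositionTexture(new Vector3(-1, -1, 1), CellUV(0, 0, 0, 1)),
        };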

    Read the article

  • Velocity collision detection (2D)

    - by ultifinitus
    Alright, so I have made a simple game engine (see YouTube), and my current implementation of collision resolution has a slight problem involving the velocity of a platform. Basically, I run through all of the objects necessary to detect collisions and resolve those collisions as I find them. Part of that resolution is setting the player's velocity equal to the platform's velocity. Which works great! Unless I have a row of platforms moving at different velocities, or a platform between a stack of tiles... Current system:

        bool player::handle_collisions()
        {
            collisions tcol;
            bool did_handle = false;
            bool thisObjectHandle = false;

            for (int temp = 0; temp < collideQueue.size(); temp++)
            {
                thisObjectHandle = false;
                tcol = get_collision(prevPos.x, y, get_img()->get_width(), get_img()->get_height(),
                                     collideQueue[temp]->get_position().x, collideQueue[temp]->get_position().y,
                                     collideQueue[temp]->get_img()->get_width(), collideQueue[temp]->get_img()->get_height());

                if (prevPos.y >= collideQueue[temp]->get_prev_pos().y + collideQueue[temp]->get_img()->get_height())
                    if (tcol.top > 0)
                    {
                        add_pos(0, tcol.top);
                        set_vel(get_vel().x, collideQueue[temp]->get_vel().y);
                        thisObjectHandle = did_handle = true;
                    }

                if (prevPos.y + get_img()->get_height() <= collideQueue[temp]->get_prev_pos().y)
                    if (tcol.bottom > 0)
                    {
                        add_pos(collideQueue[temp]->get_vel().x, -tcol.bottom);
                        set_vel(get_vel().x /*collideQueue[temp]->get_vel().x*/, collideQueue[temp]->get_vel().y);
                        ableToJump = true;
                        jumpTimes = maxjumpable;
                        thisObjectHandle = did_handle = true;
                    }

                ///
                /// ADD CODE FROM NEXT CODE BLOCK HERE
                ///
            }

            for (int temp = 0; temp < collideQueue.size(); temp++)
            {
                thisObjectHandle = false;
                tcol = get_collision(x, y, get_img()->get_width(), get_img()->get_height(),
                                     collideQueue[temp]->get_position().x, collideQueue[temp]->get_position().y,
                                     collideQueue[temp]->get_img()->get_width(), collideQueue[temp]->get_img()->get_height());

                if (prevPos.x + get_img()->get_width() <= collideQueue[temp]->get_prev_pos().x)
                    if (tcol.left > 0)
                    {
                        add_pos(-tcol.left, 0);
                        set_vel(collideQueue[temp]->get_vel().x, get_vel().y);
                        thisObjectHandle = did_handle = true;
                    }

                if (prevPos.x >= collideQueue[temp]->get_prev_pos().x + collideQueue[temp]->get_img()->get_width())
                    if (tcol.right > 0)
                    {
                        add_pos(tcol.right, 0);
                        set_vel(collideQueue[temp]->get_vel().x, get_vel().y);
                        thisObjectHandle = did_handle = true;
                    }
            }

            return did_handle;
        }

    If I add the following code where the comment indicates, the above problem doesn't happen, though it's glitchy and brings other problems:

        if (!thisObjectHandle)
        {
            if (tcol.bottom > tcol.top)
            {
                add_pos(collideQueue[temp]->get_vel().x, -tcol.bottom);
                set_vel(get_vel().x, collideQueue[temp]->get_vel().y);
            }
            else if (tcol.top > tcol.bottom)
            {
                add_pos(0, tcol.top);
                set_vel(get_vel().x, collideQueue[temp]->get_vel().y);
            }
        }

    How would you change my system to prevent this?

    Read the article

  • Dynamic Quad/Oct Trees

    - by KKlouzal
    I've recently discovered the power of quadtrees and octrees and their role in culling/LOD applications, but I've been pondering implementations of a dynamic quad/octree - one that would not require a complete rebuild when some of the underlying data (vertex data) changes. Would it be possible to create such a tree? What would it look like? Could someone point me in the correct direction to get started? The application here would, in my scenario, be a dynamically changing spherical landscape with over 10,000,000 vertices. The use of quad/octrees is obvious for culling and LOD, as are the benefits of not having to completely recompute the tree when the underlying data changes.
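    The usual trick is to "refit" rather than rebuild: when a leaf's vertex data moves, recompute that leaf's bounds and then walk upward merging children, stopping as soon as a parent's box is unchanged. A hypothetical sketch (XNA's BoundingBox used for the math; node fields are assumptions):

        class QuadTreeNode
        {
            public QuadTreeNode Parent;
            public QuadTreeNode[] Children; // null for leaves
            public BoundingBox Bounds;
        }

        // Call with the changed leaf's parent, after updating the leaf's Bounds
        // from its new vertex data.
        void RefitUpward(QuadTreeNode node)
        {
            while (node != null)
            {
                BoundingBox merged = node.Children[0].Bounds;
                for (int i = 1; i < node.Children.Length; i++)
                    merged = BoundingBox.CreateMerged(merged, node.Children[i].Bounds);

                if (merged == node.Bounds)
                    break; // ancestors can't change either; stop early

                node.Bounds = merged;
                node = node.Parent;
            }
        }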

    Read the article

  • New to CG shader programming, what program should I use to write and test them?

    - by Notbad
    I have started writing some shaders. The first ones were fairly easy to write in Notepad, but now I need something with a bit more meat. I have checked RenderMonkey, which seems to support Cg, but it is really old and I don't know if it is a good option. On the other hand, there is FX Composer 2.0, but it seems like something that could really distract me from learning shaders, because it is a pretty deep program. Are there any other possibilities? There's a really nice alternative for writing shaders named ShaderToy, but it supports only GLSL. Any information will be really welcome. Thanks in advance.

    Read the article

  • Initializing entities vs having a constructor parameter

    - by Vee
    I'm working on a turn-based, tile-based puzzle game, and to create new entities I use this code:

        Field.CreateEntity(10, 5, Factory.Player());

    This creates a new Player at [10; 5]. I'm using a factory-like class to create entities via composition. This is what the CreateEntity method looks like:

        public void CreateEntity(int mX, int mY, Entity mEntity)
        {
            mEntity.Field = this;
            TileManager.AddEntity(mEntity, true);
            GetTile(mX, mY).AddEntity(mEntity);
            mEntity.Initialize();
            InvokeOnEntityCreated(mEntity);
        }

    Since many of the entities' components (and also their logic) need to know what tile they're in, or what field they belong to, I need mEntity.Initialize(); to run once the entity knows its own field and tile. The Initialize() method contains a call to an event handler, so that I can do stuff like this in the factory class:

        result.OnInitialize += () => result.AddTags(TDLibConstants.GroundWalkableTag, TDLibConstants.TrapdoorTag);
        result.OnInitialize += () => result.AddComponents(new RenderComponent(), new ElementComponent(), new DirectionComponent());

    This works so far, but it is not elegant and it's very open to bugs. I'm also using the same idea with components: they have a parameterless constructor, and when you call the AddComponent(mComponent); method on an entity, it is the entity's job to set the component's entity to itself. The alternative would be having Field, int, int parameters in the factory class, to do stuff like:

        new Entity(Field, 10, 5);

    But I also don't like the fact that I have to create new entities like this. I would prefer creating entities via the Field object itself. How can I make entity/component creation more elegant and less prone to bugs?

    Read the article

  • How to stop an animation model in XNA?

    - by Mehdi Bugnard
    I've run into a difficulty stopping an animation. Everything works great when starting the animation, but I can't see how to stop it and then resume it from where it paused. The animationPlayer.StartClip(clip) call starts the animation, but there seems to be no way to stop it. Thanks a lot. Here is the code I'm using:

        protected override void LoadContent()
        {
            // Model - Player
            model_player = Content.Load<Model>("Models\\Player\\models");

            // Look up our custom skinning information.
            SkinningData skinningData = model_player.Tag as SkinningData;
            if (skinningData == null)
                throw new InvalidOperationException("This model does not contain a SkinningData tag.");

            // Create an animation player, and start decoding an animation clip.
            animationPlayer = new AnimationPlayer(skinningData);
            AnimationClip clip = skinningData.AnimationClips["ArmLowAction_006"];
            animationPlayer.StartClip(clip);
        }

        protected override void Update(GameTime gameTime)
        {
            KeyboardState key = Keyboard.GetState();

            // If the player doesn't move -> stop the animation
            if (!key.IsKeyDown(Keys.W) && !keyStateOld.IsKeyUp(Keys.S) && !keyStateOld.IsKeyUp(Keys.A) && !keyStateOld.IsKeyUp(Keys.D))
            {
                // animation stop? does not exist?
                animationPlayer.Stop();
                isPlayerStop = true;
            }
            else
            {
                if (isPlayerStop == true)
                {
                    isPlayerStop = false;
                    animationPlayer.StartClip(Clip);
                }
            }
        }
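    The AnimationPlayer in the stock XNA Skinned Model sample (which this looks to be based on) has no Stop method; the usual workaround is simply to stop advancing it, which freezes the pose on the current frame, and to resume by updating again rather than restarting the clip. A sketch under that assumption:

        // In Update(): only advance the animation while the player is moving.
        // Not calling Update pauses the clip where it is; calling StartClip
        // again would instead restart it from the beginning.
        if (!isPlayerStop)
            animationPlayer.Update(gameTime.ElapsedGameTime, true, Matrix.Identity);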

    Read the article

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation into Unity. I set up a simple cylinder with 2 bones and an IK handle, and made a simple animation where the cylinder bends and returns to a straight position over 24 frames. Following that, I selected everything and baked: all bones, the IK, the animation (by selecting everything in the graph editor), and even the cylinder. I saved the scene, then selected all and exported as FBX with Animation and Bake checked. In Unity I imported it, and in the preview I am able to see the animation. When I load the model into the scene and press Play (after assigning the controller), I am able to see the animation too. But now when I try to script it and control the animation, nothing happens. Even as a test, I tried the following in the Update method:

        if (animation.isPlaying)
            Debug.Log("Animation Works");
        else
            Debug.Log("Animation not working");

    The check doesn't log either branch. My animation is called "bend", so just as a try I did the following, and nothing happens:

        animation.Play("bend");

    Based on my steps, can you please advise what I am missing? Do I need to add the controller, or is that an unnecessary step? Did I mess up on the Maya part or the Unity part? Thanks for the help.
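    Two things worth checking, as a debugging sketch: the scripting animation property targets the legacy Animation component, which the importer only creates when the FBX's Rig is set to the Legacy animation type, and the clip must actually be on that component under the name you use.

        using UnityEngine;

        public class BendTest : MonoBehaviour
        {
            void Start()
            {
                Animation anim = GetComponent<Animation>();
                if (anim == null)
                    Debug.Log("No (legacy) Animation component on this GameObject");
                else if (anim.GetClip("bend") == null)
                    Debug.Log("No clip named 'bend' on the Animation component");
                else
                    anim.Play("bend");
            }
        }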

    Read the article

  • How to move a line of sprites in a sine wave?

    - by electroflame
    So, I'm spawning a horizontal line of enemies that I would like to have move in a nice wave. Currently I tried:

        Enemy.position.X += Enemy.velocity.X;
        Enemy.position.Y += -(float)Math.Cos(Enemy.position.X / 200) * 5;

    This... kind of works. But the wave is not a true wave: the top and bottom of one pass are not the same (e.g. 5 for the top and -5 for the bottom; I don't mean literal points, I just mean that it's not symmetrical). Is there a better way to do this? I would like the whole line to move in a wave, so it looks fluid. By that, I mean that it should look like each enemy is "following" the one in front of it. The code I posted does have this fluidity to it, but like I said, it's not a perfect wave. Any ideas? Thanks in advance.
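    The asymmetry comes from the += on Y: adding a cosine every frame integrates it, so the path drifts instead of tracing a true sine. Setting Y directly as a function of X fixes that, and the "following" behaviour falls out for free, because each enemy's Y depends only on its own X. A sketch (baseY, amplitude, and frequency are hypothetical tuning fields; frequency = 1/200 matches the original scale):

        Enemy.position.X += Enemy.velocity.X;
        Enemy.position.Y = Enemy.baseY + amplitude * (float)Math.Sin(Enemy.position.X * frequency);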

    Read the article

  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images: the background surface and the foreground display object's bitmap. The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender Toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect of Windows Vista/7, because you cannot sample the background properly for blurring. I should mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updating, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendering. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient; that is why I am trying to do it through a blendShader, so Flash determines when it needs rendering automatically. Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendering? The limitation is mentioned, with no explanation, in the last note on this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts. In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."

    Read the article

  • In a state machine, is it a good idea to separate states and transitions?

    - by codablank1
    I have implemented a small state machine in this way (in pseudocode):

        class Input {}

        class KeyInput inherits Input
        {
        public:
            enum { Key_A, Key_B, ..., }
        }

        class GUIInput inherits Input
        {
        public:
            enum { Button_A, Button_B, ..., }
        }

        enum Event { NewGame, Quit, OpenOptions, OpenMenu }

        class BaseState
        {
            String name;
            Event get_event(Input input);
            void handle(Event e); // event handling function
        }

        class Menu inherits BaseState {...}
        class InGame inherits BaseState {...}
        class Options inherits BaseState {...}

        class StateMachine
        {
        public:
            BaseState get_current_state() { return current_state; }

            void add_state(String name, BaseState state) { statesMap.insert(name, state); }

            // raise an exception if state not found
            BaseState get_state(String name) { return statesMap.find(name); }

            // raise an exception if state or next_state not found
            void add_transition(Event event, String state_name, String next_state_name)
            {
                BaseState state = get_state(state_name);
                BaseState next_state = get_state(next_state_name);
                transitionsMap.insert(pair<event, state>, next_state);
            }

            // raise an exception if the couple is not found
            BaseState get_next_state(Event event, BaseState state)
            {
                return transitionsMap.find(pair<event, state>);
            }

            void handle(Input input)
            {
                Event event = current_state.get_event(input);
                current_state.handle(event);
                current_state = get_next_state(event, current_state);
            }

        private:
            BaseState current_state;
            map<String, BaseState> statesMap; // map of all states in the machine

            // for each event/state couple, this map stores the next state
            map<pair<Event, BaseState>, BaseState> transitionsMap;
        }

    So, before getting the transition, I need to convert the key input or GUI input to the proper event, given the current state; thus the same key 'W' can launch a new game in the 'Menu' state or move a character forward in the 'InGame' state. Then I get the next state from the transitionsMap and update the current state. Does this configuration seem valid to you? Is it a good idea to separate states and transitions? I also have some trouble representing a 'null state' or a 'null event': what initial value can I give to the current state, and which one should be returned by get_state if it fails?

    Read the article

  • How can I gain access to a player instance in a Minecraft mod?

    - by Andrew Graber
    I'm creating a Minecraft mod with a pickaxe that takes away experience when you break a block. The method for taking away experience from a player is addExperience on EntityPlayer, so I need to get an instance of EntityPlayer for the player using my pickaxe when the pickaxe breaks a block, so that I can remove the appropriate amount of experience. My pickaxe class currently looks like this:

        public class ExperiencePickaxe extends ItemPickaxe {
            public ExperiencePickaxe(int ItemID, EnumToolMaterial material) {
                super(ItemID, material);
            }

            public boolean onBlockDestroyed(ItemStack par1ItemStack, World par2World, int par3, int par4, int par5, int par6, EntityLiving par7EntityLiving) {
                if ((double) Block.blocksList[par3].getBlockHardness(par2World, par4, par5, par6) != 0.0D) {
                    EntityPlayer e = new EntityPlayer(); // create an instance
                    e.addExperience(-1);
                }
                return true;
            }
        }

    Obviously, I cannot actually create a new EntityPlayer, since it is an abstract class. How can I get access to the player using my pickaxe?

    Read the article

  • Low complexity shader to indicate the sides of a polyline

    - by Pris
    I have a bunch of polylines that I draw using GL_LINES. They can have thousands of points; they actually represent the separation of land and water on a map. I don't have complete polygons, just the ordered sets of points. I'm looking for a neat but efficient way to visually convey Side A and Side B as being different. For example, I could offset the polyline in one direction a few times and fade it out (but every offset doubles the number of points), or offset it once to make a "ribbon" and give one side a 'glow'-like effect to mimic the outer glow or shadow of a polygon. This is for a mobile application, and I'm using OpenGL ES 2. I'd like to keep the effect as simple as possible from a complexity standpoint. I'm looking for some additional ideas; maybe there's a clever shader technique out there, or a visual effect I haven't considered.

    Read the article

  • Extrapolation breaks collision detection

    - by user22241
    Before applying extrapolation to my sprite's movement, my collision worked perfectly. However, after applying extrapolation to my sprite's movement (to smooth things out), the collision no longer works. This is how things worked before extrapolation. After I implement my extrapolation, the collision routine breaks. I am assuming this is because it is acting upon the new coordinate produced by the extrapolation routine (which is situated in my render call). How do I correct this behaviour? I've tried putting an extra collision check just after extrapolation - this does seem to clear up a lot of the problems, but I've ruled it out because putting logic into my rendering is out of the question. I've also tried making a copy of the sprite's X position, extrapolating that, and drawing using that rather than the original, thus leaving the original intact for the logic to pick up on - this seems a better option, but it still produces some weird effects when colliding with walls, and I'm pretty sure it also isn't the correct way to deal with this. I've found a couple of similar questions on here, but the answers haven't helped me. This is my extrapolation code:

        public void onDrawFrame(GL10 gl)
        {
            // Set/re-set loop back to 0 to start counting again
            loops = 0;

            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip)
            {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                loops++;
                tics++;
            }

            extrapolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick) / (float) skipTicks;

            render(extrapolation);
        }

    Applying extrapolation:

        public void render(float extrapolation)
        {
            // This example shows extrapolation for the X axis only; the Y position
            // (spriteScreenY) is assumed to be valid.
            extrapolatedPosX = spriteGridX + (SpriteXVelocity * dt) * extrapolation;
            spriteScreenPosX = extrapolatedPosX * screenWidth;
            drawSprite(spriteScreenPosX, spriteScreenY);
        }

    Edit: As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with... this has its own problems. Firstly, regardless of the copying, when the sprite is moving it's super-smooth; when it stops, it wobbles slightly left/right, as it's still extrapolating its position based on the time. Is this normal behaviour, and can we 'turn it off' when the sprite stops? I've tried having flags for left/right and only extrapolating if either of them is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help. If the user is pressing, say, the right button and the sprite is moving right, then when it hits a wall, if the user continues to hold the right button down, the sprite will keep animating to the right while being stopped by the wall (therefore not actually moving). But because the right flag is still set, and also because the collision routine is constantly moving the sprite out of the wall, it still appears to the code (not the player) that the sprite is moving, and therefore extrapolation continues. So what the player sees is the sprite 'static' (yes, it's animating, but it's not actually moving across the screen), and every now and then it shakes violently as the extrapolation attempts to do its thing... Hope this helps.

    Read the article

  • IDirect3DDevice9Ex and D3DPOOL_MANAGED?

    - by bluescrn
    So I wanted to switch to IDirect3DDevice9Ex, purely for the SetMaximumFrameLatency function, as fullscreen vsynced D3D seemed to produce noticeable input lag. But then it tells me 'ha ha ha! now you can't use D3DPOOL_MANAGED!':

        Direct3D9: (ERROR) :D3DPOOL_MANAGED is not valid with IDirect3DDevice9Ex

    Is this really as unpleasant as it looks (when you're relying quite heavily on managed resources), or is there a simple solution? If it really does mean manual management of everything (reloading all static textures, VBs, and IBs on a device reset), is it worth the hassle? Will IDirect3DDevice9Ex bring enough benefit to make it worth writing a new resource manager? I'm starting to think I must be doing something wrong, due to this:

        Direct3D9: (ERROR) :Lock is not supported for textures allocated with POOL_DEFAULT unless they are marked D3DUSAGE_DYNAMIC.

    So if I put my (static) textures in POOL_DEFAULT, do they need flagging as D3DUSAGE_DYNAMIC just because I lock them once to load the data in?

    Read the article
