Search Results

Search found 25550 results on 1022 pages for 'mere development'.

  • Best way to implement mouse-based movement in MMOG

    - by fiftyeight
    I want to design an MMO where players click the destination they want to walk to with their mouse and the character moves there, similar to RuneScape. I think this should be easier than keyboard movement, since the client can simply send the server the destination each time the player clicks. The main thing I'm trying to decide is what to do when there are obstacles in the way. It's no problem to implement a simple path-finding solution on the client; the question is whether the server should do path-finding as well, since that would probably take too much computation power on the server. What I thought is that when there is an obstacle, the client will send only the first coordinate it plans to go to, and then when it gets there it will send the next coordinate automatically. For example, if there is a rock in the way, the character will decide on a route made of two destinations, so that it goes around the rock, and when it arrives at the first destination it sends the next coordinate. That way, if the player changes destination in the middle, no unnecessary information is sent. Is this a good way to implement it, and is there a standard way MMOGs usually do this?

    EDIT: I should also mention that the server will make sure all movements are legal and that there aren't any walls in the way, etc. The way I've set it up this should be quite easy, since all movements are sent as straight lines, so the server only has to check that there are no obstacles along each line.
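
    A minimal sketch of that per-segment legality check on the server, assuming a tile-based walkability grid (the grid representation and all names are illustrative, not from the original post):

        using System;

        static class MoveValidation
        {
            // True if every tile on the straight segment from (x0,y0) to (x1,y1)
            // is walkable. Bresenham-style stepping over a bool[,] grid.
            public static bool IsSegmentWalkable(bool[,] walkable, int x0, int y0, int x1, int y1)
            {
                int dx = Math.Abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
                int dy = -Math.Abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
                int err = dx + dy;
                while (true)
                {
                    if (!walkable[x0, y0]) return false;   // obstacle on the line
                    if (x0 == x1 && y0 == y1) return true; // reached the endpoint
                    int e2 = 2 * err;
                    if (e2 >= dy) { err += dy; x0 += sx; }
                    if (e2 <= dx) { err += dx; y0 += sy; }
                }
            }
        }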

  • Make an object slide around an obstacle

    - by Isaiah
    I have path areas set up in a game I'm making for canvas/HTML5 and have got it working to keep the player within these areas. I have a function isOut(boundary, x, y) that returns true if the point is outside the boundary. What I do is check only the new x/y position separately against the corresponding old position. If either one is out, I assign it the value from the frame before. The old positions are kept in a variable from a closure I made, like this:

        opos = [x, y]; // old position
        npos = [x, y]; // new position
        if (isOut(bound, npos[0], opos[1])) {
            npos[0] = opos[0]; // assign it the old x position
        }
        if (isOut(bound, opos[0], npos[1])) {
            npos[1] = opos[1]; // assign it the old y position
        }

    It looks nice and works well at certain angles, but if your boundary has diagonal regions it results in jittery motion. What's happening is that the y position exits the area while x doesn't, and x continues pushing the player to the side; once it has moved the player over a bit, the player can move forward, then y exits again and the whole process repeats. Does anyone know how I might achieve a smoother slide? I have access to the player's velocity vector, the angle, and the speed (when used with the angle). I can move the player with either angle/speed or x/y velocities, as I've built in backups to translate one to the other if either has been altered manually.
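
    A common fix for the jitter is to slide along the wall by projecting the velocity onto it rather than zeroing one axis: remove only the component of the velocity that points into the boundary. A minimal sketch in C# (the JavaScript version is a direct translation; obtaining the boundary edge's unit normal from your path data is assumed, not shown):

        using System.Numerics;

        static class Sliding
        {
            // v' = v - (v . n) n : cancel the inward component, keep the tangent.
            public static Vector2 Slide(Vector2 velocity, Vector2 wallNormal)
            {
                float into = Vector2.Dot(velocity, wallNormal);
                if (into >= 0) return velocity;       // already moving away
                return velocity - into * wallNormal;  // glide along the wall
            }
        }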

  • Computer Games Technology or Software Engineering?

    - by Suleman Anwar
    I'm in the last year of college and going to university next year. Could you tell me what the difference between Software Engineering and Computer Games Technology is? I know a bit about both but don't know the actual difference, and I'm kind of in a dilemma between the two. I want to be a programmer. I'd love to go into gaming, but I've heard that getting a job with a computer games company is really hard.

  • Which Open Source Licenses can address concerns for an Open Source Game Engine?

    - by Chris
    I am on a team that is looking to open source an engine we are building. It's intended as an engine for online RPG-style games. We're writing it to work on both desktop and Android platforms. I've been over to the OSI (http://opensource.org/licenses/category) to check out the most common licenses. However, this will be my first time going into an open source project, and I wanted to know if the community had some insight into which licenses might be best suited. Key licensing concerns:

    - Removing or limiting our liability (most already seem to cover this, but stating for completeness).
    - We want other developers to be able to take part or all of our project and use it in their own projects, with proper accreditation to our project.
    - Licensing should not hinder someone's ability to quickly use the engine. They should be able to download a release and start using it without needing to wait on licensing issues.
    - Game content (gfx, sound, etc.) that is not part of the engine should be allowed to be licensed separately. If someone is using our engine, they can retain full copyright of their content, including engine-generated data.
    - Our primary goal is exposure, which is why we're going open source to start with, both for the project and for the individuals developing it.

    Are there any licenses that can require accreditation visible to players? While I'd put our primary goal as exposure, for licensing the accreditation is less of a concern. From what I've read through (and have been able to understand), it doesn't seem like any of the licenses cover anything that is produced by the licensed software. Are there any that state this specifically, or does simply not mentioning it leave it open for other licensing? Are there any other concerns we should consider? Has anyone had any issues using any of these licenses?

  • How difficult is it to add vibration feedback to an open source driving game?

    - by Jonathan Day
    Hi, I'm looking to use SuperTuxKart as a basis for a PhD research project. A key requirement for the game is to provide vibration feedback through the controller (obviously dependent on the controller itself). I don't believe the game currently includes this feature, and I'm trying to get a feel for how big a challenge it would be to add. My background is as a J2EE and PHP developer/architect, so I don't know C++ as such, but I am prepared to give it a crack if there are resources and guides to assist, and it's not a Herculean task. Alternatively, if you know of any open source games that do include vibration feedback, please feel free to let me know! Preferably the game would be of the style where the player has to navigate a character (or character's vehicle) over a repeatable course/map. TIA, JD

  • What is a good way to measure game virality?

    - by Chris Garrett
    I have added some social features to an iPhone game (Lexitect if you're curious), such as email, Twitter, and Facebook integration for sharing high scores. Along with these features, I am measuring how many times users make it to each step. The goal of these features is to make the game more viral, and I am trying to get to a measure of game virality. I would think that a game virality metric would produce a number based on 1.0, where 1.0 = zero viral growth and 1.01 would represent 1% viral growth over some unit of time. How is virality normally measured, and in what units? How is time capped on the metric? I.e., if I gave each player a year to determine how many recommendations they make, I wouldn't get any real numbers for a year from the time I start tracking it. Are there any standards for tracking virality in a meaningful way?
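
    The usual measure is the viral coefficient (the K-factor): invitations sent per user multiplied by the conversion rate of those invitations, computed per cohort over a fixed window (the "viral cycle time", which is what caps the measurement period). The 1.0-based number described above is then just 1 + K. A minimal sketch of the arithmetic, with made-up event counts for illustration:

        static double ViralCoefficient(int cohortUsers, int invitesSent, int invitesConverted)
        {
            if (cohortUsers == 0 || invitesSent == 0) return 0.0;
            double invitesPerUser = (double)invitesSent / cohortUsers;      // e.g. 2.0
            double conversionRate = (double)invitesConverted / invitesSent; // e.g. 0.05
            return invitesPerUser * conversionRate; // K = 0.10 -> 10% growth per cycle
        }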

  • Logic in Entity Component Systems

    - by aaron
    I'm making a game that uses an entity/component architecture, basically a port of the Artemis framework to C++. The problem arises when I try to make a PlayerControllerComponent. My original idea was this:

        class PlayerControllerComponent : Component
        {
        public:
            virtual void update() = 0;
        };

        class FpsPlayerControllerComponent : PlayerControllerComponent
        {
        public:
            void update()
            {
                // handle input
            }
        };

    ...and have a system that updates PlayerControllerComponents, but I found out that the Artemis framework does not look at subclasses the way I thought it would. So, all in all, my question is: should I make the framework aware of subclasses, or should I add a new Component-like object that is used for logic?

  • What is the cost of custom made 2D game sprites? [closed]

    - by Michael Harroun
    Possible Duplicate: How much to pay for artwork in an indie game?

    I am looking for sprites similar in style to those of Final Fantasy Tactics, but with a much higher resolution that will work well for both a browser and an iPhone. In terms of animations:

    - Walking in 4 directions
    - Swinging with 1 hand
    - Some sort of "casting" animation (depending on cost I may use the 1-hand swing with a wand)
    - Taking a hit
    - Kneeling
    - Fallen

    How much would something like that cost per sprite?

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    I'm following the tutorials in 3D Graphics with XNA Game Studio 4.0 and I came up with a horrible effect when I tried to implement the light map: http://i.stack.imgur.com/BUWvU.jpg

    The effect shows up when I look towards the center of the house (and it moves with me). It has this shape because I'm using a sphere to represent the light; using other light shapes gives different results. I'm using a class PrelightingRenderer:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;
        using Dhpoware;
        using Microsoft.Xna.Framework.Content;

        namespace XNAFirstPersonCamera
        {
            public class PrelightingRenderer
            {
                // Normal, depth, and light map render targets
                RenderTarget2D depthTarg;
                RenderTarget2D normalTarg;
                RenderTarget2D lightTarg;

                // Depth/normal effect and light mapping effect
                Effect depthNormalEffect;
                Effect lightingEffect;

                // Point light (sphere) mesh
                Model lightMesh;

                // List of models, lights, and the camera
                public List<CModel> Models { get; set; }
                public List<PPPointLight> Lights { get; set; }
                public FirstPersonCamera Camera { get; set; }

                GraphicsDevice graphicsDevice;
                int viewWidth = 0, viewHeight = 0;

                public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content)
                {
                    viewWidth = GraphicsDevice.Viewport.Width;
                    viewHeight = GraphicsDevice.Viewport.Height;

                    // Create the three render targets
                    depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight,
                        false, SurfaceFormat.Single, DepthFormat.Depth24);
                    normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight,
                        false, SurfaceFormat.Color, DepthFormat.Depth24);
                    lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight,
                        false, SurfaceFormat.Color, DepthFormat.Depth24);

                    // Load effects
                    depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal");
                    lightingEffect = Content.Load<Effect>(@"Effects\PPLight");

                    // Set effect parameters to light mapping effect
                    lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth);
                    lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight);

                    // Load point light mesh and set light mapping effect to it
                    lightMesh = Content.Load<Model>(@"Models\PPLightMesh");
                    lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect;

                    this.graphicsDevice = GraphicsDevice;
                }

                public void Draw()
                {
                    drawDepthNormalMap();
                    drawLightMap();
                    prepareMainPass();
                }

                void drawDepthNormalMap()
                {
                    // Set the render targets to 'slots' 1 and 2
                    graphicsDevice.SetRenderTargets(normalTarg, depthTarg);

                    // Clear the render target to 1 (infinite depth)
                    graphicsDevice.Clear(Color.White);

                    // Draw each model with the PPDepthNormal effect
                    foreach (CModel model in Models)
                    {
                        model.CacheEffects();
                        model.SetModelEffect(depthNormalEffect, false);
                        model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position);
                        model.RestoreEffects();
                    }

                    // Un-set the render targets
                    graphicsDevice.SetRenderTargets(null);
                }

                void drawLightMap()
                {
                    // Set the depth and normal map info to the effect
                    lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg);
                    lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg);

                    // Calculate the view * projection matrix
                    Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix;

                    // Set the inverse of the view * projection matrix to the effect
                    Matrix invViewProjection = Matrix.Invert(viewProjection);
                    lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection);

                    // Set the render target to the graphics device
                    graphicsDevice.SetRenderTarget(lightTarg);

                    // Clear the render target to black (no light)
                    graphicsDevice.Clear(Color.Black);

                    // Set render states to additive (lights will add their influences)
                    graphicsDevice.BlendState = BlendState.Additive;
                    graphicsDevice.DepthStencilState = DepthStencilState.None;

                    foreach (PPPointLight light in Lights)
                    {
                        // Set the light's parameters to the effect
                        light.SetEffectParameters(lightingEffect);

                        // Calculate the world * view * projection matrix and set it to the effect
                        Matrix wvp = (Matrix.CreateScale(light.Attenuation)
                            * Matrix.CreateTranslation(light.Position)) * viewProjection;
                        lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp);

                        // Determine the distance between the light and camera
                        float dist = Vector3.Distance(Camera.Position, light.Position);

                        // If the camera is inside the light-sphere, invert the cull mode
                        // to draw the inside of the sphere instead of the outside
                        if (dist < light.Attenuation)
                            graphicsDevice.RasterizerState = RasterizerState.CullClockwise;

                        // Draw the point-light-sphere
                        lightMesh.Meshes[0].Draw();

                        // Revert the cull mode
                        graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
                    }

                    // Revert the blending and depth render states
                    graphicsDevice.BlendState = BlendState.Opaque;
                    graphicsDevice.DepthStencilState = DepthStencilState.Default;

                    // Un-set the render target
                    graphicsDevice.SetRenderTarget(null);
                }

                void prepareMainPass()
                {
                    foreach (CModel model in Models)
                        foreach (ModelMesh mesh in model.Model.Meshes)
                            foreach (ModelMeshPart part in mesh.MeshParts)
                            {
                                // Set the light map and viewport parameters
                                // to each model's effect
                                if (part.Effect.Parameters["LightTexture"] != null)
                                    part.Effect.Parameters["LightTexture"].SetValue(lightTarg);
                                if (part.Effect.Parameters["viewportWidth"] != null)
                                    part.Effect.Parameters["viewportWidth"].SetValue(viewWidth);
                                if (part.Effect.Parameters["viewportHeight"] != null)
                                    part.Effect.Parameters["viewportHeight"].SetValue(viewHeight);
                            }
                }
            }
        }

    It uses three effects. PPDepthNormal.fx:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float3 Normal : NORMAL0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 Depth : TEXCOORD0;
            float3 Normal : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4x4 viewProjection = mul(View, Projection);
            float4x4 worldViewProjection = mul(World, viewProjection);
            output.Position = mul(input.Position, worldViewProjection);
            output.Normal = mul(input.Normal, World);

            // Position's z and w components correspond to the distance
            // from camera and distance of the far plane respectively
            output.Depth.xy = output.Position.zw;
            return output;
        }

        // We render to two targets simultaneously, so we can't
        // simply return a float4 from the pixel shader
        struct PixelShaderOutput
        {
            float4 Normal : COLOR0;
            float4 Depth : COLOR1;
        };

        PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
        {
            PixelShaderOutput output;

            // Depth is stored as distance from camera / far plane distance
            // to get value between 0 and 1
            output.Depth = input.Depth.x / input.Depth.y;

            // Normal map simply stores X, Y and Z components of normal
            // shifted from (-1 to 1) range to (0 to 1) range
            output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5;

            // Other components must be initialized to compile
            output.Depth.a = 1;
            output.Normal.a = 1;
            return output;
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    PPLight.fx:

        float4x4 WorldViewProjection;
        float4x4 InvViewProjection;

        texture2D DepthTexture;
        texture2D NormalTexture;

        sampler2D depthSampler = sampler_state
        {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state
        {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        float3 LightColor;
        float3 LightPosition;
        float LightAttenuation;

        // Include shared functions
        #include "PPShared.vsi"

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float4 LightPosition : TEXCOORD0;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = mul(input.Position, WorldViewProjection);
            output.LightPosition = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            // Find the pixel coordinates of the input position in the depth
            // and normal textures
            float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // Extract the normal from the normal map and move from
            // 0 to 1 range to -1 to 1 range
            float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2;

            // Perform the lighting calculations for a point light
            float3 lightDirection = normalize(LightPosition - position);
            float lighting = clamp(dot(normal, lightDirection), 0, 1);

            // Attenuate the light to simulate a point light
            float d = distance(LightPosition, position);
            float att = 1 - pow(d / LightAttenuation, 6);

            return float4(LightColor * lighting * att, 1);
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    PPShared.vsi has some common functions:

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position)
        {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel()
        {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

    Finally, from the Game class I set things up in LoadContent with:

        effect = Content.Load<Effect>(@"Effects\PPModel");
        models[0] = new CModel(Content.Load<Model>(@"Models\teapot"),
            new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f,
            Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);
        house = new CModel(Content.Load<Model>(@"Models\house"),
            new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f,
            Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);
        models[0].SetModelEffect(effect, true);
        house.SetModelEffect(effect, true);
        renderer = new PrelightingRenderer(GraphicsDevice, Content);
        renderer.Models = new List<CModel>();
        renderer.Models.Add(house);
        renderer.Models.Add(models[0]);
        renderer.Lights = new List<PPPointLight>()
        {
            new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000)
        };

    ...where PPModel.fx is:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;

        texture2D BasicTexture;
        sampler2D basicTextureSampler = sampler_state
        {
            texture = <BasicTexture>;
            addressU = wrap;
            addressV = wrap;
            minfilter = anisotropic;
            magfilter = anisotropic;
            mipfilter = linear;
        };

        bool TextureEnabled = true;

        texture2D LightTexture;
        sampler2D lightSampler = sampler_state
        {
            texture = <LightTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        float3 AmbientColor = float3(0.15, 0.15, 0.15);
        float3 DiffuseColor;

        #include "PPShared.vsi"

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
            float2 UV : TEXCOORD0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 UV : TEXCOORD0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4x4 worldViewProjection = mul(World, mul(View, Projection));
            output.Position = mul(input.Position, worldViewProjection);
            output.PositionCopy = output.Position;
            output.UV = input.UV;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            // Sample model's texture
            float3 basicTexture = tex2D(basicTextureSampler, input.UV);
            if (!TextureEnabled)
                basicTexture = float4(1, 1, 1, 1);

            // Extract lighting value from light map
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();
            float3 light = tex2D(lightSampler, texCoord);
            light += AmbientColor;

            return float4(basicTexture * DiffuseColor * light, 1);
        }

        technique Technique1
        {
            pass Pass1
            {
                VertexShader = compile vs_1_1 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    I don't have any idea what's wrong. Googling around, I found that this tutorial may have some bugs, but I don't know if the fault is in the light model (the sphere), in a shader, or in the PrelightingRenderer class. Any help is very appreciated, thank you for reading!

  • Android Loading Screen: How do I use a stack to load elements?

    - by tom_mai78101
    I have some problems figuring out what value I should put in this call:

        int value_needed_to_figure_out = X;
        ProgressBar.incrementProgressBy(value_needed_to_figure_out);

    I've been researching loading screens and how to use them. Some examples I've seen implement Thread.sleep() in a Handler.post(new Runnable()) call. To me, that covers most of the concept of using the Handler to update the ProgressBar while pretending to do some heavy crunching work. So I kept looking. I have read this thread: How do I load chunks of data from an asset manager during a loading screen? It said that I can use a stack of things to load, adding a size counter as I add elements to the stack. What does that mean? This is the part where I'm totally stumped. If anyone could provide some hints, I'd gladly appreciate it. Thanks in advance.
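
    The stack suggestion amounts to: push every asset onto a stack, use the stack's size as the ProgressBar maximum (setMax(total)), then incrementProgressBy(1) as each element is popped and loaded. A minimal sketch of that loop, written in C# for brevity (all names are illustrative; on Android the Report call corresponds to the incrementProgressBy(1) done inside the Handler):

        using System;
        using System.Collections.Generic;

        static class LoadingScreen
        {
            // Pop-and-load loop: bar maximum = initial stack size, step = 1.
            public static void Run(Stack<string> assets, Action<string> loadAsset,
                                   IProgress<int> progress)
            {
                int total = assets.Count;    // the "size counter": pass this as the bar's max
                int done = 0;
                while (assets.Count > 0)
                {
                    loadAsset(assets.Pop()); // the real loading work happens here
                    progress.Report(++done); // advance the bar by one of 'total' steps
                }
            }
        }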

  • Draw depth works on WP7 emulator but not device

    - by Luke
    I am making a game on a WP7 device using C# and XNA. I have a system where I always want the object the user is touching to be brought to the top, so every time it is touched I add float.Epsilon to its draw depth (I have it set so that 0 is the back and 1 is the front). On the emulator that all works fine, but when I run it on my device the draw depths seem to be completely random. I am hoping that somebody knows a way to fix this. I have tried doing a Clean & Rebuild and uninstalling from the device, but no luck. My call to spriteBatch.Begin is:

        spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.AlphaBlend);

    and to draw I use:

        spriteBatch.Draw(Texture, new Rectangle((int)X, (int)Y, (int)Width, (int)Height),
            null, Color.White, 0, Vector2.Zero, SpriteEffects.None, mDrawDepth);

    where mDrawDepth is a float value of the draw depth (likely to be a very small number; a small multiple of float.Epsilon). Any help much appreciated, thanks!
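
    A likely culprit: float.Epsilon is the smallest denormalized float (about 1.4e-45), so adding it to any ordinary float such as 0.5f leaves the value bit-for-bit unchanged, and depths that differ only by a few denormals can easily collapse to equal values on the device, leaving the sort order undefined. One alternative, sketched below with illustrative names, is to hand out depths from a counter with a step coarse enough to survive float rounding:

        using System;

        class DepthAllocator
        {
            private int _topCounter = 0;
            private const float DepthStep = 1e-5f; // comfortably above float rounding error

            // Call when an object is touched; assign the result to its draw depth.
            public float NextTopDepth()
            {
                _topCounter++;
                return Math.Min(1f, _topCounter * DepthStep); // 0 = back, 1 = front
            }
        }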

  • Constrained/penalized distance function

    - by sigma.z.1980
    Assume a character is located on an n-by-n grid and has to reach a certain entry on that grid. Its current position is (x1, y1). Also on the same grid is an enemy with coordinates (x2, y2). Each step, the algorithm randomly generates new candidate locations for the hero (if there are k candidates, then there is a k-by-2 matrix of new potential locations). What I need is some distance objective function to compare the candidates. I'm currently using d1 - c * d2, where d1 is the distance to the objective (measured in pixels on each axis), d2 is the distance to the enemy, and c is some coefficient (this is very much like a set-up for a Lagrangian). It's not working very well though. I'd be quite keen to learn what constrained distance functions are used for similar cases. Any suggestions are very much appreciated.
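
    One common alternative, borrowed from potential-field path planning, makes the enemy term repulsive with an influence that dies off with distance, instead of a linear subtraction that acts on every candidate equally. A sketch of one such objective (to be minimized over the k candidates), where epsilon avoids division by zero and the enemy only matters inside a radius r; the exact constants are tuning choices, not from the original post:

        f(p) = d_1(p) + \begin{cases}
            c \left( \dfrac{1}{d_2(p) + \varepsilon} - \dfrac{1}{r} \right) & \text{if } d_2(p) < r \\
            0 & \text{otherwise}
        \end{cases}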

  • What happened to .fx files in D3D11?

    - by bobobobo
    It seems they completely ruined .fx file loading/parsing in D3D11. In D3D9, loading an entire effect file was D3DXCreateEffectFromFile(..), and you got an ID3DXEffect, which had great methods like SetTechnique and BeginPass, making it easy to load and execute a shader with multiple techniques. Is this completely manual now in D3D11? The highest-level functionality I can find is loading a SINGLE shader from an .fx file using D3DX11CompileFromFile. Does anyone know if there's an easier way to load .fx files and choose a technique? With the level of functionality provided in D3D11 now, it seems like you're better off just writing .hlsl files and forgetting about the whole idea of techniques.

  • Loading and drawing materials using Lib3ds

    - by Dfowj
    Hey all, I'm currently using Lib3ds to load models into my C++/OpenGL project. So far, I've been following the model loading tutorial found here. The tutorial gives a good example of how to draw the vertices and normals using VBOs, but so far I've been lost as to how to do the same thing with materials. Could I get an explanation/example of how to both load and draw the materials of my meshes using Lib3ds and OpenGL?

  • What causes player box/world geometry glitches in old games?

    - by Alexander
    I'm looking to understand, and find the terminology for, what causes (or allows) players to interfere with geometry in old games. Famously, id's Quake 3 gave birth to a whole community of people breaking the physics by jumping, sliding, getting stuck, and launching themselves off points in geometry. Some months ago (though I'd be darned if I can find it again!) I saw a conference talk by Bungie's Vic DeLeon and a colleague in which Vic briefly discussed the issues he ran into while attempting to wrap 'collision' objects (please correct my terminology) around environment objects, so that players could appear as though they were walking on organic surfaces while not clipping through them or appearing to walk on air at certain points, due to complexities in the modeling. My aim is to compose a case study essay for university in which I can tackle this issue in games, drawing on early exploits and how techniques have changed both to address such exploits and to aid the gameplay itself. I have three current-day examples of where exploits still exist, but specifically targeting id Software clearly shows they've massively improved their techniques between Q3 and Q4. So in summary, with your help please, I'd like to gain a slightly better understanding of this issue as a whole (its terminology mainly) so I can use terms and ask the right questions within the right contexts. In practical application, I know what it is and I know how to do it, but I don't have the benefit of level design knowledge yet and its technical widgety knick-knack terms =) Many thanks in advance, AJ

  • Detecting collision between ball (circle) and brick (rectangle)?

    - by James Harrison
    OK, so this is for a small uni project. My lecturer provided me with a framework for a simple brick-breaker game. I am currently trying to overcome the problem of detecting a collision between two game objects. One object is always the ball, and the other object can be either a brick or the bat.

        public Collision hitBy(GameObject obj) {
            // obj is the bat or a brick; the current object is the ball

            // if ball hits top of object
            if (topX + width >= obj.topX && topX <= obj.topX + obj.width
                    && topY + height >= obj.topY - 2 && topY + height <= obj.topY) {
                return Collision.HITY;
            }
            // if ball hits left hand side
            else if (topY + height >= obj.topY && topY <= obj.topY + obj.height
                    && topX + width >= obj.topX - 2 && topX + width <= obj.topX) {
                return Collision.HITX;
            }
            else return Collision.NO_HIT;
        }

    So far I have this method to detect the collision; the current object is the ball, and the obj passed into the method is a brick or the bat. At the moment I have only added statements to check for top and left collisions, but I don't want to continue, because I have a few problems. The ball reacts perfectly if it hits the top of the bricks or bat, but when it hits the side it often does not change direction. It seems to happen towards the top of the left-hand edge, but I cannot figure out why. I would like to know if there is another way of approaching this, or if people can see where I'm going wrong. Lastly, Collision.HITX calls another method later on that changes the x direction, and likewise HITY with y.
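
    A more robust alternative to checking each edge with hand-tuned tolerances is the standard circle-vs-rectangle test: clamp the circle's centre to the rectangle to get the closest point, compare the squared distance to the radius, and use the axis of larger offset to decide between HITX and HITY. A sketch in C# (it translates directly to Java; all names are illustrative):

        using System;

        static class CircleRect
        {
            // flipX/flipY tell the caller which velocity component to reverse.
            public static bool Hit(float cx, float cy, float radius,
                                   float rx, float ry, float rw, float rh,
                                   out bool flipX, out bool flipY)
            {
                // Closest point on the rectangle to the circle centre.
                // Math.Clamp is modern .NET; on older runtimes write your own clamp.
                float px = Math.Clamp(cx, rx, rx + rw);
                float py = Math.Clamp(cy, ry, ry + rh);
                float dx = cx - px, dy = cy - py;
                bool hit = dx * dx + dy * dy <= radius * radius;

                // Offset mostly horizontal -> hit a vertical side -> flip X (HITX);
                // otherwise the contact is on top/bottom -> flip Y (HITY).
                flipX = hit && Math.Abs(dx) > Math.Abs(dy);
                flipY = hit && !flipX;
                return hit;
            }
        }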

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first one. When you write a shader, you can simply output less data than you have render targets (like outputting a single float4 when writing to three RTs) and only the first RTs will be affected, but you cannot specify that this data should go anywhere other than COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target becomes zeroes, but I need it to remain untouched in the first target and only change in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs. I write to all the RTs at one point, and afterwards I need to write to only the specified ones. It must be the first texture, as I have a depth buffer linked to it (XNA 4.0). Thanks.
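
    One approach that may help: XNA 4.0's BlendState exposes per-render-target colour write masks (ColorWriteChannels for target 0, then ColorWriteChannels1 through ColorWriteChannels3), so the shader can keep outputting to COLOR0 while the mask stops anything from landing there. A sketch of the idea (note this only masks writes during the draw; it does not change DiscardContents behaviour on the SetRenderTargets call itself):

        // Opaque-style blending, but with all channels of RT0 masked off.
        BlendState skipFirstTarget = new BlendState
        {
            ColorSourceBlend = Blend.One,
            ColorDestinationBlend = Blend.Zero,
            AlphaSourceBlend = Blend.One,
            AlphaDestinationBlend = Blend.Zero,
            ColorWriteChannels = ColorWriteChannels.None, // RT0: leave untouched
            ColorWriteChannels1 = ColorWriteChannels.All, // RT1: write normally
            ColorWriteChannels2 = ColorWriteChannels.All, // RT2: write normally
        };

        GraphicsDevice.BlendState = skipFirstTarget;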

  • Correct way to handle path-finding collision matrix

    - by Xander Lamkins
    Here is an example of me utilizing path finding. The red grid represents the grid utilized by my A* library to locate a distance. This picture is only an example; currently it is all calculated at the 1x1-pixel level (pretty darn laggy). I want to make it so that the farther I click, the less accurate it will be (split the map into larger grid pieces). Edit: as mentioned by Eric, this is not a required game mechanic. I am perfectly fine with any method that allows me to make this accurate while still fast; that isn't really the topic of this question, though. The problem I have is that my current library uses a two-dimensional grid of integers; the higher the number in a cell, the more resistance for that grid tile. Currently I'm setting all unwalkable spots to the integer maximum. Here is an example of what I want: I'm just not sure how I should set up the arrays of integers for the grid. Every time an element is added to or removed from the game, its collision details are updated in the table. Here is a picture of what the map looks like on my collision layer: I probably shouldn't be creating new arrays every time I have to path-find, because my game needs to support tons of path-finding at the same time. Should I have multiple arrays that are all updated when the dynamic elements change (a building is built/a building is destroyed)? The problem I see with this is that it will probably make the creation and destruction of buildings a little more laggy than I would want, because it would be setting the collision grid for each accuracy level. I would also have to add or remove arrays if I ever changed the map size in the future. Should I generate a new array based on an accuracy value every time I need to path-find? The problem I see with this is that it will probably make any path-find just as laggy, because it would have to search through MapWidth x MapHeight cells to shrink it all down. Or is there a better way? I'm certainly not the best at optimizing anything; I've just started dealing with XNA, so I'm not used to optimization code really having much of an effect until now... :( If you need code examples, please ask, and I'll add them as an edit. EDIT: While this doesn't directly relate to the question, I figure the more information I provide, the better. To keep units from moving too inaccurately to the player's desired position, I've decided that once the unit path-finds over to the less accurate grid piece, it will then path-find on a more accurate level to the exact position requested.
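
    If the fine grid stays authoritative, the coarser levels can be kept as cheap aggregate views that are patched incrementally: when a building is placed or destroyed, re-aggregate only the coarse cells covering its footprint, taking the maximum (most resistant) fine value so any blocked fine cell blocks its coarse cell. A sketch of that aggregation step (array layout and names are assumptions, not from the original post):

        using System;

        static class CollisionGrids
        {
            // Rebuild one coarse cell from the fine grid. 'factor' is the
            // downsampling ratio, e.g. 4 fine cells per coarse cell per axis.
            public static void UpdateCoarseCell(int[,] fine, int[,] coarse,
                                                int factor, int cx, int cy)
            {
                int worst = 0;
                for (int y = cy * factor; y < (cy + 1) * factor; y++)
                    for (int x = cx * factor; x < (cx + 1) * factor; x++)
                        worst = Math.Max(worst, fine[x, y]); // max resistance wins
                coarse[cx, cy] = worst;
            }
        }

    On a building change you would call this once per coarse cell overlapping the building's footprint, for each accuracy level, which is far cheaper than rebuilding whole arrays.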

  • Dealing with 2D pixel shaders and SpriteBatches in XNA 4.0 component-object game engine?

    - by DaveStance
    I've got a bit of experience with shaders in general, having implemented a couple of very simple 3D fragment and vertex shaders in OpenGL/WebGL in the past. Currently, I'm working on a 2D game engine in XNA 4.0, and I'm struggling with the process of integrating per-object and full-scene shaders into my current architecture. I'm using a component-entity design, wherein my "Entities" are merely collections of components that are acted upon by discrete system managers (SpatialProvider, SceneProvider, etc.). In the context of this question, my draw call looks something like this: SceneProvider::Draw(GameTime) calls ComponentManager::Draw(GameTime, SpriteBatch), which calls (on each drawable component) DrawnComponent::Draw(GameTime, SpriteBatch). The SpriteBatch is set up, with the default SpriteBatch shader, in the SceneProvider class just before it tells the ComponentManager to start rendering the scene. From my understanding, if a component needs to use a special shader to draw itself, it must do the following when its Draw(GameTime, SpriteBatch) method is invoked:

        public void Draw(GameTime gameTime, SpriteBatch spriteBatch)
        {
            spriteBatch.End();
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                null, null, null, EffectShader, ViewMatrix);

            // Draw things here that are shaded by the "EffectShader."

            spriteBatch.End();
            spriteBatch.Begin(/* same settings that were set by SceneProvider to
                                 ensure the rest of the scene is rendered normally */);
        }

    My question is, having been told that numerous calls to SpriteBatch.Begin() and SpriteBatch.End() within a single frame are terrible for performance, is there a better way to do this? Is there a way to instruct the currently running SpriteBatch to simply change the Effect shader it is using for this particular draw call and then switch back before the function ends?
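
    The effect on a SpriteBatch is fixed at Begin() in XNA 4.0, so it cannot be swapped mid-batch, but the Begin/End churn can be reduced to one pair per distinct effect by letting the manager group drawables by shader before drawing. A sketch of what the manager's draw body could look like (it assumes components expose their effect and no longer call Begin/End themselves; drawables, gameTime and viewMatrix come from the surrounding class):

        using System.Linq;

        // One Begin/End per effect group instead of two per shaded component.
        var groups = drawables
            .GroupBy(d => d.EffectShader)          // null = default SpriteBatch shader
            .OrderBy(g => g.Key == null ? 0 : 1);  // unshaded components first

        foreach (var group in groups)
        {
            spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                null, null, null, group.Key, viewMatrix);
            foreach (var drawable in group)
                drawable.Draw(gameTime, spriteBatch);
            spriteBatch.End();
        }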

  • How would I achieve diablo like 2D isometric projection?

    - by Darestium
    Good day. I am in the process of coming up with an idea for a game, and I would like it to be isometric like Diablo. The problem is that I have no idea how it achieves the effect of height, as on the columns in the following screenshot: http://upload.wikimedia.org/wikipedia/en/thumb/2/20/Diabloscreen.jpg/350px-Diabloscreen.jpg. Whatever the case, I'm sure it is going to be harder to achieve than creating a traditional isometric game, but any ideas regarding the topic would be greatly appreciated.
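
    The height illusion is usually nothing more than the classic 2:1 isometric map-to-screen transform with an elevation offset subtracted from the screen Y, combined with back-to-front draw order so tall sprites (anchored at their base) overlap whatever stands behind them. A sketch of that mapping in C# (the tile sizes are just example values):

        // (x, y) are map cells, z is elevation in height units.
        static (int sx, int sy) MapToScreen(int x, int y, int z,
                                            int tileWidth = 64, int tileHeight = 32)
        {
            int sx = (x - y) * (tileWidth / 2);
            int sy = (x + y) * (tileHeight / 2) - z * (tileHeight / 2);
            return (sx, sy);
        }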

  • How to give an action to a CCArray that contains bubbles (sprites)

    - by prakash s
    I am making a bubble shooter game in cocos2d. I have taken an array, inserted a number of different-coloured bubbles into it, and I am showing them on my game scene. But when I give a move action to that array, it moves down, displays all the bubbles at one position, and they are automatically destroyed. What is the main reason behind this? Please help me. Here is my code:

        -(void)addTarget {
            CGSize winSize = [[CCDirector sharedDirector] winSize];
            //CCSprite *target = [CCSprite spriteWithFile:@"3.png" rect:CGRectMake(0, 0, 256, 256)];
            NSMutableArray *movableSprites = [[NSMutableArray alloc] init];
            NSArray *images = [NSArray arrayWithObjects:@"1.png", @"2.png", @"3.png",
                               @"4.png", @"5.png", @"1.png", @"5.png", @"3.png", nil];

            for (int i = 0; i < images.count; ++i) {
                NSString *image = [images objectAtIndex:i];
                CCSprite *target = [CCSprite spriteWithFile:image];
                float offsetFraction = ((float)(i + 1)) / (images.count + 1);
                //target.position = ccp(winSize.width * offsetFraction, winSize.height / 2);
                target.position = ccp(350 * offsetFraction, 460);
                //[[CCActionManager sharedManager] pauseAllActionsForTarget:target];
                [self addChild:target];
                [movableSprites addObject:target];
                //[target runAction:[CCMoveTo actionWithDuration:20.0 position:ccp(0, 0)]];

                // Note: every sprite below is given the SAME destination (the screen
                // centre), which is why they all converge on one point; the
                // spriteMoveFinished callback then runs for each of them, which is
                // presumably where they get removed.
                id actionMove = [CCMoveTo actionWithDuration:10
                        position:ccp(winSize.width / 2, winSize.height / 2)];
                id actionMoveDone = [CCCallFuncN actionWithTarget:self
                        selector:@selector(spriteMoveFinished:)];
                [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];
            }
        }

    After the move, I want to display all the bubbles at certain positions in the centre of my window.

  • How would you code an AI engine to allow communication in any programming language?

    - by Tokyo Dan
    I developed a two-player iPhone board game. Computer players (AI) can either be local (in the game code) or remote, running on a server. In the second case, both client and server code are written in Lua. On the server, the actual AI code is separate from the TCP socket code and the coroutine code (which spawns a separate instance of the AI for each connecting client). I want to further isolate the AI code so that that part can be a module coded by anyone in their language of choice. How can I do this? What techniques/technology would enable communication between the Lua TCP socket/coroutine code and the AI module?

  • A problem with texture atlasing in Unity

    - by Hamzeh Soboh
    I have the texture below and I need to get rectangular parts from it. I could combine the meshes of different quads to improve performance, but with quads of different tilings this means different materials, and then combining the meshes fails. Can anybody tell me how to use just a part of that texture on a quad, in C#, such that all quads share the same material and combining meshes succeeds? Thanks in advance.
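
    One way that keeps a single shared material is to leave the material's tiling/offset alone and instead bake each quad's atlas rectangle into its mesh UVs; every quad then references the same material, so mesh combining works. A Unity C# sketch of the idea (rect is the atlas region in normalized 0-1 texture coordinates; names are illustrative):

        using UnityEngine;

        public static class AtlasQuads
        {
            // Squeeze a quad's 0..1 UVs into one rectangle of the atlas.
            public static void SetQuadAtlasRect(MeshFilter mf, Rect rect)
            {
                Mesh mesh = mf.mesh; // per-instance copy, so quads stay independent
                Vector2[] uv = mesh.uv;
                for (int i = 0; i < uv.Length; i++)
                {
                    uv[i] = new Vector2(rect.x + uv[i].x * rect.width,
                                        rect.y + uv[i].y * rect.height);
                }
                mesh.uv = uv;
            }
        }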

  • ConsumeStructuredBuffer, what am I doing wrong?

    - by John
    I'm trying to implement the third exercise in chapter 12 of Introduction to 3D Game Programming with DirectX 11, that is: implement a compute shader to calculate the lengths of 64 vectors. Previous exercises ask you to do the same with typed buffers and regular structured buffers, and I had no problems with them. From what I've read, [Consume|Append]StructuredBuffers are bound to the pipeline using UnorderedAccessViews (as long as they use the D3D11_BUFFER_UAV_FLAG_APPEND flag, and the buffers have both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_UNORDERED_ACCESS bind flags). The problem is: my AppendStructuredBuffer works, since I can append data to it and retrieve it from the application to write to a results file, but the ConsumeStructuredBuffer always returns zeroed data. The data is in the buffer, since if I change the UAV to a ShaderResourceView and the HLSL declaration to a StructuredBuffer, it works. I don't know what I am missing:

    - Should I initialize the ConsumeStructuredBuffer on the GPU, or can I do it when I create the buffer (as I am currently doing)?
    - Is it OK to bind the buffer with a UAV as described above? Do I need to bind it as a ShaderResourceView somehow?
    - Maybe I am missing some step?

    This is the declaration of the buffers in the compute shader:

        struct Data   { float3 v; };
        struct Result { float l;  };

        ConsumeStructuredBuffer<Data> gInput;
        AppendStructuredBuffer<Result> gOutput;

    And here is the creation of the buffer and UAV for the input data:

        D3D11_BUFFER_DESC inputDesc;
        inputDesc.Usage = D3D11_USAGE_DEFAULT;
        inputDesc.ByteWidth = sizeof(Data) * mNumElements;
        inputDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
        inputDesc.CPUAccessFlags = 0;
        inputDesc.StructureByteStride = sizeof(Data);
        inputDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;

        D3D11_SUBRESOURCE_DATA vinitData;
        vinitData.pSysMem = &data[0];
        HR(md3dDevice->CreateBuffer(&inputDesc, &vinitData, &mInputBuffer));

        D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
        uavDesc.Format = DXGI_FORMAT_UNKNOWN;
        uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
        uavDesc.Buffer.FirstElement = 0;
        uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND;
        uavDesc.Buffer.NumElements = mNumElements;
        md3dDevice->CreateUnorderedAccessView(mInputBuffer, &uavDesc, &mInputUAV);

    The initial data is an array of Data structs, which contain an XMFLOAT3 with random data. I bind the UAV to the shader using the Effects framework:

        ID3DX11EffectUnorderedAccessViewVariable* Input =
            mFX->GetVariableByName("gInput")->AsUnorderedAccessView();
        Input->SetUnorderedAccessView(uav); // uav is mInputUAV

    Any ideas? Thank you.

  • How to stop an array of sprites in cocos2d?

    - by prakash s
    I am developing a bubble shooter game in cocos2d. How do I stop the sprite movement at the centre of the game scene? I want to shoot a bubble after the movement stops at a certain position. Here is my code:

        -(void)addTarget {
            CGSize winSize = [[CCDirector sharedDirector] winSize];
            NSMutableArray *movableSprites = [[NSMutableArray alloc] init];
            NSArray *images = [NSArray arrayWithObjects:@"1.png", @"2.png", @"3.png",
                               @"4.png", @"5.png", @"6.png", @"7.png", @"8.png", nil];

            for (int i = 0; i < images.count; ++i) {
                int index = (arc4random() % 8) + 1;
                NSString *image = [NSString stringWithFormat:@"%d.png", index];
                CCSprite *target = [CCSprite spriteWithFile:image];
                // generate random number based on size of array (array size is larger than 10)
                float offsetFraction = ((float)(i + 1)) / (images.count + 1);
                target.position = ccp(350 * offsetFraction, 460);
                [self addChild:target];
                [movableSprites addObject:target];

                id actionMove = [CCMoveTo actionWithDuration:10
                        position:ccp(350 * offsetFraction, 100)];
                id actionMoveDone = [CCCallFuncN actionWithTarget:self
                        selector:@selector(spriteMoveFinished:)];
                [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];
            }
        }

    Here the bubbles move from top to bottom, but after 5 rows the bubble movement should stop, so please help me with that logic.
