Search Results

Search found 26774 results on 1071 pages for 'distributed development'.

Page 533 of 1071

  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall avoidance feature. My walls are in 3D and defined using two corner points, P1 and P2. My agents have a velocity, a position, etc. Could you tell me how to implement avoidance for my agents? Vector2D ReynoldsSteeringModel::repulsionFromWalls() { Vector2D force; vector<Wall *> wallsList = walls(); Point2D pos = self()->position(); Vector2D velocity = self()->velocity(); for (unsigned i=0; i<wallsList.size(); i++) { //TODO } return force; } Then I use all the forces returned by my boid functions and apply them to my agent. I just need to know how to do this for my walls. Thanks for your help.
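
    One common approach, sketched below in C# with System.Numerics (the same logic ports directly to the C++ method above; avoidDistance is an assumed tuning value, not something from the original code): for each wall, find the closest point on the segment P1-P2 to the agent, and if it is nearer than avoidDistance, push the agent away from that point, with a stronger push the closer the agent is. The caller can then scale or clamp the result by the agent's maximum force.

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    static Vector2 RepulsionFromWalls(Vector2 position,
                                      IEnumerable<(Vector2 P1, Vector2 P2)> walls,
                                      float avoidDistance)
    {
        Vector2 force = Vector2.Zero;
        foreach (var (p1, p2) in walls)
        {
            // Closest point on the segment p1-p2 to the agent.
            Vector2 seg = p2 - p1;
            float t = Vector2.Dot(position - p1, seg) / Vector2.Dot(seg, seg);
            t = Math.Clamp(t, 0f, 1f);
            Vector2 closest = p1 + seg * t;

            Vector2 away = position - closest;
            float dist = away.Length();
            if (dist > 0f && dist < avoidDistance)
            {
                // The push grows linearly as the agent approaches the wall.
                force += Vector2.Normalize(away) * ((avoidDistance - dist) / avoidDistance);
            }
        }
        return force;
    }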

    Read the article

  • Where must I put .xnb files in a MonoGame project using VS2010?

    - by user23899
    Hello there, my problem is described in "The Content Pipeline" paragraph of http://blogs.msdn.com/b/bobfamiliar/archive/2012/08/07/windows-8-xna-and-monogame-part-3-code-migration-and-windows-8-feature-support.aspx#comments The author describes how to fix it using VS2012 by putting the .xnb files into the \AppX\Content folder, but I use VS2010 and the MonoGame templates for it, and there is no folder like this. So where must I put these assets to run the game correctly?

    Read the article

  • How can I get a 2D texture to rotate like a compass in XNA?

    - by IronGiraffe
    I'm working on a small maze puzzle game and I'm trying to add a compass to make it somewhat easier for the player to find their way around the maze. The problem is: I'm using XNA's draw method to rotate the arrow and I don't really know how to get it to rotate properly. What I need it to do is point towards the exit from the player's position, but I'm not sure how I can do that. So does anyone know how I can do this? Is there a better way to do it?
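
    One way that usually works, sketched here in XNA-style C# under the assumption that the arrow texture points to the right (+X) when unrotated, and with playerPosition, exitPosition, arrowTexture and compassCenter as placeholder names: compute the angle from the player to the exit with Atan2 and pass it as the rotation argument of SpriteBatch.Draw, using the texture's center as the origin so the needle spins in place. If the arrow art points up instead of right, add MathHelper.PiOver2 to the angle.

    // Angle from the player to the exit, in radians.
    float angle = (float)Math.Atan2(exitPosition.Y - playerPosition.Y,
                                    exitPosition.X - playerPosition.X);

    // Rotate around the arrow's center so the compass needle spins in place.
    Vector2 origin = new Vector2(arrowTexture.Width / 2f, arrowTexture.Height / 2f);

    spriteBatch.Draw(arrowTexture, compassCenter, null, Color.White,
                     angle, origin, 1f, SpriteEffects.None, 0f);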

    Read the article

  • Should I be using a game engine?

    - by Kyle
    I'm an experienced programmer, but I'm completely new to making games. I'm thinking of making an iPhone game that is similar to a 2d tower defense type game. In the web programming world, it would be a big waste of time to make a website without using some sort of web framework (eg ruby on rails). Is that the same for making games? Do people mostly use some sort of framework/game engine for making a game? If so, what are the popular ones for iOS?

    Read the article

  • Setting uniform value of a vertex shader for different sprites in a SpriteBatch

    - by midasmax
    I'm using libGDX and currently have a simple shader that does a passthrough, except for randomly shifting the vertex positions. This shift is a vec2 uniform that I set within my code's render() loop. It's declared in my vertex shader as uniform vec2 u_random. I have two different kinds of Sprites -- let's call them SpriteA and SpriteB. Both are drawn within the same SpriteBatch's begin()/end() calls. Prior to drawing each sprite in my scene, I check the type of the sprite. If the sprite is an instance of SpriteA, I set the uniform u_random value to Vector2.Zero, meaning that I don't want any vertex changes for it. If the sprite is an instance of SpriteB, I set the uniform u_random to Vector2(MathUtils.random(), MathUtils.random()). The expected behavior was that all the SpriteA objects in my scene won't experience any jittering, while all SpriteB objects would be jittering about their positions. However, what I'm experiencing is that both SpriteA and SpriteB are jittering, leading me to believe that the u_random uniform is not actually being set per Sprite, and being applied to all sprites. What is the reason for this? And how can I fix this such that the vertex shader correctly accepts the uniform value set to affect each sprite individually? passthrough.vsh attribute vec4 a_color; attribute vec3 a_position; attribute vec2 a_texCoord0; uniform mat4 u_projTrans; uniform vec2 u_random; varying vec4 v_color; varying vec2 v_texCoord; void main() { v_color = a_color; v_texCoord = a_texCoord0; vec3 temp_position = vec3( a_position.x + u_random.x, a_position.y + u_random.y, a_position.z); gl_Position = u_projTrans * vec4(temp_position, 1.0); } Java Code this.batch.begin(); this.batch.setShader(shader); for (Sprite sprite : sprites) { Vector2 v = Vector2.Zero; if (sprite instanceof SpriteB) { v.x = MathUtils.random(-1, 1); v.y = MathUtils.random(-1, 1); } shader.setUniformf("u_random", v); sprite.draw(this.batch); } this.batch.end();

    Read the article

  • Jump pads problem

    - by Pasquale Sada
    I'm trying to make a character jump onto a landing pad that sits above him. Here is the formula I've used (everything is pretty much self-explanatory, except maybe character_MaxForce, which is the total force the character can jump with): deltaPosition = target - character_position; sqrtTerm = Sqrt(2*-gravity.y * deltaPosition.y + MaxYVelocity* character_MaxForce); time = (MaxYVelocity-sqrtTerm) /gravity.y; speedSq = jumpVelocity.x* jumpVelocity.x + jumpVelocity.z *jumpVelocity.z; if speedSq < (character_MaxForce * character_MaxForce) we have the right time so we can store the value jumpVelocity.x = deltaPosition.x / time; jumpVelocity.z = deltaPosition.z / time; otherwise we try the other solution time = (MaxYVelocity+sqrtTerm) /gravity.y; and then store it jumpVelocity.x = deltaPosition.x / time; jumpVelocity.z = deltaPosition.z / time; jumpVelocity.y = MaxYVelocity; rigidbody_velocity = jumpVelocity; The problem is that the character jumps away from the landing pad, or sometimes he jumps too far and never hits the landing pad.
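
    For comparison, here is a hedged sketch (C#, System.Numerics) of the usual closed-form solution: fix the vertical launch speed at the character's maximum, solve the vertical motion equation for the flight time, then derive the horizontal velocity from the remaining displacement. The parameter names (maxYVelocity, gravityY, maxHorizontalSpeed) are assumptions rather than the fields above, and gravityY is expected to be negative. If this returns null even at the maximum launch speed, the pad is simply not reachable in one jump.

    using System;
    using System.Numerics;

    static Vector3? SolveJump(Vector3 start, Vector3 target,
                              float maxYVelocity, float gravityY, float maxHorizontalSpeed)
    {
        Vector3 delta = target - start;

        // Vertical motion: delta.Y = vY*t + 0.5*g*t^2, solved for t with the quadratic formula.
        float disc = maxYVelocity * maxYVelocity + 2f * gravityY * delta.Y;
        if (disc < 0f)
            return null;                    // the pad is too high for this launch speed

        float root = (float)Math.Sqrt(disc);
        float t1 = (-maxYVelocity + root) / gravityY;
        float t2 = (-maxYVelocity - root) / gravityY;
        float time = Math.Max(t1, t2);      // the later root is the descending (landing) branch
        if (time <= 0f)
            return null;

        var jumpVelocity = new Vector3(delta.X / time, maxYVelocity, delta.Z / time);

        float horizontalSq = jumpVelocity.X * jumpVelocity.X + jumpVelocity.Z * jumpVelocity.Z;
        if (horizontalSq > maxHorizontalSpeed * maxHorizontalSpeed)
            return null;                    // can't cover the horizontal distance in that time

        return jumpVelocity;                // assign this directly to the rigidbody's velocity
    }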

    Read the article

  • Circular class dependency

    - by shad0w
    Is it bad design to have 2 classes which need each other? I'm writing a small game in which I have a GameEngine class which has got a few GameState objects. To access several rendering methods, these GameState objects also need to know the GameEngine class - so it's a circular dependency. Would you call this bad design? I am just asking, because I am not quite sure and at this time I am still able to refactor these things.

    Read the article

  • Text on a model

    - by alecnash
    I am trying to put some text on a Model, and I want it to be dynamic. I did some research and came up with drawing the text to a texture and then setting it on the model. I use something like this: public static Texture2D SpriteFontTextToTexture(SpriteFont font, string text, Color backgroundColor, Color textColor) { Size = font.MeasureString(text); RenderTarget2D renderTarget = new RenderTarget2D(GraphicsDevice, (int)Size.X, (int)Size.Y); GraphicsDevice.SetRenderTarget(renderTarget); GraphicsDevice.Clear(Color.Transparent); Spritbatch.Begin(); //have to redo the ColorTexture Spritbatch.Draw(ColorTexture.Create(GraphicsDevice, 1024, 1024, backgroundColor), Vector2.Zero, Color.White); Spritbatch.DrawString(font, text, Vector2.Zero, textColor); Spritbatch.End(); GraphicsDevice.SetRenderTarget(null); return renderTarget; } When I was working with primitives instead of models, everything worked fine because I set the texture exactly where I wanted, but it doesn't with the model (a RoundedRect 3D button). Is there a way to have the text centered on only one side?

    Read the article

  • Light on every model and not in the whole scene

    - by alecnash
    I am using a custom shader and try to pass the effect on my Models like that: foreach (ModelMesh mesh in Model.Meshes) { foreach (ModelMeshPart part in mesh.MeshParts) { part.Effect = effect; } mesh.Draw(); } My only issue is that every Model now has its own light source in it. Why is this happening and is this a problem of my shader? Edit: These are the parameters passed to the shader: private void Get_lambertEffect() { if (_lambertEffect == null) _lambertEffect = Engine.LambertEffect; //Lambert technique (LambertWithShadows, LambertWithShadows2x2PCF, LambertWithShadows3x3PCF) _lambertEffect.CurrentTechnique = _lambertEffect.Techniques["LambertWithShadows3x3PCF"]; _lambertEffect.Parameters["texelSize"].SetValue(Engine.ShadowMap.TexelSize); //ShadowMap parameters _lambertEffect.Parameters["lightViewProjection"].SetValue(Engine.ShadowMap.LightViewProjectionMatrix); _lambertEffect.Parameters["textureScaleBias"].SetValue(Engine.ShadowMap.TextureScaleBiasMatrix); _lambertEffect.Parameters["depthBias"].SetValue(Engine.ShadowMap.DepthBias); _lambertEffect.Parameters["shadowMap"].SetValue(Engine.ShadowMap.ShadowMapTexture); //Camera view and projection parameters _lambertEffect.Parameters["view"].SetValue(Engine._camera.ViewMatrix); _lambertEffect.Parameters["projection"].SetValue(Engine._camera.ProjectionMatrix); _lambertEffect.Parameters["world"].SetValue( Matrix.CreateScale(Size) * world ); //Light and color _lambertEffect.Parameters["lightDir"].SetValue(Engine._sourceLight.Direction); _lambertEffect.Parameters["lightColor"].SetValue(Engine._sourceLight.Color); _lambertEffect.Parameters["materialAmbient"].SetValue(Engine.Material.Ambient); _lambertEffect.Parameters["materialDiffuse"].SetValue(Engine.Material.Diffuse); _lambertEffect.Parameters["colorMap"].SetValue(ColorTexture.Create(Engine.GraphicsDevice, Color.Red)); }

    Read the article

  • Java 2D World question

    - by Munkybunky
    I have a 2D world background made up of a grid of graphics, which I display on screen with a viewport (800x600), and it all works. My question: I have the following code to convert the mouse co-ordinates to world co-ordinates, then world co-ordinates to grid co-ordinates, then grid co-ordinates to screen co-ordinates. //Add camerax to mouse screen co-ords to convert to world co-ords. int cursorx_world=(int)camerax+(int)GameInput.mousex; int cursorx_grid=(int)cursorx_world/blocksize; // World Co-ords / gridsize give grid co-ords int cursorx_screen=-(int)camerax+(cursorx_grid*blocksize); So is there any way I can convert straight from mouse screen co-ords to the (grid-snapped) screen co-ordinates?
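
    If the goal is just to skip the intermediate variables, the three steps collapse algebraically into a single expression. A one-line sketch in C# (integer division behaves the same as in the Java code above, and cameraX, mouseX and blockSize are the same quantities under assumed names):

    // Mouse -> world -> grid -> screen, folded into one line; integer division by blockSize does the snapping.
    int cursorScreenX = (((int)cameraX + (int)mouseX) / blockSize) * blockSize - (int)cameraX;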

    Read the article

  • How to calculate direction from initial point and another point?

    - by Dvole
    I'm making a simple game where I shoot things from a certain point on screen (A). I tap the screen and shoot the projectile from the initial point (A) to the tap point (B). But I want the projectile to keep moving along the same path and fly out of the bounds of the screen. How do I calculate a point that is on the same line as these two points, but further away? This is simple math, but I can't figure it out.
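
    A sketch of the standard answer in C# with System.Numerics (any 2D vector type with subtraction and normalization works the same way): normalize the direction from A to B and step along it by any distance larger than the screen, or equivalently evaluate A + t*(B - A) with t > 1. Here a, b and offscreenDistance are placeholder names.

    using System.Numerics;

    // a = launch point, b = tap point, offscreenDistance = anything larger than the screen diagonal.
    Vector2 direction = Vector2.Normalize(b - a);
    Vector2 farTarget = a + direction * offscreenDistance;

    // Equivalent parametric form: any t > 1 lies past b on the same line.
    // Vector2 farTarget = a + (b - a) * t;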

    Read the article

  • In 3D camera math, calculate what Z depth is pixel unity for a given FOV

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum to where at a specific z depth pixels drawn are 1 to 1 with my source textures. So 1 pixel in my texture is 1 pixel on the screen. For 2d games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example if my sprite is a quad 32x32 pixels. The quad size is 3.2 units wide and tall. And the texcoords are 32 / the size of the texture wide and tall. Then the frustum is: matrixFrustum(-(float)backingWidth/frustumScale,(float)backingWidth/frustumScale, -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale, 40, 1000, mProjection); Where frustumScale is 800 for a retina screen. Then at a distance of 800 from camera the sprite is pixel for pixel the same as photoshop. For 3d games sometimes I still want to be able to do this. But depending on the scene I sometimes need the FOV to be different things. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this my mProjection is set using: matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection); With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity. And at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspecitve code: void matrixPerspective(float angle, float near, float far, float aspect, mat4 m) { //float size = near * tanf(angle / 360.0 * M_PI); float size = near * tanf(degreesToRadians(angle) / 2.0); float left = -size, right = size, bottom = -size / aspect, top = size / aspect; // Unused values in perspective formula. m[1] = m[2] = m[3] = m[4] = 0; m[6] = m[7] = m[12] = m[13] = m[15] = 0; // Perspective formula. m[0] = 2 * near / (right - left); m[5] = 2 * near / (top - bottom); m[8] = (right + left) / (right - left); m[9] = (top + bottom) / (top - bottom); m[10] = -(far + near) / (far - near); m[11] = -1; m[14] = -(2 * far * near) / (far - near); } And my mView is set using: lookAtMatrix(cameraPos, camLookAt, camUpVector, mView); * UPDATE * I'm going to leave this here in case anyone has a different solution, can explain how they do it, or why this works. This is what I figured out. In my system I use a 10th scale unit to pixels on non-retina displays and a 20th scale on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width I divide by 20 to get the OpenGL unit width. Then divide that by 2 to get the left and right unit position. Something 32 units wide centered on the screen goes from -16 to +16. Believe it or not I have an excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary thing I made up to do .1 units = 1 non-retina pixel or 2 retina pixels. I could have made it .01 units = 2 pixels and someday I might switch to that. But for now it's the other. So the width of the screen in units is 32.0, and that means the left most pixel is at -16.0 and the right most is at 16.0. After messing a bit I figured out that if I take the [0] value of an identity modelViewProjection matrix and multiply it by 16 I get the depth required to get 1:1 pixels. I don't know why. I don't know if the 16 is related to the screen size or just a lucky guess. 
But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values and the object stays steady on screen with 1:1 pixels. So now I'm just calculating the unityDepth that way. If someone gives me a better answer I'll checkmark it.
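
    A hedged explanation of why multiplying m[0] by 16 works, assuming the matrixPerspective above (where the angle effectively acts as the horizontal field of view, since size drives left and right): at depth z the frustum's half-width in world units is z*tan(fov/2), and pixel unity needs that half-width to equal half the screen width expressed in your units, which is 16 here because the screen is 32 units wide. In that projection m[0] = 2*near/(right-left) = 1/tan(fov/2), so 16*m[0] is exactly the same number, and the 16 is the half screen width in units rather than a lucky guess. A small C# helper illustrating the formula (names are placeholders):

    using System;

    static float PixelUnityDepth(float fovDegrees, float halfScreenWidthInUnits)
    {
        // Depth at which half the frustum width equals half the screen width in world units.
        double halfFovRadians = fovDegrees * Math.PI / 360.0;
        return (float)(halfScreenWidthInUnits / Math.Tan(halfFovRadians));
    }

    // PixelUnityDepth(45f, 16f) ≈ 38.6 and PixelUnityDepth(30f, 16f) ≈ 59.7,
    // matching the values found by trial above.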

    Read the article

  • Why won't LibGDX's main class initialize on the Android launcher?

    - by BluFire
    So I was searching for different ways of programming that could suit me and came across LibGDX. Naturally I looked at the tutorial. As I was doing it, I followed the steps word for word, except for naming the classes. In the end, I was able to create the desktop launcher for the game but not the Android launcher. The error I get is: Cannot instantiate the type Game (Game is the name of the class). I got the tutorial from http://steigert.blogspot.com.au/2012/02/1-libgdx-tutorial-introduction.html The link in the tutorial is the original, but it uses JOGL instead of LWJGL.

    Read the article

  • Unity, libGDX, or something else to develop my first game for Android?

    - by capcom
    I want to start by saying that I absolutely love Unity (even more when I team it up with Blender). I really want to start developing games for Android, but it seems like Unity poses way too many roadblocks in terms of which devices it supports (and even if it does support them, it doesn't work well on all of them). I've been looking around for alternatives, and found something called libgdx. Well, it's nothing like Unity unfortunately, but at least it seems like I may be able to reach a larger audience in the market. I'd like to start by making 2D games, but with 3D graphics (say, imported from Blender). I can do this very easily in Unity, and it seems like it should be alright with libgdx too. But I really want to know if ditching Unity is a smart idea, considering how comfortable I am with it already, and how much I like it. Finally, is libgdx something you would recommend considering my requirements/situation? BTW, I am quite familiar with Eclipse too. Many thanks. Feel free to request further details.

    Read the article

  • Snapping an angle to the closest cardinal direction

    - by Josh E
    I'm developing a 2D sprite-based game, and I'm finding that I'm having trouble with making the sprites rotate correctly. In a nutshell, I've got spritesheets for each of 5 directions (the other 3 come from just flipping the sprite horizontally), and I need to clamp the velocity/rotation of the sprite to one of those directions. My sprite class has a pre-computed list of radians corresponding to the cardinal directions like this: protected readonly List<float> CardinalDirections = new List<float> { MathHelper.PiOver4, MathHelper.PiOver2, MathHelper.PiOver2 + MathHelper.PiOver4, MathHelper.Pi, -MathHelper.PiOver4, -MathHelper.PiOver2, -MathHelper.PiOver2 + -MathHelper.PiOver4, -MathHelper.Pi, }; Here's the positional update code: if (velocity == Vector2.Zero) return; var rot = ((float)Math.Atan2(velocity.Y, velocity.X)); TurretRotation = SnapPositionToGrid(rot); var snappedX = (float)Math.Cos(TurretRotation); var snappedY = (float)Math.Sin(TurretRotation); var rotVector = new Vector2(snappedX, snappedY); velocity *= rotVector; //...snip private float SnapPositionToGrid(float rotationToSnap) { if (rotationToSnap == 0) return 0.0f; var targetRotation = CardinalDirections.First(x => (x - rotationToSnap >= -0.01 && x - rotationToSnap <= 0.01)); return (float)Math.Round(targetRotation, 3); } What am I doing wrong here? I know that the SnapPositionToGrid method is far from what it needs to be - the .First(..) call is on purpose so that it throws on no match, but I have no idea how I would go about accomplishing this, and unfortunately, Google hasn't helped too much either. Am I thinking about this the wrong way, or is the answer staring at me in the face?
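
    One possible simplification, offered as an alternative to the lookup list rather than a fix for it: since Math.Atan2 returns an angle in (-Pi, Pi], you can snap to the nearest of the eight directions by rounding to the nearest multiple of MathHelper.PiOver4. That removes the exact-match First() call, which throws whenever the angle is not within 0.01 of a listed value (almost always). Note that the usage lines below also rebuild the velocity from the snapped direction and the original speed, instead of the component-wise multiply in the update code above, which may be part of the problem.

    // Snap any angle (radians) to the nearest multiple of 45 degrees.
    private static float SnapToCardinal(float angle)
    {
        const float step = MathHelper.PiOver4;
        return (float)Math.Round(angle / step) * step;
    }

    // Usage in the positional update:
    // var rot = (float)Math.Atan2(velocity.Y, velocity.X);
    // var snapped = SnapToCardinal(rot);
    // velocity = new Vector2((float)Math.Cos(snapped), (float)Math.Sin(snapped)) * velocity.Length();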

    Read the article

  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what directx is complaining about. I know this error normally means you put float3 instead of a float4 or something like that, but I've checked over and over and as far as I can tell, everything matches. This is the full error message: D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The input stage requires Semantic/Index (COLOR,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ] This is the vertex shader's input signature as seen in PIX: // Input signature: // // Name Index Mask Register SysValue Format Used // -------------------- ----- ------ -------- -------- ------ ------ // POSITION 0 xyz 0 NONE float xyz // NORMAL 0 xyz 1 NONE float // COLOR 0 xyzw 2 NONE float The HLSL structure looks like this: struct VertexShaderInput { float3 Position : POSITION0; float3 Normal : NORMAL0; float4 Color: COLOR0; }; The input layout, from PIX, is: The C# structure holding the data looks like this: [StructLayout(LayoutKind.Sequential)] public struct PositionColored { public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored)); public static InputElement[] InputElements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0), new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0) }; Vector3 position; Vector3 normal; Vector4 color; #region Properties ... #endregion public PositionColored(Vector3 position, Vector3 normal, Vector4 color) { this.position = position; this.normal = normal; this.color = color; } public override string ToString() { StringBuilder sb = new StringBuilder(base.ToString()); sb.Append(" Position="); sb.Append(position); sb.Append(" Color="); sb.Append(Color); return sb.ToString(); } } SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?

    Read the article

  • How do I create a big multiplayer world in UDK?

    - by Dorpe
    I want to create a big multiplayer world in UDK and I'm having a few difficulties. I created the biggest terrain possible, but then any terrain-related action I do takes forever. However, I've seen videos of people making the same size terrain and working without a problem. My PC is strong enough, so maybe someone can tell me what I'm doing wrong. I want to make it even bigger than the biggest terrain size, so I was thinking of doing level streaming, but then I read that streaming works server-side, which means that if I have a player on every terrain, all terrains will still be loaded, and I want to save as much memory as possible so it will work well online. Thanks for any help you can give.

    Read the article

  • Game engine done, ideas missing

    - by Thoms
    I read in many places how people have these GREAT ideas but are not able to program them themselves. I have quite the opposite problem. I have developed a game engine and a level editor, embedded a Lua scripting language, and even made a wrapper for Android, and it all works well. But I have no good idea how to proceed with actual levels; I have no good ideas. The engine itself is very generic and can be used for many game concepts, but I just cannot think of anything useful. Do you have any thoughts on how to proceed? Where should I seek ideas? Who should I ask? I am sorry if this question is a duplicate.

    Read the article

  • Character equipment combinations

    - by JimFing
    I'm developing a 2d isometric game (typical Tolkien RPG) and wondering how to handle character/equipment combinations. So for example, the player wears leather boots with chain-mail and a wooden shield and a sword - but then picks up plate-armour instead of chain-mail. I'm using Blender3D to create objects, environments and characters in 3D, then a script runs to render all 3D meshes into 2D orthographic tile maps. So I can use this script to create all the combinations of character equipment for me, but there would be an explosion in terms of the combinations required.
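
    One technique commonly used for exactly this, offered as a suggestion rather than a requirement of the pipeline described above: instead of pre-rendering every combination, render each equipment piece to its own sprite sheet with the same frame layout as the body, then composite the layers per frame at draw time, so combinations are assembled at runtime instead of at export time. The Blender script then only has to render each piece once rather than every combination. A minimal XNA-style C# sketch (all names are placeholders):

    // Draw the character as stacked layers: body first, then each equipped item.
    // Every layer's sprite sheet shares the body sheet's frame rectangles.
    void DrawCharacter(SpriteBatch spriteBatch, Vector2 position,
                       Rectangle frame, IList<Texture2D> equipmentLayers)
    {
        spriteBatch.Draw(bodyTexture, position, frame, Color.White);
        foreach (Texture2D layer in equipmentLayers)   // boots, chain-mail or plate, shield, sword...
            spriteBatch.Draw(layer, position, frame, Color.White);
    }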

    Read the article

  • How can I get textures on edge of walls like in Super Metroid and Aquaria?

    - by meds
    Games like Super Metroid and Aquaria present the terrain with the outward-facing parts having rocks and other detail, while deeper behind them (i.e. underground) there's different detail or just black. I would like to do something similar using polygons. Terrain in my current level is created as a set of overlapping square boxes. I'm not sure if this rendering method will work with such a system for creating terrain, but if anyone has ideas I'd love to hear them. Otherwise I'd like to know how I should rewrite the terrain rendering system so it can actually draw terrain in this manner...

    Read the article

  • Do I need the 'w' component in my Vector class?

    - by bobobobo
    Assume you're writing matrix code that handles rotation, translation etc for 3d space. Now the transformation matrices have to be 4x4 to fit the translation component in. However, you don't actually need to store a w component in the vector do you? Even in perspective division, you can simply compute and store w outside of the vector, and perspective divide before returning from the method. For example: // post multiply vec2=matrix*vector Vector operator*( const Matrix & a, const Vector& v ) { Vector r ; // do matrix mult r.x = a._11*v.x + a._12*v.y ... real w = a._41*v.x + a._42*v.y ... // perspective divide r /= w ; return r ; } Is there a point in storing w in the Vector class?
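
    One consideration before dropping the stored w, offered as a hedged note rather than a correction: an explicit w is also how a 4x4 transform distinguishes points (w = 1, affected by translation) from directions such as normals or velocities (w = 0, not translated). If the Vector class only ever holds points, that distinction can live in two separate transform functions instead. A small illustration in C# with System.Numerics, which uses the row-vector convention (v * M) rather than the post-multiplied convention above:

    using System.Numerics;

    // Transform a point: implicit w = 1, so the translation row (M41..M43) is applied.
    static Vector3 TransformPoint(Matrix4x4 m, Vector3 p) =>
        new Vector3(
            p.X * m.M11 + p.Y * m.M21 + p.Z * m.M31 + m.M41,
            p.X * m.M12 + p.Y * m.M22 + p.Z * m.M32 + m.M42,
            p.X * m.M13 + p.Y * m.M23 + p.Z * m.M33 + m.M43);

    // Transform a direction: implicit w = 0, so translation is skipped.
    static Vector3 TransformDirection(Matrix4x4 m, Vector3 d) =>
        new Vector3(
            d.X * m.M11 + d.Y * m.M21 + d.Z * m.M31,
            d.X * m.M12 + d.Y * m.M22 + d.Z * m.M32,
            d.X * m.M13 + d.Y * m.M23 + d.Z * m.M33);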

    Read the article

  • Climbing boxes in Box2D

    - by Rothens
    I've just stepped into the world of Box2D with libGDX. I've already made a stack of boxes: they are dropped randomly on top of each other. What I'd like to achieve is a character that can freely climb on the boxes (he can grip the boxes anywhere, not just on the side or top of a box), but whose weight affects the stack as well, so the boxes can fall down. My google-fu failed me... Is there any way to make this possible?

    Read the article

  • Efficient skeletal animation

    - by Will
    I am looking at adopting a skeletal animation format (as prompted here) for an RTS game. The individual representation of each model on-screen will be small but there will be lots of them! In skeletal animation e.g. MD5 files, each individual vertex can be attached to an arbitrary number of joints. How can you efficiently support this whilst doing the interpolation in GLSL? Or do engines do their animation on the CPU? Or do engines set arbitrary limits on maximum joints per vertex and invoke nop multiplies for those joints that don't use the maximum number? Are there games that use skeletal animation in an RTS-like setting thus proving that on integrated graphics cards I have nothing to worry about in going the bones route?

    Read the article

  • What is the recommended library for using Lua from C++?

    - by DevilWithin
    I am currently planning how to integrate Lua scripting into my 2D game engine, and I would like to go straight to the most suitable solution for exposing C++ classes and objects. I've read this (in case it helps): http://lua-users.org/wiki/BindingCodeToLua If you have a better scripting language to recommend, go for it ;D All help is welcome; I need to pick the best solution before I start implementing. Thanks

    Read the article

  • How to handle jumping up a slope in a runner game?

    - by you786
    In a 2D endless runner, what should happen when the player is running "too fast" up a slope and jumps? In the "normal" case, if he is moving to the right slowly enough, he will jump upwards and land on the flat part of the surface at the top of the slope. However, if he is moving too fast, the jump will have no effect, as his forward motion will bring him back in contact with the slope before he can get high enough to pass over it. When the speed is sufficiently high, there will effectively be no jump. Are there any known ways to solve this issue? I know it's physically correct*, but are there techniques that other games use to overcome this in a reasonable manner? As a last resort I'll have to just remove all slopes that are too slanted. *If you constrain the player to never jumping backwards.

    Read the article
