Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.


  • Google Cloud Messaging (GCM) for turn-based mobile multiplayer server?

    - by Chris
    I'm designing a multiplayer turn-based game for Android (over 3G). I'm thinking the clients will send data to a central server over a socket or HTTP, and receive data via GCM push messaging. I'd like to know if anyone has practical experience with GCM for pushing 'real-time' turn data to game clients. What kind of performance and limitations does it have? I'm also considering using a RESTful approach with GAE or Amazon EC2. Any advice about these approaches is appreciated.
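
    For readers weighing the options: in this setup the server-to-client push is just an HTTP POST to GCM's connection-server endpoint, so the "central server" can stay a plain web service. Below is a minimal C# sketch of that push, assuming the legacy GCM HTTP API; the API key, registration ID and payload shape are placeholders, not values from the question.

        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        // Minimal sketch: the game server pushes the latest turn to the opponent's
        // device through GCM's legacy HTTP endpoint. ApiKey/registrationId are placeholders.
        class GcmTurnPusher
        {
            const string GcmUrl = "https://android.googleapis.com/gcm/send";
            readonly HttpClient http = new HttpClient();
            readonly string apiKey;

            public GcmTurnPusher(string apiKey) { this.apiKey = apiKey; }

            public async Task PushTurnAsync(string registrationId, string turnJson)
            {
                // GCM payloads are small (roughly 4 KB), so send only the turn delta,
                // not the whole game state.
                string body = "{\"registration_ids\":[\"" + registrationId + "\"]," +
                              "\"data\":{\"turn\":" + turnJson + "}}";
                var request = new HttpRequestMessage(HttpMethod.Post, GcmUrl);
                request.Headers.TryAddWithoutValidation("Authorization", "key=" + apiKey);
                request.Content = new StringContent(body, Encoding.UTF8, "application/json");
                HttpResponseMessage response = await http.SendAsync(request);
                response.EnsureSuccessStatusCode();
            }
        }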

    Read the article

  • How do I create a camera?

    - by Morphex
    I am trying to create a generic camera class for a game engine that works for different types of cameras (orbital, 6DoF, FPS), but I have no idea how to go about it. I have read about quaternions and matrices, but I do not understand how to implement them. In particular, it seems you need "Up", "Forward" and "Right" vectors, a quaternion for rotations, and View and Projection matrices. For example, an FPS camera only rotates around the world Y axis and the camera's Right axis; a 6DoF camera always rotates around its own axes, and an orbital camera just translates out to a set distance and always looks at a fixed target point. The concepts are there; implementing them is not trivial for me. SharpDX already seems to have matrices and quaternions implemented, but I don't know how to use them to create a camera. Can anyone point out what I am missing, or what I got wrong? I would really appreciate a tutorial, some piece of code, or just a plain explanation of the concepts.
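
    One common way to tie those pieces together is to store just a position and an orientation quaternion, derive Forward/Right/Up from the quaternion, and rebuild the View matrix from them each frame; the camera types then differ only in how they update the quaternion and position. A minimal sketch using the XNA math types (SharpDX has close equivalents, though the method names differ); the class layout is illustrative, not a complete engine camera.

        using Microsoft.Xna.Framework;

        // Minimal sketch of a quaternion-based camera: Forward/Right/Up are derived
        // from the orientation, and View/Projection are rebuilt on demand.
        class Camera
        {
            public Vector3 Position;
            public Quaternion Orientation = Quaternion.Identity;
            float yaw, pitch; // accumulated FPS angles

            public Vector3 Forward { get { return Vector3.Transform(Vector3.Forward, Orientation); } }
            public Vector3 Right   { get { return Vector3.Transform(Vector3.Right,   Orientation); } }
            public Vector3 Up      { get { return Vector3.Transform(Vector3.Up,      Orientation); } }

            // FPS: yaw around the world Y axis, pitch around the camera's right axis, no roll.
            public void RotateFps(float yawDelta, float pitchDelta)
            {
                yaw += yawDelta;
                pitch = MathHelper.Clamp(pitch + pitchDelta, -MathHelper.PiOver2, MathHelper.PiOver2);
                Orientation = Quaternion.CreateFromYawPitchRoll(yaw, pitch, 0f);
            }

            // Orbital: sit at a fixed distance from a target and always look toward it.
            public void Orbit(Vector3 target, float distance)
            {
                Position = target - Forward * distance;
            }

            public Matrix GetView()
            {
                return Matrix.CreateLookAt(Position, Position + Forward, Up);
            }

            public Matrix GetProjection(float aspectRatio)
            {
                return Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspectRatio, 0.1f, 1000f);
            }
        }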

    Read the article

  • Move a 2D square on the Y axis on Android GLES2

    - by Dan
    I am trying to create a simple game for Android. To start, I am trying to make the square move down the Y axis, but the way I am doing it doesn't move the square at all, and I can't find any tutorials for GLES20. The onDrawFrame function in the render class updates the user's position based on acceleration due to gravity, gets the transform matrix from the user class (which is used to move the square down), and then the program draws it. All that happens is that the square is drawn; no motion happens. public void onDrawFrame(GL10 gl) { user.update(0.0, phy.AccelerationDewToGravity); GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT); // Re draws black background GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false, 12, user.SquareVB);//triangleVB); GLES20.glEnableVertexAttribArray(maPositionHandle); GLES20.glUniformMatrix4fv(maPositionHandle, 1, false, user.getTransformMatrix(), 0); GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4); } The update function in the player class is: public void update(double vh, double vv) { Vh += vh; // Increase horizontal velocity Vv += vv; // Increase vertical velocity //Matrix.translateM(mMMatrix, 0, (int)Vh, (int)Vv, 0); Matrix.translateM(mMMatrix, 0, mMMatrix, 0, (float)Vh, (float)Vv, 0); }

    Read the article

  • Is there a good way to get pixel-perfect collision detection in XNA?

    - by ashes999
    Is there a well-known way (or perhaps a reusable bit of code) for pixel-perfect collision detection in XNA? I assume this would also use polygons (boxes/triangles/circles) for a first-pass quick test for collisions, and if that test indicated a collision, it would then search for a per-pixel collision. This can be complicated, because we have to account for scale, rotation, and transparency. WARNING: If you're using the sample code from the link in the answer below, be aware that the scaling of the matrix is commented out for good reason. You don't need to uncomment it to get scaling to work.
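
    For reference, the broad-phase-then-per-pixel idea looks roughly like the sketch below, which follows the same approach as the XNA transformed per-pixel collision sample referred to above: map each opaque pixel of sprite A through A's world transform and the inverse of B's, then test the alpha of the pixel it lands on. Helper and parameter names here are illustrative; the color arrays come from Texture2D.GetData<Color>().

        using System;
        using Microsoft.Xna.Framework;

        // Minimal sketch of the per-pixel phase. transformA/transformB are the full
        // world transforms of each sprite (origin, rotation, scale, translation), so
        // scale and rotation are handled by the matrix math rather than special-cased.
        static class PixelCollision
        {
            public static bool IntersectPixels(
                Matrix transformA, int widthA, int heightA, Color[] dataA,
                Matrix transformB, int widthB, int heightB, Color[] dataB)
            {
                // Matrix that takes a point from A's local space into B's local space.
                Matrix aToB = transformA * Matrix.Invert(transformB);

                for (int yA = 0; yA < heightA; yA++)
                {
                    for (int xA = 0; xA < widthA; xA++)
                    {
                        Color colorA = dataA[xA + yA * widthA];
                        if (colorA.A == 0) continue; // transparent pixel in A, skip

                        Vector2 posB = Vector2.Transform(new Vector2(xA, yA), aToB);
                        int xB = (int)Math.Round(posB.X);
                        int yB = (int)Math.Round(posB.Y);
                        if (xB < 0 || xB >= widthB || yB < 0 || yB >= heightB) continue;

                        if (dataB[xB + yB * widthB].A != 0)
                            return true; // both sprites are opaque at the same spot
                    }
                }
                return false;
            }
        }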

    Read the article

  • JBox2D Polygon Collisions Acting Strange

    - by andy
    I have been playing around with JBox2D and Slick2D and made a little demo with a ground object, a box object, and two different polygons. The problem I am facing is that the collision-detection for the polygons seems to be off (see picture below), but the box's collision works fine. My Code: Main Class package main; import org.jbox2d.common.Vec2; import org.jbox2d.dynamics.BodyType; import org.jbox2d.dynamics.World; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.SlickException; import org.newdawn.slick.state.BasicGameState; import org.newdawn.slick.state.StateBasedGame; import shapes.Box; import shapes.Polygon; public class State1 extends BasicGameState{ World world; int velocityIterations; int positionIterations; float pixelsPerMeter; int state; Box ground; Box box1; Polygon poly1; Polygon poly2; Renderer renderer; public State1(int state) { this.state = state; } @Override public void init(GameContainer gc, StateBasedGame game) throws SlickException { velocityIterations = 10; positionIterations = 10; pixelsPerMeter = 1f; world = new World(new Vec2(0.f, -9.8f)); renderer = new Renderer(gc, gc.getGraphics(), pixelsPerMeter, world); box1 = new Box(-100f, 200f, 40, 50, BodyType.DYNAMIC, world); ground = new Box(-14, -275, 50, 900, BodyType.STATIC, world); poly1 = new Polygon(50f, 10f, new Vec2[] { new Vec2(-6f, -14f), new Vec2(0f, -20f), new Vec2(6f, -14f), new Vec2(10f, 10f), new Vec2(-10f, 10f) }, BodyType.DYNAMIC, world); poly2 = new Polygon(0f, 10f, new Vec2[] { new Vec2(10f, 0f), new Vec2(20f, 0f), new Vec2(30f, 10f), new Vec2(30f, 20f), new Vec2(20f, 30f), new Vec2(10f, 30f), new Vec2(0f, 20f), new Vec2(0f, 10f) }, BodyType.DYNAMIC, world); } @Override public void update(GameContainer gc, StateBasedGame game, int delta) throws SlickException { world.step((float)delta / 180f, velocityIterations, positionIterations); } @Override public void render(GameContainer gc, StateBasedGame game, Graphics g) throws SlickException { renderer.render(); } @Override public int getID() { return this.state; } } Polygon Class package shapes; import org.jbox2d.collision.shapes.PolygonShape; import org.jbox2d.common.Vec2; import org.jbox2d.dynamics.Body; import org.jbox2d.dynamics.BodyDef; import org.jbox2d.dynamics.BodyType; import org.jbox2d.dynamics.FixtureDef; import org.jbox2d.dynamics.World; import org.newdawn.slick.Color; public class Polygon { public float x, y; public Color color; public BodyType bodyType; org.newdawn.slick.geom.Polygon poly; BodyDef def; PolygonShape ps; FixtureDef fd; Body body; World world; Vec2[] verts; public Polygon(float x, float y, Vec2[] verts, BodyType bodyType, World world) { this.verts = verts; this.x = x; this.y = y; this.bodyType = bodyType; this.world = world; init(); } public void init() { def = new BodyDef(); def.type = bodyType; def.position.set(x, y); ps = new PolygonShape(); ps.set(verts, verts.length); fd = new FixtureDef(); fd.shape = ps; fd.density = 2.0f; fd.friction = 0.7f; fd.restitution = 0.5f; body = world.createBody(def); body.createFixture(fd); } } Rendering Class package main; import org.jbox2d.collision.shapes.PolygonShape; import org.jbox2d.collision.shapes.ShapeType; import org.jbox2d.common.MathUtils; import org.jbox2d.common.Vec2; import org.jbox2d.dynamics.Body; import org.jbox2d.dynamics.Fixture; import org.jbox2d.dynamics.World; import org.newdawn.slick.Color; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.geom.Polygon; import 
org.newdawn.slick.geom.Transform; public class Renderer { World world; float pixelsPerMeter; GameContainer gc; Graphics g; public Renderer(GameContainer gc, Graphics g, float ppm, World world) { this.world = world; this.pixelsPerMeter = ppm; this.g = g; this.gc = gc; } public void render() { Body current = world.getBodyList(); Vec2 center = current.getLocalCenter(); while(current != null) { Vec2 pos = current.getPosition(); g.pushTransform(); g.translate(pos.x * pixelsPerMeter + (0.5f * gc.getWidth()), -pos.y * pixelsPerMeter + (0.5f * gc.getHeight())); Fixture f = current.getFixtureList(); while(f != null) { ShapeType type = f.getType(); g.setColor(getColor(current)); switch(type) { case POLYGON: { PolygonShape shape = (PolygonShape)f.getShape(); Vec2[] verts = shape.getVertices(); int count = shape.getVertexCount(); Polygon p = new Polygon(); for(int i = 0; i < count; i++) { p.addPoint(verts[i].x, verts[i].y); } p.setCenterX(center.x); p.setCenterY(center.y); p = (Polygon)p.transform(Transform.createRotateTransform(current.getAngle() + MathUtils.PI, center.x, center.y)); p = (Polygon)p.transform(Transform.createScaleTransform(pixelsPerMeter, pixelsPerMeter)); g.draw(p); break; } case CIRCLE: { f.getShape(); } default: } f = f.getNext(); } g.popTransform(); current = current.getNext(); } } public Color getColor(Body b) { Color c = new Color(1f, 1f, 1f); switch(b.m_type) { case DYNAMIC: if(b.isActive()) { c = new Color(255, 123, 0); } else { c = new Color(99, 99, 99); } break; case KINEMATIC: break; case STATIC: c = new Color(111, 111, 111); break; default: break; } return c; } } Any help with fixing the collisions would be greatly appreciated, and if you need any other code snippets I would be happy to provide them.

    Read the article

  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (the flash.display.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images, including the background surface and the foreground display object's bitmap. The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I must mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs to be updated, because Flash does not provide information about when a particular area of the screen or a particular display object needs to be re-rendered. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient, which is why I am trying to do it through a blendShader so Flash determines when it needs rendering automatically. Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs to be re-rendered? The limitation is mentioned with no explanation in the last note on this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts. In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."

    Read the article

  • 2D wave-like sprite movement XNA

    - by TheBroodian
    I'm trying to create a particle that will 'circle' my character. When the particle is created, it's given a random position in relation to my character, and a box to provide boundaries for how far left or right this particle should circle. When I use the phrase 'circle', I'm referring to a simulated circling, i.e. when moving to the right, the particle will appear in front of my character, and when passing back to the left, it will appear behind my character. That may have been too much context, so let me cut to the chase: in essence, the path I would like my particle to follow would be akin to a sine wave, with the left and right sides of the provided rectangle being the apexes of the wave. The trouble I'm having is that the particle's position will be random, so it will never be produced at the same place within the wave twice, and I have no idea how to create this sort of behavior procedurally.
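
    One way to get that behaviour is to treat the random spawn position purely as a starting phase of the wave: solve the wave equation once for the phase at spawn time, then just advance the angle every frame. A minimal sketch under those assumptions (class and field names are illustrative, not from the question):

        using System;
        using Microsoft.Xna.Framework;

        // Minimal sketch: the particle follows a sine wave whose apexes are the left/right
        // edges of the bounds rectangle. The random spawn X only determines the starting
        // phase, so every particle rides the same wave regardless of where it was created.
        class OrbitingParticle
        {
            readonly float centerX, amplitude, angularSpeed;
            float phase, time;

            public OrbitingParticle(Rectangle bounds, float spawnX, float angularSpeed)
            {
                centerX = bounds.Center.X;
                amplitude = bounds.Width / 2f;
                this.angularSpeed = angularSpeed;
                // Solve spawnX = centerX + amplitude * sin(phase) for the starting phase.
                float s = MathHelper.Clamp((spawnX - centerX) / amplitude, -1f, 1f);
                phase = (float)Math.Asin(s);
            }

            public void Update(float dt, out float x, out bool inFront)
            {
                time += dt;
                float angle = angularSpeed * time + phase;
                x = centerX + amplitude * (float)Math.Sin(angle);
                // The cosine says which half of the simulated circle we are on:
                // positive = moving right = draw in front of the character, negative = behind.
                inFront = Math.Cos(angle) > 0;
            }
        }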

    Read the article

  • Creating smooth lighting transitions using tiles in HTML5/JavaScript game

    - by user12098
    I am trying to implement a lighting effect in an HTML5/JavaScript game using tile replacement. What I have now is kind of working, but the transitions do not look smooth/natural enough as the light source moves around. Here's where I am now: Right now I have a background map that has a light/shadow spectrum PNG tilesheet applied to it - going from darkest tile to completely transparent. By default the darkest tile is drawn across the entire level on launch, covering all other layers etc. I am using my predetermined tile sizes (40 x 40px) to calculate the position of each tile and store its x and y coordinates in an array. I am then spawning a transparent 40 x 40px "grid block" entity at each position in the array The engine I'm using (ImpactJS) then allows me to calculate the distance from my light source entity to every instance of this grid block entity. I can then replace the tile underneath each of those grid block tiles with a tile of the appropriate transparency. Currently I'm doing the calculation like this in each instance of the grid block entity that is spawned on the map: var dist = this.distanceTo( ig.game.player ); var percentage = 100 * dist / 960; if (percentage < 2) { // Spawns tile 64 of the shadow spectrum tilesheet at the specified position ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 64 ); } else if (percentage < 4) { ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 63 ); } else if (percentage < 6) { ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 62 ); } // etc... (sorry about the weird spacing, I still haven't gotten the hang of pasting code in here properly) The problem is that like I said, this type of calculation does not make the light source look very natural. Tile switching looks too sharp whereas ideally they would fade in and out smoothly using the spectrum tilesheet (I copied the tilesheet from another game that manages to do this, so I know it's not a problem with the tile shades. I'm just not sure how the other game is doing it). I'm thinking that perhaps my method of using percentages to switch out tiles could be replaced with a better/more dynamic proximity forumla of some sort that would allow for smoother transitions? Might anyone have any ideas for what I can do to improve the visuals here, or a better way of calculating proximity with the information I'm collecting about each tile? (PS: I'm reposting this from Stack Overflow at someone's suggestion, sorry about the duplicate!)

    Read the article

  • How to move a line of sprites in a sine wave?

    - by electroflame
    So, I'm spawning a horizontal line of enemies that I would like to have move in a nice wave. Currently I tried: Enemy.position.X += Enemy.velocity.X; Enemy.position.Y += -(float)Math.Cos(Enemy.position.X / 200) * 5; This...kind of works. But the wave is not a true wave. The top and bottom of one pass are not the same (e.g. 5 for the top, and -5 for the bottom (I don't mean literal points, I just meant that it's not symmetrical)). Is there a better way to do this? I would like the whole line to move in a wave, so it looks fluid. By that, I mean that it should look like each enemy is "following" the one in front of it. The code I posted does have this fluidity to it, but like I said, it's not a perfect wave. Any ideas? Thanks in advance.
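
    A sketch of one way to get a symmetric wave: instead of adding the cosine to Y every frame (which effectively integrates it), keep each enemy's spawn Y as a baseline and set Y directly from a sine of X. Because every enemy evaluates the same function at its own X, the whole line traces a single wave and appears to "follow" itself. The Enemy fields below (baselineY in particular) are illustrative, not from the original code.

        using System;
        using Microsoft.Xna.Framework;

        // Minimal sketch: Y is a pure function of X, so the wave is symmetric about the
        // baseline and identical for every enemy in the line.
        class Enemy
        {
            public Vector2 position;
            public Vector2 velocity;
            public float baselineY;                 // Y at spawn time
            public const float Amplitude = 5f;      // peak offset above/below the baseline
            public const float Wavelength = 200f;   // horizontal length of one full wave

            public void Update(float dt)
            {
                position.X += velocity.X * dt;
                position.Y = baselineY +
                    Amplitude * (float)Math.Sin(position.X / Wavelength * MathHelper.TwoPi);
            }
        }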

    Read the article

  • Character jump animation is not working when I hit the space bar

    - by muzzy
    i am having an issue with my game in XNA. My jump sprite sheet for my character does not trigger when i hit the space bar. I cant seem to find the problem. Please help me. I am also put the code below to make things easier. namespace WindowsGame4 { public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; // start of new code Texture2D playerWalk; // sprite sheet of walk cycle (14 frames) Texture2D idle; // idle animation Texture2D jump; // jump animation Vector2 playerPos; // to hold x and y position info for the player Point frameDimensions; // to hold width and height values for the frames int presentFrame; // to record which frame we are on at any given time int noOfFrames; // to hold the total number of frames in the spritesheet int elapsedTime; // to know how long each frame has been shown int frameDuration; // to hold info about how long each frame should be shown SpriteEffects flipDirection; // SpriteEffects object int speed; //rate of movement int upMovement; int downMovement; int rightMovement; int leftMovement; int jumpApex; string state; //this is going to be "idle","walking" or "jumping". KeyboardState previousKeyboardState; Vector2 originalPlayerPos; Vector2 movementDirection; Vector2 movementSpeed; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { // textures will be defined in the LoadContent() method playerPos = new Vector2(0, 200); // starting position for the player is at the left of the screen, and a Y position of 200 frameDimensions = new Point(55, 65); // each frame in the idle sprite sheet is 55 wide by 65 high presentFrame = 0; // start at frame 0 noOfFrames = 5; // there are 5 frames in the idle cycle elapsedTime = 0; // set elapsed time to start at 0 frameDuration = 80; // 80 milliseconds is how long each frame will show for (the higher the number, the slower the animation) flipDirection = SpriteEffects.None; // set the value of flipDirection to none speed = 200; upMovement = -2; downMovement = 2; rightMovement = 1; leftMovement = -1; jumpApex = 100; state = "idle"; previousKeyboardState = Keyboard.GetState(); originalPlayerPos = Vector2.Zero; movementDirection = Vector2.Zero; movementSpeed = Vector2.Zero; base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); playerWalk = Content.Load<Texture2D>("sprites/walkSmall"); // load the walk cycle spritesheet idle = Content.Load<Texture2D>("sprites/idleCycle"); // load the idle cycle sprite sheet jump = Content.Load<Texture2D>("sprites/jump"); // load the jump cycle sprite sheet } protected override void UnloadContent() // we're not using this method at the moment { } protected override void Update(GameTime gameTime) // Update method - used it to call a number of other methods { if (Keyboard.GetState().IsKeyDown(Keys.Escape)) { this.Exit(); // Exit the game if the Escape key is pressed } KeyboardState presentKeyboardState = Keyboard.GetState(); UpdateMovement(presentKeyboardState, gameTime); UpdateIdle(presentKeyboardState, gameTime); UpdateJump(presentKeyboardState); UpdateAnimation(gameTime); playerPos += movementDirection * movementSpeed * (float)gameTime.ElapsedGameTime.TotalSeconds; previousKeyboardState = presentKeyboardState; base.Update(gameTime); } private void UpdateAnimation(GameTime gameTime) { elapsedTime += gameTime.ElapsedGameTime.Milliseconds; if (elapsedTime > frameDuration) { elapsedTime -= frameDuration; 
elapsedTime = elapsedTime - frameDuration; presentFrame++; if (presentFrame > noOfFrames) if (state != "jumping") { presentFrame = 0; } else { presentFrame = 8; } } } protected void UpdateMovement(KeyboardState presentKeyboardState, GameTime gameTime) { if (state == "idle") { movementSpeed = Vector2.Zero; movementDirection = Vector2.Zero; if (presentKeyboardState.IsKeyDown(Keys.Left)) { state = "walking"; movementSpeed.X = speed; movementDirection.X = leftMovement; flipDirection = SpriteEffects.FlipHorizontally; } if (presentKeyboardState.IsKeyDown(Keys.Right)) { state = "walking"; movementSpeed.X = speed; movementDirection.X = rightMovement; flipDirection = SpriteEffects.None; } } } private void UpdateIdle(KeyboardState presentKeyboardState, GameTime gameTime) { if ((presentKeyboardState.IsKeyUp(Keys.Left) && previousKeyboardState.IsKeyDown(Keys.Left) || presentKeyboardState.IsKeyUp(Keys.Right) && previousKeyboardState.IsKeyDown(Keys.Right) && state != "jumping")) { state = "idle"; } } private void UpdateJump(KeyboardState presentKeyboardState) { if (state == "walking" || state == "idle") { if (presentKeyboardState.IsKeyDown(Keys.Space) && !presentKeyboardState.IsKeyDown(Keys.Space)) { presentFrame = 1; DoJump(); } } if (state == "jumping") { if (originalPlayerPos.Y - playerPos.Y > jumpApex) { movementDirection.Y = downMovement; } if (playerPos.Y > originalPlayerPos.Y) { playerPos.Y = originalPlayerPos.Y; state = "idle"; movementDirection = Vector2.Zero; } } } private void DoJump() { if (state != "jumping") { state = "jumping"; originalPlayerPos = playerPos; movementDirection.Y = upMovement; movementSpeed = new Vector2(speed, speed); } } protected override void Draw(GameTime gameTime) // Draw method { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); // begin the spritebatch if (state == "walking") { noOfFrames = 14; frameDimensions = new Point(55, 65); Vector2 playerWalkPos = new Vector2(playerPos.X, playerPos.Y - 28); spriteBatch.Draw(playerWalk, playerWalkPos, new Rectangle((presentFrame * frameDimensions.X), 0, frameDimensions.X, frameDimensions.Y), Color.White, 0, Vector2.Zero, 1, flipDirection, 0); } if (state == "idle") { noOfFrames = 5; frameDimensions = new Point(55, 65); Vector2 idlePos = new Vector2(playerPos.X, playerPos.Y - 28); spriteBatch.Draw(idle, idlePos, new Rectangle((presentFrame * frameDimensions.X), 0, frameDimensions.X, frameDimensions.Y), Color.White, 0, Vector2.Zero, 1, flipDirection, 0); } if (state == "jumping") { noOfFrames = 9; frameDimensions = new Point(55, 92); Vector2 jumpPos = new Vector2(playerPos.X, playerPos.Y - 28); spriteBatch.Draw(jump, jumpPos, new Rectangle((presentFrame * frameDimensions.X), 0, frameDimensions.X, frameDimensions.Y), Color.White, 0, Vector2.Zero, 1, flipDirection, 0); } spriteBatch.End(); // end the spritebatch commands base.Draw(gameTime); } } }
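
    Without trying to debug the whole listing here, the usual XNA idiom for "the space bar was pressed this frame" compares the current keyboard state with the one saved from the previous frame; a minimal sketch (class and method names are illustrative):

        using Microsoft.Xna.Framework.Input;

        // Minimal sketch of edge-triggered input: the jump should start only on the frame
        // where Space is down now but was up on the previous frame; otherwise the check
        // either never fires or fires on every frame the key is held.
        class JumpInput
        {
            KeyboardState previous;

            public bool SpaceJustPressed()
            {
                KeyboardState current = Keyboard.GetState();
                bool pressed = current.IsKeyDown(Keys.Space) && previous.IsKeyUp(Keys.Space);
                previous = current; // remember for the next frame
                return pressed;
            }
        }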

    Read the article

  • Xbox thumbstick used to rotate sprite, basic formula makes it "stick" or feel "sticky" at 90-degree intervals! How do I get smooth rotation?

    - by Hugh
    Context: C#, XNA game. I am using a very basic formula to calculate what angle my sprite (a spaceship, for example) should be facing based on the Xbox controller thumbstick, i.e. you use the thumbstick to rotate the ship! In my main update method: shuttleAngle = (float) Math.Atan2(newGamePadState.ThumbSticks.Right.X, newGamePadState.ThumbSticks.Right.Y); In my main draw method: spriteBatch.Draw(shuttle, shuttleCoords, sourceRectangle, Color.White, shuttleAngle, origin, 1.0f, SpriteEffects.None, 1); As you can see it's quite simple: I take the current radians from the thumbstick, store it in a float "shuttleAngle" and then use this as the rotation angle (in radians) argument for drawing the shuttle. For some reason, when I rotate the sprite it feels sticky at the 0, 90, 180 and 270 degree angles; it wants to settle at those angles. It's not giving me the smooth and natural rotation I would feel in a game that uses a similar mechanic. PS: my Xbox controller is fine!
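
    One commonly cited cause of exactly this feel is XNA's default thumbstick dead zone (GamePadDeadZone.IndependentAxes), which clamps each axis separately and so tends to snap the stick direction toward 0/90/180/270 degrees. A minimal sketch of reading the stick with a circular dead zone instead, assuming that is indeed the cause; the angle formula itself is unchanged.

        using System;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input;

        // Minimal sketch: request a circular dead zone so the stick direction stays
        // continuous near the axes instead of being clamped per axis.
        class ThumbstickAim
        {
            public float ReadShuttleAngle()
            {
                GamePadState pad = GamePad.GetState(PlayerIndex.One, GamePadDeadZone.Circular);
                Vector2 stick = pad.ThumbSticks.Right;
                return (float)Math.Atan2(stick.X, stick.Y);
            }
        }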

    Read the article

  • Inverse projection: question about w coordinate

    - by fayeWilly
    I have to perform, in a shader, an inverse projection from the u/v of a render target. What I do is: get NDC as 2*(u, v, depth) - 1, then world space as tmp = (P*V)^-1 * (NDC, 1.0); world space = tmp/tmp.w. This apparently works, but I am confused about the w division there. Why does this work? Shouldn't there be a multiplication by a w somewhere (as in the "forward" pipeline, where there is the perspective division)? Thank you, Faye
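
    For reference, the trailing division is exactly what undoes the forward perspective divide; a short sketch of the algebra in homogeneous coordinates, writing p for the world-space point and c = P V (p, 1) for its clip-space image:

        \[
        c = P\,V\begin{pmatrix} p \\ 1 \end{pmatrix}, \qquad
        \mathrm{NDC} = \frac{c_{xyz}}{c_w}, \qquad
        (\mathrm{NDC},\,1) = \frac{c}{c_w}
        \]
        \[
        \mathrm{tmp} = (P\,V)^{-1}(\mathrm{NDC},\,1)
                     = (P\,V)^{-1}\,\frac{c}{c_w}
                     = \frac{1}{c_w}\begin{pmatrix} p \\ 1 \end{pmatrix}
        \quad\Longrightarrow\quad
        \mathrm{tmp}_w = \frac{1}{c_w}, \qquad
        \frac{\mathrm{tmp}_{xyz}}{\mathrm{tmp}_w} = p
        \]

    The divide by tmp.w simply removes the unknown 1/c_w scale that the inverse matrix leaves behind; since homogeneous points are only defined up to scale, no explicit multiplication by the original w is ever needed.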

    Read the article

  • Can I use PBOs for textures in iOS?

    - by Radu
    As far as I can see, there is no GL_PIXEL_UNPACK_BUFFER. Also, the OpenGL ES 2.0 specification (and as far as I know, no iOS device currently supports OpenGL ES 2.0) states that glMapBufferOES() can only use GL_ARRAY_BUFFER as a target, yet glTexImage2D() and glTexSubImage2D() only seem to use PBOs if GL_PIXEL_UNPACK_BUFFER is bound. The OpenGL documentation for glBindBuffer() also states that: GL_PIXEL_PACK_BUFFER and GL_PIXEL_UNPACK_BUFFER are available only if the GL version is 2.1 or greater. So, can I use PBOs for textures? Am I missing something obvious?

    Read the article

  • Why does my 3D model not translate the way I expect? [closed]

    - by ChocoMan
    In my first image, my model displays correctly: But when I move the model's position along the Z-axis (forward) I get this, yet the Y-axis doesnt change. An if I keep going, the model disappears into the ground: Any suggestions as to how I can get the model to translate properly visually? Here is how Im calling the model and the terrain in draw(): cameraPosition = new Vector3(camX, camY, camZ); // Copy any parent transforms. Matrix[] transforms = new Matrix[mShockwave.Bones.Count]; mShockwave.CopyAbsoluteBoneTransformsTo(transforms); Matrix[] ttransforms = new Matrix[terrain.Bones.Count]; terrain.CopyAbsoluteBoneTransformsTo(ttransforms); // Draw the model. A model can have multiple meshes, so loop. foreach (ModelMesh mesh in mShockwave.Meshes) { // This is where the mesh orientation is set, as well // as our camera and projection. foreach (BasicEffect effect in mesh.Effects) { effect.EnableDefaultLighting(); effect.PreferPerPixelLighting = true; effect.World = transforms[mesh.ParentBone.Index] * Matrix.CreateRotationY(modelRotation) * Matrix.CreateTranslation(modelPosition); // Looking at the model (picture shouldnt change other than rotation) effect.View = Matrix.CreateLookAt(cameraPosition, modelPosition, Vector3.Up); effect.Projection = Matrix.CreatePerspectiveFieldOfView( MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f); effect.TextureEnabled = true; } // Draw the mesh, using the effects set above. prepare3d(); mesh.Draw(); } //Terrain test foreach (ModelMesh meshT in terrain.Meshes) { foreach (BasicEffect effect in meshT.Effects) { effect.EnableDefaultLighting(); effect.PreferPerPixelLighting = true; effect.World = ttransforms[meshT.ParentBone.Index] * Matrix.CreateRotationY(0) * Matrix.CreateTranslation(terrainPosition); // Looking at the model (picture shouldnt change other than rotation) effect.View = Matrix.CreateLookAt(cameraPosition, terrainPosition, Vector3.Up); effect.Projection = Matrix.CreatePerspectiveFieldOfView( MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f); effect.TextureEnabled = true; } // Draw the mesh, using the effects set above. prepare3d(); meshT.Draw(); DrawText(); } base.Draw(gameTime); } I'm suspecting that there may be something wrong with how I'm handling my camera. The model rotates fine on its Y-axis.

    Read the article

  • Switching between levels, re-initialize existing structure or create new one?

    - by Martino Wullems
    This is something I've been wondering about for quite a while. When building games that consist of multiple levels (platformers, shmups, etc.), what is the preferred method to switch between the levels? Let's say we have a level class that does the following: load data for the level design (tiles), enemies, graphics, etc.; set up all these elements in their appropriate locations and display them; start physics and game logic. I'm stuck between the following 2 methods: 1: Throw away everything in the level class and make a new one; we have to load an entirely new level anyway! 2: Pause the game logic and physics, unload all current assets, then re-initialize those components with the level data for the new level. They both have their pros and cons. Method 1 is a lot easier and seems to make sense since we have to redo everything anyway. But method 2 allows you to re-use existing elements, which might save resources and allows for a smoother transition to the new level.
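
    A minimal sketch of method 1, with the level hidden behind a small interface so the owner can simply dispose the old instance and build a fresh one; all names here are illustrative, not from any particular engine. Method 2 would instead keep the current instance alive and call something like a Reset(levelData) on it, with exactly the trade-off described above.

        using System;

        // Minimal sketch of method 1: throw the old level away and construct a new one.
        // Load() stands in for the tile/enemy/graphics setup described in the question.
        interface ILevel : IDisposable
        {
            void Load();
            void Update(float dt);
            void Draw();
        }

        class LevelManager
        {
            ILevel current;

            public void Switch(Func<ILevel> createNext)
            {
                if (current != null)
                    current.Dispose();   // unload assets, stop physics and game logic
                current = createNext();
                current.Load();          // fresh state, no leftover bookkeeping to reset
            }

            public void Update(float dt) { if (current != null) current.Update(dt); }
            public void Draw()           { if (current != null) current.Draw(); }
        }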

    Read the article

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation over to Unity. I set up a simple cylinder with 2 bones and an IK handle, and made a simple animation where the cylinder bends and goes back to a straight position over 24 frames. Following that, I selected everything and baked all of it: all bones, the IK (the animation, by selecting all in the graph editor) and even the cylinder. I saved the scene, then selected all and exported as FBX with animation and bake checked. In Unity I imported it, and in the preview I was able to see the animation. When I load the model into the scene and play (after assigning the controller), I am able to see the animation too. But when I try to script it and control the animation, nothing happens. Just to test, I tried the following in the Update method: if(animation.isPlaying) Debug.Log("Animation Works"); else Debug.Log("Animation not working"); The bool doesn't even return true or false. My animation is called "bend", so just to try, I did the following, and nothing happens: animation.Play("bend"); Can you please advise, based on my steps, whether I am missing something? Do I need to add the controller or is that an unnecessary step? Did I screw up on the Maya part or the Unity part? Thanks for the help.
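
    For what it's worth, the scripting API used in the question (animation.Play / animation.isPlaying) only sees clips that the importer marks as Legacy and that are listed on an Animation component; with an Animator controller the Animator API is used instead. A minimal C# sketch of driving the imported clip the legacy way, assuming the clip really is named "bend" and the rig import type is set to Legacy:

        using UnityEngine;

        // Minimal sketch: play the imported "bend" clip through a Legacy Animation component
        // and log whether anything is currently playing.
        public class BendOnKey : MonoBehaviour
        {
            Animation anim;

            void Start()
            {
                anim = GetComponent<Animation>();
                if (anim == null || anim.GetClip("bend") == null)
                    Debug.LogWarning("No Animation component or no clip named 'bend' on " + name);
            }

            void Update()
            {
                if (anim == null) return;

                if (Input.GetKeyDown(KeyCode.Space))
                    anim.Play("bend");

                Debug.Log(anim.isPlaying ? "Animation works" : "Animation not working");
            }
        }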

    Read the article

  • How to get the Exact Collision Point and ignore the collision (from 2 "ghost bodies")

    - by Moritz
    I have a very basic problem with Box2D. For an arena-type game where you can throw scriptable "missiles" at other players, I decided to use Box2D for the collision detection between the players and the missiles. Players and missiles have their own circular shape with a specific (varying) size. But I don't want to use dynamic bodies, because the missiles need to move themselves in any way they want to (defined in the script) and shouldn't be resolved unless the script wants it. The behavior I'm looking for is as follows (for each time step): the velocity of each missile is set by the specific missile script; each missile is moved according to that velocity; if a collision occurs now, I want to get the exact position of impact, and I need a mechanism to decide whether the missile should just ignore the collision (for example, a collision between two fireballs which shouldn't interact) or take it (so they are resolved and don't overlap anymore). So is there a way in Box2D to create ghost bodies and listen to collisions from them, then decide whether they should ignore the collision or take it and resolve their position? I hope I was clear enough and would be happy about any help!

    Read the article

  • How to create a reasonably sized urban area manually but efficiently

    - by Overv
    I have a game concept that only really works in an urban area that is of reasonable scale and diversity. In terms of what it should look like, think GTA, in terms of the size think more like a small neighbourhood with residents and a few local shops, perhaps a supermarket. I'm mostly experienced in programming and not at all with modelling, texturing or drawing, but I've found that SketchUp allows me to design interesting looking buildings that I model after real world buildings in my own neighbourhood. Designing these buildings and other objects can take from a few tens of minutes to a few hours. My question is: what is the best approach for a one man army like me who does manage to model buildings to create an interesting city environment in a reasonable amount of time? My game will not be based on procedural generation, the environment will actually be modelled like GTA cities.

    Read the article

  • Best way to implement mouse-based movement in MMOG

    - by fiftyeight
    I want to design an MMO where players click the destination they want to walk to with the mouse and the character moves there, similar to Runescape in this manner. I think it should be easier than keyboard movement, since the client can simply send the server the destination each time the player clicks on one. The main thing I'm trying to decide is what to do when there are obstacles in the way. It's no problem to implement a simple path-finding solution on the client; the question is whether the server should do path-finding as well, since it'll probably take too much computation power on the server. What I thought is that when there is an obstacle, the client will send only the first coordinate it plans to go to, and then when it gets there it will send the next coordinate automatically. For example, if there is a rock in the way, the character will decide on a route made of two destinations so it goes around the rock, and when it arrives at the first destination it sends the next coordinate. That way, if the player changes destination in the middle, it won't send unnecessary information. Is this a good way to implement it, and is there a standard way MMOGs usually do it? EDIT: I should also mention that the server will make sure all movements are legal and there aren't any walls in the way, etc. The way I wrote it, this should be quite easy, since all movements will be sent as straight lines, so the server will just check that there aren't any obstacles along that line.
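
    For the server-side check described in the EDIT, walking the straight line tile by tile (Bresenham) and rejecting the move on the first blocked tile is cheap and needs no path-finding at all. A minimal sketch, with the isBlocked callback standing in for the server's collision map (names are illustrative):

        using System;

        // Minimal sketch of the legality check: trace the straight line between two tile
        // coordinates with Bresenham's algorithm and fail as soon as a blocked tile is hit.
        static class MoveValidator
        {
            public static bool IsPathClear(int x0, int y0, int x1, int y1,
                                           Func<int, int, bool> isBlocked)
            {
                int dx = Math.Abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
                int dy = -Math.Abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
                int err = dx + dy;

                while (true)
                {
                    if (isBlocked(x0, y0)) return false;
                    if (x0 == x1 && y0 == y1) return true;
                    int e2 = 2 * err;
                    if (e2 >= dy) { err += dy; x0 += sx; }
                    if (e2 <= dx) { err += dx; y0 += sy; }
                }
            }
        }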

    Read the article

  • How to import or "using" a custom class in Unity script?

    - by Bobbake4
    I have downloaded the JSONObject plugin for parsing JSON in Unity, but when I use it in a script I get an error indicating that JSONObject cannot be found. My question is: how do I use a custom object class defined inside another class? I know I need a using directive to solve this, but I am not sure of the path to these custom objects I have imported. They are in the root project folder, inside a JSONObject folder, and the class is called JSONObject. Thanks
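
    A small sketch of the distinction that usually trips this up: a using directive names the namespace declared inside the plugin's .cs file, not the folder the file was imported into. The namespace below is hypothetical, so check JSONObject.cs for its actual (possibly absent) namespace declaration; if it declares none, the type is global and no using directive is needed at all.

        using MyJsonPlugin;   // only if JSONObject.cs contains: namespace MyJsonPlugin { ... }
        using UnityEngine;

        public class JsonTest : MonoBehaviour
        {
            // If this field compiles, the type is visible to the script; the folder path
            // never matters, only the namespace (or lack of one) in JSONObject.cs.
            JSONObject parsed;

            void Start()
            {
                Debug.Log("JSONObject type is visible to this script.");
            }
        }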

    Read the article

  • (Abstract) Game engine design

    - by lukeluke
    I am writing a simple 2D game (for mobile platforms) for the first time. From an abstract point of view, I have the main player controlled by the human, the enemies, elements that will interact with the main player, and other living elements that will be controlled by a simple AI (both enemies and non-enemies). The main character will be totally controlled by the player; the other actors will be controlled by AI. So I have a class CActor and a class CActorLogic to start with. I would define a CActor subclass CHero (the main player, controlled with some input device). This class will probably implement some type of listener in order to capture input events. The other actors controlled by the AI will probably be specific subclasses of CActor (a subclass per type, obviously). This seems reasonable. The CActor class should have a reference to a method of CActorLogic, which we will call something like CActorLogic::Advance() or similar. Actors should have a visual representation. I would introduce a CActorRepresentation class, with a method like Render() that will draw the actor (that is, the right frame of the right animation). Where to change the animation? Well, the actor logic method Advance() should take care of checking collisions and other things. I would like to discuss the design of a game engine (actors, entities, objects, messages, input handling, visualization of object states (that is, rendering, sound output and so on)), not from a low-level point of view but from a high-level point of view, like I have described above. My question is: is there any book or online resource that will help me organize things (using an object-oriented approach)? Thanks

    Read the article

  • Variables in static library are never initialized. Why?

    - by Coyote
    I have a bunch of variables that should be initialized when my game launches, but most of them are never initialized. Here is an example of the code: MyClass.h class MyClass : public BaseObject { DECLARE_CLASS_RTTI(MyClass, BaseObject); ... }; MyClass.cpp REGISTER_CLASS(MyClass) Where REGISTER_CLASS is a macro defined as follows: #define REGISTER_CLASS(className)\ class __registryItem##className : public __registryItemBase {\ virtual className* Alloc(){ return NEW className(); }\ virtual BaseObject::RTTI& GetRTTI(){ return className::RTTI; }\ }\ \ const __registryItem##className __registeredItem##className(#className); and __registryItemBase looks like this: class __registryItemBase { __registryItemBase(const _string name):mName(name){ ClassRegistry::Register(this); } const _string mName; virtual BaseObject* Alloc() = 0; virtual BaseObject::RTTI& GetRTTI() = 0; } Now, the code is similar to what I currently have, and what I have works flawlessly: all the registered classes are registered to a ClassManager before main(...) is called. I'm able to instantiate and configure components from scripts and auto-register them to the right system, etc. The problem arises when I create a static library (currently for the iPhone, but I fear it will happen with Android as well). In that case the code in the .cpp files is never registered. Why is the resulting code not executed when it is in the library, while the same code in the program's binary is always executed? Bonus questions: For this to work in the static library, what should I do? Is there something I am missing? Do I need to pass a flag when building the lib? Should I create another structure and init all the __registeredItem##className using that structure?

    Read the article

  • EaseFunction in LoopEntityModifier

    - by Siddharth
    For my game, I need an EaseFunction in a LoopEntityModifier. In my game, I am rotating a ball around a certain object, and to give it some effect I want to use an EaseFunction. I want the ball to make around 4 to 5 rounds around an object that is already rotating, but I want to add some effect so that it looks good. For this I have to use an EaseFunction that suits my needs. But if I put the EaseFunction in the RotationModifier, then the RotationModifier applies the EaseFunction's effect on every round, whereas I want it to occur only once, either at the start or at the end. So if I were able to provide an EaseFunction to the LoopEntityModifier, that would be good for me, or something similar would also work. At present my code is something like this: new LoopEntityModifier(new RotationModifier(...)); I hope someone has some idea on this.

    Read the article

  • How do I draw video frames onto the screen permanently using XNA?

    - by izb
    I have an app that plays back a video and draws the video onto the screen at a moving position. When I run the app, the video moves around the screen as it plays. Here is my Draw method... protected override void Draw(GameTime gameTime) { Texture2D videoTexture = null; if (player.State != MediaState.Stopped) videoTexture = player.GetTexture(); if (videoTexture != null) { spriteBatch.Begin(); spriteBatch.Draw( videoTexture, new Rectangle(x++, 0, 400, 300), /* Where X is a class member */ Color.White); spriteBatch.End(); } base.Draw(gameTime); } The video moves horizontally across the screen. This is not exactly as I expected since I have no lines of code that clear the screen. My question is why does it not leave a trail behind? Also, how would I make it leave a trail behind?
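
    As for why there is no trail by default: the back buffer is swapped and its contents discarded between frames, so what you drew last frame is not guaranteed to survive even without an explicit Clear. A minimal sketch of the usual way to get a deliberate trail is to accumulate into a RenderTarget2D created with PreserveContents and draw that target to the screen each frame; the wrapper class here is illustrative, the types and enum values are from XNA 4.0.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Minimal sketch: frames accumulate in an off-screen render target that is never
        // cleared, and that target is drawn to the back buffer every frame.
        class VideoTrail
        {
            RenderTarget2D trail;

            public void Create(GraphicsDevice device)
            {
                trail = new RenderTarget2D(device, device.Viewport.Width, device.Viewport.Height,
                    false, SurfaceFormat.Color, DepthFormat.None, 0, RenderTargetUsage.PreserveContents);
            }

            public void Draw(GraphicsDevice device, SpriteBatch spriteBatch,
                             Texture2D videoTexture, Rectangle dest)
            {
                device.SetRenderTarget(trail);        // draw into the accumulation target (no Clear here)
                spriteBatch.Begin();
                spriteBatch.Draw(videoTexture, dest, Color.White);
                spriteBatch.End();

                device.SetRenderTarget(null);         // back to the screen
                device.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(trail, Vector2.Zero, Color.White);
                spriteBatch.End();
            }
        }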

    Read the article

  • Material, Pass, Technique and shaders

    - by Papi75
    I'm trying to make a clean and advanced Material class for the rendering of my game. Here is my architecture: class Material { void sendToShader() { program->sendUniform( nameInShader, valueInMaterialOrOther ); } private: Blend blendmode; ///< Alpha, Add, Multiply, … Color ambient; Color diffuse; Color specular; DrawingMode drawingMode; // Line Triangles, … Program* program; std::map<string, TexturePacket> textures; // List of textures with TexturePacket = { Texture*, vec2 offset, vec2 scale} }; How can I handle the link between the Shader and the Material (the sendToShader method)? If the user wants to send additional information to the shader (like the elapsed time), how can I allow that? (The user can't edit the Material class.) Thanks!

    Read the article
