Search Results

Search found 32114 results on 1285 pages for 'general development'.


  • What's an easily extensible technique for storing game data?

    - by Miro
    I'm looking for a library or technique for storing my game resources - levels, objects (effects, world info), items (price, effects, ...), NPCs (visual info, behavior) - everything except the graphics/audio stuff. I've seen Lua used for Awesome WM's configuration. protobuf looks good, but it seems to be designed for network communication. I've tried to write my own parser, but as the project grows it gets harder and harder to manage and to catch all the bugs. My requirements:

    - stability
    - easy extension of the data, without needing to convert older versions to newer ones
    - good (doesn't have to be the best) loading performance
    - not much coding
    - not XML!
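
    Since the requirements boil down to "forward-compatible data files with little code", one low-effort option is a plain serialization format with optional fields. Below is a minimal sketch in C# using Json.NET (the library choice, type names, and fields are illustrative assumptions, not from the question):

        // Sketch: data-driven item definitions loaded from JSON.
        // Fields missing from older files simply keep their defaults,
        // so old data keeps loading as the schema grows.
        using System.Collections.Generic;
        using System.IO;
        using Newtonsoft.Json;

        public class ItemDef
        {
            public string Name { get; set; }
            public int Price { get; set; }
            public List<string> Effects { get; set; } = new List<string>();
        }

        public static class ItemDatabase
        {
            public static Dictionary<string, ItemDef> Load(string path)
            {
                return JsonConvert.DeserializeObject<Dictionary<string, ItemDef>>(
                    File.ReadAllText(path));
            }
        }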


  • How should I structure my turn based engine to allow flexibility for players/AI and observation?

    - by Reefpirate
    I've just started making a turn-based strategy engine in GameMaker's GML language... And I was cruising along nicely until it came time to handle the turn cycle, determining who is controlling which player, and also how to handle the camera and what is displayed on screen. Here's an outline of the main switch happening in my main game loop at the moment:

        switch (GameState) {
            case BEGIN_TURN:
                // Start of turn operations/routines
                break;
            case MID_TURN:
                switch (PControlledBy[Turn]) {
                    case HUMAN:
                        switch (MidTurnState) {
                            case MT_SELECT:
                                // No units selected, 'idle' UI state
                                break;
                            case MT_MOVE:
                                // Unit selected and attempting to move
                                break;
                            case MT_ATTACK:
                                break;
                        }
                        break;
                    case COMPUTER:
                        // AI ROUTINES GO HERE
                        break;
                    case OBSERVER:
                        // OBSERVER ROUTINES GO HERE
                        break;
                }
                break;
            case END_TURN:
                // End of turn routines/operations, and move Turn to next player
                break;
        }

    Now, I can see a couple of problems with this set-up already... But I don't have any idea how to go about making it 'right'. Turn is a global variable that stores which player's turn it is, and the BEGIN_TURN and END_TURN states make perfect sense to me... But the MID_TURN state is baffling me because of the things I want to happen here: If there are players controlled by humans, I want the AI to do its thing on its turn here, but I want the camera to be able to follow the AI as it makes moves within the human player's vision. If there are no human-controlled players, I'd like to be able to watch two or more AIs battle it out on the map with god-like 'observer' vision. So basically I'm wondering if there are any resources on how to structure a turn-based strategy engine. I've found lots of writing about pathfinding and AI, and those are all great... But when it comes to handling the turn structure and the game states, I am having trouble finding any resources at all. How should the states be divided to allow flexibility between the players and the controllers (HUMAN, COMPUTER, OBSERVER)? Also, maybe if I'm on the right track I just need some reassurance before I lay down another few hundred lines of code...
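
    A pattern that often comes up for this (a sketch only, with hypothetical names, shown in C# rather than GML for readability): give each player slot a controller object behind a common interface, so the turn loop never branches on HUMAN/COMPUTER/OBSERVER itself, and the camera policy lives inside each controller.

        // The turn loop only talks to IController; humans, AI, and
        // observers become interchangeable per player slot.
        public interface IController
        {
            bool TakeTurn(int playerIndex);   // returns true when the turn is done
        }

        public class AiController : IController
        {
            public bool TakeTurn(int playerIndex)
            {
                // plan one move, execute it, and point the camera at it here
                return true;
            }
        }

        public class TurnManager
        {
            private readonly IController[] controllers;
            private int current;

            public TurnManager(IController[] controllers)
            {
                this.controllers = controllers;
            }

            public void Update()
            {
                // BEGIN_TURN / END_TURN bookkeeping would bracket this call
                if (controllers[current].TakeTurn(current))
                    current = (current + 1) % controllers.Length;
            }
        }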


  • Designing generic render/graphics component in C++?

    - by s73v3r
    I'm trying to learn more about component-entity systems, so I decided to write a Tetris clone. I'm using the "style" of component-entity system where the Entity is just a bag of Components, the Components are just data, a Node is a set of Components needed to accomplish something, and a System is a set of methods that operates on a Node. All of my components inherit from a basic IComponent interface. I'm trying to figure out how to design the Render/Graphics/Drawable components. Originally, I was going to use SFML, and everything was going to be good. However, as this is an experimental system, I got the idea of being able to change out the render library at will. I thought that since the rendering would be fairly componentized, this should be doable. However, I'm having problems figuring out how I would design a common interface for the different types of render components. Should I be using C++ template types? It seems that having the RenderComponent somehow return its own mesh/sprite/whatever to the RenderSystem would be the simplest, but that would be difficult to generalize. However, letting the RenderComponent just hold on to data about what it would render would make it hard to re-use this component for different renderable objects (background, falling piece, field of already-fallen blocks, etc.). I realize this is fairly over-engineered for a regular Tetris clone, but I'm trying to learn about component-entity systems and making interchangeable components. It's just that rendering seems to be the hardest part to split out for me.
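
    One way to square that circle (a sketch with invented names, written in C# for consistency with the other examples here even though the question is about C++): keep the component as pure data naming an asset, and let each backend-specific render system own the actual mesh/sprite resources, keyed by that name. Swapping libraries then means swapping the system, not the components.

        // The component never touches SFML/OpenGL/etc. types directly.
        public struct RenderComponent
        {
            public string AssetName;   // what to draw
            public float X, Y;         // where to draw it
        }

        public interface IRenderSystem
        {
            void Draw(RenderComponent c);
        }

        public class SfmlRenderSystem : IRenderSystem
        {
            public void Draw(RenderComponent c)
            {
                // resolve c.AssetName against a backend-owned sprite cache
                // and submit it; only this class knows the library's types
            }
        }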


  • 2D Tile based Game Collision problem

    - by iNbdy
    I've been trying to program a tile-based game, and I'm stuck on the collision detection. Here is my code (not the best ^^):

        void checkTile(Character *c, int **map)
        {
            int x1, x2, y1, y2;

            /* Character position in the map */
            c->upY    = (c->y) / TILE_SIZE;          // Top left corner
            c->downY  = (c->y + c->h) / TILE_SIZE;   // Bottom left corner
            c->leftX  = (c->x) / TILE_SIZE;          // Top right corner
            c->rightX = (c->x + c->w) / TILE_SIZE;   // Bottom right corner

            x1 = (c->x + 10) / TILE_SIZE;            // 10px from left side point
            x2 = (c->x + c->w - 10) / TILE_SIZE;     // 10px from right side point
            y1 = (c->y + 10) / TILE_SIZE;            // 10px from top side point
            y2 = (c->y + c->h - 10) / TILE_SIZE;     // 10px from bottom side point

            /* Top */
            if (map[c->upY][x1] > 2 || map[c->upY][x2] > 2)
                c->topCollision = 1;
            else
                c->topCollision = 0;

            /* Bottom */
            if (map[c->downY][x1] > 2 || map[c->downY][x2] > 2)
                c->downCollision = 1;
            else
                c->downCollision = 0;

            /* Left */
            if (map[y1][c->leftX] > 2 || map[y2][c->leftX] > 2)
                c->leftCollision = 1;
            else
                c->leftCollision = 0;

            /* Right */
            if (map[y1][c->rightX] > 2 || map[y2][c->rightX] > 2)
                c->rightCollision = 1;
            else
                c->rightCollision = 0;
        }

    That calculates 8 collision points. My movement function looks like this:

        void movePlayer(Character *c, int **map)
        {
            if ((c->dirX == LEFT && !c->leftCollision) ||
                (c->dirX == RIGHT && !c->rightCollision))
                c->x += c->vx;
            if ((c->dirY == UP && !c->topCollision) ||
                (c->dirY == DOWN && !c->downCollision))
                c->y += c->vy;
            checkPosition(c, map);
        }

    And here is checkPosition:

        void checkPosition(Character *c, int **map)
        {
            checkTile(c, map);
            if (c->downCollision) {
                if (c->state != JUMPING) {
                    c->vy = 0;
                    c->y = (c->downY * TILE_SIZE - c->h);
                }
            }
            if (c->leftCollision) {
                c->vx = 0;
                c->x = (c->leftX) * TILE_SIZE + TILE_SIZE;
            }
            if (c->rightCollision) {
                c->vx = 0;
                c->x = c->rightX * TILE_SIZE - c->w;
            }
        }

    This works, but sometimes, when the player lands on the ground, the right and left collision flags become equal to 1, as if there were a collision coming from the left or right. Does anyone know why this happens?
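
    A plausible cause (an educated guess from the code, not a certainty): on the landing frame the player has sunk more than 10px into the ground tiles, so the y2 sample row is the ground row itself, and the left/right tests read solid ground tiles as side collisions. The usual cure is to move and resolve one axis at a time, re-sampling the map after each axis. A compact sketch of that idea, with hypothetical helper names and written in C# like the other examples here:

        // Resolve vertically first and snap out of the ground, then
        // re-sample tiles so the horizontal test never sees the
        // pre-snap vertical overlap.
        void MovePlayer(Character c, int[,] map)
        {
            c.Y += c.Vy;
            CheckTile(c, map);
            ResolveVertical(c, map);     // snap onto the ground, zero Vy

            c.X += c.Vx;
            CheckTile(c, map);           // fresh sample points after the snap
            ResolveHorizontal(c, map);   // snap out of walls, zero Vx
        }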


  • Structuring game world entities and their rendering objects

    - by keithjgrant
    I'm putting together a simple 2d tile-based game. I'm finding myself going in circles on some design decisions, and I think I'm in danger of over-engineering. After all, the game is simple enough that I had a working prototype inside of four hours with fewer than ten classes; it just wasn't scalable or flexible enough for a polished game. My question is about how to structure the flow of control between game entity objects and their rendering objects. Should each renderer have a reference to its entity, or vice versa? Or both? Should the entity be in control of calling the render() method, or be completely oblivious? I know there are several valid approaches here, but I'm kind of feeling decision paralysis. What are the pros and cons of each approach?
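
    For reference, one common arrangement looks like this (a sketch of a single option, not a verdict; the names are invented and an XNA-style SpriteBatch is assumed): the renderer holds a read-only reference to its entity, and a render list drives drawing, so entities stay oblivious to rendering.

        public interface IRenderer
        {
            void Render(SpriteBatch batch);
        }

        public class TileRenderer : IRenderer
        {
            private readonly Tile tile;   // the entity being drawn

            public TileRenderer(Tile tile) { this.tile = tile; }

            public void Render(SpriteBatch batch)
            {
                // read tile.X / tile.Y here; the Tile class never
                // learns that rendering exists
            }
        }

    The main trade-off: entity-to-renderer references make per-entity effects easy but couple game logic to drawing, while renderer-to-entity references keep the simulation testable and headless at the cost of a little indirection.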


  • How do I make a jumping dolphin rotate realistically?

    - by Johnny
    I want to program a dolphin that jumps and rotates like a real dolphin. Jumping is not the problem, but I don't know how to do the rotation. At the moment, my dolphin rotates a little weirdly, but I want it to rotate the way a real dolphin does. How can I improve the rotation?

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Texture2D image, water;
            float Gravity = 5.0F;
            float Acceleration = 20.0F;
            Vector2 Position = new Vector2(1200, 720);
            Vector2 Velocity;
            float rotation = 0;
            SpriteEffects flip;
            Vector2 Speed = new Vector2(0, 0);

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
                graphics.PreferredBackBufferWidth = 1280;
                graphics.PreferredBackBufferHeight = 720;
            }

            protected override void Initialize()
            {
                base.Initialize();
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                image = Content.Load<Texture2D>("cartoondolphin");
                water = Content.Load<Texture2D>("background");
                flip = SpriteEffects.None;
            }

            protected override void Update(GameTime gameTime)
            {
                float VelocityX = 0f;
                float VelocityY = 0f;
                float time = (float)gameTime.ElapsedGameTime.TotalSeconds;

                KeyboardState kbState = Keyboard.GetState();
                if (kbState.IsKeyDown(Keys.Left))
                {
                    rotation = 0;
                    flip = SpriteEffects.None;
                    VelocityX += -5f;
                }
                if (kbState.IsKeyDown(Keys.Right))
                {
                    rotation = 0;
                    flip = SpriteEffects.FlipHorizontally;
                    VelocityX += 5f;
                }

                // jump if the dolphin is under water
                if (Position.Y >= 670)
                {
                    if (kbState.IsKeyDown(Keys.A))
                    {
                        if (flip == SpriteEffects.None)
                        {
                            rotation += 0.01f;
                            VelocityY += 40f;
                        }
                        else
                        {
                            rotation -= 0.01f;
                            VelocityY += 40f;
                        }
                    }
                }
                else
                {
                    if (flip == SpriteEffects.None)
                    {
                        rotation -= 0.01f;
                        VelocityY += -10f;
                    }
                    else
                    {
                        rotation += 0.01f;
                        VelocityY += -10f;
                    }
                }

                float deltaY = 0;
                float deltaX = 0;
                deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
                deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY);
                Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds;
                Velocity.X = 0;

                if (Position.Y + image.Height / 2 > graphics.PreferredBackBufferHeight)
                    Position.Y = graphics.PreferredBackBufferHeight - image.Height / 2;

                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight - 100, graphics.PreferredBackBufferWidth, 100), Color.White);
                spriteBatch.Draw(image, Position, null, Color.White, rotation, new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }

    I changed my code a little, but I still have some trouble with the rotation. Here's the entire code. The dolphin looks in the wrong direction when I press the left or right key. For example, it looks down when I press the left key. What is wrong with the rotation? At the beginning the dolphin looks to the left side, but after I press a key it just looks down or up. I deleted the "rotation += 0.01f;" lines in the code. Is that correct?
        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Texture2D image, water;
            float Gravity = 5.0F;
            float Acceleration = 20.0F;
            Vector2 Position = new Vector2(1200, 720);
            Vector2 Velocity;
            float rotation = 0;
            SpriteEffects flip;
            Vector2 Speed = new Vector2(0, 0);
            Vector2 prevPos;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
                graphics.PreferredBackBufferWidth = 1280;
                graphics.PreferredBackBufferHeight = 720;
            }

            protected override void Initialize()
            {
                base.Initialize();
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                image = Content.Load<Texture2D>("cartoondolphin");
                water = Content.Load<Texture2D>("background");
                flip = SpriteEffects.None;
            }

            protected override void Update(GameTime gameTime)
            {
                float VelocityX = 0f;
                float VelocityY = 0f;
                float time = (float)gameTime.ElapsedGameTime.TotalSeconds;

                KeyboardState kbState = Keyboard.GetState();
                if (kbState.IsKeyDown(Keys.Left))
                {
                    flip = SpriteEffects.None;
                    VelocityX += -5f;
                }
                if (kbState.IsKeyDown(Keys.Right))
                {
                    flip = SpriteEffects.FlipHorizontally;
                    VelocityX += 5f;
                }

                rotation = (float)Math.Atan2(Position.X - prevPos.X, Position.Y - prevPos.Y);
                prevPos = Position;

                // jump if the dolphin is under water
                if (Position.Y >= 670)
                {
                    if (kbState.IsKeyDown(Keys.A))
                    {
                        if (flip == SpriteEffects.None)
                        {
                            VelocityY += 40f;
                        }
                        else
                        {
                            VelocityY += 40f;
                        }
                    }
                }
                else
                {
                    if (flip == SpriteEffects.None)
                    {
                        VelocityY += -10f;
                    }
                    else
                    {
                        VelocityY += -10f;
                    }
                }

                float deltaY = 0;
                float deltaX = 0;
                deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
                deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY);
                Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds;
                Velocity.X = 0;

                if (Position.Y + image.Height / 2 > graphics.PreferredBackBufferHeight)
                    Position.Y = graphics.PreferredBackBufferHeight - image.Height / 2;

                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight - 100, graphics.PreferredBackBufferWidth, 100), Color.White);
                spriteBatch.Draw(image, Position, null, Color.White, rotation, new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }
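
    One thing stands out in the second version (offered as a probable cause, not a verified fix): Math.Atan2 takes its arguments as (y, x), but the code passes (x, y), which skews the angle by roughly a quarter turn and would explain the dolphin facing down or up while moving sideways. A sketch of a corrected rotation step inside Update(), with the facing offset left as an assumption about the artwork:

        // Atan2 expects (y, x); guard against a zero-length frame so the
        // dolphin keeps its last heading when it isn't moving.
        Vector2 delta = Position - prevPos;
        if (delta != Vector2.Zero)
            rotation = (float)Math.Atan2(delta.Y, delta.X);
        if (flip == SpriteEffects.None)
            rotation += MathHelper.Pi;   // assuming the sprite art faces left
        prevPos = Position;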


  • Vector.Unproject - Checking if a model intersects a large sprite

    - by Fibericon
    Let's say I have a sprite, drawn like this:

        spriteBatch.Draw(levelCannons[i].texture, levelCannons[i].position,
                         null, alpha, levelCannons[i].rotation, Vector2.Zero,
                         scale, SpriteEffects.None, 0);

    Picture levelCannon as a laser beam that goes across the entire screen. I need to see whether my 3D model intersects the screen space inhabited by the sprite. I managed to dig up Vector.Unproject, but that seems to be useful only when dealing with a single point in 2D space, rather than an area. What can I do in my case?
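
    One workable inversion of the problem (a sketch under assumptions: XNA's Viewport.Project, plus hypothetical view/projection/world matrices and a hypothetical beam-rectangle helper): project the model's bounding sphere into screen space and test the resulting circle against the sprite's screen rectangle, instead of trying to unproject the whole sprite area.

        Viewport vp = GraphicsDevice.Viewport;
        BoundingSphere sphere = model.Meshes[0].BoundingSphere;

        // Project the sphere's center, and a point one radius away,
        // to get a screen-space circle.
        Vector3 center = vp.Project(sphere.Center, projection, view, world);
        Vector3 edge = vp.Project(sphere.Center + Vector3.Right * sphere.Radius,
                                  projection, view, world);
        float r = Vector2.Distance(new Vector2(center.X, center.Y),
                                   new Vector2(edge.X, edge.Y));

        Rectangle sphereRect = new Rectangle((int)(center.X - r), (int)(center.Y - r),
                                             (int)(r * 2), (int)(r * 2));
        Rectangle beamRect = GetBeamScreenRectangle(levelCannons[i]);   // hypothetical
        bool hit = beamRect.Intersects(sphereRect);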


  • Sun & Moon Movement

    - by Thomas Mosey
    I'm creating a 2D HTML5 canvas game and am stuck on how to go about animating my sun and moon. The current setup basically places the moon at -1024 on the X axis and the sun at 0, and animates them at 1 pixel a second. My canvas is 1024 pixels wide, and whenever the sun's or moon's X position crosses the width of the canvas, its X position is set back to -1024 to repeat the animation. What I am trying to do is get this to sync up with my day/night cycle. Each day is 10000 ticks long (a tick being added every frame), with day and night each taking 50% (5000 ticks each). What I am trying to calculate is what I'll need to add to the X position of each per frame to get the sun from an X of 0 to 1024 after 5000 ticks/frames. Any help is appreciated.
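
    Assuming one tick per frame as described, the step falls straight out of the numbers: the sun must cover 1024 px in 5000 ticks, so it needs 1024 / 5000 = 0.2048 px per tick; the moon, starting 1024 px behind, uses the same per-tick step to stay exactly half a cycle out of phase.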


  • Why doesn't swipe left work? [on hold]

    - by Hitesh
    I wrote the code below to detect single-tap and swipe-right events and perform a sprite action on each:

        @Override
        public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
            float x = 0F;
            int tapCount = 0;
            boolean playermoving = false;
            // TODO Auto-generated method stub
            if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_MOVE) {
                if (pSceneTouchEvent.getX() > x) {
                    playermoving = true;
                    players.runRight();
                }
                if (pSceneTouchEvent.getX() < x) {
                    Log.i("Run Left", "SPRITE Left");
                }
                /*
                 * if (pSceneTouchEvent.getX() < x) { System.exit(0);
                 * Log.i("SWIPE left", "SPRITE LEFT"); }
                 */
            }
            if (pSceneTouchEvent.getAction() == MotionEvent.ACTION_DOWN) {
                playermoving = false;
                x = pSceneTouchEvent.getX();
                tapCount++;
                Log.i("X CORD", String.valueOf(x));
            }
            if (pSceneTouchEvent.isActionDown()) {
                if (tapCount == 1 && playermoving != true) {
                    tapCount = 0;
                    players.jumpRight();
                }
            }
            return true;
        }

    The code works fine. The only problem is that the swipe-left event is not being detected for some reason. What can I do to make the swipe-left action work? Please help.
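
    A likely explanation, judging from the snippet alone: x, tapCount, and playermoving are declared as locals inside onSceneTouchEvent, so they are re-initialized to 0/false on every touch event. The x recorded during ACTION_DOWN is therefore gone by the time ACTION_MOVE fires, where x is always 0 - and since getX() can never be negative, the getX() < x branch can never run. Promoting those three variables to fields of the enclosing class, so the ACTION_DOWN coordinate survives into the ACTION_MOVE comparison, should make the left swipe detectable.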


  • What is the difference between Constant Vertex Attributes and Uniforms?

    - by Samaursa
    According to the OpenGL ES 2.0 Programming Guide:

        A constant vertex attribute is the same for all vertices of a primitive, and therefore only one value needs to be specified for all the vertices of a primitive.

    For uniforms the book states:

        ...any parameter to a shader that is constant across either all vertices or fragments (but that is not known at compile time) should be passed in as a uniform.

    I've always used uniforms for data that is constant for a primitive, but now it appears that attributes can also be used in the same way. Is there more to constant vertex attributes than simply 'they are the same as uniforms'?
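
    For what it's worth (stated from general GLES 2.0 knowledge, not from the book excerpt): the two differ in scope and capability. A constant vertex attribute is just the current value of a generic attribute slot when its array is disabled (set via glVertexAttrib4f after glDisableVertexAttribArray); it is always a float vector, and it is per-attribute-slot state independent of any program. A uniform is per-program state, can be a matrix, array, integer, boolean, or sampler, and is visible to fragment shaders too. So beyond "same value for every vertex", they are quite different mechanisms.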


  • Turn Based Event Algorithm

    - by GamersIncoming
    I'm currently working on a small roguelike in XNA, which sees the player in a randomly generated series of dungeons fending off creeps, as you might expect. As with most roguelikes, the player makes a move, and then each of the creeps currently active on screen makes a move in turn, until all creeps have updated and it returns to the player's go. On paper, the simple algorithm is straightforward:

        1. Player takes turn
        2. Turn number increments
        3. For each active creep, update position
        4. Once all active creeps have updated, allow player to take next turn

    However, when it comes to actually writing this in more detail, the concept becomes a bit trickier for me. So my question is this: what is the best way to handle events taking turns to trigger, where the completion of each event triggers the next, when dealing with a large number of creeps (probably stored as an array of enemy objects)? And is there an easier way to create some kind of engine that just takes all objects that need updating and chains them together so their updates follow in sequence? I'm not asking for code, just algorithms and theory in the direction of objects triggering updates one after the other, in a turn-based manner. Thanks in advance.

    Edited: Here's the code I currently have, which is horrible :/

        if (player.getTurnOver() && updateWait == 0)
        {
            if (creep[creepToUpdate].getActive())
            {
                creep[creepToUpdate].moveObject(player, map1);
                updateWait = 10;
            }
            if (creepToUpdate < creep.Length - 1)
            {
                creepToUpdate++;
            }
            else
            {
                creepToUpdate = 0;
                player.setTurnOver(false);
            }
        }
        if (updateWait > 0)
        {
            updateWait--;
        }
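
    Even though the question asks for theory rather than code, a small sketch may make the "chained updates" idea concrete (hypothetical types; XNA's GameTime assumed): put every active creep in a queue at the start of the round, and each frame let only the front actor run until it reports its action finished. The completion of one actor is what exposes the next, so the chaining falls out of the data structure instead of index bookkeeping.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;   // GameTime

        public interface IActor
        {
            bool IsActive { get; }
            bool UpdateTurn(GameTime gameTime);   // true when the action is finished
        }

        public class TurnQueue
        {
            private readonly Queue<IActor> pending = new Queue<IActor>();

            public void StartRound(IEnumerable<IActor> actors)
            {
                foreach (var a in actors)
                    if (a.IsActive)
                        pending.Enqueue(a);
            }

            // Call once per frame after the player's turn ends.
            public void Update(GameTime gameTime)
            {
                if (pending.Count == 0)
                    return;                        // round over; player's turn again
                if (pending.Peek().UpdateTurn(gameTime))
                    pending.Dequeue();             // finished; next actor is now live
            }
        }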


  • Movement in RPG

    - by user1264811
    I want to make an RPG in which I move tile by tile, so when I hit up, the tile row I am on decreases by one, for example. The movement is also supposed to be slow enough that I can see the change in tile, i.e. I can see my sprite move from tile to tile. Currently, with the code I have, when I hit a direction on my keyboard I move several blocks within seconds, and by the time I release the button I have already gotten a NullPointerException because I have left the map. How can I slow down the movement?
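
    The standard recipe here (a sketch with invented names, written in C# although the exception suggests the game is in Java): treat a key press as a request to walk to an adjacent tile, refuse input while a walk is in progress, and tween the draw position toward the target a few pixels per frame. Bounds-checking the target tile also removes the crash from walking off the map.

        using System;

        public class GridMover
        {
            const int TileSize = 32;         // assumed tile size
            const float Speed = 2f;          // pixels per update; divides TileSize evenly

            float pixelX, pixelY;            // where the sprite is drawn
            int targetCol, targetRow;        // tile being walked to

            bool Moving =>
                pixelX != targetCol * TileSize || pixelY != targetRow * TileSize;

            public void Update(bool upPressed)
            {
                if (!Moving)
                {
                    if (upPressed && targetRow > 0)   // stay on the map
                        targetRow--;
                    return;
                }
                // one small step toward the target tile per frame
                pixelX += Math.Sign(targetCol * TileSize - pixelX) * Speed;
                pixelY += Math.Sign(targetRow * TileSize - pixelY) * Speed;
            }
        }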


  • "Walking" along a rotating surface in LimeJS

    - by Dave Lancea
    I'm trying to have a character walk along a plank (a long, thin rectangle) that works like a seesaw, rotated around a central point by Box2D physics (falling objects). I want the left and right arrow keys to move the player up and down the plank, regardless of its slope, and I don't want to use real physics for the player movement. My idea for achieving this was to compute the coordinate based on the rotation of the plank and the current location "up" or "down" the board. My math is derived from here: http://math.stackexchange.com/questions/143932/calculate-point-given-x-y-angle-and-distance

    Here's the code I have so far:

        movement = 0;
        if (keys[37]) {   // Left
            movement = -3;
        }
        if (keys[39]) {   // Right
            movement = 3;
        }

        // this.plank is a LimeJS sprite.
        // getRotation() should return an angle in degrees
        var rotation = this.plank.getRotation();

        // this.current_plank_location is initialized as 0
        this.current_plank_location += movement;

        var x_difference = this.current_plank_location * Math.cos(rotation);
        var y_difference = this.current_plank_location * Math.sin(rotation);

        this.setPosition(seesaw.PLANK_CENTER_X + x_difference,
                         seesaw.PLANK_CENTER_Y + y_difference);

    This code causes the player to swing around in a circle when they are away from the center of the plank, given even a slight change in the plank's rotation. Any ideas on how I can get the player position to follow the board position?
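
    A detail worth checking first (a strong suspicion rather than a tested fix): the code's own comment says getRotation() returns degrees, but Math.cos and Math.sin expect radians, so a one-degree change of slope is treated as roughly a 57-degree swing. Converting once - var radians = rotation * Math.PI / 180; - and using radians in both calls should make the player track the plank smoothly.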


  • Finding the shorter turning direction towards a target

    - by A.B.
    I'm trying to implement a type of movement where the object gradually turns to face the target. The problem I've run into is figuring out which turning direction is faster. The following code works until the object's orientation crosses the -PI or PI threshold, at which point it starts turning in the opposite direction:

        void moveToPoint(sf::Vector2f destination)
        {
            if (destination == position)
                return;

            auto distance = distanceBetweenPoints(position, destination);
            auto direction = angleBetweenPoints(position, destination);

            /// Decides whether incrementing or decrementing orientation is faster
            /// the next line is the problem
            if (atan2(sin(direction - rotation), cos(direction - rotation)) > 0) {
                /// Increment rotation
                rotation += rotation_speed;
            } else {
                /// Decrement rotation
                rotation -= rotation_speed;
            }

            if (distance < movement_speed) {
                position = destination;
            } else {
                position.x = position.x + movement_speed * cos(rotation);
                position.y = position.y + movement_speed * sin(rotation);
            }

            updateGraphics();
        }

    rotation and rotation_speed are implemented as a custom data type for radians which cannot have values lower than -PI or greater than PI. Any excess or deficit "wraps around"; for example, -3.2 becomes ~3.08.
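
    Two things are worth inspecting here (educated guesses from the snippet; the sketch below uses plain floats and is written in C# like the other examples): first, whether the custom radians type is applied to (direction - rotation) before sin/cos ever see it, since any clamping or operator quirk there defeats the atan2 renormalization; second, whether the fixed-size step overshoots the target and flips sign every frame near the boundary. Computing the signed shortest difference in raw floats and clamping the step avoids both:

        // diff is the signed shortest angle from rotation to direction,
        // always in (-pi, pi], regardless of how either angle is stored.
        float diff = (float)Math.Atan2(Math.Sin(direction - rotation),
                                       Math.Cos(direction - rotation));
        float step = Math.Min(Math.Abs(diff), rotationSpeed);
        rotation += Math.Sign(diff) * step;   // clamped, so it never overshoots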


  • Get analogue stick XY position using JInput in LWJGL

    - by oIrC
    I want to capture the movement of the gamepad's analogue stick. Is there an equivalent to this function:

        public void mouseMoved(MouseEvent mouseEvent) {
            mouseEvent.getX();   // returns the X coordinate of the cursor inside a component
            mouseEvent.getY();   // returns the Y coordinate of the cursor inside a component
        }

    in lwjgl.input.Controllers? I found controller.getAxisValue(), but it doesn't work like the function above. Any help? Thanks.
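
    A note on the mismatch (from general LWJGL 2 experience; worth verifying against the docs): mouseMoved is event-driven, while gamepad axes in lwjgl.input.Controllers are polled state. The usual pattern is to call Controllers.poll() once per frame and then read the axis values (e.g. getXAxisValue()/getYAxisValue(), or getAxisValue(index)), which return the stick's current deflection in roughly the -1.0 to 1.0 range rather than a cursor coordinate. To get "movement", compare the value against the previous frame's reading, and apply a small dead zone so a centered stick reads as no motion.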


  • Client-Server MMOG & data structures sync when joining / playing

    - by plang
    After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: how you keep server data in sync on the client, both when you join and while you play. A pretty vague question, I agree, so let me refine it. Let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself and all the surrounding cells, not more. When joining the game for the first time, the only thing we can do is send some sort of "database dump" of the interesting cells to the client. While playing, I guess it would be very inefficient to do the same thing regularly; I imagine the best thing to do is to send "deltas" to the client, which would keep the local database in sync. Now let's say the player moves and arrives in another cell. The surrounding cells change, and for each new cell the player subscribes to, the same technique as used when joining has to be applied: some sort of "database dump". This mechanic of joining/moving in a cell-based MMOG virtual world interests me, and I was wondering if there are tried and tested techniques in this domain. Thanks!
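
    The pattern described is essentially snapshot-plus-deltas per cell, and a tiny sketch of the message shapes may help pin it down (entirely hypothetical types, not from any particular engine): each cell carries a version counter; a subscribe yields a full snapshot at some version, after which the server streams deltas whose base version must match what the client holds, so a gap forces a cheap re-snapshot of just that cell.

        using System.Collections.Generic;

        public abstract class CellMessage
        {
            public int CellId;
            public int Version;          // cell state version this message produces
        }

        public class CellSnapshot : CellMessage
        {
            public List<EntityState> Entities = new List<EntityState>();
        }

        public class CellDelta : CellMessage
        {
            public int BaseVersion;      // client must be at this version to apply
            public List<EntityChange> Changes = new List<EntityChange>();
        }

        public class EntityState  { /* id, position, etc. */ }
        public class EntityChange { /* id, changed fields */ }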


  • Boolean checks with a single quadtree, or multiple quadtrees?

    - by Djentleman
    I'm currently developing a 2D sidescrolling shooter game for PC (think metroidvania, but with a lot more happening at once), using XNA. I'm utilising quadtrees for my spatial partitioning system. All objects will be encompassed by standard bounding geometry (box or sphere), with pixel-perfect collision detection possibly implemented after the geometry collision (depending on how optimised I can get it). These are my collision scenarios, with <> representing object overlap (multiplayer co-op is the reason for the Player <> Player scenario); true means a collision occurs:

        Player <> Player = false
        Enemy <> Enemy = false
        Player <> Enemy = true
        PlayerBullet <> Enemy = true
        PlayerBullet <> Player = false
        PlayerBullet <> EnemyBullet = true
        PlayerBullet <> PlayerBullet = false
        EnemyBullet <> Player = true
        EnemyBullet <> Enemy = false
        EnemyBullet <> EnemyBullet = false
        Player <> Environment = true
        Enemy <> Environment = true
        PlayerBullet <> Environment = true
        EnemyBullet <> Environment = true

    Given this information, and the fact that there will likely be several hundred objects rendering on screen at any given time, my question is as follows. Which method is likely to be the most efficient/optimised, and why:

        1. Using a single quadtree with boolean checks for collision between the different types of objects.
        2. Using three quadtrees at once (player, enemy, environment), only testing the player and enemy trees against each other while testing both the player and enemy trees against the environment tree.
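
    For the single-tree option, the per-pair boolean check is cheap if the matrix above is encoded as bitmasks (a sketch with invented names; the table below mirrors the scenarios exactly):

        using System;
        using System.Collections.Generic;

        [Flags]
        public enum Layer
        {
            Player       = 1,
            Enemy        = 2,
            PlayerBullet = 4,
            EnemyBullet  = 8,
            Environment  = 16
        }

        public static class CollisionMatrix
        {
            static readonly Dictionary<Layer, Layer> collidesWith =
                new Dictionary<Layer, Layer>
            {
                { Layer.Player,       Layer.Enemy | Layer.EnemyBullet | Layer.Environment },
                { Layer.Enemy,        Layer.Player | Layer.PlayerBullet | Layer.Environment },
                { Layer.PlayerBullet, Layer.Enemy | Layer.EnemyBullet | Layer.Environment },
                { Layer.EnemyBullet,  Layer.Player | Layer.PlayerBullet | Layer.Environment },
                { Layer.Environment,  Layer.Player | Layer.Enemy |
                                      Layer.PlayerBullet | Layer.EnemyBullet },
            };

            public static bool ShouldTest(Layer a, Layer b)
                => (collidesWith[a] & b) != 0;
        }

    With this, candidate pairs coming out of one quadtree are filtered by a lookup and an AND, so the "boolean checks" cost is unlikely to be what decides between one tree and three.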


  • Converting a DrawModel() using BasicEffect to one using Effect

    - by Fibericon
    Take this DrawModel() provided by MSDN:

        private void DrawModel(Model m)
        {
            Matrix[] transforms = new Matrix[m.Bones.Count];
            float aspectRatio = graphics.GraphicsDevice.Viewport.Width /
                                graphics.GraphicsDevice.Viewport.Height;
            m.CopyAbsoluteBoneTransformsTo(transforms);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
            Matrix view = Matrix.CreateLookAt(new Vector3(0.0f, 50.0f, Zoom),
                                              Vector3.Zero, Vector3.Up);

            foreach (ModelMesh mesh in m.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.View = view;
                    effect.Projection = projection;
                    effect.World = gameWorldRotation *
                                   transforms[mesh.ParentBone.Index] *
                                   Matrix.CreateTranslation(Position);
                }
                mesh.Draw();
            }
        }

    How would I apply a custom effect to a model with that? Effect doesn't have View, Projection, or World members. This is what they recommend replacing the foreach loop with:

        foreach (ModelMesh mesh in terrain.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                mesh.Draw();
            }
        }

    Of course, that doesn't really work. What else needs to be done?
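
    The missing piece is that a custom Effect receives its matrices through named parameters that must match the ones declared in the .fx file. A sketch of the standard XNA pattern ("World"/"View"/"Projection" are assumed parameter names, and customEffect is hypothetical):

        foreach (ModelMesh mesh in m.Meshes)
        {
            // Point every mesh part at the custom effect (often done once at load).
            foreach (ModelMeshPart part in mesh.MeshParts)
                part.Effect = customEffect;

            customEffect.Parameters["World"].SetValue(
                gameWorldRotation * transforms[mesh.ParentBone.Index] *
                Matrix.CreateTranslation(Position));
            customEffect.Parameters["View"].SetValue(view);
            customEffect.Parameters["Projection"].SetValue(projection);

            mesh.Draw();
        }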


  • How can I generate signed distance fields (2D) in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime, and used from there. For reasons I will explain at the end of this question (for people interested), I need to create the distance fields in real time. There are some papers out there on methods that are supposed to be viable in real-time environments, such as Chamfer distance transforms and Voronoi-diagram-approximation-based transforms (as suggested in this presentation by the Pixeljunk Shooter dev), but I (and therefore, it can be assumed, a lot of other people) have a very hard time actually putting them to use, since they're usually long, heavy on the math, and not very algorithmic in their explanation. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial as opposed to a link to just another paper or slide deck, this question will receive a bounty once it's eligible for one :-). Here's why I need to do it in real time: There's something else:


  • Application toolkits like Qt versus traditional game/multimedia libraries like SFML

    - by Aaron
    I currently intend to use SFML for my next game project. I'll need a substantial GUI, though (it's an RPG/strategy-type game), so I'll either have to implement my own or find an appropriate third-party library, which seems to boil down to CEGUI, libRocket, or GWEN. At the same time, I don't anticipate doing many advanced graphical effects. My game will be 2D and primarily sprite-based, with some sprite animations. I've recently discovered that Qt applications can have their appearance styled so that they don't have to look like plain OS apps. Given that, I am beginning to consider Qt a valid alternative to SFML: I wouldn't have to implement the GUI functionality I need, and I may not be taking advantage of SFML's lower-level access anyway. The only drawbacks I can think of immediately are the learning curve for Qt and figuring out how to fit game logic inside such a framework after getting used to the input/update/render loop of traditional game libraries. When would an application toolkit like Qt be more appropriate for a game than a traditional game or multimedia library like SFML?


  • Sending Graphics to the C drive [on hold]

    - by CodeOfGenius
    I'm trying to create image files on the user's desktop. Let's say I have a picture of an orange in my Eclipse workspace, in the resource folder. When somebody downloads the project, I want to take that image of an orange and put it in a folder called "fruit" on their desktop. Whenever I export my game it can't read the images anymore, which is weird, so I would prefer to try this method. Just like Minecraft keeps its stuff in %AppData%, I want to put a folder with the images the game uses on the desktop. There aren't any errors; I'm just asking how to do this.
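
    Two hedged pointers, since no code was posted: the images probably stop loading after export because they end up inside the jar, where File-based paths no longer reach them; loading them via the classpath (getClass().getResourceAsStream("/orange.png") fed to ImageIO.read) works both in Eclipse and from the exported jar. To additionally copy them out to a folder, the same stream can be written to a file under System.getProperty("user.home") + "/Desktop/fruit" with java.nio.file.Files.copy, creating the directory first.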


  • Strange behavior of RigidBody with gravity and impulse applied

    - by Heisenbug
    I'm doing some experiments trying to figure out how physics works in Unity. I created a cube mesh with a BoxCollider and a Rigidbody. The cube is lying on a mesh plane with a BoxCollider. I'm trying to update the object's position by applying a force to its Rigidbody. Inside the script's FixedUpdate function I'm doing the following:

        public void FixedUpdate()
        {
            if (leftButtonPressed())
                this.rigidbody.AddForce(
                    this.transform.forward * this.forceStrength,
                    ForceMode.Impulse);
        }

    Although the object is aligned with the world axes and the force is applied along the Z axis, it performs quite a large rotation around its Y axis. Since I didn't modify the center of mass or the BoxCollider's position and dimensions, all values should be fine. With gravity removed, letting the object fly without touching the plane, the problem doesn't show, so I suppose it's related to the friction between the objects, but I can't understand exactly what the problem is. Why does this happen? What's my mistake? How can I fix this, or what's the right way to move an object on a plane with a force impulse?
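
    If the goal is sliding without spin (a suggestion, not a diagnosis of the exact torque source): friction acting on the contact patch can apply torque even when the push itself goes through the center of mass, so the standard remedies are to lock the unwanted axes with the Rigidbody's rotation constraints (e.g. freezeRotation, or freezing only Y via the constraints field), or to assign a low-friction physic material to the two colliders while experimenting.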


  • Unity Problem with colliding instances of same object

    - by Kuba Sienkiewicz
    I want to check if an object's instance is overlapping another instance (any spawned object with any other spawned object, not necessarily the same object). I'm doing this by detecting collisions between bodies, but I have a problem: spawned objects (instances) detect collisions with everything except other spawned objects. I've checked collision layers etc. All of the spawned objects have rigidbodies and mesh colliders. Also, when I attach my script to another body and touch that body with an instanced object, it detects the collision. So the problem is visible only in collisions between spawned objects. One more piece of information: the script, rigidbody, and collider are attached to a child of the main object.

        using UnityEngine;
        using System.Collections;

        public class CantPlace : MonoBehaviour
        {
            public bool collided = false;

            // Use this for initialization
            void Start()
            {
            }

            // Update is called once per frame
            void Update()
            {
                //Debug.Log(collided);
            }

            void OnTriggerEnter(Collider collider)
            {
                //if (true) {
                //    foreach (Transform child in this.transform) {
                //        if (child.name == "Cylinder") {
                //collided = true;
                Color c;
                c = this.renderer.material.color;
                c.g = 0f;
                c.b = 1f;
                c.r = 0f;
                this.renderer.material.color = c;
                Debug.Log(collider.name);
                //        }
                //    }
                //}
                //foreach (ContactPoint contact in collision.contacts) {
                //    Debug.DrawRay(contact.point, contact.normal, Color.red, 15f);
                //}
            }
        }
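
    A pattern that matches these symptoms closely (offered as a likely cause to check, not a certainty): in Unity, two non-convex MeshColliders never register collisions or triggers against each other, while each of them still collides fine with primitive colliders - which would explain spawned objects interacting with everything except one another. Ticking Convex on the spawned objects' MeshColliders (or replacing them with box/capsule primitives on the child) is the usual fix; for OnTriggerEnter to fire, at least one collider of the pair also needs Is Trigger set and a Rigidbody present, which these objects already have.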


  • SSAO implementation

    - by Irbis
    I'm trying to implement SSAO based on this tutorial: link. I use deferred rendering and world coordinates for the shading calculations. When filling the g-buffer, the vertex shader output looks like this:

        worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
        normal = normalize(normalModelMatrix * inNormal);
        gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculation, I render the scene as a full-screen quad and save an occlusion parameter in a texture. (Vertex positions in world space: link. Normals in world space: link.) The SSAO implementation:

        subroutine (RenderPassType)
        void ssao()
        {
            vec2 texCoord = CalcTexCoord();
            vec3 worldPos = texture(texture0, texCoord).xyz;
            vec3 normal = normalize(texture(texture1, texCoord).xyz);

            vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
            vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;
            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            mat3 tbn = mat3(tangent, bitangent, normal);

            float occlusion = 0.0;
            float radius = 4.0;

            for (int i = 0; i < kernelSize; ++i)
            {
                vec3 pix = tbn * kernel[i];
                pix = pix * radius + worldPos;

                vec4 offset = vec4(pix, 1.0);
                offset = ProjectionMatrix * ViewMatrix * offset;
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;

                float sample_depth = texture(texture0, offset.xy).z;
                float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
                occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
            }

            outputColor = vec4(occlusion, occlusion, occlusion, 1);
        }

    That code gives the following results: camera looking towards -Z in world space: link; camera looking towards +Z in world space: link. I wonder whether it is possible to use world coordinates in the above code. When I move the camera I get different results, because world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas in place of occlusion, so the ground should have white areas only near the object.
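
    On the central question (a hedged pointer, not a full answer): worldPos.z is just a world coordinate, not a depth, so comparing it against another fragment's worldPos.z only measures distance along the world Z axis and flips meaning as the camera turns - which matches the -Z/+Z discrepancy described. The usual approach is to move the comparison into view space: transform both the stored position and the kernel sample by ViewMatrix and compare their view-space z values (or distances to the camera), which makes the occlusion test camera-relative while the g-buffer can keep storing world-space data.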


  • how to properly implement alpha blending in a complex 3d scene

    - by Gajet
    I know this question might sound easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far:

    - First I thought about sorting objects by depth, but this simply fails because objects are not simple shapes: they may have curves and may loop inside each other, so I can't always tell which one is closer to the camera.

    - Then I thought about sorting triangles, but this might also fail. Though I'm not sure how to implement it, there is a rare case that could again cause a problem, in which two triangles pass through each other. Again, no one can tell which one is nearer.

    - The next thing was using the depth buffer. At least the main reason we have a depth buffer is because of the sorting problems I mentioned, but now we get another problem: since objects can be transparent, more than one object may be visible in a single pixel. For which object should I store the pixel depth?

    - I then thought maybe I could store only the depth of the front-most object, and use that to determine how I should blend subsequent draw calls at that pixel. But again there was a problem: think of two semi-transparent planes with a solid plane in the middle of them. If I render the solid plane at the end, one can see the most distant plane. Note that I was going to merge every two planes until there is only one color left for that pixel. Obviously I could use the sorting methods here too, for the same reasons explained above.

    - Finally, the only thing I can imagine working is to render all objects into different render targets and then sort those layers to display the final output. But this time I don't know how to implement that algorithm.

