Search Results

Search found 28230 results on 1130 pages for 'embedded development'.


  • How can I make the camera return to the beginning of the terrain when it reaches the end?

    - by wbaccari
    How can I make the camera return to the beginning of the terrain when it reaches the end? I tried using ICameraSceneNode::setPosition(). if (camera->getPosition().X>1200.f) camera->setPosition(vector3df(1.f,1550.f,camera->getPosition().Z)); if (camera->getPosition().X<0.f) camera->setPosition(vector3df(1199.f,1550.f,camera->getPosition().Z)); if (camera->getPosition().Z>1200.f) camera->setPosition(vector3df(camera->getPosition().X,1550.f,1.f)); if (camera->getPosition().Z<0.f) camera->setPosition(vector3df(camera->getPosition().X,1550.f,1199.f)); It seems to work fine with a flat terrain (one shade of grey in the heightmap), but it starts to produce strange behavior as soon as I try to add some hills. Edit: The setPosition() call seems to perform a translation of the camera toward the new position, so the camera stops at the first obstacle it encounters on its way.

    Read the article

  • Simulating smooth movement along a line after calculating a collision containing a restitution of zero in 2D

    - by Casey
    [for tl;dr see after listing] //...Code to determine shapes types involved in collision here... //...Rectangle-Line collision detected. if(_rbTest->GetCollisionShape()->Intersects(*_ground->GetCollisionShape())) { //Convert incoming shape to a line. a2de::Line l(*dynamic_cast<a2de::Line*>(_ground->GetCollisionShape())); //Get line's normal. a2de::Vector2D normal_vector(l.GetSlope().GetY(), -l.GetSlope().GetX()); a2de::Vector2D::Normalize(normal_vector); //Accumulate forces involved. a2de::Vector2D intermediate_forces; a2de::Vector2D normal_force = normal_vector * _rbTest->GetMass() * _world->GetGravityHandler()->GetGravityValue(); intermediate_forces += normal_force; //Calculate final velocity: See [1] double Ma = _rbTest->GetMass(); a2de::Vector2D Ua = _rbTest->GetVelocity(); double Mb = _ground->GetMass(); a2de::Vector2D Ub = _ground->GetVelocity(); double mCr = Mb * _ground->GetRestitution(); a2de::Vector2D collision_velocity( ((Ma * Ua) + (Mb * Ub) + ((mCr * Ub) - (mCr * Ua))) / (Ma + Mb)); //Calculate reflection vector: See [2] a2de::Vector2D reflect_velocity( -collision_velocity + 2 * (a2de::Vector2D::DotProduct(collision_velocity, normal_vector)) * normal_vector ); //Affect velocity to account for restitution of colliding bodies. reflect_velocity *= (_ground->GetRestitution() * _rbTest->GetRestitution()); _rbTest->SetVelocity(reflect_velocity); //THE ULTIMATE ISSUE STEMS FROM THE FOLLOWING LINE: //Move object away from collision one pixel to prevent constant collision. _rbTest->SetPosition(_rbTest->GetPosition() + normal_vector); _rbTest->ApplyImpulse(intermediate_forces); } Sources: (1) Wikipedia: Coefficient of Restitution: Speeds after impact (2) Wikipedia: Specular Reflection: Direction of reflection First, I have a system in place to account for friction (that is, a coefficient of friction) but is not used right now (in addition, it is zero, which should not affect the math anyway). I'll deal with that after I get this working. Anyway, when the restitution of either object involved in the collision is zero the object stops as required, but if movement along the same direction (again, irrespective of the friction value that isn't used) as the line is attempted the object moves as if slogging through knee deep snow. If I remove the line of code in question and the object is not push away one pixel the object barely moves at all. All because the object collides, is stopped, is pushed up, collides, is stopped...etc. OR collides, is stopped, collides, is stopped, etc... TL;DR How do I only account for a collision ONCE for restitution purposes (BONUS: but CONTINUALLY for frictional purposes, to be implemented later)
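
    Not a fix for the a2de code above, but a sketch of one common way to keep restitution from being applied every frame: treat the contact as an impact only when the bodies are closing along the normal faster than a small threshold, treat anything slower as resting contact, and separate the bodies by the measured penetration depth rather than a fixed one-pixel push. All names below are illustrative, not part of a2de.

        struct Vec2 { double x, y; };
        static Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
        static Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
        static Vec2 operator*(double s, Vec2 v) { return {s * v.x, s * v.y}; }
        static double Dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // Resolve one contact: restitution only for real impacts, plain
        // projection for resting contact, and positional correction by the
        // actual penetration depth instead of a constant one-pixel push.
        void ResolveContact(Vec2& velocity, Vec2& position, Vec2 normal,
                            double restitution, double penetrationDepth) {
            const double restingThreshold = 1.0;            // tune to your units
            double vn = Dot(velocity, normal);              // < 0 when approaching
            if (vn < -restingThreshold)
                velocity = velocity - (1.0 + restitution) * vn * normal;  // bounce
            else if (vn < 0.0)
                velocity = velocity - vn * normal;          // resting: cancel normal part only
            position = position + penetrationDepth * normal; // separate the bodies
        }

    With this split, friction can still be applied to the tangential component every frame while the restitution branch only fires on genuine impacts.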

    Read the article

  • SDL_BlitSurface segmentation fault (surfaces aren't null)

    - by Trollkemada
    My app is crashing on SDL_BlitSurface() and i can't figure out why. I think it has something to do with my static object. If you read the code you'll why I think so. This happens when the limits of the map are reached, i.e. (iwidth || jheight). This is the code: Map.cpp (this render) Tile const * Map::getTyle(int i, int j) const { if (i >= 0 && j >= 0 && i < width && j < height) { return data[i][j]; } else { return &Tile::ERROR_TYLE; // This makes SDL_BlitSurface (called later) crash //return new Tile(TileType::ERROR); // This works with not problem (but is memory leak, of course) } } void Map::render(int x, int y, int width, int height) const { //DEBUG("(Rendering...) x: "<<x<<", y: "<<y<<", width: "<<width<<", height: "<<height); int firstI = x / TileType::PIXEL_PER_TILE; int firstJ = y / TileType::PIXEL_PER_TILE; int lastI = (x+width) / TileType::PIXEL_PER_TILE; int lastJ = (y+height) / TileType::PIXEL_PER_TILE; // The previous integer division rounds down when dealing with positive values, but it rounds up // negative values. This is a fix for that (We need those values always rounded down) if (firstI < 0) { firstI--; } if (firstJ < 0) { firstJ--; } const int firstX = x; const int firstY = y; SDL_Rect srcRect; SDL_Rect dstRect; for (int i=firstI; i <= lastI; i++) { for (int j=firstJ; j <= lastJ; j++) { if (i*TileType::PIXEL_PER_TILE < x) { srcRect.x = x % TileType::PIXEL_PER_TILE; srcRect.w = TileType::PIXEL_PER_TILE - (x % TileType::PIXEL_PER_TILE); dstRect.x = i*TileType::PIXEL_PER_TILE + (x % TileType::PIXEL_PER_TILE) - firstX; } else if (i*TileType::PIXEL_PER_TILE >= x + width) { srcRect.x = 0; srcRect.w = x % TileType::PIXEL_PER_TILE; dstRect.x = i*TileType::PIXEL_PER_TILE - firstX; } else { srcRect.x = 0; srcRect.w = TileType::PIXEL_PER_TILE; dstRect.x = i*TileType::PIXEL_PER_TILE - firstX; } if (j*TileType::PIXEL_PER_TILE < y) { srcRect.y = 0; srcRect.h = TileType::PIXEL_PER_TILE - (y % TileType::PIXEL_PER_TILE); dstRect.y = j*TileType::PIXEL_PER_TILE + (y % TileType::PIXEL_PER_TILE) - firstY; } else if (j*TileType::PIXEL_PER_TILE >= y + height) { srcRect.y = y % TileType::PIXEL_PER_TILE; srcRect.h = y % TileType::PIXEL_PER_TILE; dstRect.y = j*TileType::PIXEL_PER_TILE - firstY; } else { srcRect.y = 0; srcRect.h = TileType::PIXEL_PER_TILE; dstRect.y = j*TileType::PIXEL_PER_TILE - firstY; } SDL::YtoSDL(dstRect.y, srcRect.h); SDL_BlitSurface(getTyle(i,j)->getType()->getSurface(), &srcRect, SDL::getScreen(), &dstRect); // <-- Crash HERE /*DEBUG("i = "<<i<<", j = "<<j); DEBUG("srcRect.x = "<<srcRect.x<<", srcRect.y = "<<srcRect.y<<", srcRect.w = "<<srcRect.w<<", srcRect.h = "<<srcRect.h); DEBUG("dstRect.x = "<<dstRect.x<<", dstRect.y = "<<dstRect.y);*/ } } } Tile.h #ifndef TILE_H #define TILE_H #include "TileType.h" class Tile { private: TileType const * type; public: static const Tile ERROR_TYLE; Tile(TileType const * t); ~Tile(); TileType const * getType() const; }; #endif Tile.cpp #include "Tile.h" const Tile Tile::ERROR_TYLE(TileType::ERROR); Tile::Tile(TileType const * t) : type(t) {} Tile::~Tile() {} TileType const * Tile::getType() const { return type; } TileType.h #ifndef TILETYPE_H #define TILETYPE_H #include "SDL.h" #include "DEBUG.h" class TileType { protected: TileType(); ~TileType(); public: static const int PIXEL_PER_TILE = 30; static const TileType * ERROR; static const TileType * AIR; static const TileType * SOLID; virtual SDL_Surface * getSurface() const = 0; virtual bool isSolid(int x, int y) const = 0; }; #endif ErrorTyle.h #ifndef ERRORTILE_H #define 
ERRORTILE_H #include "TileType.h" class ErrorTile : public TileType { friend class TileType; private: ErrorTile(); mutable SDL_Surface * surface; static const char * FILE_PATH; public: SDL_Surface * getSurface() const; bool isSolid(int x, int y) const ; }; #endif ErrorTyle.cpp (The surface can't be loaded when building the object, because it is a static object and SDL_Init() needs to be called first) #include "ErrorTile.h" const char * ErrorTile::FILE_PATH = ("C:\\error.bmp"); ErrorTile::ErrorTile() : TileType(), surface(NULL) {} SDL_Surface * ErrorTile::getSurface() const { if (surface == NULL) { if (SDL::isOn()) { surface = SDL::loadAndOptimice(ErrorTile::FILE_PATH); if (surface->w != TileType::PIXEL_PER_TILE || surface->h != TileType::PIXEL_PER_TILE) { WARNING("Bad tile surface size"); } } else { ERROR("Trying to load a surface, but SDL is not on"); } } if (surface == NULL) { // This if doesn't get called, so surface != NULL ERROR("WTF? Can't load surface :\\"); } return surface; } bool ErrorTile::isSolid(int x, int y) const { return true; }

    Read the article

  • Parent variable inheritance methods Unity3D/C#

    - by Timothy Williams
    I'm creating a system where there is a base "Hero" class and each hero inherits from it with their own stats and abilities. What I'm wondering is, how could I access a variable from one of the child scripts in the parent script (something like maxMP = MP), or call a function in the parent class that is specified in each child class (in the parent's update, alarms() is called; in the child classes, alarms() is specified to do something)? Is this possible at all, or not? Thanks. A minimal sketch of this pattern is shown below.
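
    A minimal C# sketch of the mechanism being asked about (names Hero, MP and alarms follow the question; everything else is illustrative): the base class owns the shared fields and declares a virtual or abstract hook, each hero overrides it, and base-class code calls it polymorphically. In Unity these classes would typically also derive from MonoBehaviour and the call site would be Update(), but the inheritance mechanics are the same.

        public abstract class Hero
        {
            protected float maxMP;
            protected float MP;

            // Base logic can call this without knowing the concrete hero type.
            protected abstract void Alarms();

            public void Tick()
            {
                maxMP = MP;   // parent code reading a value the child assigned
                Alarms();     // dispatches to the child's override
            }
        }

        public class Warrior : Hero
        {
            public Warrior() { MP = 50f; }

            protected override void Alarms()
            {
                // warrior-specific alarm behaviour goes here
            }
        }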

    Read the article

  • Moving player in the direction the camera is facing

    - by Samurai Fox
    I have a 3rd person camera which can rotate around the player. My problem is that wherever camera is facing, players forward is always the same direction. For example when camera is facing the right side of the player, when I press button to move forward, I want player to turn to the left and make that the "new forward". My camera script so far: using UnityEngine; using System.Collections; public class PlayerScript : MonoBehaviour { public float RotateSpeed = 150, MoveSpeed = 50; float DeltaTime; void Update() { DeltaTime = Time.deltaTime; transform.Rotate(0, Input.GetAxis("LeftX") * RotateSpeed * DeltaTime, 0); transform.Translate(0, 0, -Input.GetAxis("LeftY") * MoveSpeed * DeltaTime); } } public class CameraScript : MonoBehaviour { public GameObject Target; public float RotateSpeed = 170, FollowDistance = 20, FollowHeight = 10; float RotateSpeedPerTime, DesiredRotationAngle, DesiredHeight, CurrentRotationAngle, CurrentHeight, Yaw, Pitch; Quaternion CurrentRotation; void LateUpdate() { RotateSpeedPerTime = RotateSpeed * Time.deltaTime; DesiredRotationAngle = Target.transform.eulerAngles.y; DesiredHeight = Target.transform.position.y + FollowHeight; CurrentRotationAngle = transform.eulerAngles.y; CurrentHeight = transform.position.y; CurrentRotationAngle = Mathf.LerpAngle(CurrentRotationAngle, DesiredRotationAngle, 0); CurrentHeight = Mathf.Lerp(CurrentHeight, DesiredHeight, 0); CurrentRotation = Quaternion.Euler(0, CurrentRotationAngle, 0); transform.position = Target.transform.position; transform.position -= CurrentRotation * Vector3.forward * FollowDistance; transform.position = new Vector3(transform.position.x, CurrentHeight, transform.position.z); Yaw = Input.GetAxis("Right Horizontal") * RotateSpeedPerTime; Pitch = Input.GetAxis("Right Vertical") * RotateSpeedPerTime; transform.Translate(new Vector3(Yaw, -Pitch, 0)); transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z); transform.LookAt(Target.transform); } }
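
    One way to get the behaviour described, offered as an illustrative sketch rather than a fix for the exact scripts above: rotate the input vector by the camera's yaw so that "forward" always means "away from the camera", then turn the player toward that direction. CameraTransform is an assumed reference to the camera's Transform; MoveSpeed and the axis names come from the post.

        // Illustrative sketch: camera-relative movement for the player.
        void Update()
        {
            float h = Input.GetAxis("LeftX");
            float v = -Input.GetAxis("LeftY");

            // Take only the camera's rotation around Y (its yaw).
            Quaternion yaw = Quaternion.Euler(0f, CameraTransform.eulerAngles.y, 0f);
            Vector3 move = yaw * new Vector3(h, 0f, v);   // "forward" is now away from the camera

            transform.Translate(move * MoveSpeed * Time.deltaTime, Space.World);
            if (move.sqrMagnitude > 0.001f)
                transform.rotation = Quaternion.LookRotation(move);  // face the new forward
        }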

    Read the article

  • Determinism in multiplayer simulation with Box2D, and single computer

    - by Jake
    I wrote a small test car-driving multiplayer game with Box2D using TCP server-client communications. I ran 1 instance of server.exe and 2 instances of client.exe on the same machine I use to code and compile the executables. I type inputs (WASD for simple car movement) into one of the 2 clients and both clients update the simulation. There are 2 cars in the simulation. As long as the cars do not collide, I get identical output on both client.exe. I can run the car(s) around for as long as I like and they still update identically. However, if I start to collide the cars, they very quickly go out of sync. My tools: Windows 7, C++, MSVS 2010, Box2D, freeGlut. My pseudocode: // client.exe void timer(int value) { tcpServer.send(my_inputs); foreach(i = player including myself) inputs[i] = tcpServer.receive(); foreach(i = player including myself) players[i].process(inputs[i]); myb2World.step(33, 8, 6); // Box2D world step simulation foreach(i = player including myself) renderer.render(player[i]); glutTimerFunc(33, timer, 0); } // server.exe void serviceloop { while(all clients alive) { foreach(c = clients) tcpClients[c].receive(&inputs[c]); // send input of each client to all clients foreach(source = clients) { foreach(dest = clients) { tcpClients[dest].send(inputs[source]); } } } } I have read all over the internet and SE the following claims (paraphrased): Box2D is deterministic as long as the floating point architecture/implementation is the same. (For any deterministic engine) Determinism is guaranteed if playback of recorded inputs is done on the same machine, with an exe compiled using the same compiler and machine. Additionally, my server.exe and client.exe game loops are single-threaded with blocking socket calls and a fixed time step. Question: Can anyone explain what I did wrong to get different Box2D output?
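
    One detail that is easy to get wrong in this kind of setup, and a common cause of divergence independent of Box2D itself, is that both clients must apply the same inputs on the same simulation step. A tick-stamped input queue is the usual way to enforce that. The following is pseudocode in the same style as the client loop above, not the original code:

        // Lockstep sketch: stamp every input with the tick it applies to and
        // only step the world once every player's input for that tick has
        // arrived, so both clients feed Box2D identical sequences.
        void timer(int value) {
            tcpServer.send(Input{ currentTick + delay, myId, readKeys() });
            collect received inputs into inputsByTick[tick][player];

            while (inputsByTick[currentTick] has an entry for every player) {
                foreach(i = player including myself)
                    players[i].process(inputsByTick[currentTick][i]);
                myb2World.step(33, 8, 6);
                ++currentTick;
            }
            renderer.render(...);
            glutTimerFunc(33, timer, 0);
        }

    Without the tick stamp, a packet that arrives one frame late makes one client apply an input on a different step than the other, and the first collision amplifies the difference.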

    Read the article

  • Can I run into legal issues with random names?

    - by Nathan Sabruka
    I'm currently building a game whose NPC's are going to be assigned a random gender and a random name for the right gender. To do this I will be using a "database" of names (actually a text file with tuples). There would also be a list of last names, which will be added to the first name also randomly. My question is the following. Suppose one such random name is "George Bush", and this person has been randomly assigned the job of president. As you can see, this could easily be seen as having been "copied" from a real-life person. The main issue is this. Names will be randomly-generated, yes, but the seed for random-number generation will be constant. In other words, the name of an NPC would be randomly-generated, i.e. I wouldn't choose it, but it would be the same for every player. Could this get me in trouble? We cannot verify all possible names, since the generated number of NPC's could be potentially limitless (new NPC's are being created whenever needed).

    Read the article

  • How do I apply skeletal animation from a .x (Direct X) file?

    - by Byte56
    Using the .x format to export a model from Blender, I can load a mesh, armature and animation. I have no problems generating the mesh and viewing models in game. Additionally, I have animations and the armature properly loaded into appropriate data structures. My problem is properly applying the animation to the models. I have the framework for applying the models and the code for selecting animations and stepping through frames. From what I understand, the AnimationKeys inside the AnimationSet supplies the transformations to transform the bind pose to the pose in the animated frame. As small example: Animation { {Armature_001_Bone} AnimationKey { 2; //Position 121; //number of frames 0;3; 0.000000, 0.000000, 0.000000;;, 1;3; 0.000000, 0.000000, 0.005524;;, 2;3; 0.000000, 0.000000, 0.022217;;, ... } AnimationKey { 0; //Quaternion Rotation 121; 0;4; -0.707107, 0.707107, 0.000000, 0.000000;;, 1;4; -0.697332, 0.697332, 0.015710, 0.015710;;, 2;4; -0.684805, 0.684805, 0.035442, 0.035442;;, ... } AnimationKey { 1; //Scale 121; 0;3; 1.000000, 1.000000, 1.000000;;, 1;3; 1.000000, 1.000000, 1.000000;;, 2;3; 1.000000, 1.000000, 1.000000;;, ... } } So, to apply frame 2, I would take the position, rotation and scale from frame 2, create a transformation matrix (call it Transform_A) from them and apply that matrix the vertices controlled by Armature_001_Bone at their weights. So I'd stuff TransformA into my shader and transform the vertex. Something like: vertexPos = vertexPos * bones[ int(bfs_BoneIndices.x) ] * bfs_BoneWeights.x; Where bfs_BoneIndices and bfs_BoneWeights are values specific to the current vertex. When loading in the mesh vertices, I transform them by the rootTransform and the meshTransform. This ensures they're oriented and scaled correctly for viewing the bind pose. The problem is when I create that transformation matrix (using the position, rotation and scale from the animation), it doesn't properly transform the vertex. There's likely more to it than just using the animation data. I also tried applying the bone transform hierarchies, still no dice. Basically I end up with some twisted models. It should also be noted that I'm working in openGL, so any matrix transposes that might need to be applied should be considered. What data do I need and how do I combine it for applying .x animations to models?
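
    For what it's worth, a hedged note on how these pieces usually combine in .x-style skinning (I cannot see the full exporter output, so treat this as the general recipe rather than the asker's exact data): build each bone's local matrix per frame from the three key types above, walk it up the bone hierarchy, and multiply by that bone's offset (inverse-bind-pose) matrix from the SkinWeights block before it goes to the shader; the offset matrix is the piece most often left out. As pseudocode, in column-major order (reverse/transpose for row-major):

        local[bone]  = Translate(positionKey) * Rotate(quaternionKey) * Scale(scaleKey)
        global[bone] = global[parent(bone)] * local[bone]        // walk the hierarchy from the root
        final[bone]  = global[bone] * offsetMatrix[bone]         // offset = inverse bind pose

        skinnedPos   = sum_i ( boneWeight_i * (final[boneIndex_i] * bindPos) )

    With that arrangement the vertices are stored in bind pose and only the final[] matrices are uploaded to the shader, so the per-vertex code in the post stays as it is.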

    Read the article

  • Where should I organize my matrices in a 3D game engine?

    - by Need4Sleep
    I'm working with a group of people from around the world to create a game engine (and hopefully a game with it) over the next few years. My first task was writing a camera class the engine can use to add cameras to the scene and to position and follow points in the scene. The problem I have is with using matrices for transformations: should I keep the matrices separate, one per class, such as the model matrix in the model class and the camera matrix in the camera class, or have all matrices placed in one class/chunk? I can see pros and cons for each method, but I wanted to hear some input from a more professional standpoint.
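
    A small hedged sketch of the first arrangement being weighed: each object owns the transform that is naturally "its", and the renderer is the only place they meet. Nothing here comes from an existing engine; any 4x4 matrix type works, glm is assumed only for brevity.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        struct Camera {                     // owns view + projection
            glm::mat4 View() const { return glm::lookAt(position, target, up); }
            glm::mat4 Projection() const { return glm::perspective(fovY, aspect, zNear, zFar); }
            glm::vec3 position{0, 0, 5}, target{0, 0, 0}, up{0, 1, 0};
            float fovY = 1.0f, aspect = 16.f / 9.f, zNear = 0.1f, zFar = 100.f;
        };

        struct Model {                      // owns only its model (world) matrix
            glm::mat4 world{1.0f};
        };

        // The renderer combines them at draw time:
        glm::mat4 MVP(const Camera& cam, const Model& m) {
            return cam.Projection() * cam.View() * m.world;
        }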

    Read the article

  • Android Touch Event Collision Detection

    - by chrissb
    I'm relatively new to both Java and Android, so hopefully the problem I'm having is stemming from something pretty minor that I've overlooked. I've got a (very early stage) game that I've started working on, for Android using Java. At this stage, when the user touches the screen, if they touched a point at which there is an enemy, the enemies health is decreased and they become immobile (for the current implementation at least). The issue that I'm having is that the touch detection doesn't always seem to work. I've got a testing sprite set up that goes to the eventX and eventY coordinates of the touch down event, and it always seems to collide with the enemy object. Yet, the enemy doesn't always register as being hit, and sometimes a hit is registered when the sprite indicates the touch coordinates were outside of the enemies bounding box. I realise that this probably doesn't mean much without any code, so here's what I've got so far. Be gentle, as this is literally my first attempt at something more than basic movement etc. First off, the MainGamePanel class registers the touch event, and informs the levelmanager class (which is what I set up to monitor/handle enemies) public boolean onTouchEvent(MotionEvent event) { if (event.getAction() == MotionEvent.ACTION_DOWN){ levelManager.handleActionDown((int)event.getX(), (int)event.getY()); targetX=event.getX(); targetY=event.getY(); } if (event.getAction() == MotionEvent.ACTION_MOVE) { //the gestures } if (event.getAction() == MotionEvent.ACTION_UP) { //touch was released } return true; } From there, in the levelmanager class the touch event is passed on to all of the enemies within a list array: public static void handleActionDown(int eventX,int eventY){ hit=false; for (enemy1 en : enemy1array){ en.handleActionDown(eventX, eventY); } } The rest of the collision code is handled within the enemies handleActionDown function: public void handleActionDown(int eventX, int eventY) { if(eventX>this.x-enemy1bitmap.getWidth() && eventX<this.x+enemy1bitmap.getWidth() && eventY>this.y-enemy1bitmap.getHeight() && eventY<this.x+enemy1bitmap.getHeight()){ takeDamage(1); levelmanager.setHit(); } } I should probably be using getWidth()/2 and getHeight()/2 for it to be more accurate, but I expanded the area to test this - although I've noticed no improvement. At this stage, the games detection over whether or not the enemy is hit is spotty at best. Generally it takes two or three attempts before a collision is successfully registered, even though the sprite that is being used for testing and set to the eventX and eventY coordinates always indicates that the collision should have worked. Hopefully someone can steer me in the right direction here, and if more information is needed, ask away! Cheers, -Chris

    Read the article

  • Pixel alignment algorithm

    - by user42325
    I have a set of square blocks that I want to draw in a window. I am sure the coordinate calculation is correct, but on the screen some squares' edges overlap with others and some do not. I remember the problem is caused by pixel accuracy, and that there is a specific topic related to this kind of problem in 2D image rendering, but I don't remember what exactly it is or how to solve it. Look at this screenshot: each block should have a fixed-width margin, but in the image the vertical white lines have different widths, though the horizontal lines look fine.
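
    I can't say for certain which topic is being half-remembered, but the classic culprit for this symptom is accumulated rounding: if each block's width is rounded once and then repeated, the fractional error drifts and some gaps come out a pixel wider than others. A hedged sketch of the usual fix, in C: compute both edges of each cell from the ideal floating-point grid positions and take the width as their difference, then draw the fixed-pixel margin inside each cell.

        #include <math.h>

        /* Derive each cell's left/right edge from the ideal grid line instead of
           accumulating one rounded cell width, so neighbouring cells always share
           an edge and never drift apart or overlap. */
        void cell_rect(int i, double cellWidth, int *x, int *w)
        {
            int left  = (int)lround(i * cellWidth);
            int right = (int)lround((i + 1) * cellWidth);
            *x = left;
            *w = right - left;
        }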

    Read the article

  • Java - Tile engine changing number in array not changing texture

    - by Corey
    I draw my map from a txt file. Would I have to write to the text file to notice the changes I made? Right now it changes the number in the array but the tile texture doesn't change. Do I have to do more than just change the number in the array? public class Tiles { public Image[] tiles = new Image[5]; public int[][] map = new int[64][64]; private Image grass, dirt, fence, mound; private SpriteSheet tileSheet; public int tileWidth = 32; public int tileHeight = 32; Player player = new Player(); public void init() throws IOException, SlickException { tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight); grass = tileSheet.getSprite(0, 0); dirt = tileSheet.getSprite(7, 7); fence = tileSheet.getSprite(2, 0); mound = tileSheet.getSprite(2, 6); tiles[0] = grass; tiles[1] = dirt; tiles[2] = fence; tiles[3] = mound; int x=0, y=0; BufferedReader in = new BufferedReader(new FileReader("assets/map.dat")); String line; while ((line = in.readLine()) != null) { String[] values = line.split(","); for (String str : values) { int str_int = Integer.parseInt(str); map[x][y]=str_int; //System.out.print(map[x][y] + " "); y=y+1; } //System.out.println(""); x=x+1; y = 0; } in.close(); } public void update(GameContainer gc) { } public void render(GameContainer gc) { for(int x = 0; x < map.length; x++) { for(int y = 0; y < map.length; y ++) { int textureIndex = map[y][x]; Image texture = tiles[textureIndex]; texture.draw(x*tileWidth,y*tileHeight); } } } Mouse picking public void checkDistance(GameContainer gc) { Input input = gc.getInput(); float mouseX = input.getMouseX(); float mouseY = input.getMouseY(); double mousetileX = Math.floor((double)mouseX/tiles.tileWidth); double mousetileY = Math.floor((double)mouseY/tiles.tileHeight); double playertileX = Math.floor(playerX/tiles.tileWidth); double playertileY = Math.floor(playerY/tiles.tileHeight); double lengthX = Math.abs((float)playertileX - mousetileX); double lengthY = Math.abs((float)playertileY - mousetileY); double distance = Math.sqrt((lengthX*lengthX)+(lengthY*lengthY)); if(input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) { System.out.println("Clicked"); if(tiles.map[(int)mousetileX][(int)mousetileY] == 1) { tiles.map[(int)mousetileX][(int)mousetileY] = 0; } } System.out.println(tiles.map[(int)mousetileX][(int)mousetileY]); }

    Read the article

  • 2D Tile Based Collision Detection

    - by MrPlosion1243
    There are a lot of topics about this and it seems each one addresses a different problem, this topic does the same. I was looking into tile collision detection and found this where David Gouveia explains a great way to get around the person's problem by separating the two axis. So I implemented the solution and it all worked perfectly from all the testes I through at it. Then I implemented more advanced platforming physics and the collision detection broke down. Unfortunately I have not been able to get it to work again which is where you guys come in :)! I will present the code first: public void Update(GameTime gameTime) { if(Input.GetKeyDown(Keys.A)) { velocity.X -= moveAcceleration; } else if(Input.GetKeyDown(Keys.D)) { velocity.X += moveAcceleration; } if(Input.GetKeyDown(Keys.Space)) { if((onGround && isPressable) || (!onGround && airTime <= maxAirTime && isPressable)) { onGround = false; airTime += (float)gameTime.ElapsedGameTime.TotalSeconds; velocity.Y = initialJumpVelocity * (1.0f - (float)Math.Pow(airTime / maxAirTime, Math.PI)); } } else if(Input.GetKeyReleased(Keys.Space)) { isPressable = false; } if(onGround) { velocity.X *= groundDrag; velocity.Y = 0.0f; } else { velocity.X *= airDrag; velocity.Y += gravityAcceleration; } velocity.Y = MathHelper.Clamp(velocity.Y, -maxFallSpeed, maxFallSpeed); velocity.X = MathHelper.Clamp(velocity.X, -maxMoveSpeed, maxMoveSpeed); position += velocity * (float)gameTime.ElapsedGameTime.TotalSeconds; position = new Vector2((float)Math.Round(position.X), (float)Math.Round(position.Y)); if(Math.Round(velocity.X) != 0.0f) { HandleCollisions2(Direction.Horizontal); } if(Math.Round(velocity.Y) != 0.0f) { HandleCollisions2(Direction.Vertical); } } private void HandleCollisions2(Direction direction) { int topTile = (int)Math.Floor((float)Bounds.Top / Tile.PixelTileSize); int bottomTile = (int)Math.Ceiling((float)Bounds.Bottom / Tile.PixelTileSize) - 1; int leftTile = (int)Math.Floor((float)Bounds.Left / Tile.PixelTileSize); int rightTile = (int)Math.Ceiling((float)Bounds.Right / Tile.PixelTileSize) - 1; for(int x = leftTile; x <= rightTile; x++) { for(int y = topTile; y <= bottomTile; y++) { Rectangle tileBounds = new Rectangle(x * Tile.PixelTileSize, y * Tile.PixelTileSize, Tile.PixelTileSize, Tile.PixelTileSize); Vector2 depth; if(Tile.IsSolid(x, y) && Intersects(tileBounds, direction, out depth)) { if(direction == Direction.Horizontal) { position.X += depth.X; } else { onGround = true; isPressable = true; airTime = 0.0f; position.Y += depth.Y; } } } } } From the code you can see when velocity.X is not equal to zero the HandleCollisions() Method is called along the horizontal axis and likewise for the vertical axis. When velocity.X is not equal to zero and velocity.Y is equal to zero it works fine. When velocity.Y is not equal to zero and velocity.X is equal to zero everything also works fine. However when both axis are not equal to zero that's when it doesn't work and I don't know why. I basically teleport to the left side of a tile when both axis are not equal to zero and there is a air block next to me. Hopefully someone can see the problem with this because I sure don't as far as I'm aware nothing has even changed from what I'm doing to what the linked post's solution is doing. Thanks.
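
    A hedged sketch of the ordering the separated-axis approach relies on, using names similar to the post but not the original code: move along one axis, resolve that axis against the current bounds, and only then move and resolve the other axis. Applying the whole velocity first and resolving both axes afterwards is the usual way diagonal movement ends up pushed out of the wrong side of a tile.

        // Illustrative: apply X movement, resolve horizontal overlaps, then apply
        // Y movement, resolve vertical overlaps - never both displacements before
        // resolving either one.
        void Move(GameTime gameTime)
        {
            float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

            position.X += velocity.X * dt;
            HandleCollisions2(Direction.Horizontal);   // uses the bounds after the X move only

            position.Y += velocity.Y * dt;
            HandleCollisions2(Direction.Vertical);
        }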

    Read the article

  • Using multiple indexes with buffer objects in OpenTK

    - by Rushyo
    I've got multiple buffers in OpenGL holding data on positions, normals and texcoords. I also have an equal number of buffers holding distinct index data for each of those buffers. I quite like this format (individual indices for each buffer) utilised by COLLADA, since it strikes me as optimally efficient at accessing each buffer. I've set up pointers to the relevant data arrays using VertexPointer, NormalPointer, etc.; however, I have no way to assign pointers to the index buffers, since DrawElements appears to only look at one ElementArrayBuffer. Can I utilise multiple indices in some way, or would I be better off using a different technique that can support this? I'd prefer to keep the distinct indices if at all possible.
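
    Since glDrawElements only takes a single index per vertex, the usual approach is to expand the COLLADA-style multi-index data at load time: every unique (position, normal, texcoord) index triple becomes one vertex, deduplicated with a dictionary, and a single index buffer refers to those combined vertices. A hedged C# sketch (plain C#, no OpenTK calls; it assumes positions/normals/texcoords are float[] and the three index arrays are the parallel per-corner indices):

        using System.Collections.Generic;

        var lookup   = new Dictionary<(int pos, int nrm, int uv), int>();
        var vertices = new List<float>();   // interleaved: px py pz nx ny nz u v
        var indices  = new List<int>();

        for (int i = 0; i < posIndices.Length; i++)
        {
            var key = (posIndices[i], normIndices[i], uvIndices[i]);
            if (!lookup.TryGetValue(key, out int index))
            {
                index = lookup.Count;
                lookup[key] = index;
                vertices.AddRange(new[] {
                    positions[key.pos * 3], positions[key.pos * 3 + 1], positions[key.pos * 3 + 2],
                    normals[key.nrm * 3],   normals[key.nrm * 3 + 1],   normals[key.nrm * 3 + 2],
                    texcoords[key.uv * 2],  texcoords[key.uv * 2 + 1] });
            }
            indices.Add(index);
        }
        // vertices -> one ArrayBuffer, indices -> one ElementArrayBuffer for DrawElements.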

    Read the article

  • How to correctly export UV coordinates from Blender

    - by KlashnikovKid
    Alright, so I'm just now getting around to texturing some assets. After much trial and error I feel I'm pretty good at UV unwrapping now, and my work looks good in Blender. However, either I'm using the UV data incorrectly (I really doubt it) or Blender doesn't seem to export the correct UV coordinates into the obj file, because the texture is mapped differently in my game engine. In Blender I've played with the texture panel and its mapping options and have noticed they don't appear to affect the exported obj file's UV coordinates. So I guess my question is: is there something I need to do prior to exporting in order to bake the correct UV coordinates into the obj file? Or something else that needs to be done to massage the texture coordinates for sampling? Or any thoughts at all on what could be going wrong? (Also, here is a screenshot of my diffuse texture in Blender and in the game engine. As you can see in the image, I have the same problem with a simple test cube not getting correct UVs either.) http://www.digitalinception.net/blenderSS.png http://www.digitalinception.net/gameSS.png
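
    It is hard to say from the description alone, but one frequent mismatch when an OBJ looks right in Blender and wrong elsewhere is the V origin: OBJ puts v = 0 at the bottom of the image, while many engines and image loaders treat the top-left as the origin. If that turns out to be the cause, flipping V when loading the OBJ is enough (hypothetical fix, only applicable if the origins really do differ):

        v = 1.0f - v;   /* flip the V coordinate at load time */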

    Read the article

  • Unable to access jar. Why?

    - by SystemNetworks
    I was making a game in Java and exported it as a jar file. After that, I opened JarSplice, added the libraries, added the natives, set a main class, and exported a fat jar, which I put on my desktop. I'm using Mac OS X 10.8 Mountain Lion. When I type java -jar System Front.jar in the terminal, it says it is unable to access System Front.jar. Even if I double-click on the file, it doesn't show up! Help! I'm using Slick; I added Slick and LWJGL as libraries in JarSplice's jars step.
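
    Separately from anything JarSplice does, note that the file name contains a space, so the unquoted command is parsed as two arguments ("System" and "Front.jar"). Quoting the name, or renaming the jar so it has no space, avoids that, e.g.:

        java -jar "System Front.jar"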

    Read the article

  • Incorporating XNA into an existing project

    - by Boreal
    My game as-is is using IrrlichtLime, which I'm beginning to dislike because it hides a lot of implementation and makes adding your own implementation incredibly complex. I don't really need the scene manager for anything and the only animation I need is manual (i.e. transforming the bones programmatically). However, I've only ever used XNA in the past as a starting point with the templates. How would I take my current project and add XNA to it?

    Read the article

  • What are the pros/cons of Unity3D as a choice to make games?

    - by jokoon
    We are doing our school project with Unity3D, since they were using Shiva the previous year (which seems horrible to me), and I wanted to know your point of view on this tool. Pros: multi-platform (I even heard Google is going to implement it in Chrome); everything you need is there; scripting languages make it a good choice for people who are not programming gurus. Cons: multiplayer (?); proprietary, so you are totally dependent on Unity and its limits and can't extend it; it's less "making a game from scratch", and C++ would have been a cool thing. I really think this kind of tool is interesting, but is it worth using at school for a project that involves more than 3 programmers? What do we really learn in terms of programming from using this kind of tool (I'm OK with Python and JS, but I hate C#)? We could have used Ogre instead, even if we only start learning DirectX in January...

    Read the article

  • Breakout clone, how to handle/design for collision detection/physics between objects?

    - by Zolomon
    I'm working on a breakout clone, and I wish to create some realistic physics effects for collision - angles on the paddle should allow the ball to bounce, as well as doing curve balls etc. I could use per-pixel based collision detection, but then I thought it might be easier with line/circle intersection testing. So, then I naturally consider making a polygon class for the line-based objects and use the built-in circle class for the circular objects. That sounds like an OK approach, right? And then just check for collision using the specified algorithm based on the objects that might be within each other's range?
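
    A hedged sketch of the line/circle test mentioned (XNA-style Vector2 assumed; the segment endpoints would come from the paddle's edges): project the ball's centre onto the segment, clamp to the segment, and compare the distance from that closest point to the radius. The same closest point also gives the collision normal to reflect the ball's velocity about.

        // using Microsoft.Xna.Framework;
        static bool CircleHitsSegment(Vector2 center, float radius,
                                      Vector2 a, Vector2 b, out Vector2 normal)
        {
            Vector2 ab = b - a;
            float t = Vector2.Dot(center - a, ab) / ab.LengthSquared();
            t = MathHelper.Clamp(t, 0f, 1f);              // stay on the segment
            Vector2 closest = a + t * ab;
            Vector2 toCenter = center - closest;
            normal = toCenter.LengthSquared() > 0 ? Vector2.Normalize(toCenter) : Vector2.UnitY;
            return toCenter.LengthSquared() <= radius * radius;
        }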

    Read the article

  • Starting point for a simple game written in ActionScript [closed]

    - by Hossein
    Possible Duplicate: AS3/Flash Game Dev: Looking for good & current step by step. Hi, I want to develop a simple game like http://www.albinoblacksheep.com/games/falldown2 and then make it a bit more fancy, but I don't know where to start. I have already started AS3, so I know the syntax and such, but I am kinda lost. Does anyone know of a nice starting point or a tutorial that can help me with this? Thanks

    Read the article

  • What is the best way to use loops to detect events while the main loop is running?

    - by yao jiang
    I am making an "game" that has pathfinding using pygame. I am using Astar algo. I have a main loop which draws the whole map. In the loop I check for events. If user press "enter" or "space", random start and end are selected, then animation starts and it will try to get from start to end. My draw function is stupid as hell right now, it works as expected but I feel that I am doing it wrong. It'll draw everything to the end of the animation. I am also detecting events in there as well. What is a better way of implementing the draw function such that it will draw one "step" at a time while checking for events? animating = False; while loop: check events: if not animating: # space or enter press will choose random start/end coords if enter_pressed or space_pressed: start, end = choose_coords route = find_route(start, end) draw(start, end, grid, route) else: # left click == generate an event to block the path # right click == user can choose a new destination if left_mouse_click: gen_event() reroute() elif right_mouse_click: new_end = new_end() new_start = current_pos() route = find_route(new_start, new_end) draw(new_start, new_end, grid, route) # draw out the grid def draw(start, end, grid, route_coord): # draw the end coords color = red; pick_image(screen, color, width*end[1],height*end[0]); pygame.display.flip(); # then draw the rest of the route for i in range(len(route_coord)): # pausing because we want animation time.sleep(speed); # get the x/y coords x,y = route_coord[i]; event_on = False; if grid[x][y] == 2: color = green; elif grid[x][y] == 3: color = blue; for event in pygame.event.get(): if event.type == pygame.MOUSEBUTTONDOWN: if event.button == 3: print "destination change detected, rerouting"; # get mouse position, px coords pos = pygame.mouse.get_pos(); # get grid coord c = pos[0] // width; r = pos[1] // height; grid[r][c] = 4; end = [r, c]; elif event.button == 1: print "user generated event"; pos = pygame.mouse.get_pos(); # get grid coord c = pos[0] // width; r = pos[1] // height; # mark it as a block for now grid[r][c] = 1; event_on = True; if check_events([x,y]) or event_on: # there is an event # mark it as a block for now grid[y][x] = 1; pick_image(screen, event_x, width*y, height*x); pygame.display.flip(); # then find a new route new_start = route_coord[i-1]; marked_grid, route_coord = find_route(new_start, end, grid); draw(new_start, end, grid, route_coord); return; # just end draw here so it wont throw the "index out of range" error elif grid[x][y] == 4: color = red; pick_image(screen, color, width*y, height*x); pygame.display.flip(); # clear route coord list, otherwise itll just add more unwanted coords route_coord_list[:] = [];
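
    A hedged sketch of the usual restructuring (illustrative, not the original code): keep the route and a step index as state, advance one tile per frame from inside the main loop, and let the single event loop at the top keep handling clicks while the animation runs. choose_coords, find_route, handle_click and draw_tile stand in for the helpers from the post.

        import pygame

        clock = pygame.time.Clock()
        running, animating = True, False
        route, step = [], 0

        while running:
            for event in pygame.event.get():                     # the only event loop
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == pygame.KEYDOWN and not animating:
                    start, end = choose_coords()
                    route, step, animating = find_route(start, end), 0, True
                elif event.type == pygame.MOUSEBUTTONDOWN and animating:
                    handle_click(event)                          # may reroute / add a block

            if animating:
                draw_tile(route[step])                           # draw only this step
                step += 1
                if step >= len(route):
                    animating = False

            pygame.display.flip()
            clock.tick(30)                                       # replaces time.sleep(speed)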

    Read the article

  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens? At the moment, I have a GameManager class which owns a component manager and entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity. an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists. A problem with this is that the collection of entities operated on by each system is determined solely by the individual Systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only Update/Draw those collections. They'd have to either be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or enable/disable entities in a group per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges. What's a good way to handle this? A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome because some things are always visible, or I might want to continue to display the game under a translucent menu window. Another option would be to add a "layer" key to the GameManager class, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, kinda feels like a hack. If I've coded myself into a corner, feel free to throw tomatoes as long as they're labeled with a helpful suggestion. Hooray for learning curves.
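
    A hedged sketch of the "layer key" idea from the last paragraph, with illustrative names rather than the engine described: each entity carries a layer id, the manager keeps a stack of active layers in draw order, and systems iterate a pre-filtered collection per layer, so the game can keep rendering under a translucent menu without duplicating managers.

        using System.Collections.Generic;

        class GameManager
        {
            readonly Dictionary<int, List<int>> entitiesByLayer = new Dictionary<int, List<int>>();
            readonly List<int> layerStack = new List<int>();    // bottom -> top draw order

            public void UpdateEntity(int entity, int layer)
            {
                if (!entitiesByLayer.TryGetValue(layer, out var list))
                    entitiesByLayer[layer] = list = new List<int>();
                if (!list.Contains(entity)) list.Add(entity);
            }

            public IEnumerable<int> ActiveEntities()            // what systems iterate
            {
                foreach (int layer in layerStack)               // e.g. game world, then menu
                    if (entitiesByLayer.TryGetValue(layer, out var list))
                        foreach (int entity in list)
                            yield return entity;                // systems apply their own component filter
            }
        }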

    Read the article

  • Resultant Vector Algorithm for 2D Collisions

    - by John
    I am making a Pong-based game where a puck hits a paddle and bounces off. Both the puck and the paddles are Circles. I came up with an algorithm to calculate the resultant vector of the puck once it meets a paddle. The game seems to function correctly but I'm not entirely sure my algorithm is correct. Here are my variables for the algorithm: Given: velocity = the magnitude of the initial velocity of the puck before the collision x = the x coordinate of the puck y = the y coordinate of the puck moveX = the horizontal speed of the puck moveY = the vertical speed of the puck otherX = the x coordinate of the paddle otherY = the y coordinate of the paddle piece.horizontalMomentum = the horizontal speed of the paddle before it hits the puck piece.verticalMomentum = the vertical speed of the paddle before it hits the puck slope = the direction, in radians, of the puck's velocity distX = the horizontal distance between the center of the puck and the center of the paddle distY = the vertical distance between the center of the puck and the center of the paddle Algorithm solves for: impactAngle = the angle of impact, in radians newSpeedX = the speed of the resultant vector in the X direction newSpeedY = the speed of the resultant vector in the Y direction Here is the code for my algorithm: int otherX = piece.x; int otherY = piece.y; double velocity = Math.sqrt((moveX * moveX) + (moveY * moveY)); double slope = Math.atan(moveX / moveY); int distX = x - otherX; int distY = y - otherY; double impactAngle = Math.atan(distX / distY); double newAngle = impactAngle + slope; int newSpeedX = (int)(velocity * Math.sin(newAngle)) + piece.horizontalMomentum; int newSpeedY = (int)(velocity * Math.cos(newAngle)) + piece.verticalMomentum; For those who are not program-savvy, here it is simplified: velocity = √(moveX² + moveY²) slope = arctan(moveX / moveY) distX = x - otherX distY = y - otherY impactAngle = arctan(distX / distY) newAngle = impactAngle + slope newSpeedX = velocity * sin(newAngle) + piece.horizontalMomentum newSpeedY = velocity * cos(newAngle) + piece.verticalMomentum My question: Is this algorithm correct? Is there an easier/simpler way to do what I'm trying to do?
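
    For comparison, a hedged note on a shorter route that avoids the angle arithmetic entirely: with two circles, the contact normal n is just the normalized vector from the paddle's centre to the puck's centre, and the bounce is the reflection of the velocity about that normal, scaled by the combined restitution. This also sidesteps Math.atan(x / y), which loses the quadrant (Math.atan2(y, x) is the safer call) and divides by zero when the denominator is zero. In the same simplified style as above, keeping the paddle momentum terms from the post:

        n  = (distX, distY) / √(distX² + distY²)
        v  = (moveX, moveY)
        v' = (v - 2(v·n)n) * e              where e = combined restitution
        newSpeedX = v'.x + piece.horizontalMomentum
        newSpeedY = v'.y + piece.verticalMomentum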

    Read the article

  • Handling Players, enemies and attacks in HTML5

    - by Chris Morris
    I'm building a simple (currently) game with a free-roaming player and monsters on a map built from a 2D grid. I've been looking at methods for drawing characters and enemies to the screen and I've seen two separate approaches online: drawing the player onto the screen canvas directly and refreshing the entire screen every FPS tick, or having a separate canvas to handle the player and moving the player canvas on top of the screen canvas via absolute positioning. I can see some pros and cons of both methods, but which is generally the best? I assume the second, since it avoids draining resources by refreshing the map when the user is not moving, but this type of game will generally have constant movement.
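
    A minimal hedged sketch of the second approach (illustrative only; "map" and "actors" are assumed canvas ids, with the actor canvas absolutely positioned over the map canvas, and drawMap/drawPlayer/drawMonsters standing in for the game's own drawing code): the static map layer is drawn once, and only the entity layer is cleared and redrawn each frame.

        const mapCanvas = document.getElementById("map") as HTMLCanvasElement;
        const actorCanvas = document.getElementById("actors") as HTMLCanvasElement; // position:absolute, same origin
        const mapCtx = mapCanvas.getContext("2d")!;
        const actorCtx = actorCanvas.getContext("2d")!;

        drawMap(mapCtx);                         // once, or only when the map changes

        function frame(): void {
            actorCtx.clearRect(0, 0, actorCanvas.width, actorCanvas.height);
            drawPlayer(actorCtx);
            drawMonsters(actorCtx);
            requestAnimationFrame(frame);        // entities move constantly, so redraw every frame
        }
        requestAnimationFrame(frame);

    With constant movement the two approaches redraw a similar amount per frame, so the layered version mainly buys you not re-rendering the (potentially expensive) tile map.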

    Read the article

  • How to draw a global day night curve

    - by Lumis
    I see many applications that have a world-clock map, and I would like to make my own to enhance some of my mobile apps. I wonder if anybody knows where to start: how do you draw the curved shadow representing dawn and sunset on the globe? See the example: http://aa.usno.navy.mil/imagery/earth/map?year=2012&month=6&day=19&hour=14&minute=47 I think this curve moves up and down over the year and creates the arctic day/night, etc. Perhaps there is some acceptable approximation formula that avoids loading data for each hour and each global parallel and meridian...
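
    There is indeed a standard approximation that needs no per-hour data, offered here as a rough sketch (it ignores the equation of time and atmospheric refraction): the curve is the solar terminator. Compute the Sun's declination for the day of the year, then a point is lit when its solar elevation is above zero. In degrees, with east longitude positive:

        declination ≈ -23.44 * cos(360/365 * (dayOfYear + 10))
        hourAngle   = 15 * (hoursUTC - 12) + longitude
        sin(elev)   = sin(lat)*sin(declination) + cos(lat)*cos(declination)*cos(hourAngle)

        a point is in daylight when sin(elev) > 0; the terminator is the curve where
        sin(elev) = 0, i.e. cos(hourAngle) = -tan(lat)*tan(declination)

    Sampling that last equation across longitudes gives the curve to draw, and the ±23.44° swing of the declination is exactly what makes it dip above and below the arctic circles through the year.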

    Read the article
