Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • 2D Camera Acceleration/Lag

    - by Cyral
    I have a nice camera set up for my 2D XNA game. I'm wondering how I should give the camera 'acceleration' or 'lag' so it smoothly follows the player, instead of tracking it exactly like mine does now. I'm thinking I somehow need to Lerp the values when I set cameraPosition. Here's my code:

        private void ScrollCamera(Viewport viewport)
        {
            float ViewMargin = .35f;
            float marginWidth = viewport.Width * ViewMargin;
            float marginLeft = cameraPosition.X + marginWidth;
            float marginRight = cameraPosition.X + viewport.Width - marginWidth;
            float TopMargin = .3f;
            float BottomMargin = .1f;
            float marginTop = cameraPosition.Y + viewport.Height * TopMargin;
            float marginBottom = cameraPosition.Y + viewport.Height - viewport.Height * BottomMargin;

            Vector2 CameraMovement;
            Vector2 maxCameraPosition;
            CameraMovement.X = 0.0f;
            if (Player.Position.X < marginLeft)
                CameraMovement.X = Player.Position.X - marginLeft;
            else if (Player.Position.X > marginRight)
                CameraMovement.X = Player.Position.X - marginRight;
            maxCameraPosition.X = 16 * Width - viewport.Width;
            cameraPosition.X = MathHelper.Clamp(cameraPosition.X + CameraMovement.X, 0.0f, maxCameraPosition.X);

            CameraMovement.Y = 0.0f;
            if (Player.Position.Y < marginTop) // above the top margin
                CameraMovement.Y = Player.Position.Y - marginTop;
            else if (Player.Position.Y > marginBottom) // below the bottom margin
                CameraMovement.Y = Player.Position.Y - marginBottom;
            maxCameraPosition.Y = 16 * Height - viewport.Height;
            cameraPosition.Y = MathHelper.Clamp(cameraPosition.Y + CameraMovement.Y, 0.0f, maxCameraPosition.Y);
        }
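
    One way to get the smooth follow being asked for is to treat the clamped result of ScrollCamera as a target rather than the final value, and move the camera only a fraction of the way there each frame. A minimal sketch, assuming the margin/clamp logic above is refactored into a hypothetical ComputeClampedCameraPosition helper and that a per-frame elapsed-seconds value is available:

        // Ease the camera toward the desired (clamped) position instead of snapping.
        Vector2 desiredPosition = ComputeClampedCameraPosition(viewport); // hypothetical: the logic above, returning instead of assigning
        float smoothing = 5f; // tuning value: higher = snappier, lower = laggier
        float t = MathHelper.Clamp(smoothing * elapsedSeconds, 0f, 1f);
        cameraPosition = Vector2.Lerp(cameraPosition, desiredPosition, t);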

    Read the article

  • jump pads problem

    - by Pasquale Sada
    I'm trying to make a character jump onto a landing pad that sits above him. Here is the formula I've used (everything is pretty much self-explanatory, except maybe character_MaxForce, which is the total force the character can put into the jump):

        deltaPosition = target - character_position;
        sqrtTerm = Sqrt(2 * -gravity.y * deltaPosition.y + MaxYVelocity * character_MaxForce);
        time = (MaxYVelocity - sqrtTerm) / gravity.y;
        speedSq = jumpVelocity.x * jumpVelocity.x + jumpVelocity.z * jumpVelocity.z;

    If speedSq < (character_MaxForce * character_MaxForce), we have the right time, so we can store the value:

        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;

    Otherwise we try the other solution:

        time = (MaxYVelocity + sqrtTerm) / gravity.y;

    and then store it:

        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;
        jumpVelocity.y = MaxYVelocity;
        rigidbody_velocity = jumpVelocity;

    The problem is that the character jumps away from the landing pad, or sometimes jumps too far and never hits it.
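
    For comparison, the standard constant-gravity solution fixes the vertical launch speed and solves deltaY = vY*t + 0.5*g*t^2 for the flight time; note that the discriminant uses MaxYVelocity squared, not MaxYVelocity * character_MaxForce, which may be where the overshoot comes from. A sketch under those assumptions (gravityY is negative, XNA-style Vector3):

        // Solves deltaY = vY*t + 0.5*g*t^2 for t, then derives the horizontal velocity.
        static bool TryComputeJump(Vector3 deltaPosition, float maxYVelocity, float gravityY, out Vector3 jumpVelocity)
        {
            jumpVelocity = Vector3.Zero;
            float disc = maxYVelocity * maxYVelocity + 2f * gravityY * deltaPosition.Y;
            if (disc < 0f)
                return false; // pad too high to reach with this launch speed

            float sqrtTerm = (float)Math.Sqrt(disc);
            // Two roots; with gravityY negative, the larger one is the descending
            // part of the arc, which is the one that lands on top of the pad.
            float t1 = (-maxYVelocity + sqrtTerm) / gravityY;
            float t2 = (-maxYVelocity - sqrtTerm) / gravityY;
            float time = Math.Max(t1, t2);
            if (time <= 0f)
                return false;

            jumpVelocity = new Vector3(deltaPosition.X / time, maxYVelocity, deltaPosition.Z / time);
            return true;
        }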

    Read the article

  • C++ DirectX 11 D3DXVECTOR3 doesn't allow me to divide it

    - by Miguel P
    If I have a simple vector3 like this:

        D3DXVECTOR3 inversevector = D3DXVECTOR3( (pos+lookat_pos));

    it works perfectly! But let's say I wanted to multiply it by:

        Speed*(float) timeHandler.GetDelta()

    So:

        D3DXVECTOR3 inversevector = D3DXVECTOR3( (pos+lookat_pos) * Speed*(float) timeHandler.GetDelta());

    Now this fails completely. I've used this snippet before, but for some weird reason it simply won't work (the vector somehow ends up with x, y, z at 0 or almost 0, no idea why). Do you have any idea why?

    Read the article

  • Particle Effect Completion

    - by Siddharth
    In my game I use particle effects for various purposes, and I need to detect the completion of a particle effect. Basically I want to do something after the effect finishes, but the problem is that I haven't been able to detect the completion. Can any community member please help me? EDIT: I was creating the particle effect using the following code:

        pointParticleEmtitter = new PointParticleEmitter(pX, pY);
        particleSystem = new ParticleSystem(pointParticleEmtitter, maxRate, minRate, maxParticles,
                mParticleTextureRegion.deepCopy());
        particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        particleSystem.addParticleInitializer(new ColorInitializer(0f, 0f, 1f));
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 0, 0.5f));
        particleSystem.addParticleModifier(new ExpireModifier(0.5f));
        gameObject.getScene().attachChild(particleSystem);

    Using the above code the particle effect starts, but I want to detect when it finishes. After the effect finishes I want to remove the object from the scene.
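
    The engine here is AndEngine, whose exact callback API may differ, but one engine-agnostic approach follows from the code itself: ExpireModifier(0.5f) gives every particle a fixed lifetime, so once the emitter stops spawning, the effect is visually over after that lifetime has elapsed. A hedged sketch of the pattern in C#-style pseudocode (every type and call here is a hypothetical stand-in, not an AndEngine API):

        // Engine-agnostic pattern: stop emission, wait out the particle
        // lifetime, then detach the system from the scene.
        const float ParticleLifetime = 0.5f; // matches ExpireModifier(0.5f)

        void FinishEffect(IParticleSystem system, IScene scene, IScheduler scheduler)
        {
            system.StopEmitting();                      // hypothetical "stop spawning" call
            scheduler.CallAfter(ParticleLifetime, () => // hypothetical delayed-callback helper
            {
                scene.Detach(system);                   // effect finished: remove it
            });
        }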

    Read the article

  • XNA - Obtaining depth from the scene's render target?

    - by user1423893
    I'm currently rendering my scene to a render target so it can be used for rendering methods such as post-processing and order-independent transparency.

        rtScene = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.PresentationParameters.BackBufferWidth,
            GraphicsDevice.PresentationParameters.BackBufferHeight,
            false,
            SurfaceFormat.Rgba64,
            DepthFormat.Depth24Stencil8, // Requires a depth format for objects to be drawn correctly (e.g. wireframe model surrounding model)
            0,
            RenderTargetUsage.PreserveContents
        );

    I am required to use RenderTargetUsage.PreserveContents so that the same render target can be rendered to multiple times, once for each of the draw methods below:

        DrawBackground
        DrawDeferred
        DrawForward
        DrawTransparent

    The problem is that DrawTransparent requires a copy of the scene's depth as a texture. Is there any way to obtain this from the scene render target above (rtScene)? I can't have more than one render target with RenderTargetUsage.PreserveContents, as this causes problems on hardware such as the Xbox 360, so rendering the depth to a separate render target at the same time as I render the scene isn't possible as far as I can tell. Would I be able to get around this problem by "ping-ponging" two render targets (using the more compatible RenderTargetUsage.DiscardContents) and using the result for the depth texture?
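
    XNA 4 gives no direct way to sample the depth buffer attached to rtScene, so a common workaround is a separate depth pre-pass: render the scene once more into a single-channel target with a shader that writes out linear depth, then bind that target as a texture in DrawTransparent. A minimal sketch, assuming a hypothetical depth-only effect (DiscardContents is fine here because the target is written in a single pass):

        // One extra target; DiscardContents is acceptable since it is filled in one pass.
        RenderTarget2D rtDepth = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.PresentationParameters.BackBufferWidth,
            GraphicsDevice.PresentationParameters.BackBufferHeight,
            false,
            SurfaceFormat.Single,              // one float channel for depth
            DepthFormat.Depth24Stencil8);

        GraphicsDevice.SetRenderTarget(rtDepth);
        GraphicsDevice.Clear(Color.White);     // white = far plane
        DrawSceneWithEffect(depthOnlyEffect);  // hypothetical: writes view-space depth to the red channel
        GraphicsDevice.SetRenderTarget(rtScene);

        // Later, inside DrawTransparent:
        transparencyEffect.Parameters["DepthTexture"].SetValue(rtDepth);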

    Read the article

  • Textures selectively not applying in Unity

    - by user46790
    On certain imported objects (FBX) in Unity, upon applying a material, only the base colour of the material is applied, with none of the tiled texture showing. This isn't universal; on a test model some submeshes didn't show the texture, while others did. I have tried every combination of importing/calculating normals and tangents, to no avail. FYI, I'm not exactly experienced with the software or with game dev in general; this is to make a small static scene with 3-4 objects max. One model tested was created in 3ds Max, the other in Blender. I've had this happen on every export from Blender, but only on some submeshes from the 3ds Max model (internet-sourced to test the problem).

    Read the article

  • AI agents with FSM: a question regarding this

    - by Prog
    Finite State Machines implemented with the State design pattern are a common way to design AI agents. I am familiar with the State design pattern and know how to implement it. However, I have a question regarding how this is used in games to design AI agents. Please consider a class Monster that represents an AI agent. Simplified, it looks like this:

        class Monster {
            State state;
            // other fields omitted

            public void update() { // called every game-loop cycle
                state.execute(this);
            }

            public void setState(State state) {
                this.state = state;
            }

            // irrelevant stuff omitted
        }

    There are several State subclasses that implement execute() differently. So far, the classic State pattern. Here's my question: AI agents are subject to environmental effects and other objects communicating with them. For example, an AI agent might tell another AI agent to attack (i.e. agent.attack()). Or a fireball might tell an AI agent to fall down. This means that the agent must have methods such as attack() and fallDown(), or commonly some message-receiving mechanism to understand such messages. My question is divided into two parts: 1. Please say if this is correct: with an FSM, the current State of the agent should be the one taking care of such method calls - i.e. the agent delegates to the current state upon every event. Correct? Or wrong? 2. If correct, then how is this done? Are all states obligated (by their superclass) to implement methods such as attack(), fallDown() etc., so the agent can always delegate to them on almost every event? Or is it done in some other way?
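
    One common answer to part 2, sketched here in C# rather than the question's Java: the State superclass provides empty default handlers for every event, so concrete states override only the events they care about and the agent can delegate blindly. A sketch, not the only way to do it:

        abstract class State
        {
            public abstract void Execute(Monster agent);

            // Default handlers do nothing, so each state overrides only the events it handles.
            public virtual void OnAttack(Monster agent) { }
            public virtual void OnFallDown(Monster agent) { agent.SetState(new FallenState()); }
        }

        class IdleState : State { public override void Execute(Monster agent) { /* stand around */ } }
        class FightState : State { public override void Execute(Monster agent) { /* fight back */ } }
        class FallenState : State { public override void Execute(Monster agent) { /* get up eventually */ } }

        class FleeState : State
        {
            public override void Execute(Monster agent) { /* run away */ }

            public override void OnAttack(Monster agent)
            {
                // Attacked while fleeing: turn and fight instead.
                agent.SetState(new FightState());
            }
        }

        class Monster
        {
            State state = new IdleState();
            public void SetState(State s) { state = s; }
            public void Update() { state.Execute(this); } // called every game-loop cycle

            // The agent delegates every incoming event to its current state.
            public void Attack() { state.OnAttack(this); }
            public void FallDown() { state.OnFallDown(this); }
        }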

    Read the article

  • Libraries for game development in C++? [on hold]

    - by LPeter1997
    It's time for me to start developing games in C++ (I have experience in game development with XNA and Java). What libraries do you recommend? I tried Allegro, but the installation is already pretty head-crushing. Could you share your experiences with the library (or libraries) you use? (Maybe even advantages and disadvantages.) By the way, it's a plus if it can be easily connected to Code::Blocks or Dev-C++. Thanks for the answers!

    Read the article

  • How to divide hex grid evenly among n players?

    - by manabreak
    I'm making a simple hex-based game, and I want the map to be divided evenly among the players. The map is created randomly, and I want the players to have about an equal number of cells, in relatively small areas. For example, if there are four players and 80 cells in the map, each of the players would have about 20 cells (it doesn't have to be spot-on accurate). Also, each player should have no more than four adjacent cells. That is to say, when the map is generated, the biggest "chunks" cannot be more than four cells each. I know this is not always possible for two or three players (as this resembles the map-coloring problem), and I'm OK with doing other solutions for those (like creating maps that solve the problem instead). But for four to eight players, how could I approach this problem? As always, any and all help is appreciated. :)
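
    One approach that fits both constraints is round-robin seeded growth: each player in turn claims a chunk of at most four cells, grown by flood fill from a random unowned cell, until the map is exhausted. A rough sketch, assuming a Cell class with an Owner field, a hex-neighbor lookup passed in as a delegate, and System.Linq for ElementAt:

        // Round-robin chunk growth: each player in turn claims a chunk of at
        // most MaxChunk adjacent cells until no unowned cells remain.
        const int MaxChunk = 4;

        static void Divide(List<Cell> cells, int playerCount, Func<Cell, IEnumerable<Cell>> neighbors, Random rng)
        {
            var unowned = new HashSet<Cell>(cells);
            int player = 0;
            while (unowned.Count > 0)
            {
                var frontier = new Queue<Cell>();
                frontier.Enqueue(unowned.ElementAt(rng.Next(unowned.Count))); // random unowned seed
                int grown = 0;
                while (frontier.Count > 0 && grown < MaxChunk)
                {
                    Cell c = frontier.Dequeue();
                    if (!unowned.Remove(c)) continue; // already claimed
                    c.Owner = player;
                    grown++;
                    foreach (Cell n in neighbors(c))
                        if (unowned.Contains(n)) frontier.Enqueue(n);
                }
                player = (player + 1) % playerCount; // next player's turn
            }
        }

    This keeps the counts roughly balanced, but it can still drop two chunks of the same player next to each other, merging them into an oversized region; a post-pass that reassigns offending chunks would be needed to enforce the four-cell limit strictly.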

    Read the article

  • I have an "amoeba" game mechanic. Any idea on how to implement it?

    - by Jason
    Outside of a Tetris clone, a crappy 2D top-down shooter, and some messing around with stuff like Unity and Flixel, I realize that I have yet to complete a single, polished, bells-and-whistles game. I want to change this, and I have an idea for my next project. The idea is that you're an amoeba. Amoebas have these eye-like cores (or something like that, I don't know biology), and you have two of 'em. You control one with WASD and the other with IJKL. There has to be a constant radius of stuff around each of the cores, and the area of the amoeba has to stay constant. So if you move a core in one direction, you increase the amoeba's area, but that increase is compensated by a decrease somewhere else. And I'd like to implement an invagination mechanic: you absorb things by engulfing them, like a boss. Maybe even an extra core, or a needle that pops you and causes all your inner stuff to start gushing out. But here's the problem: I don't know how to make this, and I would like some ideas on how to implement it. Should I explore physics libraries like Box2D? Or maybe something involving fluid physics? Any help would be much appreciated. P.S. Feel free to steal this idea. I have plenty of ideas. If you do, please tell me how you made it so I can try it myself.
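
    On the constant-area requirement specifically: if the amoeba's boundary is represented as a closed ring of points (whether hand-simulated or driven by Box2D joints), its area can be measured with the shoelace formula and gently rescaled back toward the original area each frame. A sketch of that idea with XNA-style Vector2, independent of any particular physics library:

        // Boundary is a closed polygon of points; keep its area constant by
        // scaling the points about their centroid each frame.
        static float SignedArea(Vector2[] pts)
        {
            float a = 0f;
            for (int i = 0; i < pts.Length; i++)
            {
                Vector2 p = pts[i], q = pts[(i + 1) % pts.Length];
                a += p.X * q.Y - q.X * p.Y; // shoelace formula
            }
            return a * 0.5f;
        }

        static void ConserveArea(Vector2[] pts, float targetArea)
        {
            float area = Math.Abs(SignedArea(pts));
            if (area <= 0f) return;
            float scale = (float)Math.Sqrt(targetArea / area); // area scales with length squared

            // Centroid (a simple vertex average is fine for a smoothing correction).
            Vector2 c = Vector2.Zero;
            foreach (var p in pts) c += p;
            c /= pts.Length;

            for (int i = 0; i < pts.Length; i++)
                pts[i] = c + (pts[i] - c) * scale;
        }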

    Read the article

  • How to synchronise the acceleration, velocity and position of the monsters on the server with the players?

    - by Nick
    I'm building an MMO using Node.js, and there are monsters roaming around. I can make them move around on the server using the vector variables acceleration, velocity and position:

        acceleration = steeringForce / mass;
        velocity += acceleration * dTime;
        position += velocity * dTime;

    Right now I just send the positions over and tell the players these are the "target positions" of the monsters, then let the monsters move towards the target positions on the client with a speed dependent on the distance to the target position. It works, but looks rather strange. How do I synchronise these properly with the players, without it looking funny to them, taking into account the server lag? The problem is that I don't know how to make use of the correct acceleration/velocity values here; right now the monsters just move directly in a straight line to the target position instead of accelerating/braking there properly. How can I implement such behaviour?
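
    A common fix is dead reckoning: send position and velocity in each update, extrapolate on the client with the same integration the server uses, and blend the displayed position toward the extrapolated one instead of snapping. Since the game is Node.js, treat this C# sketch as language-agnostic pseudocode; the smoothing constant is a tuning value:

        // Last state received from the server, plus how long ago it arrived.
        Vector2 serverPos, serverVel;
        float timeSinceUpdate;

        Vector2 displayedPos; // what the player actually sees

        void Update(float dt)
        {
            timeSinceUpdate += dt;

            // Extrapolate where the server-side monster should be *now*.
            Vector2 predicted = serverPos + serverVel * timeSinceUpdate;

            // Ease the displayed position toward the prediction rather than snapping,
            // so corrections from new packets are absorbed smoothly.
            float smoothing = 10f; // tuning value
            float t = Math.Min(1f, smoothing * dt);
            displayedPos += (predicted - displayedPos) * t;
        }

        void OnServerUpdate(Vector2 pos, Vector2 vel)
        {
            serverPos = pos;
            serverVel = vel;
            timeSinceUpdate = 0f;
        }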

    Read the article

  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project, and I've found something curious about a piece of HLSL code which corrects the reflections up and down depending on the state of the wave's height. Naturally, I started by examining the brief piece of code:

        // calculating correction that shifts reflection up/down according to water wave Y position
        float4 projected_waveheight = mul(float4(input.positionWS.x, input.positionWS.y, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
        float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;
        projected_waveheight = mul(float4(input.positionWS.x, -0.8, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
        waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;
        reflection_disturbance.y = max(-0.15, waveheight_correction + reflection_disturbance.y);

    My first guess was that it compensates for the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where there is nothing, so the water is rendered as if there is nothing there, or just the sky: that's the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline. My problem now is that I cannot really pinpoint the logic behind it. It projects the actual world-space position of a point of the wave/water geometry and then multiplies by -0.5, only to take another projection of the same point, this time with its y coordinate changed to -0.8 (why -0.8?). Clues in the code seem to indicate it was derived by trial and error, because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide):

        float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;

    And then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them:

        waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;

    By removing the divide by 2, I see no difference in quality improvement (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y - why is that? This redundancy and the seemingly arbitrary choice of -0.8 and -0.15 lead me to conclude that this might be a combination of heuristics and guesswork. Is there a logical underpinning to this, or is it just a desperate hack? There is an exaggeration of the initial problem which the code fragment fixes, observable at the lowest tessellation level. Hopefully this might spark an idea I'm missing: the -0.8 might be a reference height from which to deduce how much to disturb the texture coordinates sampling the planarly reflected geometry render, and -0.15 might be the lower bound, a safety measure.

    Read the article

  • How to set sprite source coordinates?

    - by ChaosDev
    I am creating my own sprite drawer with DX11 in C++. It works fine, but I don't know how to apply a source rectangle to the texture coordinates of the rendering surface (for animation sprite sheets).

        // source = (0, 0, 32, 64); // RECT
        D3DXVECTOR2 t0 = D3DXVECTOR2(1.0f, 0.0f);
        D3DXVECTOR2 t1 = D3DXVECTOR2(1.0f, 1.0f);
        D3DXVECTOR2 t2 = D3DXVECTOR2(0.0f, 1.0f);
        D3DXVECTOR2 t3 = D3DXVECTOR2(0.0f, 1.0f);
        D3DXVECTOR2 t4 = D3DXVECTOR2(0.0f, 0.0f);
        D3DXVECTOR2 t5 = D3DXVECTOR2(1.0f, 0.0f);

        VertexPositionColorTexture vertices[] =
        {
            { D3DXVECTOR3(dest.left + dest.right, dest.top,               z), D3DXVECTOR4(1,1,1,1), t0 },
            { D3DXVECTOR3(dest.left + dest.right, dest.top + dest.bottom, z), D3DXVECTOR4(1,1,1,1), t1 },
            { D3DXVECTOR3(dest.left,              dest.top + dest.bottom, z), D3DXVECTOR4(1,1,1,1), t2 },
            { D3DXVECTOR3(dest.left,              dest.top + dest.bottom, z), D3DXVECTOR4(1,1,1,1), t3 },
            { D3DXVECTOR3(dest.left,              dest.top,               z), D3DXVECTOR4(1,1,1,1), t4 },
            { D3DXVECTOR3(dest.left + dest.right, dest.top,               z), D3DXVECTOR4(1,1,1,1), t5 },
        };
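
    Texture coordinates are normalized to the [0, 1] range over the whole texture, so the source rectangle only needs to be divided by the texture's pixel dimensions. A sketch of the arithmetic (C# for brevity; the D3DXVECTOR2 version is a direct transliteration):

        // Convert a pixel-space source rect into normalized UVs.
        float u0 = source.Left   / (float)textureWidth;   // e.g. 0  / 256
        float v0 = source.Top    / (float)textureHeight;  // e.g. 0  / 128
        float u1 = source.Right  / (float)textureWidth;   // e.g. 32 / 256
        float v1 = source.Bottom / (float)textureHeight;  // e.g. 64 / 128

        // The six corners above then become:
        // t0 = (u1, v0), t1 = (u1, v1), t2 = (u0, v1),
        // t3 = (u0, v1), t4 = (u0, v0), t5 = (u1, v0)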

    Read the article

  • Viewport.Unproject - Checking if a model intersects a large sprite

    - by Fibericon
    Let's say I have a sprite, drawn like this:

        spriteBatch.Draw(levelCannons[i].texture, levelCannons[i].position, null, alpha,
            levelCannons[i].rotation, Vector2.Zero, scale, SpriteEffects.None, 0);

    Picture levelCannon as being a laser beam that goes across the entire screen. I need to see if my 3D model intersects with the screen space inhabited by the sprite. I managed to dig up Viewport.Unproject, but that seems to only be useful when dealing with a single point in 2D space, rather than an area. What can I do in my case?
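
    Going the other way may be simpler: use Viewport.Project to bring the model into the sprite's screen space, then do a 2D overlap test there. A sketch assuming the model exposes a BoundingSphere and the laser sprite's pixel area is available as a Rectangle (the projected radius is a rough estimate, not exact):

        // Project the model's bounding sphere centre into screen space.
        Vector3 screenCenter = GraphicsDevice.Viewport.Project(
            modelBounds.Center, projection, view, Matrix.Identity);

        // Roughly estimate the projected radius by projecting a point one radius away.
        Vector3 screenEdge = GraphicsDevice.Viewport.Project(
            modelBounds.Center + Vector3.Right * modelBounds.Radius, projection, view, Matrix.Identity);
        float screenRadius = Vector2.Distance(
            new Vector2(screenCenter.X, screenCenter.Y),
            new Vector2(screenEdge.X, screenEdge.Y));

        // Circle-vs-rectangle test against the sprite's screen-space bounds.
        Rectangle beam = spriteScreenRect; // assumed: the laser's pixel rectangle
        float nearestX = MathHelper.Clamp(screenCenter.X, beam.Left, beam.Right);
        float nearestY = MathHelper.Clamp(screenCenter.Y, beam.Top, beam.Bottom);
        bool hit = Vector2.DistanceSquared(
            new Vector2(screenCenter.X, screenCenter.Y),
            new Vector2(nearestX, nearestY)) <= screenRadius * screenRadius;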

    Read the article

  • OpenGL VBOs are slower than glDrawArrays.

    - by Arelius
    So, this seems odd to me. I upload a large buffer of vertices, then every frame I call glBindBuffer and then the appropriate gl*Pointer functions with offsets into the buffer, then I use glDrawArrays to draw all of my triangles. I'm only drawing about 100K triangles, yet I'm getting about 15 FPS. This is where it gets weird: if I change it to not call glBindBuffer, then change the gl*Pointer calls to be actual pointers into the array I have in system memory, and then call glDrawArrays the same way, my framerate jumps up to about 50 FPS. Any idea what weird thing I could be doing that would cause this? Did I maybe forget to call glEnable(GL_ALLOW_VBOS_TO_RUN_FAST) or something?

    Read the article

  • non-random enemy movement implementation

    - by user601836
    I would like to implement enemy movement on an X-Y grid. Would it be a good idea to have a predefined table with an initial X-Y position and a predefined "surveillance path"? Each enemy would follow its path until it detects a player, at which point it would start chasing the player using a chasing algorithm. According to a friend of mine, this implementation is good because a well-designed path gives the player a sense of realism.
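
    The predefined-path idea is essentially a waypoint patrol. A minimal sketch of the pattern described, with detection switching the enemy over to the chase behaviour (all names are illustrative):

        class Enemy
        {
            public Vector2 Position;
            public Vector2[] PatrolPath; // predefined surveillance path from the table
            public bool Chasing;
            int waypoint;

            public void Update(Vector2 playerPos, float speed, float dt, float sightRange)
            {
                if (Vector2.Distance(Position, playerPos) < sightRange)
                    Chasing = true; // hand over to the chase algorithm

                Vector2 target = Chasing ? playerPos : PatrolPath[waypoint];
                Vector2 delta = target - Position;

                if (!Chasing && delta.Length() < 1f)
                {
                    waypoint = (waypoint + 1) % PatrolPath.Length; // loop the path
                    return;
                }
                if (delta != Vector2.Zero)
                    Position += Vector2.Normalize(delta) * speed * dt;
            }
        }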

    Read the article

  • How to implement behavior in a component-based game architecture?

    - by ghostonline
    I am starting to implement player and enemy AI in a game, but I am confused about how to best implement this in a component-based game architecture. Say I have a player character that can be stationary, running, and swinging a sword. A player can transition to the swing-sword state from both the stationary and running states, but then the swing must be completed before the player can resume standing or running around. During the swing, the player cannot walk around. As I see it, I have two implementation approaches:

    Create a single AI component containing all player logic (either decoupled from the actual component or embedded as a PlayerAIComponent). I can easily see how to enforce the state restrictions without creating coupling between the individual components making up the player entity. However, the AI component cannot be broken up. If I have, for example, an enemy that can only stand and walk around, or one that only walks around and occasionally swings a sword, I have to create new AI components.

    Break the behavior up into components, each identifying a specific state. I then get a StandComponent, WalkComponent and SwingComponent. To enforce the transition rules, I have to couple the components. SwingComponent must disable StandComponent and WalkComponent for the duration of the swing. When I have an enemy that only stands around, swinging a sword occasionally, I have to make sure SwingComponent only disables WalkComponent if it is present. Although this allows for better mixing and matching of components, it can lead to a maintainability nightmare: each time a dependency is added, the existing components must be updated to play nicely with the new requirements the dependency places on the character.

    The ideal situation would be that a designer can build new enemies/players by dragging components into a container, without having to touch a single line of engine or script code. Although I am not sure script coding can be avoided, I want to keep it as simple as possible. Summing it all up: should I lob all AI logic into one component, or break up each logic state into separate components to create entity variants more easily?
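
    A common middle ground between the two options is a single generic state-machine component whose states are plug-in objects: the component owns the transition rule, while the states stay small and reusable across entity types, so a designer composes an entity by listing the states it supports. A rough sketch of that shape (Entity and the concrete states are assumed):

        // States are plug-in objects; the component owns the transition rule.
        interface ICharacterState
        {
            // False while the state must run to completion (mid-swing);
            // the swing state flips this back once its animation finishes.
            bool CanInterrupt { get; }
            void Update(Entity e, float dt);
        }

        class StateMachineComponent
        {
            ICharacterState current;
            readonly List<ICharacterState> available; // assembled per entity type by the designer

            public StateMachineComponent(List<ICharacterState> states)
            {
                available = states;
                current = states[0];
            }

            public bool TryTransition(ICharacterState next)
            {
                // Enforces the rule from the question: a swing must complete
                // before standing or running can resume.
                if (!current.CanInterrupt || !available.Contains(next))
                    return false;
                current = next;
                return true;
            }

            public void Update(Entity e, float dt) => current.Update(e, dt);
        }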

    Read the article

  • Handling game logic events by behavior components

    - by chehob
    My question continues on the topic discussed here. I have tried implementing the attribute/behavior design, and here is a quick example demonstrating the issue:

        class HealthAttribute : public ActorAttribute
        {
        public:
            HealthAttribute( float val ) : mValue( val ) { }

            float Get( void ) const { return mValue; }
            void Set( float val ) { mValue = val; }

        private:
            float mValue;
        };

        class HealthBehavior : public ActorBehavior
        {
        public:
            HealthBehavior( shared_ptr< HealthAttribute > health ) : pHealth( health )
            {
                // Set OnDamage() to listen for game logic event "DamageEvent"
            }

            void OnDamage( IEventDataPtr pEventData )
            {
                // Check DamageEvent target entity
                // ( compare my entity ID with event's target entity ID )
                // If not my entity, do nothing
                // Else, modify health attribute with received DamageEvent data
            }

        protected:
            shared_ptr< HealthAttribute > pHealth;
        };

    My question: is it possible to get rid of this annoying check for game logic events? In the current implementation, when some entity must receive damage, the game logic just fires off an event that contains the damage value and the ID of the entity which should receive that damage. All HealthBehaviors are subscribed to the DamageEvent type, which leads to every entity possessing a HealthBehavior calling OnDamage(), even if it is not the addressee.
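
    One way to remove the broadcast is to key subscriptions by (event type, entity ID), so the event manager delivers a DamageEvent only to listeners registered for the targeted entity. A small sketch of such a dispatch table, in C# rather than the question's C++ (all names are made up):

        // Listeners subscribe per (eventType, entityId); firing an event only
        // reaches behaviors attached to the targeted entity.
        class EventManager
        {
            readonly Dictionary<(Type eventType, int entityId), List<Action<object>>> listeners = new();

            public void Subscribe<T>(int entityId, Action<T> handler)
            {
                var key = (typeof(T), entityId);
                if (!listeners.TryGetValue(key, out var list))
                    listeners[key] = list = new List<Action<object>>();
                list.Add(e => handler((T)e));
            }

            public void Fire<T>(int targetEntityId, T eventData)
            {
                if (listeners.TryGetValue((typeof(T), targetEntityId), out var list))
                    foreach (var handler in list)
                        handler(eventData);
            }
        }

        // A HealthBehavior would then subscribe with its own entity's ID:
        // events.Subscribe<DamageEvent>(myEntityId, OnDamage); // no target check needed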

    Read the article

  • Game Asset Storage: Archive vs Individual files

    - by David Colson
    I am in the process of creating a 3D C++ game, and I was wondering what would be more beneficial when dealing with game assets with regards to storage. I have seen some games use a single compressed asset file with everything in it, and others use lots of little compressed files. If I had lots of individual files, I would not need to load a large file at once and use up memory, but the code would have to do file seeking when the level loads to find all the correct files. There is no file seeking needed when dealing with one large file, but then what about all the assets not currently needed that would get loaded with the one file? I could also have an asset file for each level, but then how do I deal with shared assets? This has been bothering me for a while, so tell me: what other advantages and disadvantages are there to either way of doing things?

    Read the article

  • Make Sprite Jump Upon a Platform

    - by Geore Shg
    I have been struggling to make a game like Doodle Jump, where the sprite jumps on a platform. So how do you make a sprite jump upon platforms in XNA? The platforms are represented by a list of positions, like:

        Public platformList As List(Of Vector2)

    This is the collision detection under Update():

        Dim mainSpriteRect As Rectangle = New Rectangle(CInt(mainSprite.Position.X), CInt(mainSprite.Position.Y), mainSprite.texture.Width, mainSprite.texture.Height)

        ' A node is simply a class with the texture and position.
        For Each _node As Node In _gameMap.nodeList
            Dim blockRect As Rectangle = New Rectangle(CInt(_node.Position.X), CInt(_node.Position.Y), _BlocksTexture.Width, _BlocksTexture.Height)

            If mainSpriteRect.Intersects(blockRect) Then
                ' What should I do here? For example velocity and position?
            End If

            If (_node.Position.Y > 800) Then
                nodeList.Remove(_node)
            End If
        Next
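
    For a Doodle Jump-style platform, the usual response inside that intersection check is: react only when the sprite is falling and its feet were above the platform on the previous frame, then snap it onto the platform and launch it upward again. A sketch of that logic in C# (the question is VB, but it transliterates directly; names and numbers are illustrative):

        // velocityY positive = falling; previousBottom saved from last frame.
        bool falling = velocityY > 0f;
        bool wasAbove = previousBottom <= blockRect.Top;

        if (falling && wasAbove)
        {
            // Land: snap the sprite's feet onto the platform top...
            spritePosition.Y = blockRect.Top - spriteHeight;
            // ...and bounce it back up, Doodle Jump-style.
            velocityY = -600f; // tuning value, pixels per second
        }

        previousBottom = mainSpriteRect.Bottom;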

    Read the article

  • Predicted target location

    - by user3256944
    I'm having an issue with calculating the predicted linear angle a projectile needs to move in to intersect a moving enemy ship in my 2D game. I've tried following the document here, but what I've come up with is simply awful.

        protected Vector2 GetPredictedPosition(float angleToEnemy, ShipCompartment origin, ShipCompartment target)
        {
            // Below obviously won't compile (the document wants a vector; not sure how to get that from a single float?)
            Vector2 velocity = target.Thrust - 25f; // Closing velocity (25 is example projectile velocity)
            Vector2 distance = target.Position - origin.Position; // Range to close
            double time = distance.Length() / velocity.Length(); // Time

            // Garbage code, doesn't compile, this method is incorrect
            return target.Position + (target.Thrust * time);
        }

    I would be grateful if the community could help point out how this is done correctly.
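
    The standard closed-form approach solves for the intercept time first: find t such that the target's future position is exactly projectileSpeed * t away, which is a quadratic in t, and then aim at where the target will be at that time. A sketch using the question's names (it assumes target.Thrust is the target's velocity vector):

        protected Vector2? GetPredictedPosition(ShipCompartment origin, ShipCompartment target, float projectileSpeed)
        {
            Vector2 toTarget = target.Position - origin.Position;
            Vector2 targetVel = target.Thrust; // assumed: the target's velocity vector

            // |toTarget + targetVel * t| = projectileSpeed * t, squared and rearranged
            // into a*t^2 + b*t + c = 0:
            float a = Vector2.Dot(targetVel, targetVel) - projectileSpeed * projectileSpeed;
            float b = 2f * Vector2.Dot(toTarget, targetVel);
            float c = Vector2.Dot(toTarget, toTarget);

            if (Math.Abs(a) < 1e-6f) // speeds match: the equation degenerates to linear
                return b < 0f ? target.Position + targetVel * (-c / b) : (Vector2?)null;

            float disc = b * b - 4f * a * c;
            if (disc < 0f)
                return null; // projectile too slow: no intercept exists

            // Smallest positive root is the earliest intercept time.
            float sqrtDisc = (float)Math.Sqrt(disc);
            float t1 = (-b - sqrtDisc) / (2f * a);
            float t2 = (-b + sqrtDisc) / (2f * a);
            float t = Math.Min(t1, t2) > 0f ? Math.Min(t1, t2) : Math.Max(t1, t2);
            if (t <= 0f)
                return null;

            return target.Position + targetVel * t; // aim the projectile at this point
        }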

    Read the article

  • How to Create a Grid for a 2D Game?

    - by SoulBeaver
    So I'm currently writing the engine for my video game. I've almost integrated Tiled (I think), so I should have a map creator here soon. My question is: how do I actually make the grid? I'm really confused here. If I create a large map with, say, 20x20 grids of size 32x32 (screen size 640x640), then what do I do with it? Let's say I have the code for creating a window, and then place a player sprite that I can move with input - that's fine. If I use one map that's as big as the screen, then every pixel on the map is also a pixel on the game screen. The mapping is exact. Now what happens if I have a 2000x2000 map, for example? My character would have to keep moving while the map moves around him (or rather, the camera focused on the player moves). Then I can no longer say that the screen maps exactly to the pixel positions of the map. I tried making a Grid class that maps out the screen area to 32x32 tiles, but I'm not sure if that makes any sense. Once the map moves, each tile would have to update its information, or something. I'm just really confused here. How do I actually make the tiles and a grid, and map them to the data I get from Tiled, or that I make myself? Are there any good examples of source code that I could look at?
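
    The usual answer is that tiles never have to "update their information": the map stays in world coordinates, and a camera offset converts world to screen at draw time, drawing only the tiles that overlap the screen. A minimal sketch with 32-pixel tiles, assuming tiles[,] holds IDs read from Tiled and a TextureForTile helper resolves them:

        const int TileSize = 32;
        int[,] tiles;      // tiles[y, x] = tile ID loaded from the Tiled map
        Vector2 cameraPos; // top-left corner of the view, in world pixels

        void DrawMap(SpriteBatch spriteBatch, int screenWidth, int screenHeight)
        {
            // Visible tile range: camera offset divided by tile size.
            int firstX = Math.Max(0, (int)(cameraPos.X / TileSize));
            int firstY = Math.Max(0, (int)(cameraPos.Y / TileSize));
            int lastX = Math.Min(tiles.GetLength(1) - 1, (int)((cameraPos.X + screenWidth) / TileSize));
            int lastY = Math.Min(tiles.GetLength(0) - 1, (int)((cameraPos.Y + screenHeight) / TileSize));

            for (int y = firstY; y <= lastY; y++)
                for (int x = firstX; x <= lastX; x++)
                {
                    // World position minus camera position = screen position.
                    Vector2 screenPos = new Vector2(x * TileSize, y * TileSize) - cameraPos;
                    spriteBatch.Draw(TextureForTile(tiles[y, x]), screenPos, Color.White); // TextureForTile is assumed
                }
        }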

    Read the article

  • Which API for cross platform mobile audio?

    - by deft_code
    This question focuses on the APIs available on phones. I'd been planning to use OpenAL in my game for maximum portability. It runs great on Linux, so I can quickly develop the game there and leverage its superior debugging tools. However, I've recently heard that Android doesn't support OpenAL well; instead they've gone with the OpenSL ES library. What I'm looking for is a free audio library that I can use with minimal custom code on iPhone, Android, and my Linux desktop. Does such an API exist? Some extra details: the game is written in C++ with minimal custom front ends - ObjC for iPhone, Java for Android, and SFML for desktops. I'm using OpenGL ES for portability, as the iPhone doesn't support the more advanced OpenGL APIs.

    Read the article

  • Box2D Kinematic Platform with parallax layer

    - by Marcell
    I am using a kinematic body for my moving platform on the x-axis, so I set the linear velocity to b2Vec2(5, 0). When the player jumps on the platform, it works like it is supposed to. But the thing is that my platform is on the obstacle layer, and I am moving it with the parallax layer. So if I SetTransform the kinematic platform to follow the obstacle layer, its physics will not work and the player will slip off the platform. I'm developing for iOS and using the cocos2d API. Is there any way around this?

    Read the article

  • Facing a character towards the mouse

    - by ratata
    I'm trying to port a simple 2D top-down shooter game from C++ (Allegro) to Java, and I'm having problems with rotating my character. Here's the code I used in C++:

        if (keys[A]) RotateRight(player, degree);
        if (keys[D]) RotateLeft(player, degree);

        void RotateLeft(Player& player, float& degree)
        {
            degree += player.rotatingSpeed;
            if (degree >= 360) degree = 0;
        }

        void RotateRight(Player& player, float& degree)
        {
            degree -= player.rotatingSpeed;
            if (degree <= 0) degree = 360;
        }

    And this is what I have in the render section:

        al_draw_rotated_bitmap(player.image, player.frameWidth / 2, player.frameHeight / 2, player.x, player.y, degree * 3.14159 / 180, 0);

    Instead of using the A-D keys, I want to use the mouse this time. I've been searching since last night and came up with a few code samples, but none of them worked. For example, this just made my character circle around the map:

        int centerX = width / 2;
        int centerY = height / 2;
        double angle = Math.atan2(centerY - mouseY, centerX - mouseX) - Math.PI / 2;
        ((Graphics2D)g).rotate(angle, centerX, centerY);
        g.fillRect(...); // draw your rectangle

    Any help is much appreciated.
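
    The usual fix is to measure the angle from the character's own position (not the screen centre) to the mouse, and rotate the sprite about its own centre. A sketch of the arithmetic (C# syntax; Java's Math.atan2 behaves identically):

        // Angle from the player to the mouse; atan2 takes (dy, dx).
        double angle = Math.Atan2(mouseY - player.Y, mouseX - player.X);

        // If the sprite artwork faces up rather than right at angle 0,
        // offset by 90 degrees so the art lines up with the math.
        double facing = angle + Math.PI / 2;

        // Then rotate the sprite about its own centre when drawing, e.g. in XNA:
        // spriteBatch.Draw(texture, playerPos, null, Color.White, (float)facing,
        //                  new Vector2(texture.Width / 2f, texture.Height / 2f), 1f, SpriteEffects.None, 0f);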

    Read the article
