Search Results

Search found 19338 results on 774 pages for 'game loop'.

Page 377/774

  • Constricted A* problem

    - by Ragekit
    I've got a little problem with an A* algorithm that I need to constrict a little bit. Basically, I use A* to find the shortest path between two randomly placed rooms in 3D space, and then build a corridor between them. The problem I found is that it sometimes makes chimney-like corridors that are not ideal, so I constrict the A* so that if the last movement was up or down, you have to go sideways next. Everything is fine, but in some corner cases it fails to find a path (when there obviously is one), like here between the blue and red dot. (I'm in Unity, by the way, but I don't think it matters.) Here is the code of the actual A* (a bit long, with some redundancy):

        while (current != goal) {
            // add stair up / stair down
            foreach (Node<GridUnit> test in current.Neighbors) {
                if (!test.Data.empty && test != goal)
                    continue;

                // bug at arrival
                if (test == goal && penul != null) {
                    Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center;
                    if (!Mathf.Approximately(currentDiff.y, 0)) {
                        // wanna drop on the last
                        if (!coplanar(test.Data.bounds.center, current.Data.bounds.center,
                                      current.Data.parentUnit.bounds.center, to.Data.bounds.center)) {
                            continue;
                        } else {
                            if (Mathf.Approximately(to.Data.bounds.center.x, current.Data.parentUnit.bounds.center.x) &&
                                Mathf.Approximately(to.Data.bounds.center.z, current.Data.parentUnit.bounds.center.z)) {
                                continue;
                            }
                        }
                    }
                }

                if (current.Data.parentUnit != null) {
                    Vector3 previousDiff = current.Data.parentUnit.bounds.center - current.Data.bounds.center;
                    Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center;
                    if (!Mathf.Approximately(previousDiff.y, 0)) {
                        if (!Mathf.Approximately(currentDiff.y, 0)) {
                            // you wanna drop now
                            continue;
                        }
                        if (current.Data.parentUnit.parentUnit != null) {
                            if (!coplanar(test.Data.bounds.center, current.Data.bounds.center,
                                          current.Data.parentUnit.bounds.center, current.Data.parentUnit.parentUnit.bounds.center)) {
                                continue;
                            } else {
                                if (Mathf.Approximately(test.Data.bounds.center.x, current.Data.parentUnit.parentUnit.bounds.center.x) &&
                                    Mathf.Approximately(test.Data.bounds.center.z, current.Data.parentUnit.parentUnit.bounds.center.z)) {
                                    continue;
                                }
                            }
                        }
                    }
                }

                g = current.Data.g + HEURISTIC(current.Data, test.Data);
                h = HEURISTIC(test.Data, goal.Data);
                f = g + h;

                if (open.Contains(test) || closed.Contains(test)) {
                    if (test.Data.f > f) {
                        // found a shorter path passing through that point
                        test.Data.f = f;
                        test.Data.g = g;
                        test.Data.h = h;
                        test.Data.parentUnit = current.Data;
                    }
                } else {
                    // never encountered before
                    test.Data.f = f;
                    test.Data.h = h;
                    test.Data.g = g;
                    test.Data.parentUnit = current.Data;
                    open.Add(test);
                }
            }

            closed.Add(current);

            if (open.Count == 0) {
                Debug.Log("nothing found");
                // nothing more to test, no path found, stay at from
                List<GridUnit> r = new List<GridUnit>();
                r.Add(from.Data);
                return r;
            }

            // sort open from smallest to biggest travel cost
            open.Sort(delegate(Node<GridUnit> x, Node<GridUnit> y) {
                return (int)(x.Data.f - y.Data.f);
            });

            // get the smallest travel cost node
            Node<GridUnit> smallest = open[0];
            current = smallest;
            open.RemoveAt(0);
        }

        // build the path going backward
        List<GridUnit> ret = new List<GridUnit>();
        if (penul != null) {
            ret.Insert(0, to.Data);
        }
        GridUnit cur = goal.Data;
        ret.Insert(0, cur);
        do {
            cur = cur.parentUnit;
            ret.Insert(0, cur);
        } while (cur != from.Data);
        return ret;

    You can see at the start of the foreach where I constrict the A* like I said. If you have any insight it would be great. Thanks.

    Read the article

  • Offset Forward vector of object based on Rotation

    - by Taylor
    I'm using the Bullet 3D physics engine in an iOS application running OpenGL ES 1.1. Currently I'm accepting info from the gyroscope to allow the user to "look around" a 3D world that follows a bouncing ball (note: it only takes the yaw to look around 360 degrees). I'm also accepting information from the accelerometer, based on the tilt, to push the ball. As of right now, to move forward the user tilts the device forward (using the accelerometer); to move to the right, the user tilts the device to the right, and so on. The forward vector is currently along the ball's local Z-axis. The problem is that I want to change the ball's push direction based on where the user has changed the view. If I change the view, the ball still bounces in the fixed direction. I want to change the forward-facing direction so that when a user changes the view (say, rotating the device to look at the right of the world), tilting the device forward results in a forward force in that direction. Basically, I want the forward vector to take the rotation into consideration. Sorry if I didn't explain the issue well enough; it's kind of confusing to write down.
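    A minimal sketch of the idea, shown in C#-style pseudocode rather than the question's Bullet/iOS setup, so the names here are illustrative: rotate the tilt force by the current view yaw before applying it, so that "forward" always means "the way the camera is looking".

        // Hedged sketch: 'tilt' is the accelerometer input, 'yaw' the current view
        // rotation in radians, and 'ApplyForce' a placeholder for the physics call.
        Vector3 localPush = new Vector3(tilt.X, 0f, tilt.Z);        // force in device/local space
        Matrix viewYaw    = Matrix.CreateRotationY(yaw);            // rotation about the world up axis
        Vector3 worldPush = Vector3.Transform(localPush, viewYaw);  // same force, now in world space
        ball.ApplyForce(worldPush * pushStrength);                  // hypothetical physics call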

    Read the article

  • Problem with alleg42.dll / program crashes / Allegro & Codeblocks

    - by user24152
    I'm having a serious problem with Allegro. The program should display random pixels on the screen, but when I build and run it I get the error message shown below. Here is the full code of my program:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>
        #include "allegro.h"

        #define Text_Color_Red makecol(255,0,0)

        int main()
        {
            int ret;
            int color_depth = 32;
            int x;
            int y;
            int red;
            int green;
            int blue;
            int color;

            //init allegro
            allegro_init();

            //install keyboard
            install_keyboard();

            //set color depth to 32 bits
            set_color_depth(color_depth);

            //init random seed
            srand(time(NULL));

            //init video mode to 640 x 480
            ret = set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0);
            if (ret != 0) {
                allegro_message(allegro_error);
                return 1;
            }

            //Display string
            textprintf(screen, font, 0, 0, 10, 0, Text_Color_Red,
                       "Screen Resolution is: %dx%d -- Press ESC to quit !", SCREEN_W, SCREEN_H);

            //display pixels until ESC key is pressed
            //wait for keypress
            while (!key[KEY_ESC]) {
                //set a random location
                x = 10 + rand() % (SCREEN_W - 20);
                y = 10 + rand() % (SCREEN_H - 20);

                //set a random color
                red = rand() % 255;
                green = rand() % 255;
                blue = rand() % 255;
                color = makecol(red, green, blue);

                //draw the pixel
                putpixel(screen, x, y, color);
            }

            //quit allegro
            allegro_exit();
        }
        END_OF_MAIN()

    Error message: AllegroPixels1.exe has encountered a problem and needs to close. We are sorry for the inconvenience.

    Error signature: AppName: allegropixels1.exe  AppVer: 0.0.0.0  ModName: alleg42.dll  ModVer: 4.2.3.0  Offset: 0006c05c

    I am using Windows XP inside a virtual machine under Parallels 7.0.

    Read the article

  • Separate collision mesh model?

    - by Menno Gouw
    I want to have another go at 3D within XNA. What I have seen from some other games is that they just have a separate, very low-poly model "cage" around the environment model. However, I cannot find any reference to this, and I don't have much experience with 3D in XNA either. Is it possible to have this cage within each of my environment models already? Let's just say I call the meshes within the .FBX wall and col_wall. How would I address these different meshes within XNA? The player would just have a tight collision cube around him. To make it a bit more efficient, I will divide the map up into cubes and only calculate collision if the player is inside one. Question two: I can't find anywhere how to do cube vs. mesh collision. Is there a method for this? Or perhaps it is possible to build my collision cage out of cubes in the 3D app and, on loading the models in XNA, replace them directly with cubes? Then I could just do box-to-box collision, which should be very cheap and still give the player the ability to move over ledges on the static models.
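    A minimal sketch of the box-to-box idea (the names below are illustrative, not from the question): build an XNA BoundingBox per collision cube, for example from the min/max vertices of each low-poly cage mesh, and test the player's box against the boxes in the grid cell he currently occupies.

        // Hedged sketch: 'playerMin'/'playerMax' and 'collisionCageBoxes' are assumed
        // to be built elsewhere from the cage meshes loaded with the model.
        BoundingBox playerBox = new BoundingBox(playerMin, playerMax);

        foreach (BoundingBox cell in collisionCageBoxes)
        {
            if (playerBox.Intersects(cell))
            {
                // resolve the collision here (push the player out, stop movement, etc.)
            }
        }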

    Read the article

  • Connecting 2 Vertices in 3DS Max?

    - by Reanimation
    How do you connect two vertices in 3DS Max 2013? I have two vertices which I wish to connect with a line to create an edge (actually several). I have tried everything I can think of and done several Google searches, but they only come up with older versions' method, which says to use the "connect" button. I can't find the connect button in my version (see below). This is what my menu looks like: These are the vertices I'm trying to connect: Basically, I've edited an STL file and deleted some edges and vertices. Now I want to fill the gaps and triangulate what's left. Thanks.

    Read the article

  • How should I determine direction from a phone's orientation & accelerometer?

    - by Manoj Kumar
    I have an Android application which moves a ball based on the orientation of the phone. I've been using the following code to extract the data - but how do I use it to determine what direction the ball should actually travel in?

        public void onSensorChanged(int sensor, float[] values) {
            // TODO Auto-generated method stub
            synchronized (this) {
                Log.d("HIIIII :- ", "onSensorChanged: " + sensor + ", x: " + values[0]
                        + ", y: " + values[1] + ", z: " + values[2]);

                if (sensor == SensorManager.SENSOR_ORIENTATION) {
                    System.out.println("Orientation X: " + values[0]);
                    System.out.println("Orientation Y: " + values[1]);
                    System.out.println("Orientation Z: " + values[2]);
                }
                if (sensor == SensorManager.SENSOR_ACCELEROMETER) {
                    System.out.println("Accel X: " + values[0]);
                    System.out.println("Accel Y: " + values[1]);
                    System.out.println("Accel Z: " + values[2]);
                }
            }
        }

    Read the article

  • Locomotion-system with irregular IK

    - by htaunay
    I'm having some trouble with Locomotion's (the Unity3D asset) IK feet placement. I wouldn't call it "very bad", but it definitely isn't as smooth as the Locomotion System Examples. The strangest behavior (which is probably linked to the problem) is the rendered foot markers that "guess" where the character's next step will be. In the demo they are smooth and stable; in my project, however, they keep flickering, as if Locomotion changed its "guess" every frame, and sometimes the automatically chosen step is too close to the previous step, or too distant, creating a very irregular pattern. The configuration is (apparently) identical to the human example in the demo, so I'm guessing the problem is my model and/or animation. Problem is, I can't figure out what it is =S Has anyone experienced the same problem? I uploaded a video of the bug to help interpret the issue (excuse the HORRIBLE quality, I was in a hurry).

    Read the article

  • OOP implementation of buffs and stats: suggestions?

    - by Mattia Manzo Manzati
    I am developing an MMORPG server using NodeJS. I am not sure how to implement buffs. I mean, equipped objects or used skills have effects on the Player(), which has many Stats(), some of them with a max cap. Effects can change a stat's value, increasing or decreasing it by a flat amount, by a percentage, or by completely rewriting the value of the stat. After a while I decided to create a base class for buffs, which can be hidden (if they are cast from an equipped object) or shown if they come from an ability (spell). Anyway, I need suggestions on how to implement it: use an array of all active buffs for a stat and have a function recalculate the value of the stat, affected by its buffs, each time I need it? Or is there a more OOP way to do it? I have read this: What's a way to implement a flexible buff/debuff system? But that implements only a percentage system, where buffs can only say "+10%, +20%, etc.", and I would love to have a hybrid system which can have percentage values or static values (like WoW does). Using modifiers is hard to implement, because modifiers refer to the current value of the stat :/ Thanks for the suggestions :)
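    A minimal sketch of the hybrid idea, shown here in C# rather than NodeJS (the structure translates directly): flat bonuses are summed, percentage bonuses are applied to the buffed base, and the result is clamped to the stat's cap, so the stat is always recomputed from its base value instead of being mutated by each modifier.

        using System;
        using System.Collections.Generic;

        class Buff
        {
            public float Flat;     // e.g. +10 strength
            public float Percent;  // e.g. 0.20f for +20%
        }

        class Stat
        {
            public float BaseValue;
            public float MaxCap = float.MaxValue;
            public readonly List<Buff> Buffs = new List<Buff>();

            // Recomputed on demand, so adding or removing a buff never corrupts the base value.
            public float Value
            {
                get
                {
                    float flat = 0f, percent = 0f;
                    foreach (Buff b in Buffs) { flat += b.Flat; percent += b.Percent; }
                    return Math.Min((BaseValue + flat) * (1f + percent), MaxCap);
                }
            }
        }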

    Read the article

  • What is the easiest way to make a hitbox that rotates with its texture?

    - by Matthew Optional Meehan
    In XNA, when you have a sprite that doesn't rotate it's very easy to get the four corners of the sprite to make a hitbox, but when you apply a rotation the points get moved, and I assume there is some kind of math I can use to acquire them. I am using the four points to draw a rectangle that visually represents the hitbox. I have seen some per-pixel collision examples, but I can foresee they would be hard to draw a box/'convex hull' around. I have also seen physics engines like Farseer, but I'm not sure if there is a quick tutorial to do what I want. What do you guys think is the best approach? I am looking to complete this work by the end of the week.
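    A minimal sketch of the math (assuming 'position', 'origin', 'width', 'height' and 'rotation' match what is passed to SpriteBatch.Draw): build the same translate-rotate-translate transform the sprite uses and push the four untransformed corners through it.

        // Hedged sketch: transform the sprite's local corners into screen space.
        Matrix transform = Matrix.CreateTranslation(-origin.X, -origin.Y, 0f)
                         * Matrix.CreateRotationZ(rotation)
                         * Matrix.CreateTranslation(position.X, position.Y, 0f);

        Vector2[] corners =
        {
            Vector2.Transform(new Vector2(0,     0),      transform),  // top-left
            Vector2.Transform(new Vector2(width, 0),      transform),  // top-right
            Vector2.Transform(new Vector2(width, height), transform),  // bottom-right
            Vector2.Transform(new Vector2(0,     height), transform),  // bottom-left
        };

        // Drawing lines between consecutive corners visualises the rotated hitbox.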

    Read the article

  • OpenGL + SDL linking error

    - by me2loveit2
    I am trying to load an image as a texture with OpenGL using C++ in Visual Studio 2010. I researched for a couple of hours online and found the SDL library, then I implemented a simple example and got a linking error I cannot seem to figure out. The error log is here:

        1>Build started 10/20/2012 12:09:17 AM.
        1>InitializeBuildStatus:
        1>  Touching "Debug\texture mapping test.unsuccessfulbuild".
        1>ClCompile:
        1>  All outputs are up-to-date.
        1>  texture mapping test.cpp
        1>ManifestResourceCompile:
        1>  All outputs are up-to-date.
        1>texture mapping test.obj : error LNK2019: unresolved external symbol _IMG_Load referenced in function "void __cdecl display(void)" (?display@@YAXXZ)
        1>MSVCRTD.lib(crtexe.obj) : error LNK2019: unresolved external symbol main referenced in function __tmainCRTStartup
        1>C:\Users\Me\Documents\Visual Studio 2010\Projects\Programming projects\texture mapping test\Debug\texture mapping test.exe : fatal error LNK1120: 2 unresolved externals
        1>
        1>Build FAILED.
        1>
        1>Time Elapsed 00:00:02.45
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Can someone please help me? I am at a desperate point right now. I downloaded SDL and copied all the .h files into: C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Include
    I added the .lib (x86) files into (as a note, I tried the (x64) files too but got the exact same error): C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Lib
    and the .dll (x86) into: C:\Windows\System32
    For implementing textures, I used the simple sample code from: http://www.sdltutorials.com/sdl-tip-sdl-surface-to-opengl-texture
    Please let me know if you can see me doing something wrong, or know how I can fix this! Thanks, Phil

    Read the article

  • Where have the Direct3D 11 tutorials on MSDN gone?

    - by Cam Jackson
    I've had this tutorial bookmarked for ages. I've just decided to give DX11 a real go, so I've gone through that tutorial, but I can't find where the next one in the series is! There are no links from that page either to the next in the series or back up to a table of contents that lists all of the tutorials. These are just companion tutorials to the samples that come with the SDK, but I find them very helpful. Searching MSDN from Google and from the MSDN Bing search box has turned up nothing; it's like they've removed all links to these tutorials, but the pages are still there if you have the URLs. Unfortunately, MSDN URLs are akin to YouTube URLs, so I can't just guess the URL of the next tutorial. Does anyone have any idea what happened to these tutorials, or how I can find the others?

    Read the article

  • Drag camera/view in a 3D world

    - by Dono
    I'm trying to make a draggable view in a 3D world. Currently I've made it using the mouse position on the screen, but the distance traveled by my mouse is not equal to the distance traveled in the 3D world. So I've tried to do this:

    1. Compute a ray from the mouse position into the 3D world.
    2. Calculate its intersection with the ground.
    3. Compute the difference between the old and new intersection points.
    4. Translate the camera by that difference.

    I've got a problem with this method: the ray is computed with the current camera's position, then I move the camera, then I compute the new ray with the new camera position, so the difference between the old ray and the new ray is no longer valid. As a result, my camera never stops moving between the previous and new positions. How can I make a draggable camera with another solution? Thanks!
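    A minimal sketch of one way around the jitter (illustrative names, not from the question's code): intersect the mouse ray with the ground plane, remember the ground point grabbed at mouse-down, and on every drag frame move the camera so that this grab point stays under the cursor, instead of comparing two rays built from two different camera positions.

        // Hedged sketch: intersect a ray with the ground plane y = 0.
        Vector3 IntersectGround(Ray ray)
        {
            float t = -ray.Position.Y / ray.Direction.Y;  // parameter where the ray reaches y = 0
            return ray.Position + ray.Direction * t;
        }

        // At mouse-down: remember the ground point under the cursor (computed once).
        // GetMouseRay is a hypothetical unproject helper.
        Vector3 grabPoint = IntersectGround(GetMouseRay(mouseDownPosition));

        // Every frame while dragging: recompute only the current point and pan.
        Vector3 currentPoint = IntersectGround(GetMouseRay(currentMousePosition));
        camera.Position += grabPoint - currentPoint;      // the grabbed point stays under the cursor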

    Read the article

  • The most efficient way to draw lines all day long with OpenGL

    - by nkint
    I'd like to put a computer screen running an OpenGL program in a room. It has to run all day long (but not at night). I'd like to draw lines that slowly fade into the background. The setting is simple: a uniform background color (say, black) and colored lines (say, white) that slowly fade out. By slowly I mean... hours. Say the first line I draw has alpha 255 (fully visible); after one hour it is at 240, and after 10 hours it is at 105. One line could have 250 points, and there will be around 300 lines in one day. For now I have a prototype using a very rudimentary method like this:

        glBegin( GL_LINE_STRIP );
        iterator = point_list.begin();
        for (++iterator, end = point_list.end(); iterator != end; ++iterator) {
            const Vec3D &v = *iterator;
            glVertex2f(v.x(), v.y());
        }
        glEnd();

    Is there a more efficient method?

    Read the article

  • Simple 2 player server

    - by Sourabh Lal
    I have recently started learning JavaScript and HTML and have developed simple two-player games such as tic-tac-toe, battleship, and dots-and-boxes. However, these two-player games can only be played on one computer (i.e. the two players must sit together). I want to modify this so that one can play with a friend on a different computer. Any suggestions on how this is possible? Also, since I am a beginner, please do not assume that I know all the jargon.

    Read the article

  • Sensor-based vs. AABB-based collision

    - by Hillel
    I'm trying to write a simple collision system, which will probably be primarily used for 2D platformers, and I've been planning out an AABB system for a few weeks now, which will work seamlessly with my grid data structure optimization. I picked AABB because I want a simple system, but I also want it to be perfect. Now, I've been hearing a lot lately about a different method to handle collision, using sensors, which are placed in the important parts of the entity. I understand it's a good way to handle slopes, better than AABB collision. The thing is, I can't find a basic explanation of how it works, let alone a comparison of it and the AABB method. If someone could explain it to me, or point me to a good tutorial, I'd very much appreciate it, and also a comparison of the advantages and disadvantages of the two techniques would be nice.
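    A minimal sketch of the sensor idea as I understand it (illustrative C#, not from any particular engine): instead of testing one big box against the world, you probe the world at a few key points, e.g. two "feet" sensors plus head and side sensors, and react per sensor; because each foot samples the ground height directly, slopes come almost for free.

        // Hedged sketch: GroundHeightAt is a hypothetical tile/terrain query that
        // returns the y of the surface under a point, or null if there is none.
        Vector2 leftFoot  = position + new Vector2(-halfWidth, halfHeight);
        Vector2 rightFoot = position + new Vector2(+halfWidth, halfHeight);

        float? groundLeft  = GroundHeightAt(leftFoot);
        float? groundRight = GroundHeightAt(rightFoot);

        if (groundLeft.HasValue || groundRight.HasValue)
        {
            // In screen coordinates the higher surface has the smaller y value.
            float ground = Math.Min(groundLeft ?? float.MaxValue, groundRight ?? float.MaxValue);
            position.Y = ground - halfHeight;   // snap the entity onto the ground
            onGround = true;
        }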

    Read the article

  • OpenGL Vertex Attributes - Normalisation

    - by Daniel
    Alas, I have searched and have found no definitive answer. When would you normalize the vertex data in OpenGL using the following command?

        glVertexAttribPointer(index, size, type, normalize, stride, pointer);

    I.e., when would normalize == GL_TRUE? In what situations, and why, would you choose to let the GPU do the calculations instead of preprocessing the data? All the examples I have ever seen have this set to GL_FALSE, and I cannot personally see a use for it. But Khronos aren't stupid, so it must be there for something useful (and probably common).

    Read the article

  • Farseer Physics: Ways to create a Body?

    - by EdgarT
    I want to create something similar to this using Farseer and Kinect: https://vimeo.com/33500649 This is my implementation so far: http://www.youtube.com/watch?v=GlIvJRhco4U I have the outline vertices and the triangulation of the user, and following the Texture to Polygon sample I used this line to create the shape, where farseerObject is a list of vertices of the triangles:

        _compound = BodyFactory.CreateCompoundPolygon(World, farseerObject, 1f, BodyType.Dynamic);

    But I have to update the body each frame (around 30 fps) and this is very slow; I get just 2 or 3 fps. Is there another (faster) way to create the body from a list of triangles or from the contour vertices?

    Read the article

  • OpenGL polygons not displaying

    - by Darestium
    I have tried to follow NeHe's OpenGL tutorial, lesson 2. I use SFML for my window creation. The problem I have is that neither the triangle nor the quad shows up on the screen:

        #include <SFML/System.hpp>
        #include <SFML/Window.hpp>
        #include <iostream>

        void processEvents(sf::Window *app);
        void processInput(sf::Window *app, const sf::Input &input);
        void renderCube(sf::Window *app, sf::Clock *clock);
        void renderGlScene(sf::Window *app);
        void init();

        int main()
        {
            sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 2");
            app.UseVerticalSync(false);
            init();

            while (app.IsOpened())
            {
                processEvents(&app);
                renderGlScene(&app);
                app.Display();
            }
            return EXIT_SUCCESS;
        }

        void init()
        {
            glClearDepth(1.f);
            glClearColor(0.f, 0.f, 0.f, 0.f);

            // Enable z-buffer read and write
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_TRUE);

            // Setup a perspective projection
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluPerspective(45.f, 1.f, 1.f, 500.f);
            glShadeModel(GL_SMOOTH);
        }

        void processEvents(sf::Window *app)
        {
            sf::Event event;
            while (app->GetEvent(event))
            {
                if (event.Type == sf::Event::Closed)
                {
                    app->Close();
                }
                if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape)
                {
                    app->Close();
                }
            }
        }

        void renderGlScene(sf::Window *app)
        {
            app->SetActive();

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the screen and the depth buffer
            glLoadIdentity();                                   // Reset the view

            glTranslatef(-1.5f, 0.0f, -6.0f); // Move left 1.5 units and into the screen 6.0
            glBegin(GL_TRIANGLES);
                glVertex3f( 0.0f,  1.0f, 0.0f); // Top
                glVertex3f(-1.0f, -1.0f, 0.0f); // Bottom left
                glVertex3f( 1.0f, -1.0f, 0.0f); // Bottom right
            glEnd();

            glTranslatef(3.0f, 0.0f, 0.0f);
            glBegin(GL_QUADS); // Draw a quad
                glVertex3f(-1.0f,  1.0f, 0.0f);
                glVertex3f( 1.0f,  1.0f, 0.0f);
                glVertex3f( 1.0f, -1.0f, 0.0f);
                glVertex3f(-1.0f, -1.0f, 0.0f);
            glEnd();
        }

    I would greatly appreciate it if someone could help me resolve my issue.

    Read the article

  • Rain drops on screen

    - by user1075940
    I am trying to make a simple rain drop effect on the screen, something like this: http://fc00.deviantart.net/fs20/f/2007/302/5/6/Rain_drops_by_rockraikar.png My idea is to create small drop-shaped normal textures, randomly place a few on the screen, apply texture perturbation and mix with the current frame's pixels. Here are my questions:

    - Does this idea even make sense? How do professionals do this effect? Everything from text to code will be appreciated.
    - How do I pass the pixels of the already rendered frame to a shader?

    Read the article

  • How to create games with scrolling?

    - by Chandan Shetty SP
    In games like City Story or We Farm, how do they implement scrolling? To do scrolling using UIScrollView, the EAGLView size has to be bigger, and in those games the EAGLView size looks like more than 1024*1024. But there is a limit on viewport size on iPhone devices (on the iPhone 3G the max is 1024). I played those games on an iPhone 3G and they work fine. Any idea how they implemented their scrolling mechanism?
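    A minimal sketch of the usual alternative (shown in XNA-style C# for illustration, not OpenGL ES): the drawable surface stays the size of the screen, the world can be arbitrarily large, and scrolling only changes a camera offset that every draw call is shifted by, so no oversized view is ever needed.

        // Hedged sketch: 'cameraPosition' is the world-space position of the top-left
        // of the screen; only sprites inside the visible rectangle need to be drawn.
        Matrix cameraTransform = Matrix.CreateTranslation(-cameraPosition.X, -cameraPosition.Y, 0f);

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          null, null, null, null, cameraTransform);
        // ... draw tiles and sprites at their world positions here ...
        spriteBatch.End();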

    Read the article

  • How to make my sprite jump properly?

    - by Matthew Morgan
    I'm currently working on a 2D platformer in XNA. I have, however, been having some trouble creating a fully functional jumping algorithm. This is what I have so far:

        if (keystate.IsKeyDown(Keys.W))
            if (onGround = true) // "onGround" is true when the collision between the main sprite and the ground is detected
            {
                spritePosition.Y = velocity.Y = -5;
            }

    The problem I am now having is that as soon as the jump starts, the variable "onGround" becomes false and the sprite is brought back to the ground by the simple gravity I have implemented. The other problem is creating a limit to the jump height, after which the sprite should automatically return to the ground. Any advice or suggestions would be greatly appreciated.
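    A minimal sketch of one common approach (illustrative names, assuming screen coordinates where negative Y is up): apply an upward velocity once, on the frame the jump begins, and let gravity integrate it back down every frame; the jump height is then limited by gravity itself rather than by an explicit cap.

        // Hedged sketch: start the jump only when standing on the ground.
        if (keystate.IsKeyDown(Keys.W) && onGround)
        {
            velocity.Y = -jumpSpeed;   // e.g. 300 pixels/second; negative is up
            onGround = false;
        }

        // Every update: gravity pulls the velocity back down, then move the sprite.
        velocity.Y += gravity * elapsedSeconds;        // e.g. gravity = 900 pixels/second^2
        spritePosition += velocity * elapsedSeconds;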

    Read the article

  • Cocos2d rotating sprite while moving with CCBezierBy

    - by marcg11
    I've done my moving actions, which consist of sequences of CCBezierBy. However, I would like the sprite to rotate to follow the direction of the movement (like an airplane). How should I do this with cocos2d? I've done the following to test this out:

        CCSprite *green = [CCSprite spriteWithFile:@"enemy_green.png"];
        [green setPosition:ccp(50, 160)];
        [self addChild:green];

        ccBezierConfig bezier;
        bezier.controlPoint_1 = ccp(100, 200);
        bezier.controlPoint_2 = ccp(400, 200);
        bezier.endPosition = ccp(300, 160);

        [green runAction:[CCAutoBezier actionWithDuration:4.0 bezier:bezier]];

    In my subclass:

        @interface CCAutoBezier : CCBezierBy
        @end

        @implementation CCAutoBezier

        - (id)init
        {
            self = [super init];
            if (self) {
                // Initialization code here.
            }
            return self;
        }

        - (void)update:(ccTime)t
        {
            CGPoint oldpos = [self.target position];
            [super update:t];
            CGPoint newpos = [self.target position];

            float angle = atan2(newpos.y - oldpos.y, newpos.x - oldpos.x);
            [self.target setRotation:angle];
        }

        @end

    However, it is rotating but not following the path...

    Read the article

  • XNA - Am I screwing up the LoadContent for Texture2D?

    - by Bombcode
    I've read forum threads and questions and I've done just about everything. I need to know what I am screwing up here. Here's the code in the constructor:

        Content.RootDirectory = "GameStateContent";
        //Content.RootDirectory = "Content";

    And this is in the LoadContent method:

        menu = this.Content.Load<Texture2D>("mainmenu");

    And here's a screenshot of the folder structure: http://i.imgur.com/HnndE.png Any help on this? Thanks.
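    For comparison, a minimal sketch of the conventional XNA 4.0 setup (the folder name below is illustrative): the root directory points at the project's Content folder, and assets are addressed by their path relative to that root, without a file extension.

        // Hedged sketch: assumes mainmenu was added to the content project
        // under a GameState subfolder.
        Content.RootDirectory = "Content";

        // in LoadContent():
        menu = Content.Load<Texture2D>("GameState/mainmenu");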

    Read the article

  • Exact point on a rotating sphere

    - by nkint
    I have a sphere that represents the Earth, textured with real pictures. It rotates about the x axis, and when the user clicks down it has to show the exact place he clicked on. For example, if he clicks on Singapore, the system should be able to:

    - understand that the user clicked on the sphere (OK, I'll do it with unProject);
    - understand where on the sphere the user clicked (ray-sphere collision?) and take the rotation transform into account;
    - convert the sphere coordinates to some coordinate system suitable for a web API service;
    - ask the API (OK, this is the simplest part for me ;-).

    Any advice?
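    A minimal sketch of steps two and three (illustrative names; the exact axis convention depends on how the texture is mapped onto the sphere): undo the sphere's current rotation on the intersection point, then convert the local unit vector to latitude/longitude.

        // Hedged sketch: 'hitPoint' is the ray-sphere intersection in world space,
        // 'rotationX' the sphere's current rotation about the x axis in radians.
        Vector3 local = Vector3.Transform(hitPoint - sphereCenter,
                                          Matrix.Invert(Matrix.CreateRotationX(rotationX)));
        local.Normalize();

        double latitude  = Math.Asin(local.Y) * 180.0 / Math.PI;           // -90..90
        double longitude = Math.Atan2(local.X, local.Z) * 180.0 / Math.PI; // -180..180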

    Read the article

  • Could someone explain why my world position reconstructed from depth is incorrect?

    - by yuumei
    I am attempting to reconstruct the world position in the fragment shader from a depth texture. I pass in the 8 frustum points in world space, interpolate them across fragments, and then interpolate from near to far by the depth:

        highp float depth = (2.0 * CameraPlanes.x) /
                            (CameraPlanes.y + CameraPlanes.x -
                             texture( depthTexture, textureCoord ).x * (CameraPlanes.y - CameraPlanes.x));

        // Reconstruct the world position from the linear depth
        highp vec3 world = mix( nearWorldPos, farWorldPos, depth );

    CameraPlanes.x is the near plane, CameraPlanes.y is the far plane. Assuming that my frustum positions are correct and my depth looks correct, why is my world position wrong? (My depth texture is of format GL_DEPTH_COMPONENT32F, if that matters.) Thanks! :D

    Update: Screenshot of the world position: http://imgur.com/sSlHd So you can see it looks nearly correct. However, as the camera moves, the colours (positions) change, which they shouldn't. I can get this to work if I write this into the depth attachment in the previous pass:

        gl_FragDepth = gl_FragCoord.z / gl_FragCoord.w / CameraPlanes.y;

    and then read the depth texture like so:

        depth = texture( depthTexture, textureCoord ).x

    However, this will kill the hardware z-buffer optimizations.

    Read the article
