Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • When mapping the surface of a sphere with tiles, how might you deal with polar distortion?

    - by clweeks
    It's easy to deal with the way locations interact on a clean Cartesian grid. It's just vanilla math. And you can kind of ignore the geometry of the sphere's surface for a bunch of it if you want to just truncate the poles or something. But I keep coming up with ideas for games where the polar space matters: geo-coded ARGs, global roguelikes and the like. I want square(ish?) locations -- reasonably representable by square tiles of the same size across the globe, anyway. This has to be a solved problem, right? What are the solutions? ETA: At the equator -- and assuming that your square locations are reasonably small -- it's close enough to true that you can get away with having the same number of squares in the rows north and south of the most equatorial row. And you could probably get away with just hand-waving the difference up to 45 degrees or so. But eventually, you need to have fewer squares in a pole-ward circumferential row. If I reduce the length of the row by one and offset the squares by 1/2, they're just like hexes and it's relatively easy to do the coding to keep track of the connections. But as you get pole-ward, it gets more and more extreme. Projecting the surface of the world onto the surface of a cube is tempting, but I figured there must be more elegant solutions already in use. If I did the cube thing (not dissecting it further through geodesy), are there any pros and cons related to placing the pole at the center of a face or at the vertex of three sides?
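
    A sketch of the ring-counting arithmetic behind the hex-like row offsets described above (an illustration, not a library): the circumference of a circumferential row shrinks with cos(latitude), so the tile count per row does too.

      #include <cmath>
      #include <vector>

      // Number of roughly-square tiles per latitude ring, for a sphere of
      // radius R and a desired tile edge length. Rows shrink toward the
      // poles, which is exactly the row-offset effect described above.
      std::vector<int> tilesPerRing(double R, double tileEdge)
      {
          const double pi = 3.14159265358979323846;
          int rings = static_cast<int>(std::round(pi * R / tileEdge)); // pole to pole
          std::vector<int> counts(rings);
          for (int i = 0; i < rings; ++i)
          {
              double lat = -pi / 2 + (i + 0.5) * pi / rings;     // ring centre latitude
              double circumference = 2 * pi * R * std::cos(lat); // ring circumference
              counts[i] = std::max(1, static_cast<int>(std::round(circumference / tileEdge)));
          }
          return counts;
      }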

    Read the article

  • Moving objects colliding when using unaligned collision avoidance (steering)

    - by James Bedford
    I'm having trouble with unaligned collision avoidance for what I think is a rare case. I have set two objects to move towards each other but with a slight offset, so one of the objects is moving slightly upwards, and the other is moving slightly downwards. In my unaligned collision avoidance steering algorithm, I find the points on the object's forward line and the other object's forward line where these two lines are closest. If these closest points are within the collision avoidance distance, and if the distance between them is smaller than the sum of the two objects' bounding-sphere radii, then the objects should steer away in the appropriate direction. The problem is that for my case, the closest points on the lines are calculated to be really far away from the actual collision point, because the two forward lines diverge as the objects pass. Because of this, no steering takes place, and the two objects partially collide. Does anyone have any suggestions as to how I can correctly calculate the point of collision? Perhaps by somehow taking into account the size of the two objects?
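
    One common refinement, sketched below under the assumption of constant velocities (this is not the poster's code): instead of the closest points between the two infinite forward lines, compute the time of closest approach of the two centres from their relative velocity, then compare the separation at that time against the sum of the radii. This stays accurate even when the lines diverge as the objects pass.

      struct Vec3 { float x, y, z; };
      static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
      static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

      // Time at which two moving spheres are closest, assuming constant
      // velocities. Test the separation at that time against r1 + r2.
      float timeOfClosestApproach(Vec3 posA, Vec3 velA, Vec3 posB, Vec3 velB)
      {
          Vec3 w  = sub(posA, posB);    // relative position
          Vec3 dv = sub(velA, velB);    // relative velocity
          float dv2 = dot(dv, dv);
          if (dv2 < 1e-6f) return 0.0f; // moving in parallel: closest now
          float t = -dot(w, dv) / dv2;
          return t < 0.0f ? 0.0f : t;   // closest approach already happened
      }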

    Read the article

  • Change collision action

    - by PatrickR
    I have collision detection and it's working fine. The problem is that whenever my "bird" hits a "cloud", the cloud disappears and I get some points. The same happens for the "sol" (sun), which it should, but it shouldn't happen with the clouds. How can this be changed? I've tried a lot, but can't seem to figure it out. Collision Code - (void)update:(ccTime)dt { bird.position = ccpAdd(bird.position, skyVelocity); NSMutableArray *projectilesToDelete = [[NSMutableArray alloc] init]; for (CCSprite *bird in _projectiles) { bird.anchorPoint = ccp(0, 0); CGRect absoluteBox = CGRectMake(bird.position.x, bird.position.y, [bird boundingBox].size.width, [bird boundingBox].size.height); NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init]; for (CCSprite *cloudSprite in _targets) { cloudSprite.anchorPoint = ccp(0, 0); CGRect absoluteBox = CGRectMake(cloudSprite.position.x, cloudSprite.position.y, [cloudSprite boundingBox].size.width, [cloudSprite boundingBox].size.height); if (CGRectIntersectsRect([bird boundingBox], [cloudSprite boundingBox])) { [targetsToDelete addObject:cloudSprite]; } } for (CCSprite *solSprite in _targets) { solSprite.anchorPoint = ccp(0, 0); CGRect absoluteBox = CGRectMake(solSprite.position.x, solSprite.position.y, [solSprite boundingBox].size.width, [solSprite boundingBox].size.height); if (CGRectIntersectsRect([bird boundingBox], [solSprite boundingBox])) { [targetsToDelete addObject:solSprite]; score += 50/2; [scoreLabel setString:[NSString stringWithFormat:@"%d", score]]; } } // WHEN THE CLOUD IS HIT BY THE BIRD for (CCSprite *cloudSprite in targetsToDelete) { //[_targets removeObject:cloudSprite]; //[self removeChild:cloudSprite cleanup:YES]; } // WHEN THE SUN IS HIT BY THE BIRD for (CCSprite *solSprite in targetsToDelete) { [_targets removeObject:solSprite]; [self removeChild:solSprite cleanup:YES]; } if (targetsToDelete.count > 0) { [projectilesToDelete addObject:bird]; } [targetsToDelete release]; } // WHEN THE BIRD IS HIT BY ANYTHING ELSE for (CCSprite *bird in projectilesToDelete) { //[_projectiles removeObject:bird]; //[self removeChild:bird cleanup:YES]; } [projectilesToDelete release]; }
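
    A language-agnostic sketch of one way out (hypothetical C++ stand-ins, not cocos2d code): collect suns and clouds into separate lists, so the loop that removes suns can never also sweep up the clouds that were gathered into the shared targetsToDelete array in the same frame.

      #include <vector>

      // Hypothetical type standing in for the sprites above.
      struct Target { bool isSun; };

      // Two separate delete-lists: awarding points for suns never touches
      // the clouds collected in the same frame.
      void splitHits(const std::vector<Target*>& hits,
                     std::vector<Target*>& sunsToDelete,
                     std::vector<Target*>& cloudsHit)
      {
          for (Target* t : hits)
          {
              if (t->isSun) sunsToDelete.push_back(t); // remove + score
              else          cloudsHit.push_back(t);    // react, keep the cloud
          }
      }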

    Read the article

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElement functions inside the draw function, with the declarations at the top of the .cpp. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is in 2D quads except the window frame. Is there a way to prevent this from happening, or are there any common causes of this? Creating the file object: int index = IndexAssigner(1, 1); //make a fileobject and store list and the index of that list in a c string ifstream file (list[index].c_str() ); //Make another string //string line; points.push_back(Point()); Point p; int face[4]; Model rendering code: int numfloats = 4; float* point=reinterpret_cast<float*>(&points[0]); int num_bytes=numfloats*sizeof(float); cout << "Size Of Point" << sizeof(Point) << endl; GLuint vertexbuffer; glGenVertexArrays(1, &vao[3]); glGenBuffers(1, &vertexbuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]); glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_FLOAT, points.size(), &points[0]); glEnableClientState(GL_INDEX_ARRAY); glIndexPointer(GL_FLOAT, faces.size(), faces.data()); glEnableVertexAttribArray(0); glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data()); glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
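
    The "attaching" symptom is often stale bind state left over from the GUI pass or the model pass. A hedged sketch of the cleanup between passes, using only standard OpenGL calls (whether it fixes this particular program is an assumption):

      #include <GL/glew.h>

      // Sketch: call between the model pass and the GUI pass so neither
      // inherits the other's buffer bindings or enabled arrays.
      void unbindModelState()
      {
          glBindBuffer(GL_ARRAY_BUFFER, 0);
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
          glBindVertexArray(0);
          glDisableClientState(GL_VERTEX_ARRAY);
          glDisableClientState(GL_INDEX_ARRAY);
          glDisableVertexAttribArray(0);
      }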

    Read the article

  • Looping 3D environment in shmups

    - by kamziro
    So I was watching Ikaruga: http://www.youtube.com/watch?v=Aj23K8Ri68E And then Raystorm: http://www.youtube.com/watch?v=TQ4V0G5ykAg After looking at their 3D backgrounds for a little bit, it appears that they use a lot of repeated segments. How would one start with the development of such systems? Are there editors that can be used (or at least help) with creating the environments? Perhaps a 3D map with splines describing the path of the ship, as well as events on the splines?
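
    No particular editor implied; a minimal sketch of the repeated-segment idea, assuming fixed-length background chunks laid along the flight axis that are recycled once they fall behind the camera:

      #include <vector>

      // Hypothetical Segment: one repeatable chunk of background geometry.
      struct Segment { float z; };

      // Slide segments toward the camera; when one falls behind, move it to
      // the far end, producing an endless corridor from a handful of meshes.
      void scrollSegments(std::vector<Segment>& segments, float speed,
                          float dt, float behindZ, float segmentLength)
      {
          for (Segment& s : segments)
          {
              s.z -= speed * dt;
              if (s.z < behindZ)
                  s.z += segmentLength * segments.size();
          }
      }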

    Read the article

  • Scaling along an arbitrary axis (Dealing with non-uniform scale)

    - by Jon
    I'm trying to build my own little engine to get more familiar with the concepts of 3D programming. I have a transform class that, on each frame, creates a Scaling Matrix (S) and a Rotation Matrix from a Quaternion (R) and concatenates them together (S*R). Once I have SR, I insert the translation values into the bottom row. So I end up with a transformation matrix that looks like: [SR SR SR 0] [SR SR SR 0] [SR SR SR 0] [tx ty tz 1] This works perfectly in all cases except when rotating an object that has a non-uniform scale. For example, a unit cube with ScaleX = 4, ScaleY = 2, ScaleZ = 1 will give me a rectangular box that is 4 times as wide as the depth and twice as high as the depth. If I then translate this around, the box stays the same and looks normal. The problem happens whenever I try to rotate this scaled box. The shape itself becomes distorted, and it appears as though the scale factors are affecting the object along the world X, Y, Z axes rather than the local X, Y, Z axes of the object. I've done some pretty extensive research through a variety of textbooks (Eberly, Moller/Hoffman, Pharr etc.) and there isn't a ton there to go off of. Online, most of the answers say to avoid non-uniform scaling; I understand the desire to avoid it, but I'd still like to figure out how to support it. The only thing I can think of is that when constructing a Scale Matrix: [sx 0 0 0] [0 sy 0 0] [0 0 sz 0] [0 0 0 1] it is scaling along the world axes instead of the object's local Direction, Up and Right vectors, or its local Z, Y, X axes. Does anyone have any tips or ideas on how to handle constructing a transformation matrix that allows for non-uniform scaling and rotation? Thanks!
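
    A worked example of why multiplication order matters under non-uniform scale, using the row-vector convention implied by the matrix layout above (a diagnostic sketch, not the poster's engine code): with S applied first, the box keeps its own 4x2 proportions as it rotates; with R first, the scale acts along the world axes and the rotated box shears, which matches the distortion described.

      #include <cmath>
      #include <cstdio>

      // Row-vector convention, matching the layout above: v' = v * M.
      int main()
      {
          const float sx = 4, sy = 2;       // non-uniform scale
          const float a  = 3.14159265f / 4; // 45-degree rotation
          float x = 1, y = 1;               // a unit-cube corner

          // S then R: scale locally, then rotate the scaled shape.
          float lx = x * sx, ly = y * sy;
          std::printf("S*R: (%f, %f)\n", lx * std::cos(a) - ly * std::sin(a),
                                         lx * std::sin(a) + ly * std::cos(a));

          // R then S: rotate first, then scale along world X/Y -> shearing.
          float rx = x * std::cos(a) - y * std::sin(a);
          float ry = x * std::sin(a) + y * std::cos(a);
          std::printf("R*S: (%f, %f)\n", rx * sx, ry * sy);
      }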

    Read the article

  • Isometric algorithm [fixed]

    - by David
    So I've been toying with isometric rendering and I just can't get the tiles to be in the right order. I'm probably missing something obvious and I just can't see it... but even at the risk of looking stupid, here's my code. for (int i = 0; i < Tile.MapSize; i++) { for (int j = 0; j < Tile.MapSize; j++) { spriteBatch.Draw( Tile.TileSetTexture, new Rectangle( (-j * Tile.TileWidth / 2) + (i * Tile.TileWidth / 2), (i * (Tile.TileHeight - 9) / 2) - (-j * (Tile.TileHeight - 9) / 2), Tile.TileWidth, Tile.TileHeight), Tile.GetSourceRectangle(tileID), Color.White, 0.0f, new Vector2(-350, -60), SpriteEffects.None, 1.0f); } } and here's what I end up with: a delicious messed-up map. Yep, bit of an issue. So if anyone could help, I'd appreciate it. edit* works now <_<

    Read the article

  • Attaching two objects and changing their world matrices accordingly

    - by A-Type
    I'm having a hard time wrapping my head around the transformations required to bind two objects together in either a two-way or one-way relationship. I will need to implement both types. For the first case, I want to be able to 'couple' two ships together in space. The ships have different mass, of course. Forces applied to either ship will use combined mass and moment of inertia to calculate and move both ships. The trick is, being sure that the point at which they are coupled remains the same, and they don't move at all relative to each other. The second case is similar: I want a ship to be able to enter the atmosphere of a planet and move relative to the planet. The planet will be orbiting the sun, which is fixed at 0,0,0. Essentially, when the ship is sitting still outside of the atmosphere, the planet will move past it on its course-- but when the ship is sitting still inside the atmosphere, it moves and rotates with the planet, so that it is always relative to the horizon. Essentially, the vertices which make up the ship are now transformed just like the ones that make up the planet, except that the ship can move itself around relative to the planet. I get the feeling I can implement both of these with the same code. Essentially, I am thinking of giving each object (which I call Fixtures) a list of "slave" Fixtures onto which that Fixture's world matrix is imposed. So, this would be the planet imposing its world on any contained ships. In the case of coupling, I would simply make each ship a slave of the other, somehow. Obviously I can't just multiply the ship's world matrix by the planet's, or each ship by the others. What I'd like some help with is what calculations to make in order to get a nice, seamless relative world to the other object. I was thinking maybe I could just multiply the world of the slave by the inverse of the master, but then when you couple two ships you would lose all that world data. So, perhaps I need an intermediate "world" which is the absolute world, but use a secondary "final world" to actually transform the objects?
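
    A sketch of the usual master/slave scheme, assuming a 4x4 Matrix type with Multiply and Inverse provided by your math library (hypothetical names): the slave captures its transform relative to the master once at attach time, so no world data is lost when coupling or decoupling.

      // Matrix is assumed to come from your math library, with row-vector
      // conventions (translation in the bottom row).
      struct Matrix { float m[16]; };
      Matrix Multiply(const Matrix& a, const Matrix& b); // assumed available
      Matrix Inverse(const Matrix& a);                   // assumed available

      struct Fixture
      {
          Matrix world;          // absolute transform, updated by physics
          Matrix localToMaster;  // captured once at attach time
          Fixture* master = nullptr;

          // Freeze the current relative placement; the coupling point holds.
          void attachTo(Fixture& m)
          {
              master = &m;
              localToMaster = Multiply(world, Inverse(m.world));
          }

          // Re-derive the absolute world from the master each frame.
          void updateWorld()
          {
              if (master)
                  world = Multiply(localToMaster, master->world);
          }

          void detach() { master = nullptr; } // world already holds absolutes
      };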

    Read the article

  • How to snap a 2D Quad to the mouse cursor using OpenGL 3.0/WIN32?

    - by NoobScratcher
    I've been having issues trying to snap a 2D quad to the mouse cursor position. I'm able: 1) to get values into posX, posY, posZ; 2) to translate with the values from those 3 variables. But I'm not able to position the quad correctly in such a way that it is near the mouse cursor using the values from those 3 variables, e.g. "posX, posY, posZ". I need the mouse cursor in the center of the 2D quad. I'm hoping someone can help me achieve this. I've tried searching around to no avail. Here's the function that is meant to do the snapping but instead creates weird flicker or shows nothing at all; only the 3D models show up: void display() { glClearColor(0.0,0.0,0.0,1.0); glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); for(std::vector<GLuint>::iterator I = cube.begin(); I != cube.end(); ++I) { glCallList(*I); } if(DrawArea == true) { glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ); cerr << winZ << endl; glGetDoublev(GL_MODELVIEW_MATRIX, modelview); glGetDoublev(GL_PROJECTION_MATRIX, projection); glGetIntegerv(GL_VIEWPORT, viewport); gluUnProject(winX, winY, winZ , modelview, projection, viewport, &posX, &posY, &posZ); glBindTexture(GL_TEXTURE_2D, DrawAreaTexture); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, DrawAreaSurface->w, DrawAreaSurface->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, DrawAreaSurface->pixels); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, DrawAreaTexture); glTranslatef(posX , posY, posZ); glBegin(GL_QUADS); glTexCoord2f (0.0, 0.0); glVertex3f(0.5, 0.5, 0); glTexCoord2f (1.0, 0.0); glVertex3f(0, 0.5, 0); glTexCoord2f (1.0, 1.0); glVertex3f(0, 0, 0); glTexCoord2f (0.0, 1.0); glVertex3f(0.5, 0, 0); glEnd(); } SwapBuffers(hDC); } I'm using: OpenGL 3.0, the WIN32 API, C++ and GLSL. If you really want the full source, here it is - http://pastebin.com/1Ncm9HNf - it's pretty messy.
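
    A sketch of just the centring step, assuming the unprojected posX/posY/posZ are correct: wrap the translation in glPushMatrix/glPopMatrix so it does not accumulate across frames (a likely cause of the flicker), and lay the quad out symmetrically around the origin so the cursor lands at its centre.

      // Inside display(), replacing the translate + quad section (sketch).
      glPushMatrix();
      glTranslatef(posX, posY, posZ);
      glBegin(GL_QUADS); // quad runs -0.25..+0.25 around the unprojected point
      glTexCoord2f(0.0f, 0.0f); glVertex3f(-0.25f, -0.25f, 0.0f);
      glTexCoord2f(1.0f, 0.0f); glVertex3f( 0.25f, -0.25f, 0.0f);
      glTexCoord2f(1.0f, 1.0f); glVertex3f( 0.25f,  0.25f, 0.0f);
      glTexCoord2f(0.0f, 1.0f); glVertex3f(-0.25f,  0.25f, 0.0f);
      glEnd();
      glPopMatrix();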

    Read the article

  • Calculate the velocity of a bullet ricocheting off a circle

    - by SteveL
    I made a picture to demonstrate what I need: basically, I have a bullet with a velocity and I want it to bounce at the correct angle after it hits a circle. Solved (see the accepted answer for the explanation): Vector.vector.set(bullet.vel); //->v Vector.vector2.setDirection(pos, bullet.pos); //->n normal from center of circle to bullet float dot=Vector.vector.dot(Vector.vector2); //->dot product Vector.vector2.mul(dot).mul(2); Vector.vector.sub(Vector.vector2); Vector.vector.y=-Vector.vector.y; //->for some reason I had to invert the y bullet.vel.set(Vector.vector);
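
    For reference, a self-contained sketch of the standard reflection the accepted code implements, v' = v - 2(v.n)n with n the unit normal from the circle centre to the bullet (the y-inversion above is most likely compensating for a y-down screen coordinate system):

      struct Vec2 { float x, y; };
      static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

      // Reflect velocity v about unit normal n: v' = v - 2 (v . n) n
      Vec2 reflect(Vec2 v, Vec2 n)
      {
          float d = 2.0f * dot(v, n);
          return { v.x - d * n.x, v.y - d * n.y };
      }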

    Read the article

  • Camera doesn't move

    - by hugo
    Here is my code, as my subject indicates i have implemented a camera but I couldn't make it move. #define PI_OVER_180 0.0174532925f #define GL_CLAMP_TO_EDGE 0x812F #include "metinalifeyyaz.h" #include <GL/glu.h> #include <GL/glut.h> #include <QTimer> #include <cmath> #include <QKeyEvent> #include <QWidget> #include <QDebug> metinalifeyyaz::metinalifeyyaz(QWidget *parent) : QGLWidget(parent) { this->setFocusPolicy(Qt:: StrongFocus); time = QTime::currentTime(); timer = new QTimer(this); timer->setSingleShot(true); connect(timer, SIGNAL(timeout()), this, SLOT(updateGL())); xpos = yrot = zpos = 0; walkbias = walkbiasangle = lookupdown = 0.0f; keyUp = keyDown = keyLeft = keyRight = keyPageUp = keyPageDown = false; } void metinalifeyyaz::drawBall() { //glTranslatef(6,0,4); glutSolidSphere(0.10005,300,30); } metinalifeyyaz:: ~metinalifeyyaz(){ glDeleteTextures(1,texture); } void metinalifeyyaz::initializeGL(){ glShadeModel(GL_SMOOTH); glClearColor(1.0,1.0,1.0,0.5); glClearDepth(1.0f); glEnable(GL_DEPTH_TEST); glEnable(GL_TEXTURE_2D); glDepthFunc(GL_LEQUAL); glClearColor(1.0,1.0,1.0,1.0); glShadeModel(GL_SMOOTH); GLfloat mat_specular[]={1.0,1.0,1.0,1.0}; GLfloat mat_shininess []={30.0}; GLfloat light_position[]={1.0,1.0,1.0}; glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular); glMaterialfv(GL_FRONT,GL_SHININESS,mat_shininess); glLightfv(GL_LIGHT0, GL_POSITION, light_position); glEnable(GL_LIGHT0); glEnable(GL_LIGHTING); QImage img1 = convertToGLFormat(QImage(":/new/prefix1/halisaha2.bmp")); QImage img2 = convertToGLFormat(QImage(":/new/prefix1/white.bmp")); glGenTextures(2,texture); glBindTexture(GL_TEXTURE_2D, texture[0]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img1.width(), img1.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img1.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glBindTexture(GL_TEXTURE_2D, texture[1]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img2.width(), img2.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img2.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really nice perspective calculations } void metinalifeyyaz::resizeGL(int w, int h){ if(h==0) h=1; glViewport(0,0,w,h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0f, static_cast<GLfloat>(w)/h,0.1f,100.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); } void metinalifeyyaz::paintGL(){ movePlayer(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); GLfloat xtrans = -xpos; GLfloat ytrans = -walkbias - 0.50f; GLfloat ztrans = -zpos; GLfloat sceneroty = 360.0f - yrot; glLoadIdentity(); glRotatef(lookupdown, 1.0f, 0.0f, 0.0f); glRotatef(sceneroty, 0.0f, 1.0f, 0.0f); glTranslatef(xtrans, ytrans+50, ztrans-130); glLoadIdentity(); glTranslatef(1.0f,0.0f,-18.0f); glRotatef(45,1,0,0); drawScene(); int delay = time.msecsTo(QTime::currentTime()); if (delay == 0) delay = 1; time = QTime::currentTime(); timer->start(qMax(0,10 - delay)); } void metinalifeyyaz::movePlayer() { if (keyUp) { xpos -= sin(yrot * PI_OVER_180) * 0.5f; zpos -= cos(yrot * PI_OVER_180) * 0.5f; if (walkbiasangle >= 360.0f) walkbiasangle = 
0.0f; else walkbiasangle += 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } else if (keyDown) { xpos += sin(yrot * PI_OVER_180)*0.5f; zpos += cos(yrot * PI_OVER_180)*0.5f ; if (walkbiasangle <= 7.0f) walkbiasangle = 360.0f; else walkbiasangle -= 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } if (keyLeft) yrot += 0.5f; else if (keyRight) yrot -= 0.5f; if (keyPageUp) lookupdown -= 0.5; else if (keyPageDown) lookupdown += 0.5; } void metinalifeyyaz::keyPressEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_Escape: close(); break; case Qt::Key_F1: setWindowState(windowState() ^ Qt::WindowFullScreen); break; default: QGLWidget::keyPressEvent(event); case Qt::Key_PageUp: keyPageUp = true; break; case Qt::Key_PageDown: keyPageDown = true; break; case Qt::Key_Left: keyLeft = true; break; case Qt::Key_Right: keyRight = true; break; case Qt::Key_Up: keyUp = true; break; case Qt::Key_Down: keyDown = true; break; } } void metinalifeyyaz::changeEvent(QEvent *event) { switch (event->type()) { case QEvent::WindowStateChange: if (windowState() == Qt::WindowFullScreen) setCursor(Qt::BlankCursor); else unsetCursor(); break; default: break; } } void metinalifeyyaz::keyReleaseEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_PageUp: keyPageUp = false; break; case Qt::Key_PageDown: keyPageDown = false; break; case Qt::Key_Left: keyLeft = false; break; case Qt::Key_Right: keyRight = false; break; case Qt::Key_Up: keyUp = false; break; case Qt::Key_Down: keyDown = false; break; default: QGLWidget::keyReleaseEvent(event); } } void metinalifeyyaz::drawScene(){ glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,1.0f); // glColor3f(0,0,1); //back glVertex3f(-6,0,-4); glVertex3f(-6,-0.5,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,-1.0f); //front glVertex3f(6,0,4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,0,4); glEnd(); glBegin(GL_QUADS); glNormal3f(-1.0f,0.0f,0.0f); // glColor3f(0,0,1); //left glVertex3f(-6,0,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); // glColor3f(0,0,1); //right glVertex3f(6,0,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(6,0,4); glEnd(); glBindTexture(GL_TEXTURE_2D, texture[0]); glBegin(GL_QUADS); glNormal3f(0.0f,1.0f,0.0f);//top glTexCoord2f(1.0f,0.0f); glVertex3f(6,0,-4); glTexCoord2f(1.0f,1.0f); glVertex3f(6,0,4); glTexCoord2f(0.0f,1.0f); glVertex3f(-6,0,4); glTexCoord2f(0.0f,0.0f); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,-1.0f,0.0f); //glColor3f(0,0,1); //bottom glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glEnd(); // glPushMatrix(); glBindTexture(GL_TEXTURE_2D, texture[1]); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); glTexCoord2f(1.0f,0.0f); //right far goal post front face glVertex3f(5,0.5,-0.95); glTexCoord2f(1.0f,1.0f); glVertex3f(5,0,-0.95); glTexCoord2f(0.0f,1.0f); glVertex3f(5,0,-1); glTexCoord2f(0.0f,0.0f); glVertex3f(5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(5,0.5,-1); glVertex3f(5,0,-1); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5,0,-0.95); glVertex3f(5, 0.5, -0.95); glColor3f(1,1,1); //right near 
goal post front face glVertex3f(5,0.5,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0,1); glVertex3f(5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(5,0.5,1); glVertex3f(5,0,1); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0.5, 0.95); glColor3f(1,1,1); //right crossbar front face glVertex3f(5,0.55,-1); glVertex3f(5,0.55,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5.05,0.5,1); glVertex3f(5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(5.05,0.5,-1); glVertex3f(5.05,0.5,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5,0.55,1); glVertex3f(5,0.55,-1); glColor3f(1,1,1); //left far goal post front face glVertex3f(-5,0.5,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5,0,-1); glVertex3f(-5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(-5,0.5,-1); glVertex3f(-5,0,-1); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5, 0.5, -0.95); glColor3f(1,1,1); //left near goal post front face glVertex3f(-5,0.5,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0,1); glVertex3f(-5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(-5,0.5,1); glVertex3f(-5,0,1); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0.5, 0.95); glColor3f(1,1,1); //left crossbar front face glVertex3f(-5,0.55,-1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5.05,0.5,1); glVertex3f(-5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(-5.05,0.5,-1); glVertex3f(-5.05,0.5,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.55,-1); glEnd(); // glPopMatrix(); // glPushMatrix(); // glTranslatef(0,0,0); // glutSolidSphere(0.10005,500,30); // glPopMatrix(); }

    Read the article

  • One Step-Ahead A-Star

    - by Jonathan Dickinson
    I am attempting to create a server-centric RTS (as opposed to the usual parallel synchronised-simulation route of most RTS games today) - however, I am still leveraging the discrete N-turns-ahead paradigm discussed by one of the AOE developers on Gamasutra. I have [possibly questionably?] decided that the pathfinding should only ever find the next cell the entity needs to move to, and was wondering if anyone has any clever ideas on how to optimize the algorithm for this specific scenario - or any other ideas on how to keep the pathfinding as lean as possible on the server. I have investigated a few possible algorithms but could only come up with one adaptation: Tiered A-Star - relatively large T1 tiles; work out (and cache) each cell as you enter it. Other than that: doing the full A-Star pass and caching the entire path, which might use too much memory if a large number of units are present. I know about the existence of naive progressive pathfinding algorithms (if you hit a block, turn in the direction closer to your target, etc.) but they suffer from infinite feedback loops - and very poor pathing even if visited blocks are memorised. Not an option. Many thanks.

    Read the article

  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self education, I'm writing a 2D platformer engine in C++ using SDL / OpenGL. I initially began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode but I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates are a right-hand system with values from 0 to 1 -- flipping the texture vertically before rendering (well, flip the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice or they've done this themselves and can point out future pitfalls, that would be great, but really any thoughts would be appreciated.
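
    For reference, a sketch of the projection setup this describes (the fixed-function form; an equivalent matrix works the same for shaders). One extra pitfall worth noting: flipping Y reverses polygon winding on screen, so if face culling is enabled, the front-face convention needs flipping too.

      #include <GL/gl.h>

      // Top-left origin, +Y pointing down, mirroring SDL's 2D coordinates.
      // screenWidth/screenHeight are whatever the window reports (assumption).
      void setTopLeftOrtho(double screenWidth, double screenHeight)
      {
          glMatrixMode(GL_PROJECTION);
          glLoadIdentity();
          glOrtho(0.0, screenWidth, screenHeight, 0.0, -1.0, 1.0); // bottom=h, top=0
          glMatrixMode(GL_MODELVIEW);
          glLoadIdentity();
          glFrontFace(GL_CW); // the Y flip reverses on-screen winding
      }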

    Read the article

  • Class Design - Space Simulator

    - by Peteyslatts
    I have pretty much taught myself everything I know about programming, so while I know how to teach myself (books, the internet and reading APIs), I'm finding that there hasn't been a whole lot in the way of guidance on good programming. So I have two questions. First, the broad one: does anyone have suggestions as to sources for learning about good programming habits and techniques? I'd prefer it if the resource wasn't a 5000-page tome. The more I can read it in installments, the better. More specifically: I am finishing up learning the basics of XNA and I want to create a space simulator to test my knowledge. This isn't a full-scale simulator, but just something that covers everything I learned. It's also going to be modular so I can build on it after I get the basics down. One of the early features I want to implement is AI, and I want to take this into account as I'm designing my classes so I can minimize rewriting code. So my question: how should I design ship classes so that both the player and AI can use them? The only idea I have so far is: create a ship class that contains stats, models, textures, collision data etc. The player and AI would then have the data for position, rotation, health, etc. and would base their status off the ship stats.
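
    A sketch of one common shape for this, with hypothetical names: shared, immutable ship-type data plus a controller interface, so the player and the AI drive the same Ship class and only the steering logic differs.

      // Static, shared description of a ship type (stats, model, texture).
      struct ShipStats { float maxSpeed; float turnRate; int maxHealth; };

      struct Ship
      {
          const ShipStats* stats; // shared, immutable
          float x, y, z;          // per-instance state
          float heading;
          int   health;
      };

      // The player and the AI both implement this and steer a Ship each frame.
      struct IShipController
      {
          virtual void update(Ship& ship, float dt) = 0;
          virtual ~IShipController() = default;
      };

      struct PlayerController : IShipController
      {
          void update(Ship& ship, float dt) override { /* read input, move ship */ }
      };

      struct AiController : IShipController
      {
          void update(Ship& ship, float dt) override { /* run behaviour, move ship */ }
      };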

    Read the article

  • 2D lighting day/night cycle

    - by Richard
    Off the back of this post, in which I asked two questions and received one answer (which I accepted as valid), I have decided to re-ask the outstanding question. I have implemented light points with shadow casting as shown here, but I would like an overall map light with no point/light source. The map setup is a top-down 2D 50x50 pixel grid. How would I go about implementing day/night cycle lighting across a map?
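
    One widespread approach, sketched here without assuming any particular engine: treat the global light as an ambient colour interpolated over the time of day, and multiply it into every tile on top of whatever the point lights contribute.

      #include <cmath>

      struct Color { float r, g, b; };

      // timeOfDay in [0,1): 0 = midnight, 0.5 = noon. A cosine gives a smooth
      // dawn/dusk ramp between a dark night colour and full daylight.
      Color ambientFor(float timeOfDay)
      {
          const Color night = {0.15f, 0.15f, 0.30f};
          const Color day   = {1.00f, 1.00f, 0.95f};
          float t = 0.5f - 0.5f * std::cos(timeOfDay * 2.0f * 3.14159265f);
          return { night.r + (day.r - night.r) * t,
                   night.g + (day.g - night.g) * t,
                   night.b + (day.b - night.b) * t };
      }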

    Read the article

  • How to implement explosion in OpenGL with a particle effect?

    - by Chan
    I'm relatively new to OpenGL and I'm clueless about how to implement an explosion. Could anyone give me some ideas on how to start? Suppose the explosion occurs at location (x, y, z). I'm thinking of randomly generating a collection of vectors with (x, y, z) as the origin, then drawing a particle (glutSolidCube) which moves along each vector for some period of time - say, after 1000 updates it disappears. Is this approach feasible? A minimal example would be greatly appreciated.
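
    The approach described is feasible; here is a minimal sketch of it, leaving the drawing (e.g. a glutSolidCube at each particle's position) and the timestep policy to the caller:

      #include <cstdlib>
      #include <vector>

      struct Particle { float x, y, z, vx, vy, vz; int life; };

      static float rnd() { return rand() / (float)RAND_MAX * 2.0f - 1.0f; } // [-1,1]

      // Spawn n particles at the explosion origin with random directions.
      std::vector<Particle> explode(float x, float y, float z, int n)
      {
          std::vector<Particle> ps;
          for (int i = 0; i < n; ++i)
              ps.push_back({x, y, z, rnd(), rnd(), rnd(), 1000}); // ~1000 updates
          return ps;
      }

      // Advance particles; remove those whose lifetime has expired.
      void update(std::vector<Particle>& ps)
      {
          for (size_t i = 0; i < ps.size(); )
          {
              Particle& p = ps[i];
              p.x += p.vx; p.y += p.vy; p.z += p.vz;
              if (--p.life <= 0) { ps[i] = ps.back(); ps.pop_back(); }
              else ++i;
          }
      }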

    Read the article

  • Circle collision detection and Vector math: HELP?

    - by Griffin
    Hey, so I'm currently going through the wildbunny blog to learn about collision detection, but I'm a bit confused about how the vectors he's talking about come into play. QUOTED BLOG: p = ||A-B|| – (r1+r2) The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly it is not only just a vector that does this, it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little. N = (A-B) / ||A-B|| P = N*p Here we have calculated the normalised vector N between the two centres and the penetration vector P by multiplying our unit direction by the penetration distance. OK, so I understand that p is the distance by which the circles penetrate each other, but I don't get what exactly N and P are. It seems to me that N is just the coordinates of the third point of the right triangle formed by points A and B, (A-B), divided by the hypotenuse of that triangle, or the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems to me like a movement that one of the circles would perform to get un-penetrated.
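
    To make N and P concrete, a sketch restating the quoted formulas: N is not a triangle point but the unit direction from B's centre to A's centre, and P = N*p is exactly the minimal displacement that un-penetrates the circles - for example, split between the two bodies:

      #include <cmath>

      struct Vec2 { float x, y; };

      // Quoted formulas: p = ||A-B|| - (r1+r2), N = (A-B)/||A-B||, P = N*p.
      // If p < 0 the circles overlap and P is the minimal correction.
      void resolve(Vec2& A, float r1, Vec2& B, float r2)
      {
          float dx = A.x - B.x, dy = A.y - B.y;
          float dist = std::sqrt(dx * dx + dy * dy);
          float p = dist - (r1 + r2);
          if (p >= 0.0f || dist == 0.0f) return;  // no penetration (or coincident)
          float nx = dx / dist, ny = dy / dist;   // N: unit direction from B to A
          A.x -= nx * p * 0.5f;  A.y -= ny * p * 0.5f; // p is negative, so this
          B.x += nx * p * 0.5f;  B.y += ny * p * 0.5f; // pushes the circles apart
      }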

    Read the article

  • Which Graphics/Geometry abstraction to choose?

    - by Robz
    I've been thinking about the design for a browser app on the HTML5 canvas that simulates a 2D robot zooming around, sensing the world around it. I decided to do this from scratch just for fun. I need shapes, like polygons, circles, and lines in order to model the robot and the world it lives in. These shapes need to be drawn with different appearance attributes, like border/fill style/width/color. I also need to have geometry functions to detect intersections and containment for the robot's sensors and so that the robot doesn't go inside stuff. One idea for functions is to have two totally separate libraries, one to implement graphics (like drawShape(context, shape)) and one for geometry operations (like shapeIntersectsShape(shape1, shape2)). Or, in a more object-oriented approach, the shape objects themselves could implement methods to do their own graphics (shape.draw(context)) and geometry operations (shape1.intersects(shape2)). Then there is the data itself: whether the data to draw a shape and the data to do geometric operations on that shape should be encapsulated within the same object, or be separate structures (where one would contain the other, or both be contained inside another structure). How do existing applications that do graphics/geometry stuff deal with this? Is there one model that is best, or is each good for certain applications? Should the fact that I'm using Javascript instead of a more classical language change how I approach the design?

    Read the article

  • Creating my own kill cam

    - by DalexL
    I plan on creating my own kill cam system for a sandbox tool set. After thinking about the mechanics of the kill cam itself, however, I'm quite lost. I'm trying to recreate the ones commonly seen in Call of Duty games that show, from the view of the killer, the actual killing scene. My thoughts: I can't just keep in memory when people kill others, because I wouldn't know when to start the 'recording process'; there is no way for me to accurately determine when somebody is 'about' to kill someone. My only real idea so far is to have a complete duplicate of everything loaded off to the side, copying all the movement from the original world but with a 10-second delay. That way, all the kill cams would be 10 seconds long and the person's camera would just be moved to their killer in the second world. My questions: Is there already an accepted way to do this? Does anybody have any good ideas for something like this? Thanks if you can!
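
    A constant-memory alternative to the duplicated world, sketched under an assumed 60 Hz tick: record a ring buffer of per-entity snapshots covering the last ten seconds at all times, and on a kill, replay the killer's slice of the buffer through the camera.

      #include <array>
      #include <cstddef>

      struct Snapshot { float x, y, z, yaw, pitch; };

      // Ten seconds at 60 updates per second (assumed tick rate).
      constexpr std::size_t kFrames = 600;

      class ReplayBuffer
      {
          std::array<Snapshot, kFrames> frames{};
          std::size_t head = 0;
      public:
          void record(const Snapshot& s)
          {
              frames[head] = s;
              head = (head + 1) % kFrames;
          }

          // i = 0 is the oldest stored frame; play these back for the kill cam.
          const Snapshot& frameAgo(std::size_t i) const
          {
              return frames[(head + i) % kFrames];
          }
      };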

    Read the article

  • OpenGL VertexBuffer won't render in GLFW3

    - by sm81095
    So I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it yet and how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f()), and such, I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code: static const GLuint g_vertex_buffer_data[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; int main(void) { GLFWwindow* window; if(!glfwInit()) { fprintf(stderr, "Failed to initialize GLFW."); return -1; } glfwWindowHint(GLFW_SAMPLES, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL); if(!window) { glfwTerminate(); fprintf(stderr, "Failed to create a GLFW window"); return -1; } glfwMakeContextCurrent(window); glewExperimental = GL_TRUE; GLenum err = glewInit(); if(err != GLEW_OK) { glfwTerminate(); fprintf(stderr, "Failed to initialize GLEW"); fprintf(stderr, (char*)glewGetErrorString(err)); return -1; } GLuint VertexArrayID; glGenVertexArrays(1, &VertexArrayID); glBindVertexArray(VertexArrayID); GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl"); GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW); while(!glfwWindowShouldClose(window)) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glUseProgram(programID); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteBuffers(1, &vertexBuffer); glDeleteProgram(programID); glfwDestroyWindow(window); glfwTerminate(); exit(EXIT_SUCCESS); } I know it is not my shaders, they are super simple and I've checked them against GLFW 2.7 so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.

    Read the article

  • Collision Detection problems in Voxel Engine (XNA)

    - by Darestium
    I am creating a minecraft like terrain engine in XNA and have had some collision problems for quite some time. I have checked and changed my code based on other peoples collision code and I still have the same problem. It always seems to be off by about a block. for instance, if I walk across a bridge which is one block high I fall through it. Also, if you walk towards a "row" of blocks like this: You are able to stand "inside" the left most one, and you collide with nothing in the right most side (where there is no block and is not visible on this image). Here is all my collision code: private void Move(GameTime gameTime, Vector3 direction) { float speed = playermovespeed * (float)gameTime.ElapsedGameTime.TotalSeconds; Matrix rotationMatrix = Matrix.CreateRotationY(player.Camera.LeftRightRotation); Vector3 rotatedVector = Vector3.Transform(direction, rotationMatrix); rotatedVector.Normalize(); Vector3 testVector = rotatedVector; testVector.Normalize(); Vector3 movePosition = player.position + testVector * speed; Vector3 midBodyPoint = movePosition + new Vector3(0, -0.7f, 0); Vector3 headPosition = movePosition + new Vector3(0, 0.1f, 0); if (!world.GetBlock(movePosition).IsSolid && !world.GetBlock(midBodyPoint).IsSolid && !world.GetBlock(headPosition).IsSolid) { player.position += rotatedVector * speed; } //player.position += rotatedVector * speed; } ... public void UpdatePosition(GameTime gameTime) { player.velocity.Y += playergravity * (float)gameTime.ElapsedGameTime.TotalSeconds; Vector3 footPosition = player.Position + new Vector3(0f, -1.5f, 0f); Vector3 headPosition = player.Position + new Vector3(0f, 0.1f, 0f); // If the block below the player is solid the Y velocity should be zero if (world.GetBlock(footPosition).IsSolid || world.GetBlock(headPosition).IsSolid) { player.velocity.Y = 0; } UpdateJump(gameTime); UpdateCounter(gameTime); ProcessInput(gameTime); player.Position = player.Position + player.velocity * (float)gameTime.ElapsedGameTime.TotalSeconds; velocity = Vector3.Zero; } and the one and only function in the camera class: protected void CalculateView() { Matrix rotationMatrix = Matrix.CreateRotationX(upDownRotation) * Matrix.CreateRotationY(leftRightRotation); lookVector = Vector3.Transform(Vector3.Forward, rotationMatrix); cameraFinalTarget = Position + lookVector; Vector3 cameraRotatedUpVector = Vector3.Transform(Vector3.Up, rotationMatrix); viewMatrix = Matrix.CreateLookAt(Position, cameraFinalTarget, cameraRotatedUpVector); } which is called when the rotation variables are changed: public float LeftRightRotation { get { return leftRightRotation; } set { leftRightRotation = value; CalculateView(); } } public float UpDownRotation { get { return upDownRotation; } set { upDownRotation = value; CalculateView(); } } World class: public Block GetBlock(int x, int y, int z) { if (InBounds(x, y, z)) { Vector3i regionalPosition = GetRegionalPosition(x, y, z); Vector3i region = GetRegionPosition(x, y, z); return regions[region.X, region.Y, region.Z].Blocks[regionalPosition.X, regionalPosition.Y, regionalPosition.Z]; } return new Block(BlockType.none); } public Vector3i GetRegionPosition(int x, int y, int z) { int regionx = x == 0 ? 0 : x / Variables.REGION_SIZE_X; int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y; int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z; return new Vector3i(regionx, regiony, regionz); } public Vector3i GetRegionalPosition(int x, int y, int z) { int regionx = x == 0 ? 
0 : x / Variables.REGION_SIZE_X; int X = x % Variables.REGION_SIZE_X; int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y; int Y = y % Variables.REGION_SIZE_Y; int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z; int Z = z % Variables.REGION_SIZE_Z; return new Vector3i(X, Y, Z); } Any ideas how to fix this problem? EDIT 1: Graphic of the problem: EDIT 2 GetBlock, Vector3 version: public Block GetBlock(Vector3 position) { int x = (int)Math.Floor(position.X); int y = (int)Math.Floor(position.Y); int z = (int)Math.Ceiling(position.Z); Block block = GetBlock(x, y, z); return block; } Now, the thing is I tested the theroy that the Z is always "off by one" and by ceiling the value it actually works as intended. Altough it still could be greatly more accurate (when you go down holes you can see through the sides, and I doubt it will work with negitive positions). I also does not feel clean Flooring the X and Y values and just Ceiling the Z. I am surely not doing something correctly still.
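
    For reference, a sketch of the usual world-to-block mapping: floor all three axes (unlike a plain integer cast or a ceiling, std::floor behaves consistently for negative coordinates), and sample the corners of the player's bounding box rather than single centre points.

      #include <cmath>

      // Hypothetical helper: which block contains this world position?
      // std::floor keeps -0.3 in block -1 rather than block 0, which is the
      // usual source of "off by one block on one side" collision bugs.
      void worldToBlock(float wx, float wy, float wz, int& bx, int& by, int& bz)
      {
          bx = (int)std::floor(wx);
          by = (int)std::floor(wy);
          bz = (int)std::floor(wz);
      }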

    Read the article

  • Interesting 3D zooming technique

    - by stark
    Is it possible to zoom to a certain point on screen by modifying the field of view and rotating the camera, so as to keep that point/object in the same place on screen while zooming? Changing the camera position is not allowed. I projected the 3D position of the object on screen and remembered it. Then on each frame I calculate the direction to it in camera space, and I construct a rotation matrix to align this direction to the Z axis (in camera space). After this, I calculate the direction from the camera to the object in world space, transform this vector with the matrix I obtained earlier, and then use this final vector as the camera's new direction. And it's actually "kinda working"; the problem is that the result is more or less off from the camera's rotation before starting to zoom, depending on the area you are trying to zoom in on (larger error at edges/corners). It looks acceptable, but I'm not settling for only this. Any suggestions/resources for doing this technique perfectly? If some of you want to explain the math in detail, be my guests, I can understand these things well. Thanks. Edit: I'll check often for responses, I'm really curious about this :D
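
    A hedged sketch of the exact condition, assuming a symmetric frustum and normalized device coordinates. A point with view-space direction d projects to

      s_x = (d_x / d_z) / tan(fov_x / 2),   s_y = (d_y / d_z) / tan(fov_y / 2)

    Keeping (s_x, s_y) fixed after changing the field of view to fov' means the point's new view-space direction d' must satisfy

      d' ∝ ( s_x * tan(fov'_x / 2),  s_y * tan(fov'_y / 2),  1 )

    so the camera needs the single rotation taking d to d':

      axis = (d × d') / ||d × d'||,   angle = arccos( d/||d|| . d'/||d'|| )

    Two independent per-axis rotations only approximate this, which matches the growing error toward edges and corners described above.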

    Read the article

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs, and its extent. Simply transforming all occluded vertices to the depths defined by the 'hull' is fairly effective and performs well, but it suffers from not preserving the features of the mesh and requires extensive culling to avoid false positives. I would like instead to generate from the depth-deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need for complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set, however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but I do not know where to start. Can anyone suggest a suitable filter or algorithm for generating deformers? Or, to put it another way, for 'compressing' a depth map? (*Push, because it's fitting to a convex 'bulgy' humanoid, so transforms are likely to be 'spherical' from the POV of the surface.)

    Read the article

  • Incorrect results for frustum cull

    - by DeadMG
    Previously, I had a problem with my frustum culling producing too optimistic results- that is, including many objects that were not in the view volume. Now I have refactored that code and produced a cull that should be accurate to the actual frustum, instead of an axis-aligned box approximation. The problem is that now it never returns anything to be in the view volume. As the mathematical support library I'm using does not provide plane support functions, I had to code much of this functionality myself, and I'm not really the mathematical type, so it's likely that I've made some silly error somewhere. As follows is the relevant code: class Plane { public: Plane() { r0 = Math::Vector(0,0,0); normal = Math::Vector(0,1,0); } Plane(Math::Vector p1, Math::Vector p2, Math::Vector p3) { r0 = p1; normal = Math::Cross((p2 - p1), (p3 - p1)); } Math::Vector r0; Math::Vector normal; }; This class represents one plane as a point and a normal vector. class Frustum { public: Frustum( const std::array<Math::Vector, 8>& points ) { planes[0] = Plane(points[0], points[1], points[2]); planes[1] = Plane(points[4], points[5], points[6]); planes[2] = Plane(points[0], points[1], points[4]); planes[3] = Plane(points[2], points[3], points[6]); planes[4] = Plane(points[0], points[2], points[4]); planes[5] = Plane(points[1], points[3], points[5]); } Plane planes[6]; }; The points are passed in order where (the inverse of) each bit of the index of each point indicates whether it's the left, top, and back of the frustum, respectively. As such, I just picked any three points where they all shared one bit in common to define the planes. My intersection test is as follows (based on this): bool Intersects(Math::AABB lhs, const Frustum& rhs) const { for(int i = 0; i < 6; i++) { Math::Vector pvertex = lhs.TopRightFurthest; Math::Vector nvertex = lhs.BottomLeftClosest; if (rhs.planes[i].normal.x <= -0.0f) { std::swap(pvertex.x, nvertex.x); } if (rhs.planes[i].normal.y <= -0.0f) { std::swap(pvertex.y, nvertex.y); } if (rhs.planes[i].normal.z <= -0.0f) { std::swap(pvertex.z, nvertex.z); } if (Math::Dot(rhs.planes[i].r0, nvertex) < 0.0f) { return false; } } return true; } Also of note is that because I'm using a left-handed co-ordinate system, I wrote my Cross function to return the negative of the formula given on Wikipedia. Any suggestions as to where I've made a mistake?
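
    For reference, a sketch of the signed-distance form of the p/n-vertex test, with planes stored as above (point r0 plus normal): the distance of a point from the plane is dot(normal, point - r0), and the test also relies on every normal pointing consistently (e.g. into the frustum), which has to be guaranteed when the planes are built from the corner points.

      struct Vec3f { float x, y, z; };
      static Vec3f sub(Vec3f a, Vec3f b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
      static float dot(Vec3f a, Vec3f b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

      // Positive when 'point' lies on the side the normal faces.
      float signedDistance(Vec3f r0, Vec3f normal, Vec3f point)
      {
          return dot(normal, sub(point, r0));
      }

      // AABB vs plane: if even the most-positive vertex is behind, cull.
      bool outside(Vec3f r0, Vec3f normal, Vec3f pvertex)
      {
          return signedDistance(r0, normal, pvertex) < 0.0f;
      }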

    Read the article

  • Depth is disabled - how do I turn it back on?

    - by marc wellman
    In XNA 3.1, is there any way other than GraphicsDevice.RenderState.DepthBufferEnable = false; for depth to end up disabled in 3D worlds using DirectX models? The reason for my question: I have quite a huge program which presents a 3D world with a couple of 3D DirectX models inside. Depth was never an issue and always worked fine, but since a few days ago, after some modifications, my models are all depth-translucent, i.e. depth buffering and/or culling seems to be disabled. In my whole source code I never touch any of the options related to depth or culling, which means I never turn these settings on explicitly, nor turn them off somewhere. So I am searching for some other statement, maybe related to the GraphicsDevice, that implicitly turns depth off - but I can't find it. (Sorry that I don't post any source code, but I have too much of it and I simply don't know where to search.) UPDATE: These are a couple of simple objects seen with correct depth. These are the same objects in their current state.

    Read the article
