Search Results

Search found 32277 results on 1292 pages for 'module development'.

  • How do you set up PhysFS for use in a game?

    - by ThePlan
    After my recent question on GD I was advised to use PhysFS to pack all my game data into one file. So I did, and the decision wasn't easy, because none of the libraries suggested in the answers came with a single good tutorial; in fact, PhysFS is the most poorly documented library I've ever seen. After attempting to set up PhysFS in my game I realized it's not as simple as adding the headers to the project; it appears to be something much more involved. In fact, on my first attempt to install PhysFS the compiler produced so many errors that it stopped at its limit of 50. So basically what I'm asking is: how can I set up PhysFS for my game? I'm using the Code::Blocks IDE on Windows XP SP3.

  • Camera doesn't move

    - by hugo
    Here is my code, as my subject indicates i have implemented a camera but I couldn't make it move. #define PI_OVER_180 0.0174532925f #define GL_CLAMP_TO_EDGE 0x812F #include "metinalifeyyaz.h" #include <GL/glu.h> #include <GL/glut.h> #include <QTimer> #include <cmath> #include <QKeyEvent> #include <QWidget> #include <QDebug> metinalifeyyaz::metinalifeyyaz(QWidget *parent) : QGLWidget(parent) { this->setFocusPolicy(Qt:: StrongFocus); time = QTime::currentTime(); timer = new QTimer(this); timer->setSingleShot(true); connect(timer, SIGNAL(timeout()), this, SLOT(updateGL())); xpos = yrot = zpos = 0; walkbias = walkbiasangle = lookupdown = 0.0f; keyUp = keyDown = keyLeft = keyRight = keyPageUp = keyPageDown = false; } void metinalifeyyaz::drawBall() { //glTranslatef(6,0,4); glutSolidSphere(0.10005,300,30); } metinalifeyyaz:: ~metinalifeyyaz(){ glDeleteTextures(1,texture); } void metinalifeyyaz::initializeGL(){ glShadeModel(GL_SMOOTH); glClearColor(1.0,1.0,1.0,0.5); glClearDepth(1.0f); glEnable(GL_DEPTH_TEST); glEnable(GL_TEXTURE_2D); glDepthFunc(GL_LEQUAL); glClearColor(1.0,1.0,1.0,1.0); glShadeModel(GL_SMOOTH); GLfloat mat_specular[]={1.0,1.0,1.0,1.0}; GLfloat mat_shininess []={30.0}; GLfloat light_position[]={1.0,1.0,1.0}; glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular); glMaterialfv(GL_FRONT,GL_SHININESS,mat_shininess); glLightfv(GL_LIGHT0, GL_POSITION, light_position); glEnable(GL_LIGHT0); glEnable(GL_LIGHTING); QImage img1 = convertToGLFormat(QImage(":/new/prefix1/halisaha2.bmp")); QImage img2 = convertToGLFormat(QImage(":/new/prefix1/white.bmp")); glGenTextures(2,texture); glBindTexture(GL_TEXTURE_2D, texture[0]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img1.width(), img1.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img1.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glBindTexture(GL_TEXTURE_2D, texture[1]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img2.width(), img2.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img2.bits()); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really nice perspective calculations } void metinalifeyyaz::resizeGL(int w, int h){ if(h==0) h=1; glViewport(0,0,w,h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0f, static_cast<GLfloat>(w)/h,0.1f,100.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); } void metinalifeyyaz::paintGL(){ movePlayer(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); GLfloat xtrans = -xpos; GLfloat ytrans = -walkbias - 0.50f; GLfloat ztrans = -zpos; GLfloat sceneroty = 360.0f - yrot; glLoadIdentity(); glRotatef(lookupdown, 1.0f, 0.0f, 0.0f); glRotatef(sceneroty, 0.0f, 1.0f, 0.0f); glTranslatef(xtrans, ytrans+50, ztrans-130); glLoadIdentity(); glTranslatef(1.0f,0.0f,-18.0f); glRotatef(45,1,0,0); drawScene(); int delay = time.msecsTo(QTime::currentTime()); if (delay == 0) delay = 1; time = QTime::currentTime(); timer->start(qMax(0,10 - delay)); } void metinalifeyyaz::movePlayer() { if (keyUp) { xpos -= sin(yrot * PI_OVER_180) * 0.5f; zpos -= cos(yrot * PI_OVER_180) * 0.5f; if (walkbiasangle >= 360.0f) walkbiasangle = 
0.0f; else walkbiasangle += 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } else if (keyDown) { xpos += sin(yrot * PI_OVER_180)*0.5f; zpos += cos(yrot * PI_OVER_180)*0.5f ; if (walkbiasangle <= 7.0f) walkbiasangle = 360.0f; else walkbiasangle -= 7.0f; walkbias = sin(walkbiasangle * PI_OVER_180) / 10.0f; } if (keyLeft) yrot += 0.5f; else if (keyRight) yrot -= 0.5f; if (keyPageUp) lookupdown -= 0.5; else if (keyPageDown) lookupdown += 0.5; } void metinalifeyyaz::keyPressEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_Escape: close(); break; case Qt::Key_F1: setWindowState(windowState() ^ Qt::WindowFullScreen); break; default: QGLWidget::keyPressEvent(event); case Qt::Key_PageUp: keyPageUp = true; break; case Qt::Key_PageDown: keyPageDown = true; break; case Qt::Key_Left: keyLeft = true; break; case Qt::Key_Right: keyRight = true; break; case Qt::Key_Up: keyUp = true; break; case Qt::Key_Down: keyDown = true; break; } } void metinalifeyyaz::changeEvent(QEvent *event) { switch (event->type()) { case QEvent::WindowStateChange: if (windowState() == Qt::WindowFullScreen) setCursor(Qt::BlankCursor); else unsetCursor(); break; default: break; } } void metinalifeyyaz::keyReleaseEvent(QKeyEvent *event) { switch (event->key()) { case Qt::Key_PageUp: keyPageUp = false; break; case Qt::Key_PageDown: keyPageDown = false; break; case Qt::Key_Left: keyLeft = false; break; case Qt::Key_Right: keyRight = false; break; case Qt::Key_Up: keyUp = false; break; case Qt::Key_Down: keyDown = false; break; default: QGLWidget::keyReleaseEvent(event); } } void metinalifeyyaz::drawScene(){ glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,1.0f); // glColor3f(0,0,1); //back glVertex3f(-6,0,-4); glVertex3f(-6,-0.5,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,0.0f,-1.0f); //front glVertex3f(6,0,4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,0,4); glEnd(); glBegin(GL_QUADS); glNormal3f(-1.0f,0.0f,0.0f); // glColor3f(0,0,1); //left glVertex3f(-6,0,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); // glColor3f(0,0,1); //right glVertex3f(6,0,-4); glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(6,0,4); glEnd(); glBindTexture(GL_TEXTURE_2D, texture[0]); glBegin(GL_QUADS); glNormal3f(0.0f,1.0f,0.0f);//top glTexCoord2f(1.0f,0.0f); glVertex3f(6,0,-4); glTexCoord2f(1.0f,1.0f); glVertex3f(6,0,4); glTexCoord2f(0.0f,1.0f); glVertex3f(-6,0,4); glTexCoord2f(0.0f,0.0f); glVertex3f(-6,0,-4); glEnd(); glBegin(GL_QUADS); glNormal3f(0.0f,-1.0f,0.0f); //glColor3f(0,0,1); //bottom glVertex3f(6,-0.5,-4); glVertex3f(6,-0.5,4); glVertex3f(-6,-0.5,4); glVertex3f(-6,-0.5,-4); glEnd(); // glPushMatrix(); glBindTexture(GL_TEXTURE_2D, texture[1]); glBegin(GL_QUADS); glNormal3f(1.0f,0.0f,0.0f); glTexCoord2f(1.0f,0.0f); //right far goal post front face glVertex3f(5,0.5,-0.95); glTexCoord2f(1.0f,1.0f); glVertex3f(5,0,-0.95); glTexCoord2f(0.0f,1.0f); glVertex3f(5,0,-1); glTexCoord2f(0.0f,0.0f); glVertex3f(5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(5,0.5,-1); glVertex3f(5,0,-1); glVertex3f(5.05,0,-1); glVertex3f(5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(5.05,0.5,-0.95); glVertex3f(5.05,0,-0.95); glVertex3f(5,0,-0.95); glVertex3f(5, 0.5, -0.95); glColor3f(1,1,1); //right near 
goal post front face glVertex3f(5,0.5,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0,1); glVertex3f(5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(5,0.5,1); glVertex3f(5,0,1); glVertex3f(5.05,0,1); glVertex3f(5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(5.05,0.5,0.95); glVertex3f(5.05,0,0.95); glVertex3f(5,0,0.95); glVertex3f(5,0.5, 0.95); glColor3f(1,1,1); //right crossbar front face glVertex3f(5,0.55,-1); glVertex3f(5,0.55,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5.05,0.5,1); glVertex3f(5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(5.05,0.5,-1); glVertex3f(5.05,0.5,1); glVertex3f(5,0.5,1); glVertex3f(5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(5.05,0.55,-1); glVertex3f(5.05,0.55,1); glVertex3f(5,0.55,1); glVertex3f(5,0.55,-1); glColor3f(1,1,1); //left far goal post front face glVertex3f(-5,0.5,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5,0,-1); glVertex3f(-5, 0.5, -1); glColor3f(1,1,1); //right far goal post back face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post left face glVertex3f(-5,0.5,-1); glVertex3f(-5,0,-1); glVertex3f(-5.05,0,-1); glVertex3f(-5.05, 0.5, -1); glColor3f(1,1,1); //right far goal post right face glVertex3f(-5.05,0.5,-0.95); glVertex3f(-5.05,0,-0.95); glVertex3f(-5,0,-0.95); glVertex3f(-5, 0.5, -0.95); glColor3f(1,1,1); //left near goal post front face glVertex3f(-5,0.5,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0,1); glVertex3f(-5,0.5, 1); glColor3f(1,1,1); //right near goal post back face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post left face glVertex3f(-5,0.5,1); glVertex3f(-5,0,1); glVertex3f(-5.05,0,1); glVertex3f(-5.05,0.5, 1); glColor3f(1,1,1); //right near goal post right face glVertex3f(-5.05,0.5,0.95); glVertex3f(-5.05,0,0.95); glVertex3f(-5,0,0.95); glVertex3f(-5,0.5, 0.95); glColor3f(1,1,1); //left crossbar front face glVertex3f(-5,0.55,-1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar back face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5.05,0.5,1); glVertex3f(-5.05,0.5,-1); glColor3f(1,1,1); //right crossbar bottom face glVertex3f(-5.05,0.5,-1); glVertex3f(-5.05,0.5,1); glVertex3f(-5,0.5,1); glVertex3f(-5,0.5,-1); glColor3f(1,1,1); //right crossbar top face glVertex3f(-5.05,0.55,-1); glVertex3f(-5.05,0.55,1); glVertex3f(-5,0.55,1); glVertex3f(-5,0.55,-1); glEnd(); // glPopMatrix(); // glPushMatrix(); // glTranslatef(0,0,0); // glutSolidSphere(0.10005,500,30); // glPopMatrix(); }

  • Circle collision detection and Vector math: HELP?

    - by Griffin
    Hey, so I'm currently going through the Wildbunny blog to learn about collision detection, but I'm a bit confused about how the vectors he's talking about come into play. Quoting the blog:

      p = ||A-B|| - (r1+r2)

    "The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly it is not only just a vector that does this, it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little."

      N = (A-B) / ||A-B||
      P = N*p

    "Here we have calculated the normalised vector N between the two centres and the penetration vector P by multiplying our unit direction by the penetration distance."

    OK, so I understand that p is the distance by which the circles penetrate each other, but I don't get what exactly N and P are. It seems to me N is just the third point of the right triangle formed by points A and B (A-B), divided by the hypotenuse of that triangle, i.e. the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems to me like a movement that one of the circles would perform to get un-penetrated.
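
    For reference, here is a minimal C# sketch of the quoted formulas (using System.Numerics; the type and variable names are mine, and the guard for coincident centres is an addition that is not in the blog):

      using System.Numerics;

      static class CirclePenetration
      {
          // Returns true when the circles overlap; outputs the blog's p, N and P.
          public static bool Test(Vector2 a, float r1, Vector2 b, float r2,
                                  out float p, out Vector2 n, out Vector2 pVec)
          {
              Vector2 d = a - b;                          // A - B
              float dist = d.Length();                    // ||A - B||
              p = dist - (r1 + r2);                       // negative when penetrating
              n = dist > 0f ? d / dist : Vector2.UnitX;   // N = (A-B) / ||A-B||
              pVec = n * p;                               // P = N * p
              return p < 0f;
          }
      }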

  • How do I design a game framework for fast reaction to user input?

    - by Miro
    I've played some games at around 30 fps, and some of them had a noticeable input delay of roughly 0.1 s. I didn't know why at the time. Now that I'm designing my framework for a cross-platform game, I think I know why: they were probably preparing the next frame while rendering the previous one.

      RENDER 1  | RENDER 2  | RENDER 3  | RENDER 4
      PREPARE 2 | PREPARE 3 | PREPARE 4 | PREPARE 5

    I see the first frame while the second frame is being rendered and the third frame is being prepared. If I react to the first frame during that time, the result only shows up in the fourth frame, so it takes 3/FPS seconds for the result to appear. At 30 fps that would be 100 ms, which is quite bad. So I'm wondering: how should I design my framework so that it responds to user interaction quickly?
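
    The latency arithmetic above as a tiny helper, just to make the numbers concrete (a sketch; the pipeline depth of 3 comes from the RENDER/PREPARE diagram and is an assumption about that particular setup):

      static class InputLatency
      {
          // A pipeline that is 'depthFrames' deep shows the result of an input
          // roughly depthFrames / fps seconds later.
          public static double Seconds(int depthFrames, double fps) => depthFrames / fps;
      }

      // InputLatency.Seconds(3, 30.0) == 0.1, i.e. the 100 ms mentioned above.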

  • Target tracking with a small delay (actionscript 3.0)

    - by John Dodson
    I'm having trouble thinking of a good method to track my character with an enemy attack. Of course, I don't want the attack to track my character's current position; I want it to track where the character was about 1 second earlier (so you can move around, make the attack miss and have it loop around you, that sort of thing). The general structure of my game uses a timer to update all of my events: I have a timer going off every 25 milliseconds that updates everything, including my player's position and the enemy's position. Right now I just have the enemy attack directly targeting my character, which works fine except that it's impossible to escape =p. Let me know if I didn't supply enough details. My approach was basically going to be: get my character's position from about 1 second ago, then have the enemy target that position. The only problem is I can't think of a good way to get my character's position from previous times. Thanks for the help!
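
    One way to realize the approach described above is a ring buffer of sampled positions: record the player's position on every 25 ms tick and read back the oldest entry, which with 40 samples is roughly one second old. A minimal sketch in C# (the question is ActionScript 3, but the structure is the same; names are illustrative):

      using System.Numerics;

      class PositionHistory
      {
          private readonly Vector2[] samples;
          private int head;

          public PositionHistory(int capacity = 40)   // 40 samples * 25 ms = 1 second
          {
              samples = new Vector2[capacity];
          }

          // Call once per 25 ms update with the player's current position.
          public void Record(Vector2 position)
          {
              samples[head] = position;
              head = (head + 1) % samples.Length;
          }

          // Oldest stored sample: roughly where the player was one second ago.
          // (Until the buffer has filled once, this returns the default value.)
          public Vector2 DelayedPosition() => samples[head];
      }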

  • Import FBX with multiple meshes into UDK

    - by Tom
    I used this script to generate a few buildings that I was hoping to import into UDK. Each building is made of about 1000 separate objects. When I export a building as FBX and import the file into UDK, it breaks the building up into its individual objects again, so I was wondering how to avoid this: is there a tool to combine all of the objects into one mesh automatically before exporting, or can I prevent UDK from splitting them up on import?

  • Avoid double compression of resources

    - by user1095108
    I am using .pngs for my textures and a virtual file system in a .zip file for my game project, which means my textures are compressed and decompressed twice. What are the solutions to this double-compression problem? One solution I've heard of is to use .tgas for textures, but it's been ages since I heard that. Another solution is to implement decompression on the GPU and, since that is fast, forget about the overhead.

  • Any interesting thesis topic?

    - by revers
    Hi, I study Computer Science at the Technical University of Lodz (in Poland), specializing in Computer Game and Simulation Technology. I'm going to defend my BSc thesis next year and I was wondering what topic I could choose, but nothing really interesting comes to mind. Maybe you could help me and suggest some subjects related to graphics, game or simulation programming (or maybe something else that is interesting enough :) )? I would be very grateful for any suggestion!

  • Adding 'swerve' to a direction

    - by Skoder
    Hey. I'm not much of a maths expert, so this is probably quite straightforward. I was playing a Flash soccer game where you take free kicks. You provide Power, Swerve and Direction. I'm reading up on vectors and such so I can use the direction and power information to shoot the ball with the correct velocity. What I don't understand is how the 'Swerve' information is used. What formula connects the Swerve information with the Direction and Power? (This is all in 2D.) Thanks for any advice.

  • Sorting objects before rendering

    - by dreta
    I'm trying to implement a scene graph, and in all the articles I've come across there is talk about object sorting. You'd sort your objects by "material", for example. Until I sat down and started implementing it, I kind of took this for granted, because it made sense. But now I'm wondering: what does sorting actually change? In my engine, I have a manager for UBOs; I use those to store data that is shared between programs, which at the moment only involves time, camera and projection matrices, and lights (I'm not worrying about managing which lights affect which objects at the moment). Now, for each model I have to change the model-to-world matrix uniform, and no sorting is going to change that. So is the jump from changing this matrix to also setting a material for each object really that bad? I vaguely remember reading somewhere that each time you change something in the pipeline it has to get flushed, and that can cause performance issues. But since I'm setting up a model-to-world matrix for each draw call anyway, what sense does it make to ever be concerned about this? By the way, is there any information on whether changing a uniform is more (or less) expensive than calling glBufferSubData?
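
    A minimal C# sketch of the sort-by-material idea being discussed (the question's context is OpenGL; bindMaterial, setModelToWorld and drawMesh are hypothetical placeholders for whatever the engine actually does):

      using System.Collections.Generic;

      class RenderItem
      {
          public int MaterialId;        // key used to group state changes
          public int MeshId;
          public float[] ModelToWorld;  // per-object uniform, set for every draw anyway
      }

      static class RenderQueue
      {
          // Sorting by material means the program/texture/material state is bound
          // once per group instead of once per object; the per-object matrix is
          // still uploaded for every draw call, exactly as the question notes.
          public static void Draw(List<RenderItem> items)
          {
              items.Sort((a, b) => a.MaterialId.CompareTo(b.MaterialId));

              int boundMaterial = -1;
              foreach (var item in items)
              {
                  if (item.MaterialId != boundMaterial)
                  {
                      // bindMaterial(item.MaterialId);   // hypothetical: program, textures, material uniforms
                      boundMaterial = item.MaterialId;
                  }
                  // setModelToWorld(item.ModelToWorld);  // hypothetical: per-object uniform
                  // drawMesh(item.MeshId);               // hypothetical draw call
              }
          }
      }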

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs, and its extent. Simply transforming all occluded vertices to the depths defined by the 'hull' is fairly effective and has good performance, but it suffers from the problem of not preserving the features of the mesh and requires extensive culling to avoid false positives. I would like instead to generate from the depth deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need for complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set, however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but I do not know where to start. Can anyone suggest a suitable filter or algorithm for generating deformers? Or, to put it another way, for 'compressing' a depth map? (*Push, because it's fitting to a convex, 'bulgy' humanoid, so the transforms are likely to be 'spherical' from the point of view of the surface.)

  • Render 3d object to 2d surface (embedded system)

    - by Martin Berger
    I am working on an embedded system of sorts, and in my free time I would like to test its drawing capabilities. The system in question is an ARM Cortex-M3 microcontroller attached to an EasyMX Stellaris board, and I have a small 320x240 TFT screen :) Now, I have some free time each day and I want to create a rotating cube. Micro C PRO for ARM doesn't have 3D drawing capabilities, which means it must be done in software. From the book Introduction to 3D Game Programming with DirectX 10 I know the matrix algebra for transformations, but that is fine when you have DirectX to set up the camera. I guess I could make a 2D object rotate, but how would I go about a 3D one? Any ideas and examples are welcome, although I would prefer advice; I'd like to understand this.

  • Creating my own kill cam

    - by DalexL
    I plan on creating my own kill cam system for a sandbox tool set. After thinking about the mechanics of the kill cam itself, however, I'm quite lost. I'm trying to recreate the ones commonly seen in Call of Duty games that show, from the view of the killer, the actual killing scene.

    My thoughts:
    - I can't just start keeping things in memory when people kill others, because I wouldn't know when to start the 'recording process'. There is no way for me to accurately determine when somebody is 'about' to kill someone.
    - My only real idea so far is to have a complete duplicate of everything loaded off to the side, copying all the movement from the original world but with a 10 second delay. That way, all the kill cams would be 10 seconds long and the person's camera would just be moved to their killer in the second world.

    My questions: Is there already an accepted way to do this? Does anybody have any good ideas for something like this? Thanks if you can!
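
    A minimal C# sketch of the delayed-copy idea described above, expressed as a rolling buffer of scene snapshots rather than a full duplicate world: record the state of every entity each tick, drop anything older than 10 seconds, and on a kill play the buffered frames back from the killer's point of view. The types and snapshot fields are illustrative assumptions.

      using System.Collections.Generic;
      using System.Numerics;

      struct EntitySnapshot
      {
          public int EntityId;
          public Vector2 Position;
          public float Rotation;
      }

      class ReplayBuffer
      {
          private readonly Queue<(float time, List<EntitySnapshot> state)> frames = new();
          private const float WindowSeconds = 10f;      // length of the kill cam

          // Record the whole scene once per tick.
          public void Record(float now, List<EntitySnapshot> state)
          {
              frames.Enqueue((now, state));
              while (frames.Count > 0 && now - frames.Peek().time > WindowSeconds)
                  frames.Dequeue();                     // keep only the last 10 seconds
          }

          // On a kill, hand these frames to the kill cam and replay them
          // with the camera attached to the killer's snapshots.
          public IEnumerable<(float time, List<EntitySnapshot> state)> Frames => frames;
      }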

  • Architecture of an action multiplayer game from scratch

    - by lcf
    Not sure whether this is a good place to ask (do point me to a better one if it's not), but since what we're developing is a game, here goes. This is a "real-time" action multiplayer game. I have familiarized myself with concepts like lag compensation, view interpolation and input prediction, and pretty much everything I need for this. I have also prepared a set of prototypes to confirm that I understood everything correctly. My question is about the situation where the game engine must be rewound to the past to find out whether there was a "hit" (sometimes it may involve a whole 'recomputation' of the world from that moment in the past up to the present moment). I already have a piece of code that does it, but it's not as neat as I need it to be. The domain logic of the app (the physics of the game) must be separated from the presentation (render) and infrastructure tools (e.g. the remote server interaction specifics). How do I organize all this? :) Is there any worthy open-source implementation I can take a look at? What I'm thinking is something like this:

      Render / User Input -> Game Engine (the so-called service layer) -> Processing User Commands & Remote Server -> Domain (Physics)

    How would you add to this scheme the concept of "ticks" or "interactions", with the possibility to rewind and recalculate "the game"? Remember, I cannot change the Domain/Physics, only the Game Engine. Should I store an array of "world states"? Should they be just some representations of the world, optimized for this purpose somehow (how?), or should they be actual instances of the world (i.e. including behaviour and all that)? Has anybody had a similar experience? (I have never worked on a game before, if that matters.)
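
    A minimal C# sketch of the "array of world states" idea raised above: the game engine keeps one snapshot plus the inputs applied per tick, and on a late hit report it rewinds to that tick and re-runs the domain's step function up to the present. TWorldState, TInput and the step delegate are illustrative assumptions; the domain itself stays untouched, matching the constraint in the question.

      using System.Collections.Generic;

      class TickHistory<TWorldState, TInput>
      {
          private readonly Dictionary<int, TWorldState> states = new();
          private readonly Dictionary<int, List<TInput>> inputs = new();

          // Store the snapshot and the inputs that were applied during this tick.
          public void Store(int tick, TWorldState state, List<TInput> tickInputs)
          {
              states[tick] = state;
              inputs[tick] = tickInputs;
          }

          // Rewind to 'tick', then recompute up to 'currentTick' by replaying the
          // stored inputs through the domain's step function.
          public TWorldState RewindAndReplay(
              int tick, int currentTick,
              System.Func<TWorldState, List<TInput>, TWorldState> step)
          {
              TWorldState state = states[tick];
              for (int t = tick; t < currentTick; t++)
                  state = step(state, inputs[t]);
              return state;
          }
      }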

  • Forward motion car physics - gradual slowdown

    - by spartan2417
    I'm having trouble creating realistic car movement in XNA 4. Right now I have a car that goes forward and hits a terminal velocity, which is fine, but when I release the up key I need the car to slow down gradually and then come to a stop. I'm pretty sure this is easy code, but I can't seem to get it to work. The update code:

      if (Keyboard.GetState().IsKeyDown(Keys.Up))
      {
          double elapsedTime = gameTime.ElapsedGameTime.Milliseconds;
          CalcTotalForce();
          Acceleration = Vector2.Divide(CalcTotalForce(), MASS);
          Velocity = Vector2.Add(Velocity, Vector2.Multiply(Acceleration, (float)(elapsedTime)));
          Position = Vector2.Add(Position, Vector2.Multiply(Velocity, (float)(elapsedTime)));
      }

    The added functions:

      public Vector2 CalcTraction()
      {
          // Traction force = vector direction * engine force
          return Vector2.Multiply(forwardDirection, ENGINE_FORCE);
      }

      public Vector2 CalcDrag()
      {
          // Drag force = constdrag * velocity * speed
          return Vector2.Multiply(Vector2.Multiply(Velocity, DRAG_CONST), Velocity.Y);
      }

      public Vector2 CalcRoll()
      {
          // Roll force = const roll * velocity
          return Vector2.Multiply(Velocity, ROLL_CONST);
      }

      public Vector2 CalcTotalForce()
      {
          // Total force = traction + (-drag) + (-rolling)
          return Vector2.Add(CalcTraction(), Vector2.Add(-CalcDrag(), -CalcRoll()));
      }

    Does anyone have any ideas?

  • Why are my players drawn to the side of my viewport?

    - by Jetbuster
    Following this admittedly brilliant and clean 2D camera class, I have a camera on each player, it works for multiplayer, and I've divided the screen into two sections for split screen by giving each camera a viewport. However, in the game it looks like this; I'm not sure if that's their position relative to the screen or what. The relevant gameScreen code (makePlayers is set up so it could theoretically work for up to 4 players):

      private void makePlayers()
      {
          int rowCount = 1;
          if (NumberOfPlayers > 2)
              rowCount = 2;
          players = new Player[NumberOfPlayers];
          for (int i = 0; i < players.Length; i++)
          {
              int xSize = GameRef.Window.ClientBounds.Width / 2;
              int ySize = GameRef.Window.ClientBounds.Height / rowCount;
              int col = i % rowCount;
              int row = i / rowCount;
              int xPoint = 0 + xSize * row;
              int yPoint = 0 + ySize * col;
              Viewport viewport = new Viewport(xPoint, yPoint, xSize, ySize);
              Vector2 playerPosition = new Vector2(viewport.TitleSafeArea.X + viewport.TitleSafeArea.Width / 2,
                                                   viewport.TitleSafeArea.Y + viewport.TitleSafeArea.Height / 2);
              players[i] = new Player(playerPosition, playerSprites[i], GameRef, viewport);
          }
          //players[1].Keyboard = true;
      }

      public override void Draw(GameTime gameTime)
      {
          base.Draw(gameTime);
          foreach (Player player in players)
          {
              GraphicsDevice.Viewport = player.PlayerCamera.ViewPort;
              GameRef.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.PointClamp,
                                        null, null, null, player.PlayerCamera.Transform);
              map.Draw(GameRef.spriteBatch);
              // Draw the Player
              player.Draw(GameRef.spriteBatch);
              // Draw UI screen elements
              GraphicsDevice.Viewport = Viewport;
              ControlManager.Draw(GameRef.spriteBatch);
              GameRef.spriteBatch.End();
          }
      }

    The player's Initialize and Draw methods are like so:

      internal void Initialize()
      {
          this.score = 0;
          this.angle = (float)(Math.PI * 0 / 180); // Start sprite at its default rotation
          int width = utils.scaleInt(picture.Width, imageScale);
          int height = utils.scaleInt(picture.Height, imageScale);
          this.hitBox = new HitBox(new Vector2(centerPos.X - width / 2, centerPos.Y - height / 2),
                                   width, height, Color.Black, game.Window.ClientBounds);
          playerCamera.Initialize();
      }

      #region Methods
      public void Draw(SpriteBatch spriteBatch)
      {
          //Console.WriteLine("Hitbox: X({0}),Y({1})", hitBox.Points[0].X, hitBox.Points[0].Y);
          //Console.WriteLine("Image: X({0}),Y({1})", centerPos.X, centerPos.Y);
          Vector2 orgin = new Vector2(picture.Width / 2, picture.Height / 2);
          hitBox.Draw(spriteBatch);
          utils.DrawCrosshair(spriteBatch, Position, game.Window.ClientBounds, Color.Red);
          spriteBatch.Draw(picture, Position, null, Color.White, angle, orgin, imageScale, SpriteEffects.None, 0.1f);
      }

    As I said, I think I'm going to need to do something with the render position, but I'm not entirely sure what, or how to do it elegantly, to say the least.

  • Best way to blend colors in tile lighting? (XNA)

    - by Lemoncreme
    I have made a decent, recursive, fast, colored tile lighting system in my game. It does everything I need except one thing: different colors are not blended at all. Here is my color blend code:

      return (new Color(
          (byte)MathHelper.Clamp(color.R / factor, 0, 255),
          (byte)MathHelper.Clamp(color.G / factor, 0, 255),
          (byte)MathHelper.Clamp(color.B / factor, 0, 255)));

    As you can see, it does not take the color already in place into account. color is the color of the previous light, which is weakened by the above code by factor. If I wanted to blend using the color already in place, I would use the variable blend. Here is an example of a blend that I tried, using blend, that failed:

      return (new Color(
          (byte)MathHelper.Clamp(((color.R + blend.R) / 2) / factor, 0, 255),
          (byte)MathHelper.Clamp(((color.G + blend.G) / 2) / factor, 0, 255),
          (byte)MathHelper.Clamp(((color.B + blend.B) / 2) / factor, 0, 255)));

    This color blend produces inaccurate and strange results. I need a blend that is accurate, like the first example, but that blends the two colors together. What is the best way to do this?

  • Defining the track in a 2D racing game

    - by Ivan
    I am designing a top-down racing game using canvas (HTML5) which takes a lot of inspiration from Micro Machines. In MM, cars can move off the track, but they are reset/destroyed if they go too far. My maths knowledge isn't great, so I'm finding it hard to separate 3D/complex concepts from those which are directly relevant to my situation. For example, I have seen "splines" mentioned; is this something I should read up on, or is that overkill for a 2D game? Could I use a single path which defines the centre of the track and check a car's distance from this line? A second path might be required as a "racing line" for the AI. Any advice on methods/techniques/terms to read up on would be greatly appreciated.
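
    A minimal sketch of the centre-line idea from the question, in C# (though the game itself is canvas/JavaScript): the track centre is a list of points, and a car counts as off the track once its distance to the nearest centre-line segment exceeds half the track width. The names and the half-width parameter are illustrative.

      using System;
      using System.Numerics;

      static class Track
      {
          // Squared distance from point p to the segment ab.
          static float DistSqToSegment(Vector2 p, Vector2 a, Vector2 b)
          {
              Vector2 ab = b - a;
              float lenSq = ab.LengthSquared();
              if (lenSq < 1e-6f)
                  return (p - a).LengthSquared();         // degenerate segment
              float t = Math.Clamp(Vector2.Dot(p - a, ab) / lenSq, 0f, 1f);
              Vector2 closest = a + ab * t;
              return (p - closest).LengthSquared();
          }

          // True while the car is within halfWidth of the centre line.
          public static bool IsOnTrack(Vector2 carPos, Vector2[] centreLine, float halfWidth)
          {
              float best = float.MaxValue;
              for (int i = 0; i < centreLine.Length - 1; i++)
                  best = Math.Min(best, DistSqToSegment(carPos, centreLine[i], centreLine[i + 1]));
              return best <= halfWidth * halfWidth;
          }
      }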

  • AS3 Calculating Delta Time In Seconds

    - by user1133079
    Here is how I've been trying to implement delta time, based on various internet resources:

      var startTime:Number = getTimer();
      game.Update(deltaTime);
      deltaTime = Number(getTimer() - startTime) * 0.001;

    My issue with this is that it doesn't seem to give me accurate timing. The main update shows a frame time of 0.001, and when reinitializing the level it goes to 0.002. I'm using dt elsewhere for a timer and later on for time-based physics, so I would like it to work as expected. I must be missing something silly.
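
    For comparison, delta time is commonly measured as the time elapsed between the starts of two successive frames, rather than around a single Update call. A minimal sketch of that pattern in C# (the question is ActionScript 3, where getTimer() would play the role of the clock):

      using System.Diagnostics;

      class FrameClock
      {
          private readonly Stopwatch clock = Stopwatch.StartNew();
          private double lastSeconds;

          // Call once at the top of every frame; returns seconds since the previous frame.
          public float NextDelta()
          {
              double now = clock.Elapsed.TotalSeconds;
              float delta = (float)(now - lastSeconds);
              lastSeconds = now;
              return delta;
          }
      }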

  • Managing multiple references of the same game entity in different places using IDs

    - by vargonian
    I've seen great questions on similar topics, but none that addressed this particular method: Given that I have multiple collections of game entities in my [XNA Game Studio] game, with many entities belonging to multiple lists, I'm considering ways I could keep track of whenever an entity is destroyed and remove it from the lists it belongs to. A lot of potential methods seem sloppy/convoluted, but I'm reminded of a way I've seen before in which, instead of having multiple collections of game entities, you have collections of game entity IDs instead. These IDs map to game entities via a central "database" (perhaps just a hash table). So, whenever any bit of code wants to access a game entity's members, it first checks to see if it's even in the database still. If not, it can react accordingly. Is this a sound approach? It seems that it would eliminate many of the risks/hassles of storing multiple lists, with the tradeoff being the cost of the lookup every time you want to access an object.
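
    A minimal C# sketch of the ID-registry approach described above (the names are illustrative; the other collections hold plain int IDs and call TryGet before touching an entity, which is the "check the database first" step the question describes):

      using System.Collections.Generic;

      class EntityRegistry<TEntity>
      {
          private readonly Dictionary<int, TEntity> entities = new();
          private int nextId;

          public int Add(TEntity entity)
          {
              int id = nextId++;
              entities[id] = entity;
              return id;
          }

          // Destroying an entity only touches the central registry.
          public void Destroy(int id) => entities.Remove(id);

          // Callers hold IDs, not references; a destroyed entity simply fails the lookup.
          public bool TryGet(int id, out TEntity entity) => entities.TryGetValue(id, out entity);
      }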

  • Selling your iPhone games

    - by Artemix
    Hi. So, long story short: a few days ago I published an iPhone game. I think the game wasn't that bad, to be honest, and still I got only 10 sales at $0.99. Are there any publishers, sponsors, or distributors that make your game "visible" on the App Store, or is the only thing you need an amazing game and that's all? Somehow I think that even if you have an awesome game, if you don't do the "marketing magic" correctly you will not exist in the store. Now I'm making a second game, completely different, and I want to know how to do things right. If anyone knows something about this topic, let me know. Thanks in advance.

  • Ignore collisions with some objects in certain contexts

    - by Paul Manta
    I'm making a racing game with cars in Unity. The car has a boost/nitro power-up. While boosting, I don't want the car to be deflected when colliding with zombies, but I do want it to be deflected when colliding with walls. On the other hand, I don't want to ignore collisions with zombies entirely, because I still want to hit them on impact. How should I handle this? Basically, what I want is for the car not to rotate when colliding with certain objects.

  • Behaviour Trees with irregular updates

    - by Robominister
    I'm interested in behaviour trees that aren't iterated every game tick, but only every so often. (Edit: the tree could specify how many frames within the main game loop to wait before running its tick function again.) Every theoretical implementation of behaviour trees I have seen talks of the tree search being carried out every game update, which seems necessary, because a leaf node (e.g. a behaviour like 'return to base') needs to be constantly checked to see whether it is still running, has failed or has completed. Can anyone suggest how I might start implementing a tree that isn't run every tick, or point me in the direction of good material specific to this case (I am struggling to find anything)?

    My thoughts so far:
    - Action leaf nodes (when they start) must only push some kind of action object onto a list for the entity, rather than directly calling any code that makes the entity do something. The list of actions for the entity would be run every frame (update any that need to run, pop any that have completed from the list).
    - The return state from a given action must be fed back into the tree, so that when we run the tree iteration again (and reach the same action leaf node, meaning the tree has so far determined that we ought to still be trying this action) we know whether the action has completed, is still running, etc.
    - If my actual action code is running from an action list on the entity, then I may need to cancel previously running actions in the list; I am thinking I can just delete the entire stack of queued-up actions. I've seen the idea of ActionLists which block lower-priority actions when a higher-priority one is added, but that seems very close to behaviour-tree logic, and I don't want to duplicate behaviour.

    This leaves me with some questions:
    1) How would I feed the action return state back into the tree? It's obvious I need to store some information about the 'currently executing actions' on the entity and check that in the tree tick, but I can't imagine how.
    2) Does having a separate behaviour tree (for deciding behaviour) and action list (for carrying out the actual queued-up actions) sound like a reasonable approach?
    3) Is the approach of updating a behaviour tree irregularly actually used by anyone? It seems like a nice idea for budgeting AI search time when you have a lot of AI entities to process.

    (Edit) I am also thinking about storing a single instance of a given behaviour tree in memory and providing it by reference to any entity that uses it, so any information about which action was last selected for execution on an entity must be stored in a data context relative to the entity (which the tree can check). (I am probably answering my own questions as I go!)

    I hope I have expressed my questions adequately! Thanks in advance for any help :)
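
    A minimal C# sketch of the feedback idea in question 1 above, simplified to a single current action per entity rather than a full list (the types are illustrative): the tree's action leaf asks the entity's data context to run an action, and on the next tree tick, however many frames later, it reads back the stored status of that same action.

      enum ActionStatus { Running, Succeeded, Failed }

      class QueuedAction
      {
          public string Name;                        // e.g. "ReturnToBase"
          public ActionStatus Status = ActionStatus.Running;
          public virtual void Update(float dt) { }   // run every frame by the entity's action list
      }

      class EntityAiContext
      {
          public QueuedAction CurrentAction;         // what the tree last asked for

          // Called by an action leaf whenever the tree ticks (every N frames).
          public ActionStatus RequestAction(QueuedAction action)
          {
              if (CurrentAction != null && CurrentAction.Name == action.Name)
                  return CurrentAction.Status;       // same decision as last tick: report its stored result

              CurrentAction = action;                // new decision: replace the queued action
              return ActionStatus.Running;
          }
      }

    Because the shared tree instance stores nothing itself, all per-entity state lives in EntityAiContext, which matches the single-tree-instance idea in the edit above.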

  • How do I prevent my platformer's character from clipping on wall tiles?

    - by Jonathan Hobbs
    Currently, I have a platformer with tiles for terrain (graphics borrowed from Cave Story). The game is written from scratch using XNA, so I'm not using an existing engine or physics engine. The tile collisions are handled pretty much exactly as described in this answer (with simple SAT for rectangles and circles), and everything works fine, except when the player runs into a wall while falling or jumping. In that case, they'll catch on a tile and begin thinking they've hit a floor or ceiling that isn't actually there. The player is moving right and falling downwards. After movement, collisions are checked: first, it turns out the player character is colliding with the tile third from the floor, and is pushed upwards; second, he's found to be colliding with the tile beside him, and is pushed sideways. The end result is that the player character thinks he's on the ground and isn't falling, and 'catches' on the tile for as long as he's running into it. I could solve this by defining the tiles from top to bottom instead, which makes him fall smoothly, but then the inverse case happens and he'll hit a ceiling that isn't there when jumping upwards against the wall. How should I approach resolving this, so that the player character can just fall along the wall as it should?

  • Triangulating a partially triangulated mesh (2D)

    - by teodron
    Referring to the above exhibits, this is the scenario I am working with: starting with a planar graph (in my case, a 2D mesh) with a given triangulation, the graph nodes are labeled RED and BLACK based on a certain criterion. (A) A subgraph containing all the RED nodes (with edges only between directly connected neighbours) is formed (note: although the figure shows a tree forming, the subgraph may well contain loops). (B) Problem: I need to quickly build a triangulation around the subgraph (e.g. as shown in figure C), under the constraint that the already present edges must be kept in the final result. Question: Is there a fast way of achieving this given a partially triangulated mesh? Ideally, the complexity should be in the O(n) class. Some side remarks: it would be nice for the triangulation algorithm to take a certain vertex priority into account when adding edges (e.g. it should always try to build a "1-ring" structure around the most important nodes first; I can implement such a routine iteratively, but it's O(n^2)). It would also be nice to somehow reflect the "hop distance" when adding edges: add edges first between nodes that were "closer" to each other in the starting topology. Nevertheless, disregarding the remarks, is there an already known scenario similar to this one, where a triangulation is built upon a partially given set of triangles/edges?
