Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 545/1039 | < Previous Page | 541 542 543 544 545 546 547 548 549 550 551 552  | Next Page >

  • Instead of the specified Texture, black circles on a green background are getting rendered. Why?

    - by vinzBad
    I'm trying to render a Texture via OpenGL. But instead of the texture black circles on a green background are rendered. (They scale, depending what the rotation of the texture is) Example: The texture I'm trying to render is the following: This is the code I use to render the texture, it's located in my Sprite-class. public void Render() { Matrix4 matrix = Matrix4.CreateTranslation(-OriginX, -OriginY, 0) * Matrix4.CreateRotationZ(Rotation) * Matrix4.CreateTranslation(X, Y, 0); Vector2[] corners = { new Vector2(0,0), //top left new Vector2(Width ,0),//top right new Vector2(Width,Height),//bottom rigth new Vector2(0,Height)//bottom left }; //copy the corners to the uv coordinates Vector2[] uv = corners.ToArray<Vector2>(); //transform the coordinates for (int i = 0; i < 4; i++) corners[i] = new Vector2(Vector3.Transform(new Vector3(corners[i]), matrix)); //GL.Color3(TintColor); GL.BindTexture(TextureTarget.Texture2D, _ID); GL.Begin(BeginMode.Quads); { for (int i = 0; i < 4; i++) { GL.TexCoord2(uv[i]); GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth); } } GL.End(); if (EnableDebugDraw) { GL.Color3(Color.Violet); GL.PointSize(3); GL.Begin(BeginMode.Points); { for (int i = 0; i < 4; i++) GL.Vertex2(corners[i]); } GL.End(); GL.Color3(Color.Green); GL.Begin(BeginMode.Points); GL.Vertex2(X, Y); GL.End(); } } This is how I setup OpenGL. public static void SetupGL() { GL.Enable(EnableCap.AlphaTest); GL.AlphaFunc(AlphaFunction.Greater, 0.1f); GL.Enable(EnableCap.Texture2D); GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest); } With this function I load the texture: public static uint LoadTexture(string path) { uint id; GL.GenTextures(1, out id); GL.BindTexture(TextureTarget.Texture2D, id); Bitmap bitmap = new Bitmap(path); BitmapData data = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb); GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0); bitmap.UnlockBits(data); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear); return id; } And here I call Sprite.Render() protected override void OnRenderFrame(FrameEventArgs e) { GL.ClearColor(Color.MidnightBlue); GL.Clear(ClearBufferMask.ColorBufferBit); _sprite.Render(); SwapBuffers(); base.OnRenderFrame(e); } As I stole this code from the Textures-Example from OpenTK, I don't understand why this doesn't work.
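
    A hedged note (an assumption, not confirmed by the post): the uv array above is copied from corners, so it contains pixel-space values (0..Width, 0..Height) rather than texture coordinates in the 0..1 range, which makes the texture repeat many times across the quad and can produce exactly this kind of small repeated pattern. A minimal sketch of normalized coordinates, reusing the names from Render():

      // Texture coordinates are normalized: (0,0)-(1,1) covers the whole image exactly once.
      Vector2[] uv =
      {
          new Vector2(0f, 0f),   // top left
          new Vector2(1f, 0f),   // top right
          new Vector2(1f, 1f),   // bottom right
          new Vector2(0f, 1f)    // bottom left
      };
      // Bind the texture and emit the quad exactly as before, but pass these values to
      // GL.TexCoord2 instead of copies of the pixel-space corners.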

    Read the article

  • Two GUITextures that do not work together

    - by London2423
    I have two GUITextures that move a cube left and right. Strangely, they don't work together: if I activate only one, it works. To be more specific: if the left GUITexture is alone in the game, the cube moves left. If the right GUITexture is activated alone, the cube moves right. That all seemed fine, I thought, but if I have both of them, the cube moves only right and not left. Where is the mistake? Here is the code inside the cube GameObject for the right move: void OnMousedown () { transform.position += Vector3.right * Time.deltaTime; } For the left move: void OnMousedown () { transform.position += Vector3.left * Time.deltaTime; } And this is the left GUITexture code: //move the cube left Cube.GetComponent<Left> ().enabled = true; left.transform.position += Vector3.left * Time.deltaTime; This is the right GUITexture: //move the cube right Cube.GetComponent<Left> ().enabled = true; right.transform.position += Vector3.right * Time.deltaTime; What is the reason for this? I hope someone can help me.
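
    Two hedged observations: if the method really is spelled OnMousedown, Unity will never call it (the callback is OnMouseDown), though that may just be a typo in the post since each button works alone; and the right-button snippet above enables GetComponent<Left>(), which might be a copy-paste slip. A minimal sketch that instead polls both textures every frame, so neither button can mask the other (illustrative names, fields assigned in the Inspector):

      using UnityEngine;

      // Hypothetical controller: checks both legacy GUITextures every frame while the
      // mouse button is held, rather than relying on per-component OnMouseDown events.
      public class CubeSteering : MonoBehaviour
      {
          public GUITexture leftGui;    // assumed references, assigned in the Inspector
          public GUITexture rightGui;
          public Transform cube;
          public float speed = 1f;

          void Update()
          {
              if (Input.GetMouseButton(0))                        // true while the button is held
              {
                  if (leftGui.HitTest(Input.mousePosition))       // pointer over the left button
                      cube.position += Vector3.left * speed * Time.deltaTime;
                  else if (rightGui.HitTest(Input.mousePosition)) // pointer over the right button
                      cube.position += Vector3.right * speed * Time.deltaTime;
              }
          }
      }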

    Read the article

  • XNA Content.Load dependency

    - by Richard
    Quick question: the project I'm building for test purposes works fine, but I have dependencies flying around everywhere due to the XNA framework. In Update I have GameTime passed everywhere... this is okay. In Draw I have GameTime and SpriteBatch passed everywhere... this is okay. My issue is with the Content.Load textures/sounds/fonts. I have them as public variables, i.e. Texture1 = Content.load(of texture2d)("Texture1"). I'm passing a 'Game1' reference into the constructor of every new class being instantiated just to gain access to these variables. Am I missing an OOP trick that would prevent me from having to pass a reference to 'Game1' to every new class?
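
    One common way around this (a hedged sketch, not the only pattern): load everything once into a central asset class, or register shared objects with XNA's Game.Services, so classes no longer need a Game1 reference just to reach textures. Illustrative names below:

      using Microsoft.Xna.Framework.Content;
      using Microsoft.Xna.Framework.Graphics;

      // Hypothetical central store: filled once in Game1.LoadContent, readable from anywhere.
      public static class Assets
      {
          public static Texture2D Texture1 { get; private set; }
          public static SpriteFont DefaultFont { get; private set; }

          public static void Load(ContentManager content)
          {
              Texture1 = content.Load<Texture2D>("Texture1");
              DefaultFont = content.Load<SpriteFont>("DefaultFont");
          }
      }

      // In Game1.LoadContent():
      //     Assets.Load(Content);
      // Any class can then read Assets.Texture1 without holding a Game1 reference.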

    Read the article

  • Influence Maps for Pathfinding?

    - by james
    I'm taking the plunge and getting into game dev. It's been going well, but I've got stuck on a problem. I have a maze that is 100x100, with 0 or 1 indicating whether each cell is a path or a wall. Within the maze I have 300 or so enemies and a player. The outcome I'm looking for is that all the enemies work their way towards the player's position. Originally I did this using an A* pathfinding algorithm, but with 300 enemies it was taking forever to pathfind for each one individually. After some research I found that an influence map / collaborative diffusion would be the best way to go. But I'm having a really hard time working out how this is actually done. Firstly, how do you create an influence map? From what I understand, each of my walls will have a scent of 0, which makes them impassable. Then there is basically a radial effect from my player's position out to every other cell (so my player starts at 100 and, going outwards from that, every other cell gets a reduced value). Is that correct? If so, how would you do that (math magic)? My next problem is: if that is correct, how would my "enemies" avoid getting stuck if they have gone down the wrong way? Say my player is standing on the other side of a wall; if the enemy is just looking for larger numbers, won't it keep getting stuck? I'm doing this in JavaScript, so performance is key. Thanks for any help! EDIT: Or does anyone have a better solution? I've been reading about navmeshes, steering behaviours, pre-calculating all paths on load, etc.
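
    A hedged sketch of how such a map is usually built: a single breadth-first flood fill outward from the player (often called a distance map or Dijkstra map), computed once and then shared by every enemy. Shown in C# for clarity; the idea carries straight over to JavaScript. Names are illustrative.

      using System.Collections.Generic;

      static class InfluenceMap
      {
          // maze[x, y] == 1 means wall; the result holds each cell's step distance to the player.
          public static int[,] Build(int[,] maze, int playerX, int playerY)
          {
              int w = maze.GetLength(0), h = maze.GetLength(1);
              var distance = new int[w, h];
              for (int x = 0; x < w; x++)
                  for (int y = 0; y < h; y++)
                      distance[x, y] = int.MaxValue;            // "not reached yet"

              var queue = new Queue<(int x, int y)>();
              distance[playerX, playerY] = 0;                   // the player is the source of the "scent"
              queue.Enqueue((playerX, playerY));

              int[] dx = { 1, -1, 0, 0 };
              int[] dy = { 0, 0, 1, -1 };

              while (queue.Count > 0)
              {
                  var (cx, cy) = queue.Dequeue();
                  for (int i = 0; i < 4; i++)
                  {
                      int nx = cx + dx[i], ny = cy + dy[i];
                      if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;   // off the map
                      if (maze[nx, ny] == 1) continue;                        // wall: impassable
                      if (distance[nx, ny] != int.MaxValue) continue;         // already filled
                      distance[nx, ny] = distance[cx, cy] + 1;                // one step further away
                      queue.Enqueue((nx, ny));
                  }
              }
              return distance;
          }
      }
      // Each enemy then steps onto the neighbouring cell with the smallest value. Because the
      // values flow around walls, an enemy behind a wall follows the decreasing numbers around
      // it rather than pressing against it, and the map is built once per update for all 300
      // enemies instead of running a search per enemy.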

    Read the article

  • Why is my collision detection not accurate?

    - by optimisez
    After trying and trying, I still cannot understand why the character's legs sink into the wall, yet there is no clipping issue when I hit the wall from below. How should I fix it so that he stands still on top of the wall? The collideWithBox() function below shows that playerDest.Y = boxDest.Y - boxDest.height; should give the position where the character stands still on the wall. Theoretically, the clipping effect shouldn't happen, since hitting the box from below works fine with the equation playerDest.Y = boxDest.Y + boxDest.height;. void collideWithBox() { if ( spriteCollide(playerDest, boxDest) && keyArr[VK_UP]) //playerDest.Y += 50; playerDest.Y = boxDest.Y + boxDest.height; else if ( spriteCollide(playerDest, boxDest) && !keyArr[VK_UP]) playerDest.Y = boxDest.Y - boxDest.height; } void initPlayer() { // Create texture. hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &player); playerRect.left = playerRect.top = 0; playerRect.right = 29; playerRect.bottom = 36; playerDest.X = 0; playerDest.Y = 564; playerDest.length = playerRect.right - playerRect.left; playerDest.height = playerRect.bottom - playerRect.top; } void initBox() { hr = D3DXCreateTextureFromFileEx(d3dDevice, "brock.png", 330, 132, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &box); boxRect.left = 33; boxRect.top = 0; boxRect.right = 63; boxRect.bottom = 30; boxDest.X = boxDest.Y = 300; boxDest.length = boxRect.right - boxRect.left; boxDest.height = boxRect.bottom - boxRect.top; } bool spriteCollide(Entity player, Entity target) { float left1, left2; float right1, right2; float top1, top2; float bottom1, bottom2; left1 = player.X; left2 = target.X; right1 = player.X + player.length; right2 = target.X + target.length; top1 = player.Y; top2 = target.Y; bottom1 = player.Y + player.height; bottom2 = target.Y + target.height; if (bottom1 < top2) return false; if (top1 > bottom2) return false; if (right1 < left2) return false; if (left1 > right2) return false; return true; }
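
    Two hedged observations: the standing case subtracts boxDest.height (30), but putting the player's feet on top of the box requires subtracting the player's height (36), which matches the legs sinking in by a few pixels; and resolving based on keyArr[VK_UP] is fragile compared to resolving from where the player came from. A sketch of the latter in C# (Entity stands in for the question's type, assumed to be a reference type with public X, Y, length, height):

      // Called once spriteCollide(player, box) has already returned true. previousY is the
      // player's Y from the previous frame, recorded before movement was applied.
      static void ResolveVertical(Entity player, Entity box, float previousY)
      {
          float previousBottom = previousY + player.height;

          if (previousBottom <= box.Y)
              player.Y = box.Y - player.height;      // was above last frame: land on top of the box
          else if (previousY >= box.Y + box.height)
              player.Y = box.Y + box.height;         // was below last frame: stay underneath it
          // a fuller version would clamp X the same way when the approach was horizontal
      }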

    Read the article

  • How to move a rectangle properly?

    - by bodycountPP
    I recently started to learn OpenGL. Right now I finished the first chapter of the "OpenGL SuperBible". There were two examples. The first had the complete code and showed how to draw a simple triangle. The second example is supposed to show how to move a rectangle using SpecialKeys. The only code provided for this example was the SpecialKeys method. I still tried to implement it but I had two problems. In the previous example I declared and instaciated vVerts in the SetupRC() method. Now as it is also used in the SpecialKeys() method, I moved the declaration and instantiation to the top of the code. Is this proper c++ practice? I copied the part where vertex positions are recalculated from the book, but I had to pick the vertices for the rectangle on my own. So now every time I press a key for the first time the rectangle's upper left vertex is moved to (-0,5:-0.5). This ok because of GLfloat blockX = vVerts[0]; //Upper left X GLfloat blockY = vVerts[7]; // Upper left Y But I also think that this is the reason why my rectangle is shifted in the beginning. After the first time a key was pressed everything works just fine. Here is my complete code I hope you can help me on those two points. GLBatch squareBatch; GLShaderManager shaderManager; //Load up a triangle GLfloat vVerts[] = {-0.5f,0.5f,0.0f, 0.5f,0.5f,0.0f, 0.5f,-0.5f,0.0f, -0.5f,-0.5f,0.0f}; //Window has changed size, or has just been created. //We need to use the window dimensions to set the viewport and the projection matrix. void ChangeSize(int w, int h) { glViewport(0,0,w,h); } //Called to draw the scene. void RenderScene(void) { //Clear the window with the current clearing color glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT); GLfloat vRed[] = {1.0f,0.0f,0.0f,1.0f}; shaderManager.UseStockShader(GLT_SHADER_IDENTITY,vRed); squareBatch.Draw(); //perform the buffer swap to display the back buffer glutSwapBuffers(); } //This function does any needed initialization on the rendering context. //This is the first opportunity to do any OpenGL related Tasks. void SetupRC() { //Blue Background glClearColor(0.0f,0.0f,1.0f,1.0f); shaderManager.InitializeStockShaders(); squareBatch.Begin(GL_QUADS,4); squareBatch.CopyVertexData3f(vVerts); squareBatch.End(); } //Respond to arrow keys by moving the camera frame of reference void SpecialKeys(int key,int x,int y) { GLfloat stepSize = 0.025f; GLfloat blockSize = 0.5f; GLfloat blockX = vVerts[0]; //Upper left X GLfloat blockY = vVerts[7]; // Upper left Y if(key == GLUT_KEY_UP) { blockY += stepSize; } if(key == GLUT_KEY_DOWN){blockY -= stepSize;} if(key == GLUT_KEY_LEFT){blockX -= stepSize;} if(key == GLUT_KEY_RIGHT){blockX += stepSize;} //Recalculate vertex positions vVerts[0] = blockX; vVerts[1] = blockY - blockSize*2; vVerts[3] = blockX + blockSize * 2; vVerts[4] = blockY - blockSize *2; vVerts[6] = blockX+blockSize*2; vVerts[7] = blockY; vVerts[9] = blockX; vVerts[10] = blockY; squareBatch.CopyVertexData3f(vVerts); glutPostRedisplay(); } //Main entry point for GLUT based programs int main(int argc, char** argv) { //Sets the working directory. Not really needed gltSetWorkingDirectory(argv[0]); //Passes along the command-line parameters and initializes the GLUT library. glutInit(&argc,argv); //Tells the GLUT library what type of display mode to use, when creating the window. 
//Double buffered window, RGBA-Color mode,depth-buffer as part of our display, stencil buffer also available glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA|GLUT_DEPTH|GLUT_STENCIL); //Window size glutInitWindowSize(800,600); glutCreateWindow("MoveRect"); glutReshapeFunc(ChangeSize); glutDisplayFunc(RenderScene); glutSpecialFunc(SpecialKeys); //initialize GLEW library GLenum err = glewInit(); //Check that nothing goes wrong with the driver initialization before we try and do any rendering. if(GLEW_OK != err) { fprintf(stderr,"Glew Error: %s\n",glewGetErrorString); return 1; } SetupRC(); glutMainLoop(); return 0; }
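
    A hedged note on the initial jump (an assumption based on the vVerts array shown above): with that vertex order, top left, top right, bottom right, bottom left, index 7 is the bottom-right vertex's Y, so reading the "upper left Y" from vVerts[7] starts blockY at -0.5; the SuperBible sample orders its vertices differently, which is why its index 7 is a top corner. Index reads and writes consistent with the order actually used (plain float, so the snippet reads the same in the C++ above):

      // vVerts[0..2] = top left, [3..5] = top right, [6..8] = bottom right, [9..11] = bottom left
      float blockX = vVerts[0];    // upper-left X
      float blockY = vVerts[1];    // upper-left Y (vVerts[7] is the bottom-right Y here)

      // ... adjust blockX / blockY from the arrow keys as before, then write back:
      vVerts[0] = blockX;                  vVerts[1]  = blockY;                   // top left
      vVerts[3] = blockX + blockSize * 2;  vVerts[4]  = blockY;                   // top right
      vVerts[6] = blockX + blockSize * 2;  vVerts[7]  = blockY - blockSize * 2;   // bottom right
      vVerts[9] = blockX;                  vVerts[10] = blockY - blockSize * 2;   // bottom left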

    Read the article

  • Why do we move the world instead of the camera?

    - by sharethis
    I heard that in an OpenGL game what we do to let the player move is not to move the camera but to move the whole world around. For example here is an extract of this tutorial: http://open.gl/transformations In real life you're used to moving the camera to alter the view of a certain scene, in OpenGL it's the other way around. The camera in OpenGL cannot move and is defined to be located at (0,0,0) facing the negative Z direction. That means that instead of moving and rotating the camera, the world is moved and rotated around the camera to construct the appropriate view. Why do we do that?
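
    A small hedged sketch of why the two descriptions are the same thing in code: the camera's own transform is applied to the scene as its inverse (the view matrix), so "moving the world the opposite way" and "moving the camera" produce identical results. OpenTK's Matrix4 is used for illustration; the variable names are assumptions.

      using OpenTK;

      static class CameraMath
      {
          // OpenGL has no camera object: the camera's transform in the world is applied to
          // everything else as its inverse, so the world moves while the "camera" stays at
          // the origin facing -Z.
          public static Matrix4 BuildView(Vector3 cameraPos, float cameraYaw)
          {
              Matrix4 cameraTransform = Matrix4.CreateRotationY(cameraYaw) *
                                        Matrix4.CreateTranslation(cameraPos);
              return Matrix4.Invert(cameraTransform);   // the view matrix
          }
          // Matrix4.LookAt(eye, target, up) builds the same kind of inverse directly.
      }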

    Read the article

  • Java Dragging an object from one area to another [on hold]

    - by user50369
    Hello I have a game where you drag bits of food around the screen. I want to be able to click on an ingredient and drag it to another part of the screen where I release the mouse. I am new to java so I do not really know how to do this please help me Here is me code. This is the class with the mouse listeners in it: public void mousePressed(MouseEvent e) { if (e.getButton() == MouseEvent.BUTTON1) { Comp.ml = true; // placing if (manager.title == true) { if (title.r.contains(Comp.mx, Comp.my)) { title.overview = true; } else if (title.r1.contains(Comp.mx, Comp.my)) { title.options = true; } else if (title.r2.contains(Comp.mx, Comp.my)) { System.exit(0); } } if (manager.option == true) { optionsMouse(e); } mouseinventory(e); } else if (e.getButton() == MouseEvent.BUTTON3) { Comp.mr = true; } } private void mouseinventory(MouseEvent e) { if (e.getButton() == MouseEvent.BUTTON1) { } else if (e.getButton() == MouseEvent.BUTTON1) { } } @Override public void mouseReleased(MouseEvent e) { if (e.getButton() == MouseEvent.BUTTON1) { Comp.ml = false; } else if (e.getButton() == MouseEvent.BUTTON3) { Comp.mr = false; } } @Override public void mouseDragged(MouseEvent e) { for(int i = 0; i < overview.im.ing.toArray().length; i ++){ if(overview.im.ing.get(i).r.contains(Comp.mx,Comp.my)){ overview.im.ing.get(i).newx = Comp.mx; overview.im.ing.get(i).newy = Comp.my; overview.im.ing.get(i).dragged = true; }else{ overview.im.ing.get(i).dragged = false; } } } @Override public void mouseMoved(MouseEvent e) { Comp.mx = e.getX(); Comp.my = e.getY(); // System.out.println("" + Comp.my); } This is the class called ingredient public abstract class Ingrediant { public int x,y,id,lastx,lasty,newx,newy; public boolean removed = false,dragged = false; public int width; public int height; public Rectangle r = new Rectangle(x,y,width,height); public Ingrediant(){ r = new Rectangle(x,y,width,height); } public abstract void tick(); public abstract void render(Graphics g); } and this is a class which extends ingredient called hagleave public class HagLeave extends Ingrediant { private Image img; public HagLeave(int x, int y, int id) { this.x = x; this.y = y; this.newx = x; this.newy = y; this.id = id; width = 75; height = 75; r = new Rectangle(x,y,width,height); } public void tick() { r = new Rectangle(x,y,width,height); if(!dragged){ x = newx; y = newy; } } public void render(Graphics g) { ImageIcon i2 = new ImageIcon("res/ingrediants/hagleave.png"); img = i2.getImage(); g.drawImage(img, x, y, null); g.setColor(Color.red); g.drawRect(r.x, r.y, r.width, r.height); } } The arraylist is in a class called ingrediantManager: public class IngrediantsManager { public ArrayList<Ingrediant> ing = new ArrayList<Ingrediant>(); public IngrediantsManager(){ ing.add(new HagLeave(100,200,1)); ing.add(new PigHair(70,300,2)); ing.add(new GiantsToe(100,400,3)); } public void tick(){ for(int i = 0; i < ing.toArray().length; i ++){ ing.get(i).tick(); if(ing.get(i).removed){ ing.remove(i); i--; } } } public void render(Graphics g){ for(int i = 0; i < ing.toArray().length; i ++){ ing.get(i).render(g); } } }
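
    A hedged, framework-neutral sketch of the usual drag pattern (shown in C#; the same three steps map onto mousePressed / mouseDragged / mouseReleased above): on press, remember which item was hit and the grab offset; while dragging, move that item with the mouse; on release, drop it. Names are illustrative.

      using System.Collections.Generic;

      // Stand-in for the question's Ingrediant: just a position and size.
      public class Ingredient { public float X, Y, Width, Height; }

      public class DragController
      {
          private Ingredient dragged;               // item currently being dragged, or null
          private float grabOffsetX, grabOffsetY;   // where inside the item it was grabbed

          public void OnPress(float mx, float my, List<Ingredient> items)
          {
              foreach (var item in items)
                  if (mx >= item.X && mx <= item.X + item.Width &&
                      my >= item.Y && my <= item.Y + item.Height)
                  {
                      dragged = item;
                      grabOffsetX = mx - item.X;
                      grabOffsetY = my - item.Y;
                      return;
                  }
          }

          public void OnDrag(float mx, float my)
          {
              if (dragged == null) return;
              dragged.X = mx - grabOffsetX;         // follow the mouse, keeping the grab point
              dragged.Y = my - grabOffsetY;
          }

          public void OnRelease() { dragged = null; }   // drop wherever the mouse is now
      }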

    Read the article

  • HTML5 platformer collision detection problem

    - by fnx
    I'm working on a 2D platformer game, and I'm having a lot of trouble with collision detection. I've looked through some tutorials and questions asked here and on Stack Overflow, but I guess I'm just too dumb to understand what's wrong with my code. I wanted to make simple bounding-box collisions and the ability to determine on which side of the box the collision happens, but no matter what I do, I always get some weird glitches, like the player getting stuck on the wall or the jumping being jittery. You can test the game here: Platform engine test. Arrow keys move and z = run, x = jump, c = shoot. Try to jump into the first pit and slide on the wall. Here's the collision detection code: function checkCollisions(a, b) { if ((a.x > b.x + b.width) || (a.x + a.width < b.x) || (a.y > b.y + b.height) || (a.y + a.height < b.y)) { return false; } else { handleCollisions(a, b); return true; } } function handleCollisions(a, b) { var a_top = a.y, a_bottom = a.y + a.height, a_left = a.x, a_right = a.x + a.width, b_top = b.y, b_bottom = b.y + b.height, b_left = b.x, b_right = b.x + b.width; if (a_bottom + a.vy > b_top && distanceBetween(a_bottom, b_top) + a.vy < distanceBetween(a_bottom, b_bottom)) { a.topCollision = true; a.y = b.y - a.height + 2; a.vy = 0; a.canJump = true; } else if (a_top + a.vy < b_bottom && distanceBetween(a_top, b_bottom) + a.vy < distanceBetween(a_top, b_top)) { a.bottomCollision = true; a.y = b.y + b.height; a.vy = 0; } else if (a_right + a.vx > b_left && distanceBetween(a_right, b_left) < distanceBetween(a_right, b_right)) { a.rightCollision = true; a.x = b.x - a.width - 3; //a.vx = 0; } else if (a_left + a.vx < b_right && distanceBetween(a_left, b_right) < distanceBetween(a_left, b_left)) { a.leftCollision = true; a.x = b.x + b.width + 3; //a.vx = 0; } } function distanceBetween(a, b) { return Math.abs(b-a); }
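
    A hedged sketch of a pattern that avoids most of these glitches: move and resolve one axis at a time, instead of choosing the side from relative distances after moving both. Shown in C# with illustrative names; the same structure translates directly to the JavaScript above.

      // Minimal stand-ins for the question's objects: axis-aligned rectangles plus velocity.
      public class Body { public float X, Y, Width, Height, Vx, Vy; public bool CanJump; }
      public class Box  { public float X, Y, Width, Height; }

      public static class Platformer
      {
          // Step X, resolve X, then step Y, resolve Y. Deciding "which side" after moving
          // both axes is what usually causes corner sticking and jitter.
          public static void MoveAndCollide(Body a, Box b)
          {
              a.X += a.Vx;
              if (Overlaps(a, b))
                  a.X = a.Vx > 0 ? b.X - a.Width : b.X + b.Width;    // push back out along X only

              a.Y += a.Vy;
              if (Overlaps(a, b))
              {
                  if (a.Vy > 0) { a.Y = b.Y - a.Height; a.CanJump = true; }  // landed on top
                  else          { a.Y = b.Y + b.Height; }                    // bumped from below
                  a.Vy = 0;
              }
          }

          private static bool Overlaps(Body a, Box b) =>
              a.X < b.X + b.Width && a.X + a.Width > b.X &&
              a.Y < b.Y + b.Height && a.Y + a.Height > b.Y;
      }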

    Read the article

  • How are dependent quests generated in Guild Wars 2?

    - by Aufziehvogel
    I recently read that Guild Wars 2 uses a system where the creation of quests depends on which actions users took when they were presented with another quest. An example was: there might be a quest to protect a person. If users do not take this action, the person might be kidnapped, and later there is a quest to rescue this person. Is there any information on whether the creation of these quests is somehow automatic? From the article it sounded automatic, but from the specific example you could also guess that people just created a task set with conditions attached (Task 1 taken: OK; Task 1 not taken: show Task 2). From what I have heard about AI, they might also have implemented some sort of huge neural network to make these decisions?
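
    Nothing in the post confirms how ArenaNet actually implements this, but the behaviour described needs no learned AI: hand-authored event chains with success/failure branches are enough. A purely hypothetical sketch of such a structure:

      using System.Collections.Generic;

      // Entirely hypothetical data, only to illustrate "hand-authored chain with conditions".
      public class DynamicEvent
      {
          public string Name;
          public string OnSuccess;   // follow-up when players protected the person
          public string OnFailure;   // e.g. "RescueKidnappedPerson" when they did not
      }

      public class EventDirector
      {
          private readonly Dictionary<string, DynamicEvent> events = new Dictionary<string, DynamicEvent>();
          private readonly Queue<string> pending = new Queue<string>();

          public void Register(DynamicEvent e) { events[e.Name] = e; }

          // When an event ends, the outcome simply selects which authored follow-up to queue.
          public void Complete(string name, bool succeeded)
          {
              var next = succeeded ? events[name].OnSuccess : events[name].OnFailure;
              if (next != null) pending.Enqueue(next);
          }
      }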

    Read the article

  • Android multitouch problem

    - by Max
    I'm aware that there are a couple of posts on this matter, but I've tried all of them and none of them gets rid of my problem. I'm starting to get close to the end of my game, so I bought a cable to try it on a real phone, and as I expected my multitouch doesn't work. I use two joysticks, one to move my character and one to change his direction so he can shoot while walking backwards, etc. My local variable: public void update(MotionEvent event) { if (event == null && lastEvent == null) { return; } else if (event == null && lastEvent != null) { event = lastEvent; } else { lastEvent = event; } int index = event.getActionIndex(); int pointerId = event.getPointerId(index); The statement for the left joystick: if (pointerId == 0 && event.getAction() == MotionEvent.ACTION_DOWN && (int) event.getX() > steeringxMesh - 50 && (int) event.getX() < steeringxMesh + 50 && (int) event.getY() > yMesh - 50 && (int) event.getY() < yMesh + 50) { dragging = true; } else if (event.getAction() == MotionEvent.ACTION_UP) { dragging = false; } if (dragging) { //code for moving my character The statement for my right joystick: if (pointerId == 1 && event.getAction() == MotionEvent.ACTION_DOWN && (int) event.getX() > shootingxMesh - 50 && (int) event.getX() < shootingxMesh + 50 && (int) event.getY() > yMesh - 50 && (int) event.getY() < yMesh + 50) { shooting = true; } else if (event.getAction() == MotionEvent.ACTION_UP) { shooting = false; } if (shooting) { // code for aiming } This class is my main view's onTouchListener and is called from an update method that runs in my game loop, so it's called every frame. I'm really at a loss here; I've done a couple of tutorials and tried all relevant solutions from similar posts. I can post the entire class if necessary, but I think this is all the relevant code. I just hope someone can make some sense out of this.
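
    One relevant Android detail: ACTION_DOWN and ACTION_UP are delivered only for the first finger; additional fingers arrive as ACTION_POINTER_DOWN / ACTION_POINTER_UP (compare against getActionMasked()), so a handler keyed to ACTION_DOWN with a single pointerId check will only ever drive one of the two sticks. A framework-neutral sketch of the usual bookkeeping, shown in C# with illustrative names:

      using System.Collections.Generic;

      // Framework-neutral bookkeeping: one entry per active finger, keyed by pointer id.
      public class TouchTracker
      {
          private readonly Dictionary<int, (float x, float y)> pointers =
              new Dictionary<int, (float x, float y)>();

          public void PointerDown(int id, float x, float y) { pointers[id] = (x, y); }
          public void PointerMove(int id, float x, float y) { pointers[id] = (x, y); }
          public void PointerUp(int id) { pointers.Remove(id); }

          // A stick counts as "held" if any active pointer is inside its hit box, no matter
          // which finger touched the screen first.
          public bool AnyPointerIn(float left, float top, float right, float bottom)
          {
              foreach (var p in pointers.Values)
                  if (p.x >= left && p.x <= right && p.y >= top && p.y <= bottom)
                      return true;
              return false;
          }
      }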

    Read the article

  • Basic modelling of radar

    - by Hawk66
    I'm currently researching how to model/simulate radar for my naval simulation. Since the emphasis is on modelling ASW or submarines in general, I need only a basic radar model - at least for the beginning. So, does anybody know a resource for such a simple model? The model should take signal strength of the sensor, the size of the target and the terrain (height/ground clutter) into account. Thanks.
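
    A heavily simplified, hedged sketch of a first-pass model: the classic radar range equation for echo strength, gated by a terrain line-of-sight check. The equation itself is standard; the way it is wired up here (names, threshold) is illustrative only.

      using System;

      public static class SimpleRadar
      {
          // Radar range equation: Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)
          // Pt = transmit power, G = antenna gain, lambda = wavelength,
          // sigma = target radar cross-section ("size"), R = range.
          public static double ReceivedPower(double pt, double gain, double wavelength,
                                             double crossSection, double range)
          {
              return pt * gain * gain * wavelength * wavelength * crossSection /
                     (Math.Pow(4 * Math.PI, 3) * Math.Pow(range, 4));
          }

          // Detected if the echo clears the receiver threshold and terrain does not mask it
          // (lineOfSightClear would come from a height-map/clutter test in the game world).
          public static bool Detects(double pt, double gain, double wavelength,
                                     double crossSection, double range,
                                     double detectionThreshold, bool lineOfSightClear)
          {
              return lineOfSightClear &&
                     ReceivedPower(pt, gain, wavelength, crossSection, range) >= detectionThreshold;
          }
      }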

    Read the article

  • Scrolling background with changing textures

    - by Simran kaur
    I have two cubic structures that are my tracks; they scroll to give the effect of movement on the object. In my OnBecameInvisible() method, I have changed their tiling using mainTextureScale: void OnBecameInvisible() { renderer.material.mainTextureScale = new Vector2(1, numberOfLanes); this.transform.position = new Vector3(this.transform.position.x, this.transform.position.y, 20.0f); } The tiling works fine, but the alternating tracks have their tiling set to 0, which gives an undesirable effect. Requirement: I want to be able to set the tiling of every track that is visible on the screen. How do I do it?
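
    A hedged sketch (illustrative names): keep a reference to every track piece and apply the tiling to each of them whenever it changes, instead of only inside one track's OnBecameInvisible. Note that renderer.material creates a per-renderer material instance, which is what lets each track have its own tiling; sharedMaterial would change all of them at once.

      using UnityEngine;

      public class TrackTiling : MonoBehaviour
      {
          public Renderer[] tracks;        // assumed: both scrolling track pieces assigned here
          public int numberOfLanes = 4;

          // Call whenever the lane count changes, or from each track's OnBecameInvisible.
          public void ApplyTiling()
          {
              foreach (var r in tracks)
                  r.material.mainTextureScale = new Vector2(1f, numberOfLanes); // per-instance material
          }
      }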

    Read the article

  • Logic in Entity Component Systems

    - by aaron
    I'm making a game that uses an Entity/Component architecture (basically a port of the Artemis framework to C++). The problem arises when I try to make a PlayerControllerComponent. My original idea was this: class PlayerControllerComponent: Component { public: virtual void update() = 0; }; class FpsPlayerControllerComponent: PlayerControllerComponent { public: void update() { //handle input } }; and have a system that updates PlayerControllerComponents, but I found out that the Artemis framework does not look at sub-classes the way I thought it would. So, all in all, my question is: should I make the framework aware of subclasses, or should I add a new Component-like object that is used for logic?
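
    A hedged sketch of the second option in the question, keeping components as plain data and putting the per-style logic in a system, so the framework never needs to understand component subclasses. All names are illustrative, not Artemis's actual API, and the sketch is in C# rather than the question's C++.

      using System.Collections.Generic;

      // Components stay pure data; the behaviour lives in a system that processes them.
      public class PlayerControllerComponent        // no virtual update(): just state
      {
          public bool FpsControls = true;
          public float MoveX, MoveY;                // filled from input each frame
      }

      public interface ISystem { void Update(float dt); }

      // One system per control style; the framework only ever sees the concrete component type.
      public class FpsPlayerControlSystem : ISystem
      {
          private readonly List<PlayerControllerComponent> controllers;   // provided by the world/registry
          public FpsPlayerControlSystem(List<PlayerControllerComponent> c) { controllers = c; }

          public void Update(float dt)
          {
              foreach (var pc in controllers)
              {
                  // read input here and write the result into the component's data; a separate
                  // movement system then applies MoveX/MoveY to the entity's transform
                  pc.MoveX = 0f;
                  pc.MoveY = 0f;
              }
          }
      }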

    Read the article

  • How can I determine the first visible tile in an isometric perspective?

    - by alekop
    I am trying to render the visible portion of a diamond-shaped isometric map. The "world" coordinate system is a 2D Cartesian system, with the coordinates increasing diagonally (in terms of the view coordinate system) along the axes. The "view" coordinates are simply mouse offsets relative to the upper left corner of the view. My rendering algorithm works by drawing diagonal spans, starting from the upper right corner of the view and moving diagonally to the right and down, advancing to the next row when it reaches the right view edge. When the rendering loop reaches the lower left corner, it stops. There are functions to convert a point from view coordinates to world coordinates and then to map coordinates. Everything works when rendering from tile 0,0, but as the view scrolls around the rendering needs to start from a different tile. I can't figure out how to determine which tile is closest to the upper right corner. At the moment I am simply converting the coordinates of the upper right corner to map coordinates. This works as long as the view origin (upper right corner) is inside the world, but when approaching the edges of the map the starting tile coordinate obviously become invalid. I guess this boils down to asking "how can I find the intersection between the world X axis and the view X axis?"
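
    A hedged sketch of the usual inverse mapping for a diamond layout, plus a clamp so the start tile stays valid when the view corner hangs off the map. The tile-size names and the exact axis convention are assumptions; they must match the forward projection actually used.

      using System;

      static class IsoMath
      {
          // Inverse of: screenX = (mapX - mapY) * tileW / 2; screenY = (mapX + mapY) * tileH / 2.
          // worldX/worldY are the view-corner coordinates plus the current scroll offset.
          public static (int mapX, int mapY) WorldToMap(float worldX, float worldY,
                                                        float tileW, float tileH,
                                                        int mapWidth, int mapHeight)
          {
              float fx = (worldX / (tileW / 2f) + worldY / (tileH / 2f)) / 2f;
              float fy = (worldY / (tileH / 2f) - worldX / (tileW / 2f)) / 2f;

              // Clamp so a corner that falls outside the world still yields a valid start tile.
              int mapX = Math.Max(0, Math.Min(mapWidth - 1, (int)Math.Floor(fx)));
              int mapY = Math.Max(0, Math.Min(mapHeight - 1, (int)Math.Floor(fy)));
              return (mapX, mapY);
          }
      }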

    Read the article

  • Updating entities in response to collisions - should this be in the collision-detection class or in the entity-updater class?

    - by Prog
    In a game I'm working on, there's a class responsible for collision detection. Its method detectCollisions(List<Entity> entities) is called from the main game loop. The code to update the entities (i.e. where the entities 'act': update their positions, invoke AI, etc.) is in a different class, in the method updateEntities(List<Entity> entities), which is also called from the game loop, after the collision detection. When there's a collision between two entities, usually something needs to be done. For example, zero the velocity of both entities in the collision, or kill one of the entities. It would be easy to have this code in the CollisionDetector class, e.g. in pseudocode: for(Entity entityA in entities){ for(Entity entityB in entities){ if(collision(entityA, entityB)){ if(entityA instanceof Robot && entityB instanceof Robot){ entityA.setVelocity(0,0); entityB.setVelocity(0,0); } if(entityA instanceof Missile || entityB instanceof Missile){ entityA.die(); entityB.die(); } } } } However, I'm not sure if updating the state of entities in response to collisions should be the job of CollisionDetector. Maybe it should be the job of EntityUpdater, which runs after the collision detection in the game loop. Is it okay to have the code responding to collisions in the collision detection system? Or should the collision detection class only detect collisions, report them to some other class, and have that class affect the state of the entities?
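
    A hedged sketch of the usual middle ground: the detector only reports pairs, and a separate resolver (the EntityUpdater, or the entities themselves) applies the game rules, so CollisionDetector never needs to know about Robots or Missiles. Names are illustrative, written in C# rather than the question's Java-style pseudocode.

      using System.Collections.Generic;

      public class Entity { /* position, size, velocity, OnCollidedWith(...), etc. */ }

      public readonly struct CollisionPair
      {
          public readonly Entity A, B;
          public CollisionPair(Entity a, Entity b) { A = a; B = b; }
      }

      public class CollisionDetector
      {
          // Detection only: no Robot/Missile rules in here, just geometry.
          public List<CollisionPair> DetectCollisions(List<Entity> entities)
          {
              var pairs = new List<CollisionPair>();
              for (int i = 0; i < entities.Count; i++)
                  for (int j = i + 1; j < entities.Count; j++)
                      if (Collides(entities[i], entities[j]))
                          pairs.Add(new CollisionPair(entities[i], entities[j]));
              return pairs;
          }

          private bool Collides(Entity a, Entity b)
          {
              return false;   // placeholder for the bounding-box test the game already has
          }
      }

      // The resolver (or EntityUpdater) owns the rules and runs after detection in the loop:
      //     foreach (var pair in detector.DetectCollisions(entities))
      //         Resolve(pair);   // zero velocities, kill missiles, or dispatch to the entities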

    Read the article

  • Why are circles not created if they are small?

    - by Suzan Cioc
    I have changed the scale to my own, and now I can't create any object, including a circle, if it is of a size that is normal for my scale. I have to create a big object first and then modify it to a smaller size. It looks like a minimum-size protection is set somewhere. Where? UPDATE: While creating a circle, if I drag for 0.04 m the circle disappears when the drag ends. If I drag for 0.08 m the circle also disappears. If I drag for more than 0.1 m, the circle persists after the drag ends. How can I set it so that it persists at 0.01 m too?

    Read the article

  • Reasons to disable game save during combat (e.g. Mass Effect 2)

    - by Steve V.
    So I've been playing Mass Effect 2 (PC) and one of the things I've noticed is that you can only save your game when you're not engaged in combat. As soon as the first enemy shows up on your radar, the save button is disabled. Once combat is over, save functionality reappears. It seems reasonable to assume that Mass Effect 2 is a state machine, and therefore, the internal state of the program at any moment can be captured and reloaded later. This is basically a solved problem - games have been designed this way since the Half-Life era. It also seems reasonable to assume that BioWare knew what they were doing when they made the decision not to follow this model - it's a tried and true system; BioWare wouldn't have done it the way they did without some good reason. What reasons are there to disable game save functionality during combat?

    Read the article

  • When to unload graphics objects from main memory?

    - by piotrek
    I'm writing my resource manager, and I'm considering how it should work for graphics objects (like textures and meshes). My idea is this: I want to load a texture (in pseudocode): Texture t = resMgr.GetTex("image.png"); and GetTex does something like this: 1) load the texture from disk into main memory, 2) create the texture object (load it into GPU memory), 3) unload the texture from main memory. I'm wondering about step 3: do the game engines you know unload meshes/textures from main memory after loading them into GPU memory?
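
    A hedged sketch of step 3 in practice, using the same OpenTK/System.Drawing calls as the texture loader earlier on this page: once GL.TexImage2D has handed the pixels to the driver, the Bitmap (the main-memory copy) can be disposed immediately. Engines typically keep a CPU copy only when they expect to need it again (device loss on some APIs, CPU-side reads, streaming).

      using System.Drawing;
      using System.Drawing.Imaging;
      using OpenTK.Graphics.OpenGL;

      static class TextureLoader
      {
          public static int LoadAndRelease(string path)
          {
              int id = GL.GenTexture();
              GL.BindTexture(TextureTarget.Texture2D, id);

              using (var bitmap = new Bitmap(path))                       // main-memory copy
              {
                  var data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
                                             ImageLockMode.ReadOnly,
                                             System.Drawing.Imaging.PixelFormat.Format32bppArgb);
                  GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
                                data.Width, data.Height, 0,
                                OpenTK.Graphics.OpenGL.PixelFormat.Bgra,
                                PixelType.UnsignedByte, data.Scan0);       // copied to GPU-side storage
                  bitmap.UnlockBits(data);
              }                                                            // CPU copy freed here

              GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter,
                              (int)TextureMinFilter.Linear);
              GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter,
                              (int)TextureMagFilter.Linear);
              return id;
          }
      }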

    Read the article

  • Is it possible to procedurally place objects in a non-gridded game?

    - by nickbadal
    I'd like to implement procedural world generation, but I don't want it to look gridded or blocky, where everything is obviously placed on an integer grid. I know that you can do this in gridded worlds by inputting a square's x and y into a noise function, or similar, but is it possible to generate a more natural looking object placement using procedural methods? This is in the context of an adventure game, if it matters. Edit: I guess I should have been a bit more clear in my original question, but I'm mostly wondering about the actual placement of objects in game, e.g. trees, buildings.
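
    A hedged sketch of one cheap technique: still iterate a coarse grid, but jitter each candidate position randomly inside its cell (a "jittered grid", a lightweight stand-in for Poisson-disc sampling), so nothing lines up visibly. Names are illustrative; a per-cell noise threshold can additionally decide whether to place anything at all.

      using System;
      using System.Collections.Generic;

      static class Scatter
      {
          // One candidate position per cell, pushed to a random spot inside the cell.
          public static List<(float x, float y)> JitteredGrid(int cols, int rows,
                                                              float cellSize, int seed)
          {
              var rng = new Random(seed);                    // same seed -> same world
              var points = new List<(float, float)>();
              for (int cx = 0; cx < cols; cx++)
                  for (int cy = 0; cy < rows; cy++)
                  {
                      float x = (cx + (float)rng.NextDouble()) * cellSize;  // anywhere in the cell
                      float y = (cy + (float)rng.NextDouble()) * cellSize;
                      points.Add((x, y));
                  }
              return points;
          }
      }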

    Read the article

  • Projectile Rotation

    - by Alex
    I'm trying to add a projectile system like the projectiles in Realm Of The Mad God. (YouTube it to see what I mean) These projectiles seem to move according to their rotation perfectly and can have nearly any rotation. They also have near perfect hitboxing. What's the maths behind this? My Game works on an integer-based coordinate system, but at the moment projectiles can only shoot either 0, 45, 90, 135, 180, 225, 270 and 315 degrees.
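
    A hedged sketch of the usual approach: derive a float velocity from the angle with cos/sin, accumulate float positions every frame, and only round back to the integer grid for drawing and hit tests. That way any angle works, not just multiples of 45 degrees.

      using System;

      public class Projectile
      {
          public float X, Y;                    // float position, accumulated every frame
          private readonly float vx, vy;

          public Projectile(float startX, float startY, float angleRadians, float speed)
          {
              X = startX; Y = startY;
              vx = (float)Math.Cos(angleRadians) * speed;   // direction comes straight from the angle
              vy = (float)Math.Sin(angleRadians) * speed;
          }

          public void Update(float dt)
          {
              X += vx * dt;
              Y += vy * dt;
          }

          // Only here does the integer coordinate system come back in.
          public (int x, int y) GridPosition() => ((int)Math.Round(X), (int)Math.Round(Y));
      }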

    Read the article

  • Can Minecraft support an asymmetrical mesh?

    - by Qwaar
    So, in a bout of fancy, I have decided I want to play as a Zaku II from Gundam, and was saddened that player skins must be symmetrical. Then I remembered my friend's mod that let him play as an MLP pony, and another one that let you shapeshift into mobs. So I decided I could just butcher a player model mesh, slap on the shoulder spike and shield, slap a Zaku skin I found onto it, port the colors over onto more texture for the shoulder portions, and call it a day once I added it to the shiftable list, before butchering a gun mod to turn a gun into a ZMP-78. Before I get started on this, though, I need to know whether Minecraft will support an asymmetrical mesh.

    Read the article

  • What's the right/standard way of achieving separation of concerns?

    - by Ghanima
    Some background: I want to start developing games, and taking some of the advice given in this site, I've started with something simple and familiar, such as pong, tetris, etc. I want to take as much time as needed to make sure that I have the basics right before moving on to something bigger. I have medium programming experience but I realize making games is a different thing. I find myself wondering many things like should this be in a separate class? Should this module handle this stuff or is it better to let other modules have that kind of functionality? For example, the bouncing of a ball in pong, right now is handled in the ball module, but maybe it's better that some other module did it. Right now I have different modules: one for the graphics, one for the game logic, and others for the objects (depending on the kind of movement required, not all the objects are alike). I know I am asking a lot, any tips you have will be very much appreciated. Short question: What's the right or standard way of separating the modules? What have you found most effective? Is it enough to just keep the drawing (graphics) and the logic separate? Is it necessary to have a lot of classes? (for example for the objects in the game, to handle the movement, etc)

    Read the article

  • 2D Tile Collision free movement

    - by andrepcg
    I'm coding a 3D game for a project using OpenGL and I'm trying to do tile collision on a surface. The surface plane is split into a grid of 64x64 pixels and I can simply check if the (x,y) tile is empty or not. Besides having a grid for collision, there's still free movement inside a tile. For each entity, in the end of the update function I simply increase the position by the velocity: pos.x += v.x; pos.y += v.y; I already have a collision grid created but my collide function is not great, i'm not sure how to handle it. I can check if the collision occurs but the way I handle is terrible. int leftTile = repelBox.x / grid->cellSize; int topTile = repelBox.y / grid->cellSize; int rightTile = (repelBox.x + repelBox.w) / grid->cellSize; int bottomTile = (repelBox.y + repelBox.h) / grid->cellSize; for (int y = topTile; y <= bottomTile; ++y) { for (int x = leftTile; x <= rightTile; ++x) { if (grid->getCell(x, y) == BLOCKED){ Rect colBox = grid->getCellRectXY(x, y); Rect xAxis = Rect(pos.x - 20 / 2.0f, pos.y - 20 / 4.0f, 20, 10); Rect yAxis = Rect(pos.x - 20 / 4.0f, pos.y - 20 / 2.0f, 10, 20); if (colBox.Intersects(xAxis)) v.x *= -1; if (colBox.Intersects(yAxis)) v.y *= -1; } } } If instead of reversing the direction I set it to false then when the entity tries to get away from the wall it's still intersecting the tile and gets stuck on that position. EDIT: I've worked with Flashpunk and it has a great function for movement and collision called moveBy. Are there any simplified implementations out there so I can check them out?
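
    A hedged, moveBy-style sketch (in C#, but the shape carries over to the C++ above): step one axis at a time and clamp against the blocked cell on that axis, rather than flipping the velocity. The question's grid->getCell(x, y) == BLOCKED becomes a blocked(x, y) callback here; all names are illustrative.

      using System;

      static class TileMove
      {
          // blocked(cellX, cellY) returns true for a BLOCKED cell; cellSize matches the 64px grid.
          public static (float x, float y) MoveBy(float x, float y, float w, float h,
                                                  float vx, float vy, float cellSize,
                                                  Func<int, int, bool> blocked)
          {
              // X axis first: move, then clamp to the edge of the blocking cell.
              float nx = x + vx;
              if (RegionBlocked(nx, y, w, h, cellSize, blocked))
                  nx = vx > 0 ? (float)Math.Floor((nx + w) / cellSize) * cellSize - w
                              : (float)Math.Floor(nx / cellSize + 1) * cellSize;

              // Then Y, using the already-resolved X.
              float ny = y + vy;
              if (RegionBlocked(nx, ny, w, h, cellSize, blocked))
                  ny = vy > 0 ? (float)Math.Floor((ny + h) / cellSize) * cellSize - h
                              : (float)Math.Floor(ny / cellSize + 1) * cellSize;
              return (nx, ny);
          }

          static bool RegionBlocked(float x, float y, float w, float h, float cellSize,
                                    Func<int, int, bool> blocked)
          {
              for (int cy = (int)(y / cellSize); cy <= (int)((y + h) / cellSize); cy++)
                  for (int cx = (int)(x / cellSize); cx <= (int)((x + w) / cellSize); cx++)
                      if (blocked(cx, cy)) return true;
              return false;
          }
      }
      // Because each axis is clamped instead of reflected, the entity can slide along a wall
      // and move away from it again without getting stuck inside the tile.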

    Read the article

  • What's a good way to check that a player has clicked on an object in a 3D game?

    - by imja
    I'm programming a 3D game (using C++ and OpenGL), and I have a few 3D objects in the scene; we can say they are boxes for this example. I want to let the player click on those boxes to select them (i.e. they might change color), with the typical restriction that if more than one box is located where the user clicked, only the one closest to the camera gets selected. What would be the best way to do this? The fact that these objects go through several transforms before getting to window coordinates is what makes this a bit tricky. One approach I thought about was that when the player clicks on the screen, I could normalize the x,y coordinates of the mouse click and then transform the bounding-box coordinates of the objects into clip space so that I could compare them to the normalized mouse coordinates. I guess I could then do some sort of ray-box collision test to see if any objects lie in the path of the mouse click. I'm afraid I might be overcomplicating it. Are there any better methods out there?
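
    A hedged sketch of the standard approach: unproject the mouse position at the near and far planes to get a world-space ray, then run a ray vs. axis-aligned-box (slab) test per object and keep the closest hit. Shown with OpenTK's math types rather than the question's C++; the matrix order is an assumption and may need transposing for a column-vector pipeline.

      using System;
      using OpenTK;

      static class Picking
      {
          // Build a world-space ray from a mouse position, using the same view/projection
          // matrices used for rendering (mx, my in pixels; w, h = viewport size).
          public static void MouseRay(float mx, float my, float w, float h,
                                      Matrix4 view, Matrix4 projection,
                                      out Vector3 origin, out Vector3 direction)
          {
              // Normalized device coordinates; screen Y grows downwards, NDC Y upwards.
              var ndcNear = new Vector4(2f * mx / w - 1f, 1f - 2f * my / h, -1f, 1f);
              var ndcFar  = new Vector4(ndcNear.X, ndcNear.Y, 1f, 1f);

              Matrix4 invVP = Matrix4.Invert(view * projection);     // OpenTK's row-vector order
              Vector4 near = Vector4.Transform(ndcNear, invVP);
              Vector4 far  = Vector4.Transform(ndcFar, invVP);
              near /= near.W;
              far  /= far.W;

              origin = near.Xyz;
              direction = Vector3.Normalize(far.Xyz - near.Xyz);
          }

          // Slab test against an axis-aligned box; returns the hit distance or null on a miss.
          // Run it for every box and keep the smallest distance: that is the object to select.
          public static float? RayVsAabb(Vector3 origin, Vector3 dir, Vector3 boxMin, Vector3 boxMax)
          {
              float[] o  = { origin.X, origin.Y, origin.Z };
              float[] d  = { dir.X, dir.Y, dir.Z };
              float[] lo = { boxMin.X, boxMin.Y, boxMin.Z };
              float[] hi = { boxMax.X, boxMax.Y, boxMax.Z };

              float tMin = float.NegativeInfinity, tMax = float.PositiveInfinity;
              for (int i = 0; i < 3; i++)
              {
                  if (Math.Abs(d[i]) < 1e-8f)
                  {
                      if (o[i] < lo[i] || o[i] > hi[i]) return null;   // parallel and outside this slab
                      continue;
                  }
                  float t1 = (lo[i] - o[i]) / d[i];
                  float t2 = (hi[i] - o[i]) / d[i];
                  if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
                  tMin = Math.Max(tMin, t1);
                  tMax = Math.Min(tMax, t2);
                  if (tMin > tMax) return null;                        // slabs do not overlap: miss
              }
              if (tMax < 0) return null;                               // box is entirely behind the ray
              return tMin >= 0 ? tMin : 0f;                            // 0 when the origin is inside the box
          }
      }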

    Read the article
