Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • Client/server game even in solo mode: any big problem?

    - by Klaim
    I'm making a game whose basic design is strongly based on multiplayer, but which should also provide a really interesting, self-sufficient solo game, a bit like a real-time strategy game. The events and actions taken shouldn't be as massive and immediate as in an FPS, so you can think of the networking as being like an RTS's. It's a PC game, targeting Windows, Mac OS X and Linux (Ubuntu & Fedora). It's programmed in C++, using a variety of open source libraries, so I have great (potential) control over performance. So far I always considered it OK to make the game work as two applications, client & server, even in solo mode. However, as I'm starting on the network code, I'm having doubts about whether it's a good idea. I'm not a specialist, so I might be missing something in my analysis. I see these pros and cons:

    Pros:

      • The game works only one way, so if I fix a bug it should apply to all game modes, whatever the distance to the server is;
      • Basic networking issues would be detected early, including behaviour with protection software (firewalls) installed (I am not a specialist, so this might be wrong).

    Cons:

      • I suppose that even if it should be fast enough, networking client and server on the same computer would still be slower than no networking at all and message passing within (one) process's memory;
      • Maybe debugging would be more difficult? I don't have experience with this case, but so far I assume that Visual Studio lets me debug multiple processes, so it shouldn't be really different. There's also remote debugging.

    My question is: is there a big disadvantage that I missed? Or maybe there are advantages that I missed that should encourage me to just continue with only client-server game sessions?
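    If the single-process option ever needs to be kept open, one common middle ground is to hide the wire behind a small transport interface, so that solo mode can pass messages in memory while multiplayer uses sockets. A minimal C++ sketch (the names and design are illustrative assumptions, not from the post):

        #include <deque>
        #include <string>

        // Hypothetical transport interface: the game talks to this,
        // never to sockets directly.
        struct Transport {
            virtual ~Transport() = default;
            virtual void send(const std::string& message) = 0;
            virtual bool receive(std::string& message) = 0;
        };

        // Solo mode: client and server share a process, so "sending"
        // is just pushing onto an in-memory queue. No socket or
        // serialization round-trip beyond building the message itself.
        class LoopbackTransport : public Transport {
            std::deque<std::string> queue_;
        public:
            void send(const std::string& message) override {
                queue_.push_back(message);
            }
            bool receive(std::string& message) override {
                if (queue_.empty()) return false;
                message = queue_.front();
                queue_.pop_front();
                return true;
            }
        };

        // Multiplayer mode would provide a SocketTransport with the same
        // interface; the rest of the game code is unchanged either way.

    This keeps the "one code path" pro while leaving the loopback cost con fixable later.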


  • Which kind of graphics do you like better? [pictures] [closed]

    - by Roger Travis
    I am planning to make an Android game, something in the Angry Birds style. I've already made my own engine and now have to decide what kind of graphics I should make. It could be either realistic, like that, or doodle-style, like this. Right now the first one looks more appealing to me... on the other hand, doodle graphics are very easy to draw, and their transparency doesn't seem to slow down the engine much. What do you think?


  • C# Collision Math Help

    - by user36037
    I am making my own collision detection in MonoGame. I have a PolyLine class that has a property to return the normal of that PolyLine instance. I have a ConvexPolySprite class that has a list of LineSegments, and a CircleSprite class that has a Center property and a Radius property. I am using a static class for the collision detection method. I am testing it on a single line segment, from Vector2(200, 0) to Vector2(300, 200). The problem is that it detects the collision anywhere along the path of the line, out into space. I cannot figure out why. Thanks in advance.

        public class PolyLine
        {
            // Class properties

            /// <summary>
            /// Upper left-hand corner of the owner of this instance.
            /// </summary>
            public Vector2 ParentPosition { get; set; }

            /// <summary>
            /// Relative start point of the line segment.
            /// </summary>
            public Vector2 RelativeStartPoint { get; set; }

            /// <summary>
            /// Relative end point of the line segment.
            /// </summary>
            public Vector2 RelativeEndPoint { get; set; }

            /// <summary>
            /// Absolute position of the starting point of the line segment.
            /// </summary>
            public Vector2 AbsoluteStartPoint
            {
                get { return ParentPosition + RelativeStartPoint; }
            }

            /// <summary>
            /// Absolute position of the end point of the line segment.
            /// </summary>
            public Vector2 AbsoluteEndPoint
            {
                get { return ParentPosition + RelativeEndPoint; }
            }

            /// <summary>
            /// Unit-length left normal of the segment.
            /// </summary>
            public Vector2 NormalizedLeftNormal
            {
                get
                {
                    Vector2 p = AbsoluteEndPoint - AbsoluteStartPoint;
                    p.Normalize();
                    return new Vector2(-p.Y, p.X);
                }
            }

            // Class constructors

            /// <summary>
            /// Sole ctor.
            /// </summary>
            public PolyLine(Vector2 parentPosition, Vector2 relStart, Vector2 relEnd)
            {
                ParentPosition = parentPosition;
                RelativeEndPoint = relEnd;
                RelativeStartPoint = relStart;
            }
        }

        public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly)
        {
            // The original test measured only the signed distance to the
            // *infinite* line (dot with the normal, plus the radius), so any
            // circle on the line's front side "collided" no matter how far
            // along, or past, the segment it was. Clamp the projection onto
            // the segment first, then compare against the radius.
            var seg = poly.LineSegments[0];
            Vector2 ab = seg.AbsoluteEndPoint - seg.AbsoluteStartPoint;
            Vector2 ac = circle.Position - seg.AbsoluteStartPoint;
            float t = MathHelper.Clamp(Vector2.Dot(ac, ab) / ab.LengthSquared(), 0f, 1f);
            Vector2 closest = seg.AbsoluteStartPoint + ab * t;
            return Vector2.DistanceSquared(circle.Position, closest)
                   <= circle.Radius * circle.Radius;
        }


  • What is the correct way to reset and load new data into GL_ARRAY_BUFFER?

    - by Geto
    I am using an array buffer for color data. If I want to load different colors for the current mesh in real time, what is the correct way to do it? At the moment I am doing:

        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
        glBufferData(GL_ARRAY_BUFFER, SIZE, colorsData, GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader->attrib("color"));
        glVertexAttribPointer(shader->attrib("color"), 3, GL_FLOAT, GL_TRUE, 0, NULL);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

    It works, but I am not sure this is a good and efficient way to do it. What happens to the previous data? Does it write on top of it? Do I need to call:

        glDeleteBuffers(1, &colorBuffer);
        glGenBuffers(1, &colorBuffer);

    before transferring the new data into the buffer?
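    For context, the two usual ways to re-load a buffer of the same size are to re-specify the store with glBufferData (drivers typically "orphan" the old memory and hand back a fresh block, so no glDeleteBuffers/glGenBuffers cycle is needed) or to overwrite it in place with glBufferSubData; GL_DYNAMIC_DRAW is the conventional usage hint for data that changes often. A sketch:

        // Option 1: re-specify the whole store. Calling glBufferData again
        // IS the reset; the old storage is released or orphaned by the driver.
        glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
        glBufferData(GL_ARRAY_BUFFER, SIZE, NULL, GL_DYNAMIC_DRAW);       // orphan
        glBufferData(GL_ARRAY_BUFFER, SIZE, newColors, GL_DYNAMIC_DRAW);

        // Option 2: overwrite part (or all) of the existing store in place.
        glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
        glBufferSubData(GL_ARRAY_BUFFER, 0, SIZE, newColors);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

    If the size changes between loads, re-specifying with glBufferData is the natural choice anyway; the delete/gen cycle asked about in the question is not required in either case.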


  • Why Android for new (micro) consoles?

    - by Klaim
    There are a lot of new (micro) consoles with Android inside coming soon or already here (OUYA, for example). My question is: why use Android and not another OS as the base for these consoles? I assume that there are pragmatic answers to this, but I can't see any clear killer feature. For example, one could assume that any stable Linux distribution would work (as Valve seems to think). Android is primarily targeted at mobile platforms, which means it is built around the idea of being interrupted, which goes against a lot of what console devs need in new hardware - but that doesn't kill Android as a platform for gamedev either, as it's just a constraint. Why not another OS? What are the Android killer features that make micro-console builders use it instead of Linux or anything else?


  • Why are my sprite sheet's frames not visible in Cocos Builder?

    - by Ramy Al Zuhouri
    I have created a sprite sheet with Zwoptex. Then I just dragged the .plist and .png files into my CocosBuilder project. After this I wanted to take a sprite frame and set it on a sprite, but the sprite image is empty! At first I thought it was just empty in CocosBuilder, and that it must work correctly when imported into a project. But if I try to run that scene with CCBReader, I still see an empty sprite. Why?


  • Is it possible to procedurally place objects in a non-gridded game?

    - by nickbadal
    I'd like to implement procedural world generation, but I don't want it to look gridded or blocky, where everything is obviously placed on an integer grid. I know that you can do this in gridded worlds by inputting a square's x and y into a noise function, or similar, but is it possible to generate a more natural looking object placement using procedural methods? This is in the context of an adventure game, if it matters. Edit: I guess I should have been a bit more clear in my original question, but I'm mostly wondering about the actual placement of objects in game, e.g. trees, buildings.
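    One standard trick for breaking up the grid look is jittered-grid placement: keep the grid for bookkeeping, but let each cell decide pseudo-randomly whether and where inside itself to place an object (Poisson-disc sampling is the fancier cousin with guaranteed minimum spacing). A C++ sketch under those assumptions (the hash function and names are illustrative):

        #include <cstdint>

        // Cheap integer hash giving a repeatable pseudo-random float in
        // [0, 1) for a given cell and salt, so the world is deterministic.
        static float hash01(int x, int y, uint32_t salt) {
            uint32_t h = salt;
            h ^= (uint32_t)x * 0x9E3779B9u;
            h ^= (uint32_t)y * 0x85EBCA6Bu;
            h ^= h >> 16; h *= 0x7FEB352Du; h ^= h >> 15;
            return (h & 0xFFFFFFu) / 16777216.0f;
        }

        // At most one tree per cell, jittered off the grid lines. cellSize
        // controls density; the jitter hides the underlying grid entirely.
        void placeTree(int cx, int cy, float cellSize) {
            if (hash01(cx, cy, 1u) < 0.3f) {             // 30% of cells get a tree
                float wx = (cx + hash01(cx, cy, 2u)) * cellSize;
                float wy = (cy + hash01(cy, cx, 3u)) * cellSize;
                // spawnTreeAt(wx, wy);  // game-specific
            }
        }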


  • Android Dynamic 2D Map

    - by Deltharis
    My problem is, I want to create a 2D tiled map. Yes, I know it's been asked a lot. I've seen answers that propose the use of Tiled; however, it only allows (or so it seems to me) generating static maps that do not change once generated. And I need a large empty uniform space of empty tiles, upon which players may place various buildings (some spanning more than one tile and logically being the same one). How do I approach this in Android? Do I make some kind of TableLayout, use an arbitrarily large number of rows and ImageViews (with my emptyTile), then somehow change image IDs from there, event-based? I'd think that only a portion of the map should be visible at a time, but I don't see how scrolling around could be part of that structure.
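    Whatever widget layer is used, the usual structure underneath is a plain 2D array of tile ids plus a camera rectangle, drawing only the visible range each frame. A language-agnostic sketch (shown here in C++; all names are illustrative):

        #include <algorithm>

        // Map data: one id per tile. A building spanning several tiles can
        // store the same building id in every tile it covers.
        const int MAP_W = 512, MAP_H = 512, TILE_PX = 32;
        int tiles[MAP_H][MAP_W] = {};   // 0 = empty tile

        // Draw only the tiles intersecting the camera rectangle, so a huge
        // map costs no more per frame than a small one. Scrolling is just
        // changing camX/camY.
        void drawVisible(int camX, int camY, int viewW, int viewH) {
            int x0 = std::max(0, camX / TILE_PX);
            int y0 = std::max(0, camY / TILE_PX);
            int x1 = std::min(MAP_W - 1, (camX + viewW) / TILE_PX);
            int y1 = std::min(MAP_H - 1, (camY + viewH) / TILE_PX);
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x) {
                    // drawTile(tiles[y][x], x * TILE_PX - camX, y * TILE_PX - camY);
                }
        }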


  • Detecting pixels in a rotated Texture2D in XNA?

    - by PugWrath
    I know things similar to this have been posted, but I'm still trying to find a good solution... I'm drawing Texture2D objects on the ground in my game, and for Mouse-Over or targeting methods, I'm detecting whether or not the pixel in that Texture at the mouse position is Color.Transparent. This works perfectly when I do not rotate the texture, but I'd like to be able to rotate textures to add realism/variety. However, I either need to create a new Texture2D that is rotated at the correct angle so that I can detect its pixels, or I need to find some other method of detection... Any thoughts?
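    For reference, the usual alternative to building a rotated copy of the texture is to inverse-rotate the query point into the unrotated texture's space and sample the original pixel data. A sketch of the math (plain C++, not the XNA API; names are illustrative):

        #include <cmath>

        // Map a world-space point into the unrotated texture's pixel space.
        // spriteX/Y is the sprite's position, originX/Y its rotation origin
        // (in texels), angle the rotation in radians.
        bool worldToTexel(float px, float py,
                          float spriteX, float spriteY,
                          float originX, float originY,
                          float angle, int texW, int texH,
                          int& outU, int& outV) {
            // Undo the translation, then rotate by -angle (the inverse).
            float dx = px - spriteX, dy = py - spriteY;
            float c = std::cos(-angle), s = std::sin(-angle);
            float u = dx * c - dy * s + originX;
            float v = dx * s + dy * c + originY;
            outU = (int)u; outV = (int)v;
            return outU >= 0 && outU < texW && outV >= 0 && outV < texH;
        }

        // Then test the original (unrotated) pixel array at (outU, outV)
        // against Color.Transparent, exactly as in the axis-aligned case.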


  • rotating spheres

    - by Dave
    I want to continuously rotate 2 spheres, but the rotation does not seem to work. Here is my code:

        float angle = 0.0f;

        void light() {
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            glEnable(GL_LIGHT1);

            // Create light components
            GLfloat positionlight1[] = { 9.0, 5.0, 1.0, 0.0 };
            GLfloat positionlight2[] = { 0.2, 2.5, 1.3, 0.0 };
            GLfloat light_ambient1[] = { 0.0, 0.0, 1.0, 1.0 };
            GLfloat light_diffuse[]  = { 1.0, 1.0, 1.0, 1.0 };

            glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient1);
            glLightfv(GL_LIGHT1, GL_DIFFUSE, light_diffuse);
            glLightfv(GL_LIGHT0, GL_POSITION, positionlight1);
            glLightfv(GL_LIGHT1, GL_POSITION, positionlight2);
        }

        void changeSize(int w, int h) {
            if (h == 0)        // prevent a divide by zero
                h = 1;         // by making height equal one

            glMatrixMode(GL_PROJECTION);   // select the projection matrix
            glLoadIdentity();              // reset the projection matrix
            glViewport(0, 0, w, h);        // reset the current viewport

            // Calculate the aspect ratio of the window
            gluPerspective(45.0f, (GLfloat)w / (GLfloat)h, 0.1f, 100.0f);
            glMatrixMode(GL_MODELVIEW);    // select the modelview matrix
        }

        void renderScene(void) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glPushMatrix();                // start the first sphere
            glTranslatef(0.0, 1.2, -6);
            glRotatef(angle, 0, 1, 0);     // the axis must be a direction such
                                           // as (0,1,0); the original passed
                                           // (0, 1.2, -6), which is the
                                           // translation, not an axis
            glutSolidSphere(1, 50, 50);    // note: an untextured sphere looks
                                           // identical under any rotation about
                                           // its own centre, so texture it or
                                           // add a feature to see the spin
            glPopMatrix();

            glPushMatrix();                // start the second sphere
            glTranslatef(0.0, -2, -6);
            glRotatef(angle, 0, 1, 0);
            glutSolidSphere(0.5, 50, 50);
            glPopMatrix();

            angle += 0.1f;                 // the original "angle=+0.1;" assigned
                                           // the constant +0.1 every frame, so
                                           // the angle never accumulated
            glutSwapBuffers();
        }

        int main(int argc, char **argv) {
            // init GLUT and create window
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
            glutInitWindowPosition(100, 100);
            glutInitWindowSize(500, 500);
            glutCreateWindow("Hello World");

            // register callbacks
            light();
            glutDisplayFunc(renderScene);
            glutReshapeFunc(changeSize);
            glutIdleFunc(renderScene);

            // enter GLUT event processing loop
            glutMainLoop();
            return 1;
        }

        Graphicstest::Graphicstest(void) {
        }

    In renderScene, where I draw, translate and rotate my 2 spheres, the spheres do not rotate continuously. What am I doing wrong?


  • How do I consolidate the differences between iOS and Android update loops?

    - by kkan
    I'm currently working on moving some Android NDK code to the iPhone. From looking at some samples, it seems that the main loop is handled for you: all you've got to do is override the render method on the view to handle the rendering, then add a selector to handle the update methods. The render method itself looks like it's attached to the window's refresh. But on Android I've got my own game loop that controls the rendering and updates, using C++'s time.h. Is it possible to implement the same here, bypassing Apple's loop? I'd really like to keep the structure of the code similar.


  • Why won't SVN work on the Google Code source? [closed]

    - by BluFire
    I installed TortoiseSVN. I then went to the desktop, right-clicked, and clicked on SVN Checkout. Under "URL of repository" I entered "http://apple-crunch.googlecode.com/svn/trunk/ apple-crunch-read-only", but after a few seconds the checkout fails, saying that the URL doesn't exist. I got the command line from http://code.google.com/p/apple-crunch/source/checkout I'm trying to get a direct copy instead of browsing through the source.


  • What is the purpose of the canonical view volume?

    - by breadjesus
    I'm currently learning OpenGL and haven't been able to find an answer to this question. After the projection matrix is applied to the view space, the view space is "normalized" so that all the points lie within the range [-1, 1]. This is generally referred to as the "canonical view volume" or "normalized device coordinates". While I've found plenty of resources telling me about how this happens, I haven't seen anything about why it happens. What is the purpose of this step?
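    For reference, the step being asked about can be written out as the perspective divide, in the standard OpenGL convention (general background, not from the post): the projection matrix outputs clip coordinates, and dividing by w maps everything inside the frustum into the fixed cube.

        \[
          \begin{pmatrix} x_n \\ y_n \\ z_n \end{pmatrix}
          = \frac{1}{w_c} \begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix},
          \qquad
          -w_c \le x_c, y_c, z_c \le w_c
          \;\Longleftrightarrow\;
          -1 \le x_n, y_n, z_n \le 1 .
        \]

    Clipping against those fixed planes is the same test for every camera, projection and window size, which is a large part of the answer usually given to the "why".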


  • What is better for the overall performance and feel of the game: one setInterval performing all the work, or many of them doing individual tasks?

    - by Bane
    This question is, I suppose, not limited to JavaScript, but it is the language I use to create my game, so I'll use it as an example. For now, I have structured my HTML5 game like this:

        var fps = 60;
        var game = new Game();
        // Wrapped in a function so `this` inside update still refers to
        // `game`; passing game.update directly would lose the binding.
        setInterval(function() { game.update(); }, 1000 / fps);

    And game.update looks like this:

        this.update = function() {
            this.parseInput();
            this.logic();
            this.physics();
            this.draw();
        };

    This seems a bit inefficient; maybe I don't need to do all of those things at once. An obvious alternative would be to have more intervals performing individual tasks, but is it worth it?

        var fps = 60;
        var game = new Game();
        setInterval(function() { game.draw(); }, 1000 / fps);
        // "a" is some constant, performing the same function as "fps"
        setInterval(function() { game.physics(); }, 1000 / a);
        ...

    With which approach should I go, and why? Is there a better alternative? Also, in case the second approach is the best, how frequently should I perform the tasks?


  • Is knowledge of what happens 'behind the scenes' (in the compiler, external DLLs, etc.) important?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software around that. That seems to be my problem, though... as I try to write code (I'm pretty fluent at C++), I always sit for enormous amounts of time in front of a text editor wondering how every line, statement, datum, function, etc. will correspond to every Assembly and machine instruction performed to do absolutely everything necessary for the kernel to allocate memory to run my compiled program, and all of the other hardware being used as well.

    For example, I would write

        cout << "Before memory changed" << endl;

    and run the debugger to get the Assembly for this, then try to reverse-disassemble the Assembly to machine code based on my ISA, and then research every .dll, library file, linked library, linking process, linker source code of the program, the makefile, the steps my kernel takes to process this compilation, and the hardware's part aside from the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling conventions, DDR3 RAM and disk drive, filesystem functioning, and so many other things).

    Am I going about programming wrong? I mean, I feel I should know everything that goes on underneath the English syntax of a computer program. But the problem is that the more I research every little thing, the less I actually accomplish. I can never finish anything because of this mentality, yet I feel compelled to know everything... what should I do?


  • How can I extract a list of Minecraft items and recipes?

    - by Sean
    I'm designing a robust system for resolving item dependencies in Minecraft and to do so, I need to maintain a database of items and recipes. Right now, this database has to be hand-crafted (no pun intended); I would like to know if it is possible to somehow query the Minecraft jars (or perhaps more realistically, grep through them) to extract this data automatically. How can this be done? The project is currently in Python, but it can still be ported to Java without much fuss at this stage. (For the curious.)


  • How can I go about learning to write a shader?

    - by Donutz
    So here's the background: I'm writing a game, just for my own amusement and education, really. I've already come to the conclusion that XNA is the way to go for graphics, I've bought a couple of books, I've gotten basic game graphics going, and that's great. Now I'm starting to get a little in-depth, and I'm starting to need to do stuff not covered in my (beginner) books. In particular, I need to display a sprite using a mask. Actually, what I need to do is display a generic sprite with a different color for each player.

    After banging around on the web, it seems the way to go is to have a color texture (one for each player) which I display using the mask, then display the generic part of the sprite. This has to be done dynamically, i.e. at runtime, because there are too many sprites to keep in memory if I try to generate all the permutations at startup. So, I need to use a shader. Fine. I've downloaded a sample shader program and managed to hit it with a hammer until it does something close enough to what I want that I know I'm on the right track.

    And here we come to my problem... I have no friggin' clue what I'm doing. While there are a lot of samples and such about shaders, no one ever actually explains what's going on. For instance, I can't find any real docs on tex2D. I feel like the guys in Zoolander poking at the computer. So, my question (yes, I have a question): what is a good URL or book to take me from dumskie to reasonably competent to write a basic shader?


  • Translating an object along its heading

    - by Kuros
    I am working on a simulation that requires me to have several objects moving around in 3D space (text output of their current position on the grid and heading is fine; I do not need graphics), and I am having some trouble getting objects to move along their relative headings. I have a basic understanding of vectors and matrices. I am using a vector to represent position, and I am also using Euler angles. I can translate one of my entities with a matrix along whatever axis, and I can alter their heading. For example, if I have an entity at 1, 1, 1 (order is XYZ) with a heading of 0, I can apply a translation matrix to get it to walk to 1, 1, 2 fine. However, if I change its heading to 270, it still walks to 1, 1, 3 instead of 2, 1, 2 as I desire. I have a feeling that my problem lies in not translating my matrix from world space to object space, but I am not sure how to go about that. How can I do this?

    Addition: I am using 3D vectors to represent current position and heading (using the three Euler angles). For now, all I want to do is have an entity walk in a square, reporting its current position at each step. So, assuming it starts at 10, 10, 10, I want it to walk as follows:

        10, 10, 10 -> 10, 10, 15
        10, 10, 15 ->  5, 10, 15
         5, 10, 15 ->  5, 10, 10
         5, 10, 10 -> 10, 10, 10

    My 1-Z-unit translation matrix is as follows:

        [1 0 0 0]
        [0 1 0 0]
        [0 0 1 1]
        [0 0 0 1]

    My rotation matrix is as follows:

        [ 0 0 1 0]
        [ 0 1 0 0]
        [-1 0 0 0]
        [ 0 0 0 1]
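    For what it's worth, the usual fix matches the world-vs-object-space hunch above: rotate the step vector by the entity's heading before adding it, rather than always stepping along world Z. A small C++ sketch (yaw-only heading; the angle convention is an assumption, so flip signs to match yours):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Move `pos` one step of length `dist` along a yaw heading given in
        // degrees (here 0 = +Z, 90 = +X; pick whichever convention you use).
        // The point is that the step vector is rotated, not the world axes.
        Vec3 stepAlongHeading(Vec3 pos, float headingDeg, float dist) {
            float r = headingDeg * 3.14159265f / 180.0f;
            // The local forward vector (0, 0, 1) rotated about Y by r:
            Vec3 forward = { std::sin(r), 0.0f, std::cos(r) };
            pos.x += forward.x * dist;
            pos.y += forward.y * dist;
            pos.z += forward.z * dist;
            return pos;
        }

    In matrix terms this is composing the step as R(heading) times the local translation, instead of applying the world-space translation matrix unchanged.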


  • Getting Started with Component Architecture: DI?

    - by ashes999
    I just moved away from MVC towards something more like a component architecture. I have no concept of messages yet (it's rough prototype code); objects just get internal properties and values of other classes for now. That issue aside, it seems like this is turning into an aspect-oriented-programming challenge. I've noticed that all entities with, for example, a position component will have similar properties (get/set X/Y/Z, rotation, velocity). Is it common practice, and/or a good idea, to push these behind an interface and use dependency injection to inject a generic class (e.g. PositionComponent) which already has all the boilerplate code? (I'm sure the answer will affect the model I use for message passing.)
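    For what it's worth, plain composition is the common way to get this without heavy DI machinery: the boilerplate lives once in a concrete component class that entities own. A C++ sketch (all names are illustrative assumptions):

        struct Vec3 { float x = 0, y = 0, z = 0; };

        // The boilerplate lives here exactly once: every entity that needs
        // position/rotation/velocity owns one of these instead of
        // re-implementing the accessors.
        struct PositionComponent {
            Vec3 position;
            Vec3 rotation;
            Vec3 velocity;

            void integrate(float dt) {
                position.x += velocity.x * dt;
                position.y += velocity.y * dt;
                position.z += velocity.z * dt;
            }
        };

        struct Entity {
            PositionComponent* position = nullptr;  // null if the entity has none
            // ... other components, supplied at construction (which is all
            // the "injection" a prototype usually needs)
        };

    An interface on top of PositionComponent only earns its keep once there are genuinely different implementations to swap in.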


  • ECS with Go - circular imports [migrated]

    - by Andreas
    I'm exploring both Go and entity-component systems. I understand how ECS works, and I'm trying to replicate what seems to be the go-to document on ECS, namely http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/ For performance, the document recommends using static arrays of every component type - that is, not arrays of component interfaces (arrays of pointers). The problem with this in Go is circular imports. I have one package, ecs, which contains the definitions for the Entity, Component and System types/interfaces as well as an EntityManager. Another package, ecs/components, contains the various components. Obviously, the ecs/components package depends on ecs. But to declare arrays of specific components in EntityManager, ecs would have to depend on ecs/components, creating a circular import. Is there any way of avoiding this? I am aware that normally a high-level system should not depend on lower systems. I also want to point out that using an array of pointers is probably fast enough for my purposes, but I'm interested in possible workarounds (for future reference). Thank you for your help!


  • Returning the correct multi-touch pointer id

    - by Max
    I've spent countless hours reading tutorials and looking at every question related to multi-touch here and on Stack Overflow, but I just cannot figure out how to do this correctly. I use a loop to get my pointer id; I don't see a lot of people doing this, but it's the only way I've managed to get it somewhat working. I have two joysticks on my screen, one for moving and one for controlling my sprite's rotation and the angle he shoots, like in Monster Shooter. Both of these work fine. My problem is that when I move my sprite at the same time as I'm shooting, my touchingPoint for movement is set to the touchingPoint of my shooting. Since the x and y are higher on the touchingPoint of my shooting (moving stick on the left side of the screen, shooting stick on the right side), my sprite speeds up; this creates an unwanted change in speed for my sprite. I will post my entire onTouch method here, with some variable changes to make it more understandable, since I do not know where I'm going wrong.

        public void update(MotionEvent event) {
            if (event == null && lastEvent == null) {
                return;
            } else if (event == null && lastEvent != null) {
                event = lastEvent;
            } else {
                lastEvent = event;
            }
            int pointerCount = event.getPointerCount();
            for (int i = 0; i < pointerCount; i++) {
                // getX()/getY() take a pointer *index*, not a pointer *id*.
                // The original code passed the id into getX()/getY() below,
                // which reads the wrong pointer's coordinates whenever ids
                // and indices diverge - exactly what happens with a second
                // finger down, and the likely cause of the speed-up.
                int x = (int) event.getX(i);
                int y = (int) event.getY(i);
                int id = event.getPointerId(i);
                int action = event.getActionMasked();
                int actionIndex = event.getActionIndex();
                String actionString;
                switch (action) {
                    case MotionEvent.ACTION_DOWN:
                        actionString = "DOWN";
                        break;
                    case MotionEvent.ACTION_UP:
                        shooting = false;  // when shooting is true, it shoots
                        dragging = false;  // when dragging is true, it moves
                        actionString = "UP";
                        break;
                    case MotionEvent.ACTION_POINTER_DOWN:
                        actionString = "PNTR DOWN";
                        break;
                    case MotionEvent.ACTION_POINTER_UP:
                        shooting = false;
                        dragging = false;
                        actionString = "PNTR UP";
                        break;
                    case MotionEvent.ACTION_CANCEL:
                        shooting = false;
                        dragging = false;
                        actionString = "CANCEL";
                        break;
                    case MotionEvent.ACTION_MOVE:
                        try {
                            if (x > 0 && x < touchingBox
                                    && y > touchingBox && y < view.getHeight()) {
                                movingPoint.x = x;
                                movingPoint.y = y;
                                dragging = true;
                            } else if (x > touchingBox && x < view.getWidth()
                                    && y > touchingBox && y < view.getHeight()) {
                                shootingPoint.x = x;
                                shootingPoint.y = y;
                                shooting = true;
                            } else {
                                shooting = false;
                                dragging = false;
                            }
                        } catch (Exception e) {
                        }
                        actionString = "MOVE";
                        break;
                    default:
                        actionString = "";
                }
            }
        }

    I wouldn't post this much code if I wasn't at an absolute loss about what I'm doing wrong. I simply cannot get a good understanding of how multi-touch works: basically movingPoint changes for both my first and second finger. I bind it to a box, but as long as I hold one finger within this box, it changes its value based on where my second finger touches. It moves in the right direction and nothing gives an error; the problem is the speed change - it's almost like it adds up the two touchingPoints.


  • Searching for fast skeletal animation algorithm

    - by igf
    Is it theoretically possible to dynamically animate a scene with 100-150 character meshes of 400 polys each on high-end GL ES 2.0 mobile devices, or should I definitely use prerendered keyframe animation? The scene has only one light source and precalculated shadow maps, with a view from the top like in Warcraft 3, and no other meshes or objects. 2D collision detection between objects is calculated via spatial hashing. It can be any algorithm besides ragdoll, but it must supply fast and simple skeletal animation per frame for 100+ low-poly meshes. Any ideas?


  • Randomly spawning bitmaps on canvas

    - by Toystoj
    I need some ideas in order to finish an algorithm. I'm randomly placing objects (bitmaps) on a canvas without overlapping, and the time needed to finish is my problem: when I need to fill, for example, 80% of the canvas, it takes too long. So I was thinking I should change the approach once the bitmaps take up 50% of the canvas. I want to tell the algorithm to generate new locations (x, y) only where there is free space. My question is: how do I generate a new location (x, y) in a place where there is free space?

    In summary, the things I know are:

      • object location (x, y)
      • the 4 corners (x, y) of each object
      • object width, height
      • canvas width, height

    Any suggestions?
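    One approach that avoids rejection sampling entirely (a sketch with assumed names, keyed to the 50% idea above): track free space on a coarse occupancy grid and sample uniformly from the list of free cells, so the cost of finding a spot stays constant even when the canvas is nearly full.

        #include <cstdlib>
        #include <vector>

        // Coarse occupancy grid over the canvas: if each cell is at least
        // as large as the biggest bitmap, a free cell guarantees a free spot.
        struct FreeCells {
            int cols, rows;
            std::vector<int> free;   // indices of still-empty cells

            FreeCells(int cols, int rows) : cols(cols), rows(rows) {
                for (int i = 0; i < cols * rows; ++i) free.push_back(i);
            }

            // Pick a random free cell and mark it used. Returns false when
            // the canvas is full. O(1) per placement, no retry loop.
            bool take(int& cellX, int& cellY) {
                if (free.empty()) return false;
                int k = std::rand() % (int)free.size();
                int cell = free[k];
                free[k] = free.back();   // swap-and-pop removal
                free.pop_back();
                cellX = cell % cols;
                cellY = cell / cols;
                return true;
            }
        };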


  • Two GUITextures that do not work together

    - by London2423
    I have two GUITextures that move a cube left and right. Strangely, together they don't work, but if I activate only one, it works. To be more specific: if I have the left GUITexture alone in the game, the cube moves left. If I have the right GUITexture activated alone, the cube moves right. All seems fine, I thought, but if I have both of them, the cube moves only right and not left. Where is the mistake? Here is the code inside the cube GameObject for the right move:

        void OnMouseDown() {   // Unity's callback is OnMouseDown (capital D);
                               // a method named OnMousedown is never invoked
            transform.position += Vector3.right * Time.deltaTime;
        }

    For the left move:

        void OnMouseDown() {
            transform.position += Vector3.left * Time.deltaTime;
        }

    And this is the left GUITexture code:

        // move the cube left
        Cube.GetComponent<Left>().enabled = true;
        left.transform.position += Vector3.left * Time.deltaTime;

    This is the right GUITexture:

        // move the cube right (the original enabled the Left component
        // here too, which by itself would explain only one direction working)
        Cube.GetComponent<Right>().enabled = true;
        right.transform.position += Vector3.right * Time.deltaTime;

    What is the reason for this? I hope someone can help me.


  • Updating entities in response to collisions - should this be in the collision-detection class or in the entity-updater class?

    - by Prog
    In a game I'm working on, there's a class responsible for collision detection. Its method detectCollisions(List<Entity> entities) is called from the main game loop. The code to update the entities (i.e. where the entities 'act': update their positions, invoke AI, etc.) is in a different class, in the method updateEntities(List<Entity> entities), also called from the game loop, after the collision detection. When there's a collision between two entities, usually something needs to be done - for example, zero the velocity of both entities in the collision, or kill one of the entities. It would be easy to have this code in the CollisionDetector class, e.g. in pseudocode:

        for (Entity entityA : entities) {
            for (Entity entityB : entities) {
                if (collision(entityA, entityB)) {
                    if (entityA instanceof Robot && entityB instanceof Robot) {
                        entityA.setVelocity(0, 0);
                        entityB.setVelocity(0, 0);
                    }
                    if (entityA instanceof Missile || entityB instanceof Missile) {
                        entityA.die();
                        entityB.die();
                    }
                }
            }
        }

    However, I'm not sure that updating the state of entities in response to a collision should be the job of CollisionDetector. Maybe it should be the job of EntityUpdater, which runs after the collision detection in the game loop. Is it okay to have the code responding to collisions in the collision detection system? Or should the collision detection class only detect collisions, report them to some other class, and have that class affect the state of the entities?
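    The decoupled variant described in the last paragraph is often implemented as an event queue: the detector only records colliding pairs, and a separate resolver applies the game rules afterwards. A C++ sketch (names are illustrative, not from the post):

        #include <vector>

        struct Entity { /* position, shape, game-specific flags... */ };

        struct CollisionEvent { Entity* a; Entity* b; };

        // Geometry-only test; game rules do not belong here.
        bool overlaps(const Entity& a, const Entity& b) {
            (void)a; (void)b;
            return false;   // placeholder: the real shape test goes here
        }

        // Detection pass: record pairs, decide nothing.
        std::vector<CollisionEvent> detectCollisions(std::vector<Entity*>& entities) {
            std::vector<CollisionEvent> events;
            for (size_t i = 0; i < entities.size(); ++i)
                for (size_t j = i + 1; j < entities.size(); ++j)   // each pair once
                    if (overlaps(*entities[i], *entities[j]))
                        events.push_back({entities[i], entities[j]});
            return events;
        }

        // Response pass, run later in the game loop: all the "robots stop,
        // missiles kill" rules live here, so the detector stays reusable
        // and the rules stay in one place.
        void resolveCollisions(const std::vector<CollisionEvent>& events) {
            for (const CollisionEvent& e : events) {
                (void)e;   // apply game rules to e.a / e.b
            }
        }

    As a bonus, iterating j from i + 1 also removes the double-reporting (A,B)/(B,A) and self-collision that the nested loop in the question produces.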

