Search Results

Search found 33291 results on 1332 pages for 'development environment'.

Page 539 of 1332

  • Why do we use the Pythagorean theorem in game physics?

    - by Starkers
    I've recently learned that we use the Pythagorean theorem a lot in our physics calculations and I'm afraid I don't really get the point. Here's an example from a book to make sure an object doesn't travel faster than a MAXIMUM_VELOCITY constant in the horizontal plane: MAXIMUM_VELOCITY = <any number>; SQUARED_MAXIMUM_VELOCITY = MAXIMUM_VELOCITY * MAXIMUM_VELOCITY; function animate(){ var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity); if( squared_horizontal_velocity >= SQUARED_MAXIMUM_VELOCITY ){ scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY; x_velocity = x_velocity / scalar; z_velocity = z_velocity / scalar; } } Let's try this with some numbers: An object is attempting to move 5 units in x and 5 units in z. It should only be able to move 5 units horizontally in total! MAXIMUM_VELOCITY = 5; SQUARED_MAXIMUM_VELOCITY = 5 * 5; SQUARED_MAXIMUM_VELOCITY = 25; function animate(){ var x_velocity = 5; var z_velocity = 5; var squared_horizontal_velocity = (x_velocity * x_velocity) + (z_velocity * z_velocity); var squared_horizontal_velocity = 5 * 5 + 5 * 5; var squared_horizontal_velocity = 25 + 25; var squared_horizontal_velocity = 50; // if( squared_horizontal_velocity >= SQUARED_MAXIMUM_VELOCITY ){ if( 50 >= 25 ){ scalar = squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY; scalar = 50 / 25; scalar = 2.0; x_velocity = x_velocity / scalar; x_velocity = 5 / 2.0; x_velocity = 2.5; z_velocity = z_velocity / scalar; z_velocity = 5 / 2.0; z_velocity = 2.5; // new_horizontal_velocity = x_velocity + z_velocity // new_horizontal_velocity = 2.5 + 2.5 // new_horizontal_velocity = 5 } } Now this works well, but we can do the same thing without Pythagoras: MAXIMUM_VELOCITY = 5; function animate(){ var x_velocity = 5; var z_velocity = 5; var horizontal_velocity = x_velocity + z_velocity; var horizontal_velocity = 5 + 5; var horizontal_velocity = 10; // if( horizontal_velocity >= MAXIMUM_VELOCITY ){ if( 10 >= 5 ){ scalar = horizontal_velocity / MAXIMUM_VELOCITY; scalar = 10 / 5; scalar = 2.0; x_velocity = x_velocity / scalar; x_velocity = 5 / 2.0; x_velocity = 2.5; z_velocity = z_velocity / scalar; z_velocity = 5 / 2.0; z_velocity = 2.5; // new_horizontal_velocity = x_velocity + z_velocity // new_horizontal_velocity = 2.5 + 2.5 // new_horizontal_velocity = 5 } } Benefits of doing it without Pythagoras: Fewer lines Within those lines, it's easier to read what's going on ...and it takes less time to compute, as there are fewer multiplications Seems to me like computers and humans get a better deal without the Pythagorean theorem! However, I'm sure I'm wrong as I've seen Pythagoras' theorem in a number of reputable places, so I'd like someone to explain to me the benefit of using the Pythagorean theorem to a maths newbie. Does this have anything to do with unit vectors? To me a unit vector is when we normalize a vector and turn it into a fraction. We do this by dividing the vector by a larger constant. I'm not sure what constant it is. The total size of the graph? Anyway, because it's a fraction, I take it, a unit vector is basically a graph that can fit inside a 3D grid with the x-axis running from -1 to 1, z-axis running from -1 to 1, and the y-axis running from -1 to 1. That's literally everything I know about unit vectors... not much :P And I fail to see their usefulness. Also, we're not really creating a unit vector in the above examples. Should I be determining the scalar like this: // a mathematical work-around of my own invention. 
There may be a cleverer way to do this! I've also made up my own terms such as 'divisive_scalar' so don't bother googling var divisive_scalar = (squared_horizontal_velocity / SQUARED_MAXIMUM_VELOCITY); var divisive_scalar = ( 50 / 25 ); var divisive_scalar = 2; var multiplicative_scalar = (divisive_scalar / (2*divisive_scalar)); var multiplicative_scalar = (2 / (2*2)); var multiplicative_scalar = (2 / 4); var multiplicative_scalar = 0.5; x_velocity = x_velocity * multiplicative_scalar x_velocity = 5 * 0.5 x_velocity = 2.5 Again, I can't see why this is better, but it's more "unit-vector-y" because the multiplicative_scalar is a unit_vector? As you can see, I use words such as "unit-vector-y" so I'm really not a maths whiz! Also aware that unit vectors might have nothing to do with Pythagorean theorem so ignore all of this if I'm barking up the wrong tree. I'm a very visual person (3D modeller and concept artist by trade!) and I find diagrams and graphs really, really helpful so as many as humanely possible please!
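
    The point the post is circling around: x_velocity + z_velocity is not the horizontal speed; the actual speed is the length of the velocity vector, sqrt(x*x + z*z), which is where Pythagoras comes in. Adding the components over-counts diagonal movement (5 + 5 = 10, but the real speed is sqrt(50), roughly 7.07) and breaks completely when one component is negative. A minimal Java sketch of the usual clamp, with made-up names (clampHorizontalSpeed is not from the post):

        public class VelocityClamp {
            static final double MAX_VELOCITY = 5.0;

            // Clamps the horizontal (x, z) velocity to MAX_VELOCITY without changing its direction.
            static double[] clampHorizontalSpeed(double xVel, double zVel) {
                double squaredSpeed = xVel * xVel + zVel * zVel;       // Pythagoras: |v|^2
                if (squaredSpeed > MAX_VELOCITY * MAX_VELOCITY) {      // compare squares, no sqrt yet
                    double speed = Math.sqrt(squaredSpeed);            // actual length of the vector
                    double scale = MAX_VELOCITY / speed;               // shrink factor, keeps direction
                    xVel *= scale;
                    zVel *= scale;
                }
                return new double[] { xVel, zVel };
            }

            public static void main(String[] args) {
                // Moving 5 in x and 5 in z is really sqrt(50) ~= 7.07 units of speed, not 5 or 10.
                double[] v = clampHorizontalSpeed(5, 5);
                System.out.printf("clamped: (%.3f, %.3f), length %.3f%n",
                        v[0], v[1], Math.sqrt(v[0] * v[0] + v[1] * v[1]));   // prints length 5.000
            }
        }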

    Read the article

  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation: a = [2,6] b = [1,4] c = [0,8] a . b . c = (2*6)+(1*4)+(0*8) = 12 + 4 + 0 = 16 What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we times a unit vector by to get a vector that has a scaled up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows: sqrt( ax * ax + ay * ay ) + sqrt( bx * bx + by * by ) + sqrt( cx * cx + cy * cy) sqrt( 2 * 2 + 6 * 6 ) + sqrt( 1 * 1 + 4 * 4 ) + sqrt( 0 * 0 + 8 * 8) sqrt( 4 + 36 ) + sqrt( 1 + 16 ) + sqrt( 0 + 64) sqrt( 40 ) + sqrt( 17 ) + sqrt( 64) 6.3 + 4.1 + 8 10.4 + 8 18.4 So I don't really get this diagram: Attempting with sensible numbers: a = [1,0] b = [4,3] a . b = (1*0) + (4*3) = 0 + 12 = 12 So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right: the 'a.b' vector = [4,0] sqrt( x*x + y*y ) sqrt( 4*4 + 0*0 ) sqrt( 16 + 0 ) 4 So what is 12 describing?
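
    A worked example may help here: the dot product pairs up matching components of two different vectors (a.b = ax*bx + ay*by), and the number it returns equals |a||b|cos(angle), so when a is a unit vector it is the length of b projected onto a. A small Java sketch with made-up names (dot and length are illustrative, not from the post):

        public class DotProductDemo {
            // Dot product pairs matching components of the two vectors: a.b = ax*bx + ay*by.
            static double dot(double ax, double ay, double bx, double by) {
                return ax * bx + ay * by;
            }

            // Pythagorean length of one vector.
            static double length(double x, double y) {
                return Math.sqrt(x * x + y * y);
            }

            public static void main(String[] args) {
                double ax = 1, ay = 0;   // unit vector along x
                double bx = 4, by = 3;   // length 5
                double d = dot(ax, ay, bx, by);                           // (1*4) + (0*3) = 4
                double cosAngle = d / (length(ax, ay) * length(bx, by));  // a.b = |a||b|cos(theta)
                System.out.println("a.b = " + d);                         // 4
                System.out.println("cos(angle) = " + cosAngle);           // 0.8, about 36.87 degrees
                // Because a is a unit vector, a.b is also the length of b projected onto a (here 4).
            }
        }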

    Read the article

  • Why do my pyramids fade to black and then back to colour again?

    - by geminiCoder
    I have the following vertecies and norms GLfloat verts[36] = { -0.5, 0, 0.5, 0, 0, -0.5, 0.5, 0, 0.5, 0, 0, -0.5, 0.5, 0, 0.5, 0, 1, 0, -0.5, 0, 0.5, 0, 0, -0.5, 0, 1, 0, 0.5, 0, 0.5, -0.5, 0, 0.5, 0, 1, 0 }; GLfloat norms[36] = { 0, -1, 0, 0, -1, 0, 0, -1, 0, -1, 0.25, 0.5, -1, 0.25, 0.5, -1, 0.25, 0.5, 1, 0.25, -0.5, 1, 0.25, -0.5, 1, 0.25, -0.5, 0, -0.5, -1, 0, -0.5, -1, 0, -0.5, -1 }; I am writing my fists Open GL game, But I need to know for sure if my Normals are correct as the colours aren't rendering correctly. my Pyramids are coloured then fade to black every half rotation then back again. My app so far is based on the boiler plate code provided by apple. heres my modified setUp Method [EAGLContext setCurrentContext:self.context]; [self loadShaders]; self.effect = [[GLKBaseEffect alloc] init]; self.effect.light0.enabled = GL_TRUE; self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f); glEnable(GL_DEPTH_TEST); glGenVertexArraysOES(1, &_vertexArray); //create vertex array glBindVertexArrayOES(_vertexArray); glGenBuffers(1, &_vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(verts) + sizeof(norms), NULL, GL_STATIC_DRAW); //create vertex buffer big enough for both verts and norms and pass NULL as data.. uint8_t *ptr = (uint8_t *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES); //map buffer to pass data to it memcpy(ptr, verts, sizeof(verts)); //copy verts memcpy(ptr+sizeof(verts), norms, sizeof(norms)); //copy norms to position after verts glUnmapBufferOES(GL_ARRAY_BUFFER); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0)); //tell GL where verts are in buffer glEnableVertexAttribArray(GLKVertexAttribNormal); glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(sizeof(verts))); //tell GL where norms are in buffer glBindVertexArrayOES(0); And the update method. - (void)update { float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); self.effect.transform.projectionMatrix = projectionMatrix; GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f); // Compute the model view matrix for the object rendered with GLKit GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); self.effect.transform.modelviewMatrix = modelViewMatrix; // Compute the model view matrix for the object rendered with ES2 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL); _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); _rotation += self.timeSinceLastUpdate * 0.5f; } But providing I understand this correct one pyramid is using the GLKit base effect shaders and the other the shaders which are included in the project. So for both of them to have the same error, I thought it would be the Norms?
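
    For reference, a common way to sanity-check normals is to derive them from the triangle itself: the cross product of two edges is perpendicular to the face, and lighting calculations generally assume unit-length normals, while several of the norms above are not unit length (e.g. (-1, 0.25, 0.5)). A rough Java sketch of that calculation, with made-up names (faceNormal is not from the post):

        public class FaceNormal {
            // Computes a unit normal for the triangle (p0, p1, p2) from the cross product of two edges.
            // With counter-clockwise winding seen from outside, this points outward.
            static float[] faceNormal(float[] p0, float[] p1, float[] p2) {
                float ux = p1[0] - p0[0], uy = p1[1] - p0[1], uz = p1[2] - p0[2]; // edge p0 -> p1
                float vx = p2[0] - p0[0], vy = p2[1] - p0[1], vz = p2[2] - p0[2]; // edge p0 -> p2
                float nx = uy * vz - uz * vy;                                     // cross product u x v
                float ny = uz * vx - ux * vz;
                float nz = ux * vy - uy * vx;
                float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
                return new float[] { nx / len, ny / len, nz / len };              // normalize
            }

            public static void main(String[] args) {
                // One side face of the pyramid from the question (assumed vertex ordering).
                float[] n = faceNormal(new float[] { -0.5f, 0, 0.5f },
                                       new float[] {  0,    0, -0.5f },
                                       new float[] {  0,    1,  0    });
                System.out.printf("normal: (%.3f, %.3f, %.3f)%n", n[0], n[1], n[2]);
            }
        }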

    Read the article

  • Are there any Java based libraries that provide game mapping features?

    - by James.Elsey
    Hi All, I'm working on a Java web based game in my spare time (springMVC / JSPs etc), and I'm wondering what are my options for dealing with the "game world" or mapping element. My game will be 2d / text based, so I have no need for any OpenGL / Flash etc. My initial idea was to use Google maps and provide a custom overlay, but I want to know if there are any alternatives? For example, if I create a 2d map with all my zones, are there any libraries that will help me plot players, work out distances and so forth? Regards

    Read the article

  • Connecting 2 Vertices in 3DS Max?

    - by Reanimation
    How do you connect two vertices in 3DS Max 2013? I have two vertices (actually several) which I wish to connect with a line to create an edge. I have tried everything I can think of and done several Google searches, but they only turn up the method for older versions, which says to use the "connect" button... But I can't find the connect button in my version (see below). This is what my menu looks like: These are the vertices I'm trying to connect: Basically, I've edited an STL file and deleted some edges and vertices. Now I want to fill the gaps and triangulate what's left. Thanks.

    Read the article

  • What economic books would you suggest for learning about economic valuation of goods and simulations thereof?

    - by Rushyo
    I'm looking to create an economic model for a game based on goods created procedurally. Every natural resource and produced good would be procedurally generated, with certain goods being assigned certain uses. Fakesium might be used for the production of Weapon A and produced from Fakesium factories which use Dilithium and Widgets as reagents, where Widgets are also the product of Foo and Bar The problem is not creating the resources and their various production utlities - but getting the game's AI empires and merchants to (Addendum: somewhat) correctly value the goods according to their scarcity, utility and production costs. I need to create a simulation of goods which allows the various game factions to assign a common value denominator (credits) to each resource, depending on how much its worth to that empire. I see the simulation being something like: "I have a high requirement for Weapon A. Since I don't have much of Fakesium, which is needed for Weapon A - I must have a high demand for Fakesium. If I can acquire Fakesium, devalue it. If not, increase its value - and also increase demand for Dilithium and Widgets too." This is very naive - because it may be much much cheaper for the empire to simply purchase Dilithium and Widgets directly rather than purchasing Fakesium, for example. Another example is two resources might allow the creation of Weapon A (Fakesium and Lieron), so we'd need to consider that. I've been scratching my head over the problem and it keeps growing. By the time the player joins the world, I'd expect enough iterations of this process to have occurred that prices would have largely normalised - and would then only trigger rarely to compensate for major changes (eg. if the player blows up the world's only Foo mine!) Could anyone suggest resources (books, largely) which outline this style of modelling, preferably in the context of simulations? Since this problem would never occur outside fantasy worlds, I figured this is probably the most likely place to find people who have encountered similar problems and I'm sure there's people who know of good places for Games Developers to start looking at less specific economic theory too. Additionally, does anyone know of any developers with blogs whose games or research applications perform similar modelling? EDIT: I think I should underline that I'm not looking for optimal solutions. I'm looking to make the actors impulsive - making rudimentary decisions based on fuzzy inputs about what they care about or don't. I'm aiming to understand the problem area better not derive answers. All the textbooks I've found seem to be about real-world economics or how to solve complex theoretical problems, neither of which are terribly relevant to the actor's decision making.

    Read the article

  • How do I draw a scrolling background?

    - by droidmachine
    How can I draw background tile in my 2D side-scrolling game? Is that loop logical for OpenGL es? My tile 2400x480. Also I want to use parallax scrolling for my game. batcher.beginBatch(Assets.background); for(int i=0; i<100; i++) batcher.drawSprite(0+2400*i, 240, 2400, 480, Assets.backgroundRegion); batcher.endBatch(); UPDATE And thats my onDrawFrame.I'm sending deltaTime for fps control. public void onDrawFrame(GL10 gl) { GLGameState state = null; synchronized(stateChanged) { state = this.state; } if(state == GLGameState.Running) { float deltaTime = (System.nanoTime()-startTime) / 1000000000.0f; startTime = System.nanoTime(); screen.update(deltaTime); screen.present(deltaTime); } if(state == GLGameState.Paused) { screen.pause(); synchronized(stateChanged) { this.state = GLGameState.Idle; stateChanged.notifyAll(); } } if(state == GLGameState.Finished) { screen.pause(); screen.dispose(); synchronized(stateChanged) { this.state = GLGameState.Idle; stateChanged.notifyAll(); } } }
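
    One way to avoid batching 100 copies of a 2400-wide sprite every frame is to work out which one or two copies actually overlap the camera, and to scroll each background layer with a scaled-down camera position for the parallax effect. A small Java sketch of that idea, with made-up names (drawBackground and parallaxFactor are illustrative, not from the post):

        public class BackgroundScroller {
            // Prints the copies of the background tile that overlap the camera this frame.
            // In the real game each println would be a batcher.drawSprite(...) call.
            static void drawBackground(float cameraX, float viewportWidth,
                                       float tileWidth, float parallaxFactor) {
                float layerCameraX = cameraX * parallaxFactor;        // far layers scroll more slowly
                int firstTile = (int) Math.floor((layerCameraX - viewportWidth / 2) / tileWidth);
                int lastTile  = (int) Math.floor((layerCameraX + viewportWidth / 2) / tileWidth);
                for (int i = firstTile; i <= lastTile; i++) {
                    float x = i * tileWidth;                          // left edge of this copy
                    System.out.println("draw background copy at x = " + x);
                    // e.g. batcher.drawSprite(x + tileWidth / 2, 240, tileWidth, 480, backgroundRegion);
                }
            }

            public static void main(String[] args) {
                // Camera at x = 3000 with an 800-wide viewport; the far layer moves at half speed.
                drawBackground(3000, 800, 2400, 0.5f);
            }
        }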

    Read the article

  • Line Intersection from parametric equation

    - by Sidar
    I'm sure this question has been asked before. However, I'm trying to connect the dots by translating an equation on paper into an actual function. I thought It would be interesting to ask here instead on the Math sites (since it's going to be used for games anyway ). Let's say we have our vector equation : x = s + Lr; where x is the resulting vector, s our starting point/vector. L our parameter and r our direction vector. The ( not sure it's called like this, please correct me ) normal equation is : x.n = c; If we substitute our vector equation we get: (s+Lr).n = c. We now need to isolate L which results in L = (c - s.n) / (r.n); L needs to be 0 < L < 1. Meaning it needs to be between 0 and 1. My question: I want to know what L is so if I were to substitute L for both vector equation (or two lines) they should give me the same intersection coordinates. That is if they intersect. But I can't wrap my head around on how to use this for two lines and find the parameter that fits the intersection point. Could someone with a simple example show how I could translate this to a function/method?
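
    Applying the same substitution trick to two parametric lines p = s1 + t*r1 and q = s2 + u*r2 gives one equation per axis; solving them yields t and u, and the segments only intersect if both parameters land between 0 and 1. A small Java sketch of such a function (intersect and cross are made-up names, and this is one of several equivalent formulations):

        public class SegmentIntersection {
            // 2D "cross product" (the z component of the 3D cross); zero means parallel directions.
            static double cross(double ax, double ay, double bx, double by) {
                return ax * by - ay * bx;
            }

            // Lines: p = s1 + t*r1 and q = s2 + u*r2. Returns {t, u}, or null if the lines are parallel.
            // The segments themselves intersect only if both t and u end up between 0 and 1.
            static double[] intersect(double s1x, double s1y, double r1x, double r1y,
                                      double s2x, double s2y, double r2x, double r2y) {
                double denom = cross(r1x, r1y, r2x, r2y);
                if (denom == 0) return null;                              // parallel or collinear
                double dx = s2x - s1x, dy = s2y - s1y;
                double t = cross(dx, dy, r2x, r2y) / denom;
                double u = cross(dx, dy, r1x, r1y) / denom;
                return new double[] { t, u };
            }

            public static void main(String[] args) {
                // Segment A from (0,0) to (4,4); segment B from (0,4) to (4,0).
                double[] tu = intersect(0, 0, 4, 4, 0, 4, 4, -4);
                if (tu != null && tu[0] >= 0 && tu[0] <= 1 && tu[1] >= 0 && tu[1] <= 1) {
                    double ix = 0 + tu[0] * 4, iy = 0 + tu[0] * 4;        // substitute t back into line A
                    System.out.println("intersection at (" + ix + ", " + iy + ")");  // (2.0, 2.0)
                }
            }
        }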

    Read the article

  • Exporting UV coords from Blender

    - by Soapy
    So I have searched on Google and various other websites but I've not found an answer. The only answers I did find did not work. So my question is: how do I get UV coords from Blender (2.63)? Currently I'm writing my own custom file exporter, and so far I have managed to export vertices and their normals. Is there a way to export the UV coords? N.B. I'm currently trying to figure it out using a simple cube that is unwrapped and has a texture applied to it.

    Read the article

  • Adding delay between damage

    - by iQue
    I have a bunch of enemies chasing my main-character, and if they intersect I want them to damage him and that's all good. The problem is that right now they damage him as long as they stand around him, every frame! and since it gets called every frame my character's HP reaches 0 almost instantly. I've tried adding delay and I've tried a timertask, but can't get it to work. This is the code I use to check for intersection: private void checkCollision(Canvas canvas) { synchronized (getHolder()) { Rect h1 = happy.getBounds(); for (int i = 0; i < enemies.size(); i++) { for (int j = 0; j < bullets.size(); j++) { Rect b1 = bullets.get(j).getBounds(); Rect e1 = enemies.get(i).getBounds(); if (b1.intersect(e1)) { enemies.get(i).damageHP(5); bullets.remove(j); } if(e1.intersect(h1)){ happy.damageHP(5); // this is the statement that needs some sort of delay, I want them to damage him every 2 seconds they intersect him. } if(enemies.get(i).getHP() <= 0){ enemies.get(i).death(canvas, enemies); score.incScore(5); break; } if(happy.getHP() <= 0){ score.incScore(-50); //end-screen } } } } } If anyone knows the logic to do this please do tell.
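
    A common way to get this behaviour is to give each enemy a timestamp of its last successful hit and only let it damage the player again once the chosen interval has passed, while the intersection test itself still runs every frame. A minimal Java sketch of that idea (Enemy.tryDamage and DAMAGE_INTERVAL_MS are made-up names, not from the post):

        public class DamageCooldown {
            static final long DAMAGE_INTERVAL_MS = 2000;   // 2 seconds between hits from the same enemy

            // Hypothetical enemy: remembers the last time it hurt the player.
            static class Enemy {
                long lastDamageTime = 0;

                // Returns true if this enemy is allowed to damage the player right now.
                boolean tryDamage(long now) {
                    if (now - lastDamageTime >= DAMAGE_INTERVAL_MS) {
                        lastDamageTime = now;
                        return true;
                    }
                    return false;
                }
            }

            public static void main(String[] args) throws InterruptedException {
                Enemy enemy = new Enemy();
                int playerHp = 100;
                for (int frame = 0; frame < 5; frame++) {
                    long now = System.currentTimeMillis();
                    boolean intersecting = true;           // pretend the bounding boxes overlap every frame
                    if (intersecting && enemy.tryDamage(now)) {
                        playerHp -= 5;                     // damage lands at most once per interval
                        System.out.println("hit! hp = " + playerHp);
                    }
                    Thread.sleep(16);                      // ~60 fps frame time
                }
            }
        }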

    Read the article

  • Find Nearest Object

    - by ultifinitus
    I have a fairly sizable game engine created, and I'm adding some needed features, such as this, how do I find the nearest object from a list of points? In this case, I could simply use the Pythagorean theorem to find the distance, and check the results. I know I can't simply add x and y, because that's the distance to the object, if you only took right angle turns. However I'm wondering if there's something else I could do? I also have a collision system, where essentially I turn objects into smaller objects on a smaller grid, kind of like a minimap, and only if objects exist in the same gridspace do I check for collisions, I could do the same thing, only make the gridspace larger to check for closeness. (rather than checking every. single. object) however that would take additional setup in my base class and clutter up the already cluttered object. TL;DR Question: Is there something efficient and accurate that I can use to detect which object is closest, based on a list of points and sizes?
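
    One detail worth noting: for a "which is closest" comparison the square root is unnecessary, because squared distances sort in the same order as real distances. A small Java sketch of a linear scan over a list of points (Obj and nearest are made-up names):

        import java.util.Arrays;
        import java.util.List;

        public class NearestObject {
            // Hypothetical game object; only its position matters for this search.
            static class Obj {
                final String name; final double x, y;
                Obj(String name, double x, double y) { this.name = name; this.x = x; this.y = y; }
            }

            // Linear scan comparing squared distances: same ordering as real distances, no sqrt needed.
            static Obj nearest(double px, double py, List<Obj> objects) {
                Obj best = null;
                double bestDistSq = Double.MAX_VALUE;
                for (Obj o : objects) {
                    double dx = o.x - px, dy = o.y - py;
                    double distSq = dx * dx + dy * dy;     // Pythagoras without the square root
                    if (distSq < bestDistSq) {
                        bestDistSq = distSq;
                        best = o;
                    }
                }
                return best;
            }

            public static void main(String[] args) {
                List<Obj> objs = Arrays.asList(new Obj("crate", 10, 2), new Obj("door", 3, 4), new Obj("key", -8, 1));
                System.out.println(nearest(0, 0, objs).name);   // "door": sqrt(3*3 + 4*4) = 5 is the smallest
            }
        }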

    Read the article

  • How to handle character/item states with binary flags and concrete operations in C++?

    - by Piperoman
    I have the following problem. An item can have a lot of states: NORMAL = 0000000 DRY = 0000001 HOT = 0000010 BURNING = 0000100 WET = 0001000 COLD = 0010000 FROZEN = 0100000 POISONED = 1000000 An item can have several states at the same time, but not all of them: It is impossible to be dry and wet at the same time. If you COLD a WET item, it turns into FROZEN. If you HOT a WET item, it turns into NORMAL. An item can be BURNING and POISONED. Etc. I have tried to assign binary flags to the states and use AND to combine different states, checking beforehand whether it is possible to do so or whether to change to another status. Does there exist a concrete approach to solve this problem efficiently without an interminable switch that checks every state against every new state? It is relatively easy to check 2 different states, but if a third state exists it is not trivial to do.
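
    One common approach is exactly the bit flags described above, plus a single "apply effect" function that resolves the handful of special interactions before OR-ing the new flag in, so the rules live in one place rather than in a switch over every combination. The question is about C++, but a rough sketch of the idea in Java (apply and the rule set shown are illustrative assumptions):

        public class ItemState {
            // One bit per state, matching the table in the question.
            static final int NORMAL = 0, DRY = 1, HOT = 1 << 1, BURNING = 1 << 2,
                             WET = 1 << 3, COLD = 1 << 4, FROZEN = 1 << 5, POISONED = 1 << 6;

            // Applies one incoming effect and resolves the interactions in a single place.
            static int apply(int state, int effect) {
                if (effect == COLD && (state & WET) != 0) {        // cooling a wet item freezes it
                    return (state & ~WET) | FROZEN;
                }
                if (effect == HOT && (state & WET) != 0) {         // heating a wet item dries it out
                    return state & ~WET;                           // back to whatever else it was
                }
                if (effect == WET) {
                    state &= ~DRY;                                 // wet and dry are mutually exclusive
                }
                if (effect == DRY) {
                    state &= ~WET;
                }
                return state | effect;                             // otherwise the states just stack
            }

            public static void main(String[] args) {
                int item = apply(NORMAL, WET);                     // item gets wet
                item = apply(item, COLD);                          // ...then cold -> frozen
                System.out.println((item & FROZEN) != 0);          // true
                System.out.println((item & WET) != 0);             // false
            }
        }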

    Read the article

  • Profiling and containing memory per system

    - by chadb
    I have been interested in profiling and keeping a managed memory pool for each subsystem, so I could get statistics on how much memory is being used by something such as sound or graphics. However, what is the best design for doing this? I was thinking of using multiple allocators and just using one per subsystem; however, that would result in global variables for my allocators (or so it would seem to me). Another approach I have seen/been suggested is to just overload new and pass in an allocator as a parameter. I had a similar question over on Stack Overflow here with a bounty; however, it seems as if perhaps I was too vague, or there just are not enough people with knowledge of the subject.

    Read the article

  • How to solve an ArrayList IndexOutOfBoundsException?

    - by iQue
    I'm getting: 09-02 17:15:39.140: E/AndroidRuntime(533): java.lang.IndexOutOfBoundsException: Invalid index 1, size is 1 09-02 17:15:39.140: E/AndroidRuntime(533): at java.util.ArrayList.throwIndexOutOfBoundsException(ArrayList.java:251) when I'm killing enemies using this method: private void checkCollision() { Rect h1 = happy.getBounds(); for (int i = 0; i < enemies.size(); i++) { for (int j = 0; j < bullets.size(); j++) { Rect b1 = bullets.get(j).getBounds(); Rect e1 = enemies.get(i).getBounds(); if (b1.intersect(e1)) { enemies.get(i).damageHP(5); bullets.remove(j); Log.d("TAG", "HERE: LOLTHEYTOUCHED"); } if (h1.intersect(e1)){ happy.damageHP(5); } if(enemies.get(i).getHP() <= 0){ enemies.remove(i); } if(happy.getHP() <= 0){ //end-screen !!!!!!! } } } } using this ArrayList: private ArrayList<Enemy> enemies = new ArrayList<Enemy>(); and adding to the array like this: public void createEnemies() { Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.female); if (enemyCounter < 24) { enemies.add(new Enemy(bmp, this, controls)); } enemyCounter++; } I don't really understand what the problem is. I've been looking around for a while but can't really find anything that helps me. If you know, or if you can link me someplace where they have a solution for a similar problem, I'll be a very happy camper! Thanks for your time.
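
    The stack trace points at indexes going stale while elements are removed inside the loops: after enemies.remove(i) or bullets.remove(j), the old index can fall outside the shrunken list. A rough Java sketch of the same collision pass written with iterators, which keep removal safe (Enemy, Bullet and checkCollision here are simplified stand-ins for the classes in the post):

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        public class SafeRemoval {
            static class Enemy { int hp = 10; void damage(int d) { hp -= d; } }
            static class Bullet { boolean hits(Enemy e) { return true; } }   // pretend every bullet hits

            // Removing through the iterator keeps the indices valid; once an enemy is dead we stop
            // touching it and move on, instead of calling enemies.get(i) again with a stale index.
            static void checkCollision(List<Enemy> enemies, List<Bullet> bullets) {
                Iterator<Enemy> enemyIt = enemies.iterator();
                while (enemyIt.hasNext()) {
                    Enemy enemy = enemyIt.next();
                    Iterator<Bullet> bulletIt = bullets.iterator();
                    while (bulletIt.hasNext()) {
                        Bullet bullet = bulletIt.next();
                        if (bullet.hits(enemy)) {
                            enemy.damage(5);
                            bulletIt.remove();              // safe removal during iteration
                        }
                    }
                    if (enemy.hp <= 0) {
                        enemyIt.remove();                   // no index left to go stale
                    }
                }
            }

            public static void main(String[] args) {
                List<Enemy> enemies = new ArrayList<>();
                List<Bullet> bullets = new ArrayList<>();
                enemies.add(new Enemy()); enemies.add(new Enemy());
                bullets.add(new Bullet()); bullets.add(new Bullet());
                checkCollision(enemies, bullets);
                System.out.println(enemies.size() + " enemies, " + bullets.size() + " bullets left");
            }
        }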

    Read the article

  • Simple thruster like behaviour when rotating sprite

    - by ensamgud
    I'm prototyping some 2D game concepts with XNA and have added some basic keyboard inputs to control a triangle sprite. When I press key up the sprite accelerates in it's current facing direction, when I release the key it brakes down. For rotation, when I press left/right keys I rotate the sprite. Currently the sprite immedately changes direction when I rotate it. What I want is for it to keep moving in the same direction when I rotate, until I hit key up, adding thrust in whatever direction the sprite is pointing at. This would simulate thrusters on a classic space shooter like Asteroids. I'm adding an image to describe the behaviour I'm after and some code samples of how I'm doing things at the moment. This is my player struct, holding information of the sprite. public struct PlayerData { public Vector2 Position; // where to draw the sprite public Vector2 Direction; // travel direction of sprite public float Angle; // rotation of sprite public float Velocity; public float Acceleration; public float Decelleration; public float RotationAcceleration; public float RotationDecceleration; public float TopSpeed; public float Scale; } This is how I'm currently handling thrusting / braking (when pressing/releasing key up) (simplified, removed some bounds checking etc): player.Velocity += player.Acceleration * 0.1f; player.Velocity -= player.Acceleration * 0.1f; And when I rotate the sprite left and right: player.Angle -= player.RotationAcceleration * 0.1f; player.Angle += player.RotationAcceleration * 0.1f; This runs in the update loop, keeps the direction updated and updates the position: Vector2 up = new Vector2(0f, -1f); Matrix rotMatrix = Matrix.CreateRotationZ(player.Angle); player.Direction = Vector2.Transform(up, rotMatrix); player.Direction *= player.Velocity; player.Position += player.Direction; I am following along various beginner tutorials and haven't found any describing this, but I have tried some on my own without success. Do I need to change my velocity and acceleration fields to Vectors instead of floats to accomplish this type of movement? I realise my Angle and the Direction vector is currently tied together and I need to disconnect these somehow to be able to rotate freely without changing the direction of the movement, but I can't quite figure out how to do this while keeping the acceleration/decceleration functional. Would appreciate an explanation rather than pure code samples. Thanks,
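
    The Asteroids-style behaviour falls out once velocity is stored as its own vector and the angle only describes facing: turning changes the angle, thrust adds an acceleration along the facing to the velocity, and position always follows the velocity. A compact Java sketch of that separation (the field and constant names are made up, not XNA API):

        public class ThrusterShip {
            float posX, posY;          // position
            float velX, velY;          // velocity vector, independent of where the ship points
            float angle;               // facing, in radians (0 = pointing up here)
            final float thrust = 20f;  // acceleration in units per second^2
            final float drag = 0.4f;   // fraction of velocity lost per second

            void update(float dt, boolean thrusting, float turn) {
                angle += turn * dt;                                  // rotating does NOT touch the velocity
                if (thrusting) {
                    velX += (float) Math.sin(angle) * thrust * dt;   // add thrust along the facing
                    velY += (float) -Math.cos(angle) * thrust * dt;  // (-y is "up", as in the question)
                }
                velX -= velX * drag * dt;                            // gentle drag so releasing the key slows you
                velY -= velY * drag * dt;
                posX += velX * dt;                                   // the ship keeps drifting on its old heading
                posY += velY * dt;
            }

            public static void main(String[] args) {
                ThrusterShip ship = new ThrusterShip();
                for (int i = 0; i < 60; i++) ship.update(1 / 60f, true, 0f);   // one second of thrust
                for (int i = 0; i < 60; i++) ship.update(1 / 60f, false, 2f);  // turn without thrust
                System.out.printf("still drifting up: vel = (%.2f, %.2f)%n", ship.velX, ship.velY);
            }
        }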

    Read the article

  • Whole map design vs. tiles array design

    - by Mikalichov
    I am working on a 2D RPG, which will feature the usual dungeon/town maps (pre-generated). I am using tiles that I will then combine to make the maps. My original plan was to assemble the tiles using Photoshop, or some other graphics program, in order to have one bigger picture that I could then use as a map. However, I have read in several places about people using arrays to build their map in the engine (so you give an array of x tiles to your engine, and it assembles them as a map). I can understand how it's done, but it seems a lot more complicated to implement, and I can't see obvious advantages. What is the most common method, and what are the advantages/disadvantages of each?
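
    In practice the array approach is only a few lines: the map is a grid of tile ids, the renderer draws the ids that fall inside the camera, and the same array doubles as collision and pathfinding data, which is the usual advantage over one large baked image. A small Java sketch (the tile ids and isWalkable are illustrative assumptions):

        public class TileMapDemo {
            // Tile ids index into a small set of shared tile images; the map itself is just ints.
            static final int GRASS = 0, WALL = 1;
            static final int TILE_SIZE = 32;

            static int[][] map = {
                { WALL, WALL,  WALL,  WALL },
                { WALL, GRASS, GRASS, WALL },
                { WALL, GRASS, GRASS, WALL },
                { WALL, WALL,  WALL,  WALL },
            };

            // Collision/pathfinding queries come straight from the array.
            static boolean isWalkable(int tileX, int tileY) {
                return map[tileY][tileX] == GRASS;
            }

            public static void main(String[] args) {
                int camLeft = 0, camTop = 0, camRight = 3, camBottom = 3;   // visible tile range
                for (int y = camTop; y <= camBottom; y++) {
                    StringBuilder row = new StringBuilder();
                    for (int x = camLeft; x <= camRight; x++) {
                        row.append(map[y][x] == WALL ? '#' : '.');
                        // In a real renderer: draw tileTextures[map[y][x]] at (x * TILE_SIZE, y * TILE_SIZE)
                    }
                    System.out.println(row);
                }
                System.out.println("walkable at (1,1)? " + isWalkable(1, 1));   // true
            }
        }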

    Read the article

  • My image is not showing in Java, using ImageIcon

    - by user1048606
    I'd like to know why my images are not showing up when I use ImageIcon and when I have specified the directory the image is in. All I get is a black blank screen with nothing else on it. import java.awt.Image; import java.awt.event.KeyEvent; import javax.swing.ImageIcon; import java.awt.Image; import java.awt.event.KeyEvent; import java.util.ArrayList; import javax.swing.ImageIcon; // Class for handling key input public class Craft { private int dx; private int dy; private int x; private int y; private Image image; private Image image2; private ArrayList missiles; private final int CRAFT_SIZE = 20; private String craft = "C:\\Users\\Jimmy\\Desktop\\Jimmy's Folder\\programs\\craft.png"; public Craft() { ImageIcon ii = new ImageIcon(craft); image2 = ii.getImage(); missiles = new ArrayList(); x = 40; y = 60; } public void move() { x += dx; y += dy; } public int getX() { return x; } public int getY() { return y; } public Image getImage() { return image; } public ArrayList getMissiles() { return missiles; } public void keyPressed(KeyEvent e) { int key = e.getKeyCode(); // Shooting key if (key == KeyEvent.VK_SPACE) { fire(); } if (key == KeyEvent.VK_LEFT) { dx = -1; } if (key == KeyEvent.VK_RIGHT) { dx = 1; } if (key == KeyEvent.VK_UP) { dy = -1; } if (key == KeyEvent.VK_DOWN) { dy = 1; } } // Handles the missile object firing out of the ship public void fire() { missiles.add(new Missile(x + CRAFT_SIZE, y + CRAFT_SIZE/2)); } public void keyReleased(KeyEvent e) { int key = e.getKeyCode(); if (key == KeyEvent.VK_LEFT) { dx = 0; } if (key == KeyEvent.VK_RIGHT) { dx = 0; } if (key == KeyEvent.VK_UP) { dy = 0; } if (key == KeyEvent.VK_DOWN) { dy = 0; } } }
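
    Two things stand out in the snippet: getImage() returns the field image, which is never assigned (the constructor fills image2), and an ImageIcon built from a bad absolute path fails silently with a 0x0 image. A small Java sketch of loading a sprite from the classpath and failing loudly instead (SpriteLoader and the /craft.png resource path are assumptions for illustration):

        import java.awt.Image;
        import javax.swing.ImageIcon;

        public class SpriteLoader {
            // Loads an image bundled on the classpath and fails loudly instead of silently
            // handing back an empty image when the path is wrong.
            static Image load(String resourcePath) {
                java.net.URL url = SpriteLoader.class.getResource(resourcePath);
                if (url == null) {
                    throw new IllegalArgumentException("Image not found on classpath: " + resourcePath);
                }
                ImageIcon icon = new ImageIcon(url);
                if (icon.getIconWidth() <= 0) {
                    throw new IllegalStateException("Image failed to decode: " + resourcePath);
                }
                return icon.getImage();
            }

            public static void main(String[] args) {
                // Assumes craft.png has been copied into the application's resources.
                Image craft = load("/craft.png");
                System.out.println("loaded " + craft.getWidth(null) + "x" + craft.getHeight(null));
            }
        }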

    Read the article

  • How do I make a Minecraft server mod? [closed]

    - by Simon
    Possible Duplicate: Mods for Minecraft Server - how does it work? I have made some Minecraft client mods, but I started a server a month ago and I want to make a mod for it, and I can't find any tutorials on the internet. How do the other people making mods for Minecraft servers know what to do? Do they just experiment the way I tried, or are they doing something else? I would be glad if someone could tell me how to do it or find tutorials for me, because I have spent nearly a week searching for them. But I guess I'm searching in the wrong part of the internet, what do I know :o

    Read the article

  • How to deal with animated doors in isometric tiles

    - by George Profenza
    I've got a tricky issue I'm not sure how to tackle best: I have an animated tile of a door. When it's closed it should be sorted one way, but when it's opened it will need to be sorted a different way, as if it belonged to a different (neighbouring) tile. Here's the door closed: and the door opened: I imagine it would be possible to override the sorting system for such tiles and adjust the sorting based on the frame, but it feels a bit hacky. Has anyone encountered a similar scenario? Any elegant solutions?

    Read the article

  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light. When I move the camera away from (0.0, 0.0, 0.0) it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code. Vertex shader: #version 410 core in vec3 vf_normal; in vec3 vf_bitangent; in vec3 vf_tangent; in vec2 vf_textureCoordinates; in vec3 vf_vertex; out vec3 tc_normal; out vec3 tc_bitangent; out vec3 tc_tangent; out vec2 tc_textureCoordinates; out vec3 tc_vertex; uniform mat3 vf_m_normal; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform float vf_te_inner; uniform float vf_te_outer; void main() { tc_normal = vf_normal; tc_bitangent = vf_bitangent; tc_tangent = vf_tangent; tc_textureCoordinates = vf_textureCoordinates; tc_vertex = vf_vertex; gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0); } Tessellation Control shader: #version 410 core layout (vertices = 3) out; in vec3 tc_normal[]; in vec3 tc_bitangent[]; in vec3 tc_tangent[]; in vec2 tc_textureCoordinates[]; in vec3 tc_vertex[]; out vec3 te_normal[]; out vec3 te_bitangent[]; out vec3 te_tangent[]; out vec2 te_textureCoordinates[]; out vec3 te_vertex[]; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; #define ID gl_InvocationID float getTessLevelInner(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner); } float getTessLevelOuter(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer); } void main() { te_normal[gl_InvocationID] = tc_normal[gl_InvocationID]; te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID]; te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID]; te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID]; te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID]; float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz); float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz); float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz); gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2); gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0); gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1); gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0); } Tessellation Evaluation shader: #version 410 core layout (triangles, equal_spacing, cw) in; in vec3 te_normal[]; in vec3 te_bitangent[]; in vec3 te_tangent[]; in vec2 te_textureCoordinates[]; in vec3 te_vertex[]; out vec3 g_normal; out vec3 g_bitangent; out vec4 g_patchDistance; out vec3 g_tangent; out vec2 g_textureCoordinates; out vec3 g_vertex; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; 
uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_displace; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2) { return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2; } vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2) { return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2; } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2*d*d); return d; } float getDisplacement(vec2 t0, vec2 t1, vec2 t2) { float displacement = 0.0; vec2 textureCoordinates = interpolate2D(t0, t1, t2); vec2 vector = ((t0 + t1 + t2) / 3.0); float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y)); sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0); displacement += texture(vf_t_displace, textureCoordinates).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x; return (displacement / 5.0); } void main() { g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2])); g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2])); g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y)); g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2])); g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]); float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w); d2 = amplify(d2, 50, -0.5); g_vertex += g_normal * displacement * 0.1 * d2; gl_Position = vf_m_mvp * vec4(g_vertex, 1.0); } Geometry shader: #version 410 core layout (triangles) in; layout (triangle_strip, max_vertices = 3) out; in vec3 g_normal[3]; in vec3 g_bitangent[3]; in vec4 g_patchDistance[3]; in vec3 g_tangent[3]; in vec2 g_textureCoordinates[3]; in vec3 g_vertex[3]; out vec3 f_tangent; out vec3 f_bitangent; out vec3 f_eyeDirection; out vec3 f_lightDirection; out vec3 f_normal; out vec4 f_patchDistance; out vec4 f_shadowCoordinates; out vec2 f_textureCoordinates; out vec3 f_vertex; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; void main() { int index = 0; while (index < 3) { vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]); vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent); vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent); mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); 
vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz; vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz); f_eyeDirection = TBN * eyeDirection; f_lightDirection = TBN * lightDirection; f_normal = normalize(g_normal[index]); f_patchDistance = g_patchDistance[index]; f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0); f_textureCoordinates = g_textureCoordinates[index]; f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz; gl_Position = gl_in[index].gl_Position; EmitVertex(); index ++; } EndPrimitive(); } Fragment shader: #version 410 core in vec3 f_bitangent; in vec3 f_eyeDirection; in vec3 f_lightDirection; in vec3 f_normal; in vec4 f_patchDistance; in vec4 f_shadowCoordinates; in vec3 f_tangent; in vec2 f_textureCoordinates; in vec3 f_vertex; out vec4 fragColor; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 poissonDisk[16] = vec2[]( vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725), vec2(-0.09418410, -0.92938870), vec2( 0.34495938, 0.29387760), vec2(-0.91588581, 0.45771432), vec2(-0.81544232, -0.87912464), vec2(-0.38277543, 0.27676845), vec2( 0.97484398, 0.75648379), vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420), vec2(-0.26496911, -0.41893023), vec2( 0.79197514, 0.19090188), vec2(-0.24188840, 0.99706507), vec2(-0.81409955, 0.91437590), vec2( 0.19984126, 0.78641367), vec2( 0.14383161, -0.14100790) ); float random(vec3 seed, int i) { vec4 seed4 = vec4(seed,i); float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673)); return fract(sin(dot_product) * 43758.5453); } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2.0 * d * d); return d; } void main() { vec3 lightColor = vf_l_color.xyz; float lightPower = vf_l_color.w; vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz; vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor; vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz; vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0); vec3 l = normalize(f_lightDirection); float cosTheta = clamp(dot(n, l), 0.0, 1.0); vec3 E = normalize(f_eyeDirection); vec3 R = reflect(-l, n); float cosAlpha = clamp(dot(E, R), 0.0, 1.0); float visibility = 1.0; float bias = 0.005 * tan(acos(cosTheta)); bias = clamp(bias, 0.0, 0.01); for (int i = 0; i < 4; i ++) { float shading = (0.5 / 4.0); int index = i; visibility -= shading * (1.0 - texture(vf_t_shadow, vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0, (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w))); }\n" fragColor.xyz = materialAmbientColor + visibility * materialDiffuseColor * lightColor * lightPower * cosTheta + visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5); fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w; } The following images should be enough to give you an idea of the problem. Before moving the camera: Moving the camera just a little. Moving it to the center of the scene.

    Read the article

  • How can data-oriented programming be applied to a GUI system?

    - by Miro
    I've just learned the basics of data-oriented programming design, but I'm not very familiar with it yet. I've also read Pitfalls of Object Oriented Programming (GCAP 09). It seems that data-oriented programming is a much better idea for games than OOP. I'm just creating my own GUI system and it's completely OOP. I'm wondering whether data-oriented design is applicable to structured things like a GUI. The main problem I see is that every type of widget has different data, so I can hardly group them into arrays. Also, every type of widget renders differently, so I still need to call virtual functions.

    Read the article

  • I am thinking about developing a game, but I am a single developer? [on hold]

    - by Jake Doe
    Since I was very little I have wanted to create a game: my place, where my rules apply, where I am not limited. Now that I am capable of doing it, I am asking myself: should I start? I already have the idea and I have chosen the engine; only the coding and artwork are required. The engine I have chosen is quite expensive (50k); I could try a Kickstarter or Indiegogo campaign. But should I? Please give me your opinion. Thank you :)

    Read the article

  • How should I organise classes for a space simulator?

    - by Peteyslatts
    I have pretty much taught myself everything I know about programming, so while I know how to teach myself (books, the internet and reading APIs), I'm finding that there hasn't been a whole lot in the way of guidance on good program design. I am finishing up learning the basics of XNA and I want to create a space simulator to test my knowledge. This isn't a full-scale simulator, just something that covers everything I've learned. It's also going to be modular so I can build on it after I get the basics down. One of the early features I want to implement is AI, and I want to take this into account as I'm designing my classes so I can minimize rewriting code. So my question: How should I design ship classes so that both the player and AI can use them? The only idea I have so far is: create a ship class that contains stats, models, textures, collision data, etc. The player and AI would then each hold the data for position, rotation, health, etc., and would base their status off of the ship stats.
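
    One common arrangement of the idea described above is to split the shared, immutable definition (stats, model handles) from the per-instance state, and to hide "who is steering" behind a small controller interface that both the player input and the AI implement. A rough Java sketch (all class and interface names here are made up):

        public class ShipDesignSketch {
            // Shared, immutable definition: stats, model/texture handles, etc.
            static class ShipType {
                final String name; final float maxSpeed; final int maxHealth;
                ShipType(String name, float maxSpeed, int maxHealth) {
                    this.name = name; this.maxSpeed = maxSpeed; this.maxHealth = maxHealth;
                }
            }

            // A ship in the world: mutable state, backed by a ShipType.
            static class Ship {
                final ShipType type;
                float x, y, rotation; int health;
                Ship(ShipType type) { this.type = type; this.health = type.maxHealth; }
            }

            // Both the player (keyboard/mouse) and the AI implement this, so the Ship code
            // never needs to know who is steering it.
            interface ShipController {
                void update(Ship ship, float dt);
            }

            static class ChaseAi implements ShipController {
                public void update(Ship ship, float dt) {
                    ship.x += ship.type.maxSpeed * dt;    // placeholder "AI": drift toward +x
                }
            }

            public static void main(String[] args) {
                ShipType fighter = new ShipType("Fighter", 50f, 100);
                Ship aiShip = new Ship(fighter);
                ShipController controller = new ChaseAi();
                controller.update(aiShip, 1 / 60f);
                System.out.println(aiShip.type.name + " at x = " + aiShip.x);
            }
        }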

    Read the article

  • Box2D blocky map: bodies, fixtures, a huge map, and performance

    - by Solom
    Right now I'm still in the planning phase of a my very first game. I'm creating a "Minecraft"-like game in 2D that features blocks that can be destroyed as well as players moving around the map. For creating the map I chose a 2D-Array of Integers that represent the Block ID. For testing purposes I created a huge map (16348 * 256) and in my prototype that didn't use Box2D everything worked like a charm. I only rendered those blocks that where within the bounds of my camera and got 60 fps straight. The problem started when I decided to use an existing physics-solution rather than implementing my own one. What I had was basically simple hitboxes around the blocks and then I had to manually check if the player collided with any of those in his neighborhood. For more advanced physics as well as the collision detection I want to switch over to Box2D. The problem I have right now is ... how to go about the bodies? I mean, the blocks are of a static bodytype. They don't move on their own, they just are there to be collided with. But as far as I can see it, every block needs his own body with a rectangular fixture attached to it, so as to be destroyable. But for a huge map such as mine, this turns out to be a real performance bottle-neck. (In fact even a rather small map [compared to the other] of 1024*256 is unplayable.) I mean I create thousands of thousands of blocks. Even if I just render those that are in my immediate neighborhood there are hundreds of them and (at least with the debugRenderer) I drop to 1 fps really quickly (on my own "monster machine"). I thought about strategies like creating just one body, attaching multiple fixtures and only if a fixture got hit, separate it from the body, create a new one and destroy it, but this didn't turn out quite as successful as hoped. (In fact the core just dumps. Ah hello C! 
I really missed you :X) Here is the code: public class Box2DGameScreen implements Screen { private World world; private Box2DDebugRenderer debugRenderer; private OrthographicCamera camera; private final float TIMESTEP = 1 / 60f; // 1/60 of a second -> 1 frame per second private final int VELOCITYITERATIONS = 8; private final int POSITIONITERATIONS = 3; private Map map; private BodyDef blockBodyDef; private FixtureDef blockFixtureDef; private BodyDef groundDef; private Body ground; private PolygonShape rectangleShape; @Override public void show() { world = new World(new Vector2(0, -9.81f), true); debugRenderer = new Box2DDebugRenderer(); camera = new OrthographicCamera(); // Pixel:Meter = 16:1 // Body definition BodyDef ballDef = new BodyDef(); ballDef.type = BodyDef.BodyType.DynamicBody; ballDef.position.set(0, 1); // Fixture definition FixtureDef ballFixtureDef = new FixtureDef(); ballFixtureDef.shape = new CircleShape(); ballFixtureDef.shape.setRadius(.5f); // 0,5 meter ballFixtureDef.restitution = 0.75f; // between 0 (not jumping up at all) and 1 (jumping up the same amount as it fell down) ballFixtureDef.density = 2.5f; // kg / m² ballFixtureDef.friction = 0.25f; // between 0 (sliding like ice) and 1 (not sliding) // world.createBody(ballDef).createFixture(ballFixtureDef); groundDef = new BodyDef(); groundDef.type = BodyDef.BodyType.StaticBody; groundDef.position.set(0, 0); ground = world.createBody(groundDef); this.map = new Map(20, 20); rectangleShape = new PolygonShape(); // rectangleShape.setAsBox(1, 1); blockFixtureDef = new FixtureDef(); // blockFixtureDef.shape = rectangleShape; blockFixtureDef.restitution = 0.1f; blockFixtureDef.density = 10f; blockFixtureDef.friction = 0.9f; } @Override public void render(float delta) { Gdx.gl.glClearColor(1, 1, 1, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); debugRenderer.render(world, camera.combined); drawMap(); world.step(TIMESTEP, VELOCITYITERATIONS, POSITIONITERATIONS); } private void drawMap() { for(int a = 0; a < map.getHeight(); a++) { /* if(camera.position.y - (camera.viewportHeight/2) > a) continue; if(camera.position.y - (camera.viewportHeight/2) < a) break; */ for(int b = 0; b < map.getWidth(); b++) { /* if(camera.position.x - (camera.viewportWidth/2) > b) continue; if(camera.position.x - (camera.viewportWidth/2) < b) break; */ /* blockBodyDef = new BodyDef(); blockBodyDef.type = BodyDef.BodyType.StaticBody; blockBodyDef.position.set(b, a); world.createBody(blockBodyDef).createFixture(blockFixtureDef); */ PolygonShape rectangleShape = new PolygonShape(); rectangleShape.setAsBox(1, 1, new Vector2(b, a), 0); blockFixtureDef.shape = rectangleShape; ground.createFixture(blockFixtureDef); rectangleShape.dispose(); } } } @Override public void resize(int width, int height) { camera.viewportWidth = width / 16; camera.viewportHeight = height / 16; camera.update(); } @Override public void hide() { dispose(); } @Override public void pause() { } @Override public void resume() { } @Override public void dispose() { world.dispose(); debugRenderer.dispose(); } } As you can see I'm facing multiple problems here. I'm not quite sure how to check for the bounds but also if the map is bigger than 24*24 like 1024*256 Java just crashes -.-. And with 24*24 I get like 9 fps. So I'm doing something really terrible here, it seems and I assume that there most be a (much more performant) way, even with Box2D's awesome physics. Any other ideas? Thanks in advance!
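
    One pattern that keeps Box2D happy with a huge block map is to give bodies only to the solid blocks in a window around the camera (or player) and rebuild that window when it moves, instead of creating a fixture per block of the entire map up front. A rough libGDX-flavoured Java sketch of that idea (BlockPhysicsWindow and its fields are made-up names; the window size and rebuild policy are assumptions, not a drop-in fix for the code above):

        import com.badlogic.gdx.physics.box2d.Body;
        import com.badlogic.gdx.physics.box2d.BodyDef;
        import com.badlogic.gdx.physics.box2d.PolygonShape;
        import com.badlogic.gdx.physics.box2d.World;
        import java.util.ArrayList;
        import java.util.List;

        // Keeps static block bodies only for the tiles around the camera and rebuilds them when the
        // camera moves to a new region, so the physics world never holds the whole map at once.
        public class BlockPhysicsWindow {
            private final World world;
            private final int[][] map;              // 0 = air, anything else = solid block
            private final int radius;               // how many tiles around the camera get real bodies
            private final List<Body> activeBodies = new ArrayList<>();

            public BlockPhysicsWindow(World world, int[][] map, int radius) {
                this.world = world;
                this.map = map;
                this.radius = radius;
            }

            // Call whenever the camera crosses a tile boundary (not every frame).
            public void rebuildAround(int camTileX, int camTileY) {
                for (Body body : activeBodies) world.destroyBody(body);   // drop the old window
                activeBodies.clear();

                PolygonShape box = new PolygonShape();
                box.setAsBox(0.5f, 0.5f);                                 // 1x1 block, in meters
                for (int y = Math.max(0, camTileY - radius); y <= Math.min(map.length - 1, camTileY + radius); y++) {
                    for (int x = Math.max(0, camTileX - radius); x <= Math.min(map[y].length - 1, camTileX + radius); x++) {
                        if (map[y][x] == 0) continue;                     // air needs no body
                        BodyDef def = new BodyDef();
                        def.type = BodyDef.BodyType.StaticBody;
                        def.position.set(x + 0.5f, y + 0.5f);             // body at the block's centre
                        Body body = world.createBody(def);
                        body.createFixture(box, 0f);                      // static bodies can use density 0
                        activeBodies.add(body);
                    }
                }
                box.dispose();
            }
        }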

    Read the article

  • Combining pathfinding with global AI objectives

    - by V_Programmer
    I'm making a turn-based strategy game using Java and LibGDX. Now I want to code the AI. I haven't written the AI code yet; I've simply designed it. The AI will have two components: one focused on tactics and resource management (create troops, determine who has the strategic advantage, detect important objectives, etc.) and an individual component, focused on assigning work to each unit, examining its possibilities and moving the unit. Now I'm facing an important problem. The map where the action takes place is a grid-based map. Each terrain type has a different movement cost. I read about pathfinding and I think A* is a very good option to determine a good route between two points. However, imagine I have a unit with movement = 5 (i.e., it can move 5 tiles of movement cost = 1). My tactical AI has found an objective at a distance d = 20 tiles (Manhattan distance) from my unit. My problem is the following: the unit won't be able to reach the objective in one turn. So the AI will have to store a list of positions and execute them over several turns. I don't know how to solve this. PS. In my unit code, I have a list called "selectionMarks" which stores all the possible places where the unit can go in this turn. These places are calculated recursively using a "getSelectionMarks" function. Any help is appreciated :D
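
    A common way to handle this is to let A* produce the whole route once, store the remaining steps on the unit, and have each turn consume as many steps as the unit's movement allows given the terrain costs, re-planning only when the map changes. A small Java sketch of that bookkeeping (Step, Unit and executeTurn are made-up names):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.List;

        public class MultiTurnMover {
            // One step of a precomputed path: the tile to enter and what it costs to enter it.
            static class Step {
                final int x, y, cost;
                Step(int x, int y, int cost) { this.x = x; this.y = y; this.cost = cost; }
            }

            static class Unit {
                int x, y;
                final int movementPerTurn;
                final Deque<Step> plannedPath = new ArrayDeque<>();   // rest of the route, kept between turns
                Unit(int x, int y, int movementPerTurn) { this.x = x; this.y = y; this.movementPerTurn = movementPerTurn; }

                void setPath(List<Step> path) { plannedPath.clear(); plannedPath.addAll(path); }

                // Spend this turn's movement on as many queued steps as we can afford.
                void executeTurn() {
                    int budget = movementPerTurn;
                    while (!plannedPath.isEmpty() && plannedPath.peek().cost <= budget) {
                        Step step = plannedPath.poll();
                        budget -= step.cost;
                        x = step.x;
                        y = step.y;
                    }
                    // Whatever is left in plannedPath is picked up again next turn
                    // (or re-planned if the map changed in the meantime).
                }
            }

            public static void main(String[] args) {
                Unit unit = new Unit(0, 0, 5);
                // Pretend A* returned a 4-tile route where the third tile is expensive terrain.
                unit.setPath(List.of(new Step(1, 0, 1), new Step(2, 0, 1), new Step(3, 0, 4), new Step(4, 0, 1)));
                unit.executeTurn();
                System.out.println("after turn 1: (" + unit.x + "," + unit.y + "), steps left " + unit.plannedPath.size());
                unit.executeTurn();
                System.out.println("after turn 2: (" + unit.x + "," + unit.y + ")");
            }
        }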

    Read the article
