Search Results

Search found 26774 results on 1071 pages for 'distributed development'.

Page 528/1071 | < Previous Page | 524 525 526 527 528 529 530 531 532 533 534 535  | Next Page >

  • Point line collision reaction

    - by user4523
    I am trying to program point line segment collision detection and reaction. I am doing this for fun and to learn. The point moves (it has a velocity, and can be controlled by the user), whilst the lines are straight and stationary. The lines are not axis aligned. Everything is in 2D. It is quite straightforward to work out if a collision has occurred. For each frame, the point moves from A to B. AB is a line, and if it crosses the line segment, a collision has occurred (or will occur) and I am able to work out the point of intersection (poi). The problem I am having is with the reaction. Ideally I would like the point to be prevented from moving across the line. In one frame, I can move the point back to the poi (or only allow it to move as far as the poi), and alter the velocity. The problem I am having with this approach (I think) is that next frame the user may try to cross the line again. Although the point is on the poi, the point may not be exactly on the line. Since it is not axis aligned, I think there is always some subtle rounding issue (a float representation of a point on a line might be rounded to a point that is slightly on one side or the other). Because of this, next frame the path might not intersect the line (because it can start on the other side and move away from it) and the point is effectively allowed to cross the line. Firstly, does the analysis sound correct? Having accepted (maybe) that I cannot always exactly position the point on the line, I tried to move the point away from the line slightly (either along the normal to the line, or along the path vector). I then get a problem at edges. Attempting to fix one collision by moving the point away from the line (even slightly) can cause it to cross another line (one shape I am dealing with is a star, with sharp corners). This can mean that the solution to one collision inadvertently creates another collision, which is ignored. Again, does this sound correct? Anyway, whatever I try, I am having difficulty with edges, and the point is occasionally able to penetrate the polygons and cross lines, which is undesirable. Whilst I can find a lot of information about collision detection on the web (and on this site) I can find precious little information on collision reaction. Does anyone know of any good point-line collision reaction tutorials? Or is my approach too flawed/overcomplicated?
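
    A minimal sketch of one common reaction, assuming the wall is stored as two endpoints and the point carries a velocity (names here are illustrative, not from the original post): clamp the move at the intersection, push the point a small "skin" distance back along the wall normal towards the side it came from, and remove the velocity component pointing into the wall so the point slides instead of crossing. The skin offset and re-testing the remaining motion against all walls in the same frame are what keep the corner case from letting the point slip through.

        // A rough sketch, using XNA's Vector2 (the same idea works with any 2D vector type).
        using Microsoft.Xna.Framework;

        static class PointLineResponse
        {
            // Respond to a point that moved from A towards B and hit wall segment (w1, w2) at poi.
            public static void Resolve(ref Vector2 position, ref Vector2 velocity,
                                       Vector2 w1, Vector2 w2, Vector2 a, Vector2 poi)
            {
                const float Skin = 0.001f;   // illustrative value, tune to your world scale

                // Wall normal (2D perpendicular), flipped to point back towards where the point came from.
                Vector2 wallDir = Vector2.Normalize(w2 - w1);
                Vector2 normal = new Vector2(-wallDir.Y, wallDir.X);
                if (Vector2.Dot(a - poi, normal) < 0f)
                    normal = -normal;

                // Stop at the intersection, then push slightly off the wall so float rounding
                // cannot leave the point on the far side next frame.
                position = poi + normal * Skin;

                // Remove the velocity component pointing into the wall, so the point slides along it.
                float into = Vector2.Dot(velocity, normal);
                if (into < 0f)
                    velocity -= normal * into;

                // After this correction, the remaining motion for the frame should be re-tested
                // against every wall, so fixing one collision cannot silently cause another
                // at a sharp corner.
            }
        }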

    Read the article

  • Developing an AI opponent for Monopoly

    - by Bernhard Zürn
    I want to develop an AI opponent for the board game Monopoly. I want to implement the whole game with Prolog (XPCE). The probability of a field on the board being hit can be computed with Markov chains. I already know some "best practices", like "after 50% of the playing time it does not make sense to buy yourself out of jail, because in jail you still collect rent for your fields but you don't have to pay for other fields as long as you stay in prison". The interesting questions always are: buy a street field? Buy houses/hotels? How many? So I think I would have to compute some kind of future liquidity. Does anyone know how to pack that into an algorithm, or how to translate it to Prolog?
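
    As a rough illustration of the Markov chain part (not from the original post): build a 40x40 transition matrix from the two-dice distribution and iterate it until it converges. Note that with dice moves alone the steady state comes out uniform; the skew Monopoly is known for only appears once the jail and card jumps are added as extra entries in the same matrix, and the resulting table could then be asserted as facts on the Prolog side.

        using System;

        static class MonopolyOdds
        {
            // Sketch: long-run landing probabilities on a 40-field board with two dice
            // and no jail/card rules (the simplifying assumption that keeps this short).
            public static double[] LandingProbabilities()
            {
                const int Fields = 40;
                var transition = new double[Fields, Fields];

                // P(dice sum = s) = (6 - |s - 7|) / 36 for s = 2..12.
                for (int from = 0; from < Fields; from++)
                    for (int sum = 2; sum <= 12; sum++)
                        transition[from, (from + sum) % Fields] += (6 - Math.Abs(sum - 7)) / 36.0;

                // Start on "Go" and apply the transition matrix repeatedly (power iteration).
                var p = new double[Fields];
                p[0] = 1.0;
                for (int step = 0; step < 500; step++)
                {
                    var next = new double[Fields];
                    for (int from = 0; from < Fields; from++)
                        for (int to = 0; to < Fields; to++)
                            next[to] += p[from] * transition[from, to];
                    p = next;
                }
                return p;   // p[i] is the long-run chance of ending a move on field i
            }
        }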

    Read the article

  • HTML5 games, what is the standard dimension to use?

    - by aoi
    I am trying to make HTML5 games to be played in the browser (not offline apps), and I am trying to support the maximum number of platforms, so I need to know what dimensions I should use for the game canvas so that it works in the largest number of places. Also, is there any way to "scale" a large game to fit the tiny screen of an iPhone (around 320x356 px, I think)? By "scale" I don't mean just resizing the canvas, because that can mess up the coordinate-based calculations, and for a large number of objects, repositioning based on canvas size can be a real hassle.

    Read the article

  • How do you cope mentally with one very long piece of work?

    - by Asher Einhorn
    This is my first games industry job and my task is to take out one major game component and put in a newer one. So far it's been 5 weeks, and I'm still just staring at errors. I think it could be months before it's at the point that it can compile. It's really getting me down. I'm just changing things over, not really writing anything myself; it's just endless. I fix a thousand errors and nine thousand take their place. I'm sure this must be a common thing, so I was just wondering, how do you cope with this? It doesn't seem like I can break it down into little chunks at all.

    Read the article

  • How do I detect if a sprite should be going up or down?

    - by Geore Shg
    I use the following code to detect if a sprite should be going up or down: If (pos.Y + 100) >= Sprite.BottomY Then Going_up = True pos.Y = Sprite.BottomY - 130 End If If pos.Y <= Sprite.TopY Then Going_up = False pos.Y = Sprite.TopY - 1 Vel.Y = 3 End If Then my response code: If Going_up Then Vel.Y -= CSng(gameTime.ElapsedGameTime.TotalMilliseconds / 40) pos.Y -= Vel.Y Else Vel.Y += CSng(gameTime.ElapsedGameTime.TotalMilliseconds / 40) pos.Y += Vel.Y End If Sprite.velocity = Vel Sprite.position = pos But it's pretty terrible. It only works when the sprite starts at the top, and when I want to change the BottomY and TopY, it just starts glitching. What is a better way to detect whether the sprite should be going up or down?
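
    A sketch of one simpler framing, shown in C# (the idea is the same in VB.NET): let the direction be the sign of Vel.Y, so there is no separate flag to fall out of sync when TopY/BottomY change at runtime, and it works no matter where the sprite starts. pos, Vel and Sprite are the same fields as in the question; Microsoft.Xna.Framework and System are assumed.

        // Sketch: bounce pos.Y between TopY and BottomY by reversing the velocity at the bounds.
        void UpdateBounce(GameTime gameTime)
        {
            float step = (float)(gameTime.ElapsedGameTime.TotalMilliseconds / 40.0);

            pos.Y += Vel.Y * step;

            if (pos.Y > Sprite.BottomY)          // passed the bottom: clamp and head up
            {
                pos.Y = Sprite.BottomY;
                Vel.Y = -Math.Abs(Vel.Y);
            }
            else if (pos.Y < Sprite.TopY)        // passed the top: clamp and head down
            {
                pos.Y = Sprite.TopY;
                Vel.Y = Math.Abs(Vel.Y);
            }

            Sprite.velocity = Vel;
            Sprite.position = pos;
        }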

    Read the article

  • I'm looking to learn how to apply traditional animation techniques to my graphics engine - are there any tutorials or online resources that can help?

    - by blueberryfields
    There are many traditional animation techniques - such as blurring of motion, motion along an elliptical curve rather than a straight line, counter-motion before the beginning of a movement - which help create the appearance of a realistic 3D animated character. I'm looking to incorporate tools and shortcuts for some of these into my graphics engine, to make it easier for my end users to use these techniques in their animations. Is there a good resource listing the techniques and the principles behind them, especially how they might apply to a graphics engine or 3D animation?
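
    Not a resource list, but to make one of the named techniques concrete: the "counter-motion before the beginning of a movement" (animators call it anticipation) can be exposed to end users as nothing more than an easing curve that dips negative before rising. The function below is a generic "back" ease-in sketch, not taken from any particular engine; the overshoot constant is the conventional tuning value.

        // Sketch: a "back" ease-in curve. For t in [0,1] it first moves slightly
        // backwards (negative), then accelerates forward -- i.e. anticipation.
        static float EaseInBack(float t, float overshoot = 1.70158f)
        {
            return t * t * ((overshoot + 1f) * t - overshoot);
        }

        // Usage: position = start + (end - start) * EaseInBack(elapsed / duration);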

    Read the article

  • XNA 4.0 - Purple/Pink Tint Over All Sprites After Viewing in FullScreen

    - by D. Dubya
    I'm a noob to the game dev world and recently finished the 2D XNA tutorial from http://www.pluralsight.com. Everything was perfect until I decided to try the game in Fullscreen mode. The following code was added to the Game1 constructor. graphics.PreferredBackBufferWidth = 800; graphics.PreferredBackBufferHeight = 480; graphics.IsFullScreen = true; As soon as it launched in Fullscreen, I noticed that the entire game was tinted. None of the colours were appearing as they should. That code was then removed and the game launched in the 800x480 window; however, the tint remained. I commented out all my Draw code so that all that was left was GraphicsDevice.Clear(Color.CornflowerBlue); //spriteBatch.Begin(); //gameState.Draw(spriteBatch, false); //spriteBatch.End(); //spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive); //gameState.Draw(spriteBatch, true); //spriteBatch.End(); base.Draw(gameTime); The result was an empty window that was tinted Purple, not Blue. I changed the GraphicsDevice.Clear colour to Color.White and the window was tinted Pink. Color.Transparent gave a Black window. I even tried rebooting my PC but the 'tint' still remains. I'm at a loss here.

    Read the article

  • How to enable jBullet DebugMode

    - by Kenneth Bray
    I would like to render the physics world of jBullet to debug some issues in my game, and I am not finding too much on enabling the debugDraw method of jBullet. Do I need to write my own debugDraw method, or is there an easier way to draw the physics models to the screen? If there is already a built in method I would prefer to use that, otherwise I guess I will start making my own functions to handle this.

    Read the article

  • Multithreading - How to split the tasks

    - by Motig
    If I have a game engine with the basic 'game engine' components, what is the best way to 'split' the tasks with a multi-threaded approach? Assuming I have the standard components of: Rendering Physics Scripts Networking and a quad-core CPU, I see two ways of multi-threading: Option A ('Vertical'): Using this approach I can allow one core for each component of the engine; e.g. one core for the Rendering task, one for the Physics, etc. Advantages: I do not need to worry about thread-safety within each component I can take advantage of special optimizations provided for single-threaded access (e.g. DirectX offers a flag that can be set to tell it that you will only use single-threading) Option B ('Horizontal'): Using this approach, each task may be split up into 1 <= n <= numCores threads and executed in parallel, with the tasks themselves run one after the other. Advantages: Allows for work-sharing, i.e. each thread can take over work still remaining as the others are still processing I can take advantage of libraries that are designed for multi-threading (i.e. ... DirectX) I think, in retrospect, I would pick Option B, but I wanted to hear you guys' thoughts on the matter.
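
    A minimal sketch of Option B using .NET's task parallelism (the field names are illustrative, not from the post): the engine stages still run one after the other, but the work inside each stage fans out over however many cores are available, and idle worker threads pick up remaining items automatically.

        using System.Threading.Tasks;

        // Sketch: "horizontal" splitting -- stages run in sequence, each stage spread over all cores.
        // Parallel.For uses work-stealing internally, so remaining chunks are shared between threads.
        void UpdateFrame(float dt)
        {
            Parallel.For(0, physicsBodies.Count, i => physicsBodies[i].Integrate(dt));

            Parallel.For(0, scripts.Count, i => scripts[i].Tick(dt));

            // Rendering submission stays single-threaded here, which also keeps the
            // single-threaded-optimization flag mentioned above usable for the device.
            foreach (var drawable in drawables)
                drawable.Draw();
        }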

    Read the article

  • 2D Ragdoll - should it collide with itself?

    - by Axarydax
    Hi, I'm working on a ragdoll fighting game as a hobby project, but I have one dilemma. I am not sure if my ragdoll should collide with itself or not, i.e. if ragdoll's body parts should collide. 2D world is somewhat different than 3D, because there are several layers of stuff implied (for example in Super Mario you jump through a platform above you while going up). The setup I'm currently most satisfied with is when only the parts which are joined by a joint don't collide, so head doesn't collide with neck, neck with chest, chest with upper arm etc, but the head can collide with chest, arms, legs. I've tried every different way, but I'm not content with either. Which way would you recommend I go?
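
    Whichever rule you settle on, it is usually expressed as a collision filter rather than by changing the bodies themselves. A sketch of the "everything collides except directly jointed parts" setup described above, independent of any particular physics library (the class and method names are invented for illustration):

        using System;
        using System.Collections.Generic;

        // Sketch: skip collisions only between body parts that share a joint.
        // jointedPairs is filled in once, when the ragdoll's joints are created.
        class RagdollFilter
        {
            private readonly HashSet<(int, int)> jointedPairs = new HashSet<(int, int)>();

            public void AddJoint(int partA, int partB)
            {
                jointedPairs.Add((Math.Min(partA, partB), Math.Max(partA, partB)));
            }

            // Plugged into the physics engine's contact/broad-phase filter callback.
            public bool ShouldCollide(int partA, int partB)
            {
                return !jointedPairs.Contains((Math.Min(partA, partB), Math.Max(partA, partB)));
            }
        }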

    Read the article

  • How do I create weapon attachments?

    - by Tron86
    My question is: I am developing a game for XNA and I am trying to create a weapon attachment for my player model. My player model loads the .md3 format and reads tags for attachment points. I am able to get the tag of my model's hand. And I am also able to get the tag of my weapon's handle. Each tag I am able to get the rotation and position of and this is how I am calculating it: Model.worldMatrix = Matrix.CreateScale(Model.scale) * Matrix.CreateRotationX(-MathHelper.PiOver2) * Matrix.CreateRotationY(MathHelper.PiOver2); Pretty simple, the player model has a scale and its orientation (it loads on its side so I just use a 90 degree X axis rotation, and a Y axis rotation to face away from the camera). I then calculate the torso tag on the lower body, which gives me a local coordinate at the waist. Then I take that matrix and calculate the tag_weapon in the upper body. This gives me the hand position in local space. I also get the rotation matrix from that tag that I store for later use. All this seems to work fine. Now I move onto my weapon: Matrix weaponWorld = Matrix.CreateScale(CurrentWeapon.scale) * Matrix.CreateRotationX(-MathHelper.PiOver2) * TagRotationMatrix * Matrix.CreateTranslation(HandTag.Position) * Matrix.CreateRotationY(PlayerRotation) * Matrix.CreateTranslation(CollisionBody.Position) * You may notice the weapon matrix gets rotated by 90 degrees on the X axis as well. This is because they load in on their sides. Once again this seems pretty simple and follows the SRT order I keep reading about. My TagRotation matrix is the hand's rotation. HandTag.Position is its position in local space. CreateRotationY(PlayerRotation) is the player's rotation in world space, and the CollisionBody.Position is the player's world location. Everything seems to be in order, and almost works in game. However when the gun spawns and follows the player's hand it seems to be flipped on an axis every couple of frames. Almost like the X or Y axis is being inverted then put right back. It's hard to explain, and I am totally stumped. Even removing all my X axis fixes does nothing to solve the problem. Hopefully I explained everything enough as I am a bit new to this! Thanks!

    Read the article

  • Read an object from a compressed file generated from ActionScript 3

    - by Last Chance
    I have made a simple game map editor, and I want to save an array that contains map tile info to a file, as below: var arr:Array = [.....2d tile info in it...]; var ba:ByteArray = new ByteArray(); ba.writeObject(arr); ba.compress(); var file:File = new File(); file.save(ba); Now I have successfully saved a compressed object to a file. The problem is that my server side needs to read this file, decompress it, get the arr out of the file, and then convert it to a Python list. Is that possible?

    Read the article

  • Rotating wheel with touch (adding momentum and slowing down the initial rate it can be moved)

    - by Lewis
    I have a wheel control in a game which is setup like so: - (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; CGPoint location = [touch locationInView:[touch view]]; location = [[CCDirector sharedDirector] convertToGL:location]; if (CGRectContainsPoint(wheel.boundingBox, location)) { CGPoint firstLocation = [touch previousLocationInView:[touch view]]; CGPoint location = [touch locationInView:[touch view]]; CGPoint touchingPoint = [[CCDirector sharedDirector] convertToGL:location]; CGPoint firstTouchingPoint = [[CCDirector sharedDirector] convertToGL:firstLocation]; CGPoint firstVector = ccpSub(firstTouchingPoint, wheel.position); CGFloat firstRotateAngle = -ccpToAngle(firstVector); CGFloat previousTouch = CC_RADIANS_TO_DEGREES(firstRotateAngle); CGPoint vector = ccpSub(touchingPoint, wheel.position); CGFloat rotateAngle = -ccpToAngle(vector); CGFloat currentTouch = CC_RADIANS_TO_DEGREES(rotateAngle); wheelRotation += (currentTouch - previousTouch) * 0.6; //limit speed 0.6 } } Now once the user lets go of the wheel I want it to rotate back to where it was before but not without taking into account the momentum of the swipe the user has done. This is the bit I really can't get my head around. So if the swipe generates a lot of momentum then the wheel will carry on moving slightly in that direction until the overall force which pulls the wheel back to the starting position kicks in. Any ideas/code snippets?
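
    One way to sketch the release behaviour (shown as plain C#-style code, since the maths is the same regardless of cocos2d): while the finger is down, keep the last per-frame rotation delta as an angular velocity; after release, keep applying that velocity with damping, and blend in a spring force that pulls the wheel back towards its rest angle. The field names and constants are illustrative and would need tuning.

        // Sketch: per-frame update. angularVelocity is the last
        // (currentTouch - previousTouch) * 0.6 delta seen while dragging.
        void UpdateWheel(float dt)
        {
            if (!touching)
            {
                float springStrength = 8f;      // pull back toward restAngle (assumed tuning value)
                float damping = 0.92f;          // how quickly the swipe momentum bleeds off (assumed)

                float displacement = wheelRotation - restAngle;
                angularVelocity += -springStrength * displacement * dt;   // restoring force
                angularVelocity *= damping;                               // momentum decays

                wheelRotation += angularVelocity;
            }
        }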

    Read the article

  • Detailed modern OpenGL tutorial?

    - by Kogesho
    I am asking for a specific kind of modern OpenGL tutorial. I need a tutorial that does not skip explaining any lines of code. It should also include different independent objects moving/rotating (most tutorials use only one object), as well as imported 3D objects and collision detection for them. It should also avoid stuff that won't be used. Arcsynthesis, for example, introduces a new concept and, after teaching it, explains in the next tutorial how bad it is for performance and introduces another method. Do you know of any?

    Read the article

  • Tiled perlin/value noise texture with (2^n)+1 size

    - by tobi
    Actually what I have in mind is value noise, I think, but what I am going to ask applies to both of them. It is known that if you want to produce a tiled texture by using perlin/value noise, the size of the texture should be specified as a power of 2 (2^n). Without any modifications to the algorithm, when you use a size of (2^n)+1 the texture cannot be tiled any more, so I am wondering whether it is possible (by modifying the algorithm somehow) to generate such a tiling texture with a size of (2^n)+1. The article (from which I have my implementation) is here: http://devmag.org.za/2009/04/25/perlin-noise/ I am aware that I can produce a texture of size 2^n and just copy the last column/row from the ends to make it (2^n)+1, but I don't want to, because such repetitions are too visible.
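
    One way to look at it (a sketch, not the article's code): the texture tiles if the underlying lattice of random values is periodic, and that period does not have to match the pixel size of the image. Below is a single octave of value noise whose lattice wraps with period 2^n; sampling it at (2^n)+1 pixels then makes the last row/column identical to the first, so adjacent tiles share an edge instead of visibly repeating one. The hash used for the lattice values is an arbitrary simple one, chosen just to keep the sketch self-contained.

        using System;

        static class TileableValueNoise
        {
            // One octave of value noise whose random lattice repeats every `period` units.
            // Sampling x, y = 0 .. period inclusive gives (period + 1) = (2^n)+1 samples,
            // with the last sample equal to the first.
            public static float Sample(float x, float y, int period, int seed)
            {
                int x0 = (int)Math.Floor(x), y0 = (int)Math.Floor(y);
                float tx = x - x0, ty = y - y0;

                // Lattice corners, wrapped so the pattern is periodic.
                float v00 = Lattice(Mod(x0, period),     Mod(y0, period),     seed);
                float v10 = Lattice(Mod(x0 + 1, period), Mod(y0, period),     seed);
                float v01 = Lattice(Mod(x0, period),     Mod(y0 + 1, period), seed);
                float v11 = Lattice(Mod(x0 + 1, period), Mod(y0 + 1, period), seed);

                // Smoothstep interpolation between the corner values.
                float sx = tx * tx * (3 - 2 * tx), sy = ty * ty * (3 - 2 * ty);
                float a = v00 + (v10 - v00) * sx;
                float b = v01 + (v11 - v01) * sx;
                return a + (b - a) * sy;
            }

            static int Mod(int v, int m) { return ((v % m) + m) % m; }

            // Deterministic pseudo-random value in [0,1) for a lattice point (simple integer hash).
            static float Lattice(int x, int y, int seed)
            {
                int n = x + y * 57 + seed * 131;
                n = (n << 13) ^ n;
                return ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 2147483648.0f;
            }
        }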

    Read the article

  • Who should respond to collision: Unit or projectile?

    - by aleguna
    In an RTS, if a projectile hits a unit, who should handle the collision? If the projectile handles the collision, it must be aware of all possible types of units to know what damage to inflict. For example, a bullet will likely kill a human, but it will do nothing to a tank. The same goes if the unit handles the collision. So either way, one of them has to be aware of all possible types of the other. Of course, the 'true' way would be to do a full physics simulation, but that's not an option for an RTS with thousands of units and projectiles... So what are the common practices in this regard?
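
    A common middle ground is to let neither side know the other's concrete types: the projectile only carries a small damage description, and the unit decides what that means for itself via an armour table. A sketch of that idea (all names invented for illustration):

        using System.Collections.Generic;

        // Sketch: the projectile knows *what* it delivers, the unit knows *how* it reacts.
        enum DamageType { Bullet, ArmourPiercing, Explosive }

        class DamageInfo
        {
            public DamageType Type;
            public float Amount;
        }

        class Unit
        {
            // Per-unit multipliers, e.g. a tank would map Bullet -> 0.
            Dictionary<DamageType, float> armour = new Dictionary<DamageType, float>();
            public float Health;

            public void OnHit(DamageInfo damage)
            {
                float multiplier;
                if (!armour.TryGetValue(damage.Type, out multiplier))
                    multiplier = 1f;
                Health -= damage.Amount * multiplier;
            }
        }

        // The collision code then needs neither a unit-type switch nor a projectile-type switch:
        //     unit.OnHit(projectile.Damage);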

    Read the article

  • Make GameObject stand on a surface facing a certain direction

    - by Julian
    I want to make a biped character stand on any surface I click on. Surfaces have up vectors along any of positive or negative X, Y, Z. So imagine a cube with each face being a gameobject whose up vector points directly away from the cube. If my character is facing "forward" and I click on a surface which is to the left or right of me (left or right walls), I want my character to now be standing on that surface but still be facing in the direction he initially was. If I click on a wall which is in the forward path of my character I want him to now be standing on that surface and his forward to now be what was once "up" relative to my character. Here is the code I am working with now. void Update() { if (Input.GetMouseButtonUp (0)) { RaycastHit hit; var ray = Camera.main.ScreenPointToRay(Input.mousePosition); if (Physics.Raycast(ray, out hit)) { Vector3 upVectBefore = transform.up; Vector3 forwardVectBefore = transform.forward; Quaternion rotationVectBefore = transform.rotation; Vector3 hitPosition = hit.transform.position; transform.position = hitPosition; float lookDifference = Vector3.Distance(hit.transform.up, forwardVectBefore); if(Vector3.Distance(hit.transform.up, upVectBefore) < .23) //Same normal { transform.rotation = rotationVectBefore; } else if(lookDifference > 1.412 && lookDifference <= 1.70607) //side wall { transform.up = hit.transform.up; transform.forward = forwardVectBefore; } else //head on wall { transform.up = hit.transform.up; transform.forward = upVectBefore; } } } } The first case "Same normal" works fine, however the other two do not work as I would like them to. Sometimes my character is lying down on the surface or on the wrong side of the surface. Does anyone know a nice way of solving this problem?
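
    An alternative worth trying (a sketch, not a drop-in fix for the code above): instead of assigning transform.up and transform.forward separately, compute the single rotation that carries the character's current up vector onto the clicked surface's up vector and apply it on top of the existing rotation. Quaternion.FromToRotation gives the minimal rotation between the two vectors, so the character keeps as much of its previous facing as the new surface allows, which covers the side-wall and head-on cases with one rule.

        // Sketch: inside the raycast branch of the Update above.
        if (Physics.Raycast(ray, out hit))
        {
            // Minimal rotation taking the current up axis onto the surface's up axis.
            Quaternion align = Quaternion.FromToRotation(transform.up, hit.transform.up);
            transform.rotation = align * transform.rotation;
            transform.position = hit.transform.position;
        }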

    Read the article

  • Most used C++ libraries [on hold]

    - by Basaa
    I'm trying to find out whether or not I want to switch from Java to C++ for my OpenGL game programming. I have now set up a test project in VS 11 Professional with GLUT. I created my windows with GLUT, and I can render OpenGL primitives without any problems. Now my question: which library (or libraries) is used most in the indie/semi-professional industry for using OpenGL in C++? By 'using OpenGL' I mean: creating and managing an OpenGL window, actually using the OpenGL API, and handling user input (keyboard/mouse).

    Read the article

  • How can I create an orthographic display that handles different screen dimensions?

    - by Piku
    I'm trying to create an iPad/iPhone game using GLES2.0 that contains a 3D scene with a heads-up-display/GUI overlaid on the top. However, this problem would also apply if I were to port my game to a computer and run the game in a resizable window, or allow the user to change screen resolutions... When trying to make the 2D GUI/HUD work I've made the assumption that all I'm really doing is drawing a load of 2D textured 'quads' on the screen and am trying to treat the orthographic projection as an old-style 2D display with 0,0 in the upper left and screenWidth,ScreenHeight in the lower right. This causes me all sorts of confusion when I rotate my iPad into Landscape mode since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone or a Retina display since I have to then draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL co-ords 1:1 with the screen? While typing out this question it occurs to me that I could keep my origin in the centre, still running -1/+1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGLES 2.0 and have a matrix library that has equivalents to the GLES1.1 glOrthof() and glFrustum() calls.
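
    One common answer is to pick a fixed "virtual" resolution for the HUD, build an orthographic matrix for it, and let the hardware scale to whatever the real screen is. The sketch below builds the column-major matrix that glOrthof(0, w, h, 0, -1, 1) would produce, mapping x in [0, w] and y in [0, h] (origin top-left, y down) to clip space; it is shown as a plain float array since the post mentions a custom matrix library, and the 800x480 virtual size in the usage note is just an example.

        // Sketch: column-major orthographic matrix equivalent to glOrthof(0, w, h, 0, -1, 1).
        // HUD coordinates stay 0..w and 0..h with the origin at the top-left,
        // regardless of the real screen size or orientation.
        static float[] Ortho2D(float w, float h)
        {
            return new float[]
            {
                2f / w,  0f,      0f,  0f,   // column 0
                0f,     -2f / h,  0f,  0f,   // column 1
                0f,      0f,     -1f,  0f,   // column 2
               -1f,      1f,      0f,  1f    // column 3
            };
        }

        // e.g. a fixed virtual HUD size, letterboxed/scaled by the viewport:
        //     float[] projection = Ortho2D(800f, 480f);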

    Read the article

  • How do you make a bullet ricochet off a vertical wall?

    - by Bagofsheep
    First things first. I am using C# with XNA. My game is top-down and the player can shoot bullets. I've managed to get the bullets to ricochet correctly off horizontal walls. Yet, despite using similar methods (e.g. http://stackoverflow.com/questions/3203952/mirroring-an-angle) and reading other answered questions about this subject I have not been able to get the bullets to ricochet off a vertical wall correctly. Any method I've tried has failed and sometimes made ricocheting off a horizontal wall buggy. Here is the collision code that calls the ricochet method: //Loop through returned tile rectangles from quad tree to test for wall collision. If a collision occurs perform collision logic. for (int r = 0; r < returnObjects.Count; r++) if (Bullets[i].BoundingRectangle.Intersects(returnObjects[r])) Bullets[i].doCollision(returnObjects[r]); Now here is the code for the doCollision method. public void doCollision(Rectangle surface) { if (Ricochet) doRicochet(surface); else Trash = true; } Finally, here is the code for the doRicochet method. public void doRicochet(Rectangle surface) { if (Position.X > surface.Left && Position.X < surface.Right) { //Mirror the bullet's angle. Rotation = -1 * Rotation; //Moves the bullet in the direction of its rotation by given amount. moveFaceDirection(Sprite.Width * BulletScale.X); } else if (Position.Y > surface.Top && Position.Y < surface.Bottom) { } } Since I am only dealing with vertical and horizontal walls at the moment, the if statements simply determine if the object is colliding from the right or left, or from the top or bottom. If the object's X position is within the tile's X boundaries (left and right sides), it must be colliding from the top, and vice versa. As you can see, the else if statement is empty and is where the correct code needs to go.
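
    For what it is worth, the missing branch is usually just the other mirror: bouncing off a horizontal surface flips the Y component of the direction (which is why negating the angle works there), while bouncing off a vertical surface flips the X component, which in angle form is PI minus the angle. A sketch of the empty branch under that assumption, using the same convention as the working horizontal case:

        else if (Position.Y > surface.Top && Position.Y < surface.Bottom)
        {
            // Vertical wall: mirror the direction across the vertical axis.
            // (Horizontal wall was Rotation = -Rotation; this is the X-flip instead.)
            Rotation = MathHelper.Pi - Rotation;
            moveFaceDirection(Sprite.Width * BulletScale.X);
        }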

    Read the article

  • Sorting for 2D Drawing

    - by Nexian
    okie, looked through quite a few similar questions but still feel the need to ask mine specifically (I know, crazy). Anyhoo: I am drawing a game in 2D (isometric) My objects have their own arrays. (i.e. Tiles[], Objects[], Particles[], etc) I want to have a draw[] array to hold anything that will be drawn. Because it is 2D, I assume I must prioritise depth over any other sorting or things will look weird. My game is turn based so Tiles and Objects won't be changing position every frame. However, Particles probably will. So I am thinking I can populate the draw[] array (probably a vector?) with what is on-screen and have it add/remove object, tile & particle references when I pan the screen or when a tile or object is specifically moved. No idea how often I'm going to have to update for particles right now. I want to do this because my game may have many thousands of objects and I want to iterate through as few as possible when drawing. I plan to give each element a depth value to sort by. So, my questions: Does the above method sound like a good way to deal with the actual drawing? What is the most efficient way to sort a vector? Most of the time it won't require efficiency. But for panning the screen it will. And I imagine if I have many particles on screen moving across multiple tiles, it may happen quite often. For reference, my screen will be drawing about 2,800 objects at any one time. When panning, it will be adding/removing about ~200 elements every second, and each new element will need adding in the correct location based on depth.
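
    A sketch of the straightforward approach (C# here, since the post does not name a language): sort the visible list by the depth key with the standard library sort, which is O(n log n); because the list stays almost sorted between frames, inserting the ~200 new elements per second at their correct position with a binary search is usually cheap enough at a few thousand entries. The Drawable type is invented for illustration.

        using System;
        using System.Collections.Generic;

        class Drawable
        {
            public float Depth;     // e.g. isometric depth key such as tile x + y
            public Action Draw;
        }

        class DrawList
        {
            readonly List<Drawable> visible = new List<Drawable>();

            public void DrawAll()
            {
                visible.Sort((a, b) => a.Depth.CompareTo(b.Depth));   // back-to-front
                foreach (var d in visible)
                    d.Draw();
            }

            // When panning adds a single element, a binary search keeps it O(log n) per insert.
            public void Insert(Drawable d)
            {
                int index = visible.BinarySearch(d,
                    Comparer<Drawable>.Create((a, b) => a.Depth.CompareTo(b.Depth)));
                if (index < 0) index = ~index;
                visible.Insert(index, d);
            }
        }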

    Read the article

  • How to divide hex grid evenly among n players?

    - by manabreak
    I'm making a simple hex-based game, and I want the map to be divided evenly among the players. The map is created randomly, and I want the players to have about equal amount of cells, with relatively small areas. For example, if there's four players and 80 cells in the map, each of the players would have about 20 cells (it doesn't have to be spot-on accurate). Also, each player should have no more than four adjacent cells. That is to say, when the map is generated, the biggest "chunks" cannot be more than four cells each. I know this is not always possible for two or three players (as this resembles the "coloring the map" problem), and I'm OK with doing other solutions for those (like creating maps that solve the problem instead). But, for four to eight players, how could I approach this problem? As always, any and all help is appreciated. :)
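
    One heuristic that fits the stated constraints (a sketch of an idea, not a known canonical algorithm): deal the cells out in rounds, where on each round every player grows one new chunk of at most four adjacent unclaimed cells from a random unclaimed seed. Players end up within one chunk of each other in cell count, and no single chunk exceeds four cells by construction; it does not stop two of the same player's chunks from ending up adjacent, so a final check or swap pass may still be needed. Cell.Neighbours and Cell.Owner are assumed to come from the map generator.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class HexDistributor
        {
            // Round-robin "deal" of the map into chunks of <= 4 cells per player.
            public static void Distribute(List<Cell> cells, int playerCount, Random rng)
            {
                var unclaimed = new HashSet<Cell>(cells);
                int player = 0;

                while (unclaimed.Count > 0)
                {
                    // Start a new chunk for this player at a random unclaimed cell.
                    Cell seed = unclaimed.ElementAt(rng.Next(unclaimed.Count));
                    var frontier = new Queue<Cell>();
                    frontier.Enqueue(seed);
                    int chunkSize = 0;

                    while (frontier.Count > 0 && chunkSize < 4)
                    {
                        Cell c = frontier.Dequeue();
                        if (!unclaimed.Remove(c))
                            continue;                // already taken by an earlier chunk
                        c.Owner = player;
                        chunkSize++;
                        foreach (Cell n in c.Neighbours)
                            if (unclaimed.Contains(n))
                                frontier.Enqueue(n);
                    }

                    player = (player + 1) % playerCount;   // next chunk goes to the next player
                }
            }
        }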

    Read the article

  • OpenGL doesn't draw (3.3+) [on hold]

    - by Dhiego Magalhães
    Brief: I've been following this tutorial about OpenGL for 2 days, and I still can't get a triangle drawn, so I'm asking for help here. The tutorial is aimed at OpenGL 3.3 programming, using vertex arrays, buffers, etc. The libraries are GLFW3 and GLEW, and I set them up myself. The screen stays black all the time. Full code: link here (it's just like a Hello World OpenGL program) Further Details: I get no errors at all. I downloaded software to test my video card, and it supports OpenGL 4.1+. Standard OpenGL code for drawing (from earlier versions) such as this one works normally. I'm using Microsoft Visual Studio 10.0 I presume the OpenGL setup was done right: I added Additional Dependencies to the linker as glew32.lib, opengl32.lib, glfw3.lib. The glew.dll was placed in SysWOW64, because I'm running 64-bit Windows and GLEW is 32-bit. Notes: I've been working hard to find out what this is, but I can't find it. I would appreciate it if anyone could test this code for me, so I can know whether I implemented something wrong or whether it's not my code.

    Read the article

  • How does HDR work?

    - by dotminic
    I'm trying to understand what HDR is and how it works. I understand the basic concepts and have a slight idea of how it is implemented with D3D/HLSL. However, it's still pretty foggy. Say I'm rendering a sphere with a texture of the earth and a small point list of vertices to act as stars, how would I render this in HDR? Here are a few things I'm confused about: I'm guessing I can't use just any basic image format for the texture as the values would be limited to [0, 255] and clamped to [0, 1] in a shader. The same goes for the back buffer; I take it the format needs to be a floating-point format? What are the other steps involved? Surely there has to be more than just using floating-point formats to render to a render target and then applying some bloom as a post-process? (considering the output will be 8bpp anyway) Basically, what are the steps for HDR? How does it work? I can't seem to find any good papers/articles that describe the process, other than this one, but it seems to skim over the basics a little, so it's confusing.
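
    To make the tone-mapping step concrete: after the scene has been rendered into a floating-point target, a tone-mapping pass squeezes the unbounded values back into [0,1] before (or together with) the bloom composite. The widely used Reinhard operator is just the one-liner below, shown as plain C#-style math rather than HLSL; the exposure and white-point parameters are illustrative and are often driven by the scene's average luminance.

        // Sketch: Reinhard tone mapping for one colour channel.
        // hdr      -- linear scene value from the floating-point render target (can be > 1)
        // exposure -- scales the scene before mapping
        // white    -- the value that should map to pure white
        static float ToneMapReinhard(float hdr, float exposure, float white)
        {
            float v = hdr * exposure;
            return v * (1f + v / (white * white)) / (1f + v);
        }
        // As white grows very large this reduces to the simple v / (1 + v) curve.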

    Read the article

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game, it will have many planets and other objects that will all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at a practically exponential rate. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations, EntityEngine creates an instance of Gravity and runs it, along with other entity-related methods. EntityEngine.cs public void Update() { foreach (KeyValuePair<string, Entity> e in Entities) { e.Value.Update(); } gravity.Update(); } (Only relevant piece of code from EntityEngine, self-explanatory. When an instance of Gravity is made in entityEngine, it passes itself (this) into it, so that gravity can have access to entityEngine.Entities (a dictionary of all planet objects)) Gravity.cs namespace ExplorationEngine { public class Gravity { private EntityEngine entityEngine; private Vector2 Force; private Vector2 VecForce; private float distance; private float mult; public Gravity(EntityEngine e) { entityEngine = e; } public void Update() { //First loop foreach (KeyValuePair<string, Entity> e in entityEngine.Entities) { //Reset the force vector Force = new Vector2(); //Second loop foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities) { //Make sure the second value is not the current value from the first loop if (e2.Value != e.Value ) { //Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2), using Vector2.Distance() and then squaring it //is pointless and inefficient because distance uses a sqrt, squaring the result simply cancels that sqrt. distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position); //This makes sure that two planets do not attract each other if they are touching, completely unnecessary when I add collision, //For now it just makes it so that the planets are not glitchy, performance is not significantly improved by removing this IF if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2)) { //Calculate the magnitude of Fg (I'm using my own gravitational constant (G) for the sake of time (I know it's 1 at the moment, but I've been changing it) mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance); //Calculate the direction of the force, simply subtracting the positions and normalizing works, this fixes diagonal vectors //from having a larger value, and basically makes VecForce a direction. VecForce = e2.Value.Position - e.Value.Position; VecForce.Normalize(); //Add the vector for each planet in the second loop to a force var. Force = Vector2.Add(Force, VecForce * mult); //I have tried Force += VecForce * mult, and have not noticed much of an increase in speed. } } } //Add that force to the first loop's planet's position (later on I'll instead add to acceleration, to account for inertia) e.Value.Position += Force; } } } } I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it. This O(N^2) algorithm still seems to be eating up all of my CPU power though. Here is a LINK (google drive, go to File download, keep .Exe with the content folder, you will need XNA Framework 4.0 Redist, if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. Mouse moves the camera, scroll wheel zooms in and out. Watch the FPS and Planet Count to see what I mean about performance issues past 70 planets. (ALL 70 planets must be moving, I've had 100 stationary planets and only 5 or so moving ones while still having 300 fps, the issue arises when 70+ are moving around) After 70 planets are made, performance tanks exponentially. With < 70 planets, I get 330 fps (I have it capped at 300). At 90 planets, the FPS is about 2, more than that and it sticks around at 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves, it goes right back down to what it was, I have no systems in place to make this happen, it just does. I considered multithreading, but that previous question I asked taught me a thing or two, and I see now that that's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this, it is not a simple concept and I want to avoid it unless someone knows a really noob-friendly simple way to do it that will work for an n-body gravity calculation. (I have an NVidia gtx 660) Lastly I've considered using a quadtree-type system (Barnes-Hut simulation). I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward, however the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand, or uses code I can eventually figure out. So my question is this: How can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets with constant 300+ FPS without gravity calculations), and if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?
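
    Before reaching for the GPU or a full Barnes-Hut tree, there is a cheaper constant-factor win in the inner loop itself (sketched below with illustrative names, against the same Entity fields used in the question): iterate over unordered pairs only (j > i), apply the equal-and-opposite force to both bodies at once, compare squared distances instead of calling Math.Sqrt for the touching test, and copy the entities into a plain array each frame so the hot loop is not walking a Dictionary. That halves the pair count and cuts the per-pair square roots, which usually buys quite a few more bodies before an O(N log N) scheme like Barnes-Hut becomes unavoidable.

        // Sketch: pairwise gravity over an array, each pair computed once.
        // Assumes using System, System.Linq and Microsoft.Xna.Framework.
        Entity[] bodies = entityEngine.Entities.Values.ToArray();   // refresh once per frame
        Vector2[] force = new Vector2[bodies.Length];

        for (int i = 0; i < bodies.Length; i++)
        {
            for (int j = i + 1; j < bodies.Length; j++)
            {
                Vector2 delta = bodies[j].Position - bodies[i].Position;
                float distSq = delta.LengthSquared();

                // Compare against the squared touching distance instead of taking a square root.
                float minDist = (bodies[i].Texture.Width + bodies[j].Texture.Width) * 0.5f;
                if (distSq <= minDist * minDist)
                    continue;

                float invDist = 1f / (float)Math.Sqrt(distSq);
                // G * m1 * m2 / r^2, with the normalisation of delta folded in (delta / r).
                float magnitude = 1.0f * bodies[i].Mass * bodies[j].Mass / distSq;
                Vector2 fv = delta * (invDist * magnitude);

                force[i] += fv;      // Newton's third law: one computation,
                force[j] -= fv;      // applied to both bodies with opposite sign.
            }
        }

        for (int i = 0; i < bodies.Length; i++)
            bodies[i].Position += force[i];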

    Read the article
