Search Results

Search found 25518 results on 1021 pages for 'iterative development'.

Page 460/1021 | < Previous Page | 456 457 458 459 460 461 462 463 464 465 466 467  | Next Page >

  • Pattern for performing game actions

    - by Arkiliknam
    Is there a generally accepted pattern for performing various actions within a game? A way a player can perform actions, and also that an AI might perform actions, such as move, attack, self-destruct, etc. I currently have an abstract BaseAction which uses .NET generics to specify the different objects that get returned by the various actions. This is all implemented in a pattern similar to the Command pattern, where each action is responsible for itself and does all that it needs. My reasoning for making it abstract is so that I may have a single ActionHandler, and the AI can just queue up different actions implementing BaseAction. The reason it is generic is so that the different actions can return result information relevant to the action (as different actions can have totally different outcomes in the game), along with some common beforeAction and afterAction implementations. So... is there a more accepted way of doing this, or does this sound alright?
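    For illustration only, here is a minimal C# sketch of the kind of generic, command-style action described above; the names (IAction, BaseAction<TResult>, MoveAction, MoveResult) are assumptions for the example, not the poster's actual types. A non-generic IAction base lets a single ActionHandler hold a Queue<IAction>, while each concrete action still exposes a strongly typed result:

        // Non-generic base so a single handler can queue mixed actions.
        public interface IAction
        {
            void Execute();
        }

        // Generic command-style action with shared before/after hooks.
        public abstract class BaseAction<TResult> : IAction
        {
            public TResult Result { get; private set; }

            protected virtual void BeforeAction() { }
            protected virtual void AfterAction() { }
            protected abstract TResult PerformAction();

            public void Execute()
            {
                BeforeAction();
                Result = PerformAction();
                AfterAction();
            }
        }

        public sealed class MoveResult
        {
            public bool ReachedTarget;
        }

        public sealed class MoveAction : BaseAction<MoveResult>
        {
            protected override MoveResult PerformAction()
            {
                // ... move the unit and report whether it arrived ...
                return new MoveResult { ReachedTarget = true };
            }
        }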

    Read the article

  • OpenGL VertexBuffer won't render in GLFW3

    - by sm81095
    So I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it yet and how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f()), and such, I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code: static const GLuint g_vertex_buffer_data[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; int main(void) { GLFWwindow* window; if(!glfwInit()) { fprintf(stderr, "Failed to initialize GLFW."); return -1; } glfwWindowHint(GLFW_SAMPLES, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL); if(!window) { glfwTerminate(); fprintf(stderr, "Failed to create a GLFW window"); return -1; } glfwMakeContextCurrent(window); glewExperimental = GL_TRUE; GLenum err = glewInit(); if(err != GLEW_OK) { glfwTerminate(); fprintf(stderr, "Failed to initialize GLEW"); fprintf(stderr, (char*)glewGetErrorString(err)); return -1; } GLuint VertexArrayID; glGenVertexArrays(1, &VertexArrayID); glBindVertexArray(VertexArrayID); GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl"); GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW); while(!glfwWindowShouldClose(window)) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glUseProgram(programID); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteBuffers(1, &vertexBuffer); glDeleteProgram(programID); glfwDestroyWindow(window); glfwTerminate(); exit(EXIT_SUCCESS); } I know it is not my shaders, they are super simple and I've checked them against GLFW 2.7 so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.

    Read the article

  • When I shoot a gun while walking, the bullet is off center, but when I stand still it's fine

    - by Vlad1k
    I am making a small project in Unity, and whenever I walk with the gun and shoot at the same time, the bullets seem to curve and shoot off 2-3 cm from the center. When I stand still this doesn't happen. This is my main JavaScript code: @script RequireComponent(AudioSource) var projectile : Rigidbody; var speed = 500; var ammo = 30; var fireRate = 0.1; private var nextFire = 0.0; function Update() { if(Input.GetButton ("Fire1") && Time.time > nextFire) { if(ammo != 0) { nextFire = Time.time + fireRate; var clone = Instantiate(projectile, transform.position, transform.root.rotation); clone.velocity = transform.TransformDirection(Vector3 (0, 0, speed)); ammo = ammo - 1; audio.Play(); } else { } } } I assume that these two lines need to be tweaked: var clone = Instantiate(projectile, transform.position, transform.root.rotation); clone.velocity = transform.TransformDirection(Vector3 (0, 0, speed)); Thanks in advance, and please remember that I just started Unity, and I might have a difficult time understanding some things. Thanks!
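    The script above is UnityScript; purely as an illustration of one common arrangement, here is a hedged C# sketch that spawns the projectile from a dedicated muzzle transform (an empty child object placed at the barrel tip), so the bullet's position and direction always match the gun no matter how the player is moving. The muzzle field and the GunFire class name are assumptions, not part of the original project:

        using UnityEngine;

        // Sketch: fire from a muzzle transform instead of the player's root rotation.
        public class GunFire : MonoBehaviour
        {
            public Rigidbody projectile;   // assigned in the Inspector
            public Transform muzzle;       // empty child placed at the barrel tip
            public float speed = 500f;
            public float fireRate = 0.1f;

            private float nextFire;

            void Update()
            {
                if (Input.GetButton("Fire1") && Time.time > nextFire)
                {
                    nextFire = Time.time + fireRate;
                    Rigidbody clone = (Rigidbody)Instantiate(projectile, muzzle.position, muzzle.rotation);
                    // Use the muzzle's forward axis rather than a hand-built local vector.
                    clone.velocity = muzzle.forward * speed;
                }
            }
        }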

    Read the article

  • Adaptive Characters: AI Solution Needs a Problem

    - by Roger F. Gay
    Have sophisticated adaptive programming, will travel - so to speak. I'm part of a group that developed sophisticated learning / adaptive software for robotics. The system "thinks" via its simulator, building and adapting code on its own, and then carries out the best solution. The software can also adapt to new situations, etc. http://mensnewsdaily.com/2007/05/16/robobusiness-robots-with-imagination/ It's easy to imagine using it with automated game characters that will adapt to the player's moves and style - the easiest example would be fighting. The more the simulated fighter fights with the human player, the more it learns to counter that player's fighting skills. But there should be more. Anyone have any ideas as to how adaptive characters might be interesting in games?

    Read the article

  • Generating triangles from a square grid

    - by vivi
    I have a 2D square grid of values representing terrain elevations, and I want to generate triangles from that grid to make a 3D view of the terrain. My first thought was to split each square diagonally into 2 triangles; however, the split diagonal can clearly be seen, especially from the top: [Sorry, as a new user I can't post images, please see here: imgur] Is there a recommended way to generate triangles to remove or reduce this effect?
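    One common way to reduce that visible diagonal is to alternate the split direction in a checkerboard pattern (or to pick, per cell, the diagonal whose two endpoints have the closer elevations). A hedged C# sketch of the index generation, assuming a width x height grid of vertices stored row-major:

        // Sketch: build triangle indices for a height-field grid, flipping the
        // diagonal on alternating cells so no single direction dominates.
        static class GridMesh
        {
            public static int[] BuildIndices(int width, int height)
            {
                var indices = new System.Collections.Generic.List<int>();
                for (int y = 0; y < height - 1; y++)
                {
                    for (int x = 0; x < width - 1; x++)
                    {
                        int i0 = y * width + x;           // top-left
                        int i1 = y * width + x + 1;       // top-right
                        int i2 = (y + 1) * width + x;     // bottom-left
                        int i3 = (y + 1) * width + x + 1; // bottom-right

                        if ((x + y) % 2 == 0)
                        {
                            // Diagonal from top-left to bottom-right.
                            indices.AddRange(new[] { i0, i2, i3, i0, i3, i1 });
                        }
                        else
                        {
                            // Diagonal from top-right to bottom-left.
                            indices.AddRange(new[] { i0, i2, i1, i1, i2, i3 });
                        }
                    }
                }
                return indices.ToArray();
            }
        }

    Choosing the diagonal with the smaller elevation difference per cell tends to follow ridges and valleys even better, at the cost of a slightly less regular mesh.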

    Read the article

  • How to deal with Character body parts from Design to Cocos2d

    - by Edwin Soho
    I'm trying to figure out the pattern game developers use together with game designers: See the picture below with the independent parts: Questions: 1) Should I create different image parts from different body parts or keep frame-by-frame animation? (I know both can be used, but I'm trying to figure out what is commonly used in the industry.) 2) If I'm going to generate different image parts from different body parts (which I think is more logical), how would I export that to Cocos2d (Vector or Bitmap)?

    Read the article

  • How to change LOD in geometry?

    - by ChaosDev
    I'm looking for a simple LOD algorithm to modify geometry vertices and reduce frame time. I've created an octree, but now I want a vertex-modification algorithm for a model or terrain, not to increase detail (I'm looking at tessellation later) but to decrease it. I want something like this. Questions: Can the same algorithm be applied correctly to both models and terrain? Do the indices need to be modified? Must I use the octree, or is a simple distance check between the camera and the object enough for the desired effect? Is a new value of indexcount needed for the DrawIndexed function? Code: //m_LOD == 10 in the beginning //m_RawVerts - array of 3d Vector filled with values from vertex buffer. void DecreaseLOD() { m_LOD--; if(m_LOD<1)m_LOD=1; RebuildGeometry(); } void IncreaseLOD() { m_LOD++; if(m_LOD>10)m_LOD=10; RebuildGeometry(); } void RebuildGeometry() { void* vertexRawData = new byte[m_VertexBufferSize]; void* indexRawData = new DWORD[m_IndexCount]; auto context = mp_D3D->mp_Context; D3D11_MAPPED_SUBRESOURCE data; ZeroMemory(&data,sizeof(D3D11_MAPPED_SUBRESOURCE)); context->Map(mp_VertexBuffer->mp_buffer,0,D3D11_MAP_READ,0,&data); memcpy(vertexRawData,data.pData,m_VertexBufferSize); context->Unmap(mp_VertexBuffer->mp_buffer,0); context->Map(mp_IndexBuffer->mp_buffer,0,D3D11_MAP_READ,0,&data); memcpy(indexRawData,data.pData,m_IndexBufferSize); context->Unmap(mp_IndexBuffer->mp_buffer,0); DWORD* dwI = (DWORD*)indexRawData; int sz = (m_VertexStride/sizeof(float));//size of vertex element //algorithm must be here. std::vector<Vector3d> vertices; int i = 0; for(int j = 0; j < m_VertexCount; j++) { float x1 = (((float*)vertexRawData)[0+i]); float y1 = (((float*)vertexRawData)[1+i]); float z1 = (((float*)vertexRawData)[2+i]); Vector3d lv = Vector3d(x1,y1,z1); //my useless attempts if(j+m_LOD+1<m_RawVerts.size()) { float v1 = VECTORHELPER::Distance(m_RawVerts[dwI[j]],m_RawVerts[dwI[j+m_LOD]]); float v2 = VECTORHELPER::Distance(m_RawVerts[dwI[j]],m_RawVerts[dwI[j+m_LOD+1]]); if(v1>v2) lv = m_RawVerts[dwI[j+1]]; else if(v2<v1) lv = m_RawVerts[dwI[j+2]]; } (((float*)vertexRawData)[0+i]) = lv.x; (((float*)vertexRawData)[1+i]) = lv.y; (((float*)vertexRawData)[2+i]) = lv.z; i+=sz;//pass others vertex format values without change } for(int j = 0; j < m_IndexCount; j++) { //indices ? } //set vertexes to device UpdateVertexes(vertexRawData,mp_VertexBuffer->getSize()); delete[] vertexRawData; delete[] indexRawData; }

    Read the article

  • AS3 Calculating Delta Time In Seconds

    - by user1133079
    Here is how I've been trying to implement delta time based on different internet resources. var startTime:Number = getTimer(); game.Update(deltaTime); deltaTime = Number(getTimer() - startTime) * 0.001; My issue with this is that it doesn't seem to be giving me accurate timing. The main update shows the frame time at 0.001, and when reinitializing the level it goes to 0.002. I'm using dt elsewhere for a timer and later on for time-based physics, so I would like it to work as expected. I must be missing something silly.
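    The snippet above times only the Update call itself, so it reports how long Update took rather than how long the whole frame took; the usual pattern is to measure the time elapsed since the previous frame, once per frame, and pass that in. A hedged sketch of that pattern, shown in C# rather than AS3 for brevity (Environment.TickCount plays the role of getTimer() here):

        using System;

        // Minimal sketch: frame-to-frame delta time, measured once per frame.
        class GameLoop
        {
            int lastTime = Environment.TickCount;   // milliseconds, like getTimer()

            public void Tick()
            {
                int now = Environment.TickCount;
                float deltaTime = (now - lastTime) * 0.001f;  // seconds since the last frame
                lastTime = now;

                Update(deltaTime);   // physics and timers see the whole frame's duration
            }

            void Update(float dt) { /* advance everything by speed * dt */ }
        }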

    Read the article

  • 2D Animation Smoothness - Delta time vs. Kinematics

    - by viperld002
    I'm animating a sprite in 2D with key frames of rotation and xy-positions. I've recently had a discussion with someone saying that when the device (happens to be an iPad using cocos2D) hits a performance bump due to whatever else the user may be doing, lag will arise and that the best way to fight it is to not use actual positions, but velocities, accelerations and torques with kinematics. His message is to evaluate the positions and rotations from these speeds at the current point in time. I've never experienced a situation where I've heard of using kinematics to stem lag in 2D animations and am not sure of how effective it could be. Also, it seems to be overkill. The application is not networked so it's all running on a local device. The desired effect is that the animation always plays as closely as it can to the target frame rate. Wouldn't the technique suffer the same problems as just using the time since the last frame or a fixed time step, since the kinematics would also require some time value to perform the calculation? What techniques could you suggest to best achieve the desired effect? EDIT 1 Thank you for your responses, they are very illuminating. I want to clarify my question before choosing an answer, however, to make sure that this post really serves its purpose. I have a sprite of a ball, and a text file with 3 arrays' worth of information (rotation, translations x, translations y) with each unit of information existing as a key frame to be stepped through (0 to 49 and back to 0 to replay it again). I have this playing by interpolating from the current key frame to the next, every n units of time. The animation is visibly correct when compared to a video I was given of it, and it is smooth because of the interpolations between the key frames. This is the existing state of the project. There are no physics simulated, only a static animation of a ball moving in a way an artist specifically designed. Should I, instead of rotation in degrees and translations by positions in space, derive velocities, accelerations and torques to express this static animation as a function of time? As in, position now = foo(time now), where foo uses kinematics.

    Read the article

  • HLSL problem with divide by homogeneous component

    - by Berend
    When I try to divide my position.z by my position.w in HLSL, the result is always 1.0f or higher. Is this a common problem for some reason? When I divide my position.x or y by the w this works fine, but the divide for the z gives a wrong result. I use the view matrix for my camera and the projection matrix as I use it in the game, because I want to create a depth map from the camera position. Can anybody explain what I'm doing wrong? Do I need another view matrix?

    Read the article

  • Making a game with responsive resolution

    - by alexandervrs
    I am making a game; however, I wish for it to be resolution agnostic. My target resolution, i.e. where things look as intended, is 1600 x 900. My ideas are: Make the HUD stay fixed to the sides no matter what resolution; use one size of HUD graphics under a certain resolution and another above a certain larger one. Use large HD sprites/backgrounds which are a power of 2, so they scale nicely. Use the player's native resolution. Scale the game area (not the HUD) to fit (resulting in some zooming and cropping of the game area's sides if necessary for widescreen, no stretching), but always fill the screen. Have a min and max resolution limit for small and very large displays where you just change the resolution(?) or scale up/down to fit. What I am a bit confused about, though, is what math formula I would use to scale the game area correctly based on the resolution, no matter the aspect ratio, fully filling a square screen and clipping the sides somewhat for widescreen. Pseudocode would help as well. :)
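    For the scale-and-crop behaviour described (always fill the screen, clip the sides on wide displays), the usual formula is to scale by the larger of the two ratios between the actual and the target resolution, then center the result. A hedged C# sketch under that assumption:

        using System;

        static class ScreenScaling
        {
            // Sketch: "cover" scaling - fill the screen and crop whatever overflows.
            // (targetW, targetH) is the design resolution, e.g. 1600 x 900.
            public static void ComputeCoverScale(
                float screenW, float screenH, float targetW, float targetH,
                out float scale, out float offsetX, out float offsetY)
            {
                // Max fills the screen (cropping the long sides); Min would letterbox instead.
                scale = Math.Max(screenW / targetW, screenH / targetH);

                // Center the scaled game area; offsets go negative where it overflows.
                offsetX = (screenW - targetW * scale) * 0.5f;
                offsetY = (screenH - targetH * scale) * 0.5f;
            }
        }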

    Read the article

  • What are my tool options to prototype a 2D online multiplayer game?

    - by Asher Einhorn
    I'm looking for the best tool to allow me to quickly put together a 2D game that relies largely on networking. It's extremely likely that this game will require a server-side program to run constantly. I have little experience with these things, and since it's a prototype I'd like the easiest options for achieving this. I am looking to make this game for the web and mobile devices, although at present I only have access to iOS hardware (no Android, etc.). I just want to get the bare bones of this set up so I can test it at the earliest opportunity to see if it's fun. EDIT - doesn't Unity have some built-in networking support?

    Read the article

  • OpenGL 3 and the Radeon HD 4850x2

    - by rotard
    A while ago, I picked up a copy of the OpenGL SuperBible fifth edition and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. Making things more challenging, I am primarily a .NET developer, so I was working in Mono with the OpenTK OpenGL wrapper. On my laptop, I put together a program that let the user walk around a simple landscape using a couple shaders that implemented per-vertex coloring and lighting and texture mapping. Everything was working brilliantly until I ran the same program on my desktop. Disaster! Nothing would render! I have chopped my program down to the point where the camera sits near the origin, pointing at the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop, coloring, lighting, texturing and all, but the desktop renders a small distorted non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured. I suspect the graphics card is at fault, because I get the same result whether I am booted into Ubuntu 10.10 or Win XP. I did find that if I pare the vertex shader down to ONLY outputting the positional data and the fragment shader to ONLY outputting a solid color (white) the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader) the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out so you can get an idea what I was trying to do. I'm a noob at glsl so the code could probably be a lot better. My laptop is an old lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card running Ubuntu 10.10 My desktop has an i7 with a Radeon HD 4850 x2 (single card, dual GPU) from Saphire dual booting into Ubuntu 10.10 and Windows XP. The problem occurs in both XP and Ubuntu. Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2? string vertexShaderSource = @" #version 330 precision highp float; uniform mat4 projection_matrix; uniform mat4 modelview_matrix; //uniform mat4 normal_matrix; //uniform mat4 cmv_matrix; //Camera modelview. Light sources are transformed by this matrix. 
//uniform vec3 ambient_color; //uniform vec3 diffuse_color; //uniform vec3 diffuse_direction; in vec4 in_position; in vec4 in_color; //in vec3 in_normal; //in vec3 in_tex_coords; out vec4 varyingColor; //out vec3 varyingTexCoords; void main(void) { //Get surface normal in eye coordinates //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0); //Get vertex position in eye coordinates //vec4 vPosition4 = modelview_matrix * vec4(in_position, 0); //vec3 vPosition3 = vPosition4.xyz / vPosition4.w; //Get vector to light source in eye coordinates //vec3 lightVecNormalized = normalize(diffuse_direction); //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz); //Dot product gives us diffuse intensity //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir.xyz)); //Multiply intensity by diffuse color, force alpha to 1.0 //varyingColor.xyz = in_color * diff * diffuse_color.xyz; varyingColor = in_color; //varyingTexCoords = in_tex_coords; gl_Position = projection_matrix * modelview_matrix * in_position; }"; string fragmentShaderSource = @" #version 330 //#extension GL_EXT_gpu_shader4 : enable precision highp float; //uniform sampler2DArray colorMap; //in vec4 varyingColor; //in vec3 varyingTexCoords; out vec4 out_frag_color; void main(void) { out_frag_color = vec4(1,1,1,1); //out_frag_color = varyingColor; //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, varyingTexCoords.st); //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, vec3(varyingTexCoords.st, 0)); //out_frag_color = vec4(varyingColor, 1) * texture2DArray(colorMap, varyingTexCoords); }"; Note that in this code the color data is accepted but not actually used. The geometry is outputted the same (wrong) whether the fragment shader uses varyingColor or not. Only if I comment out the line varyingColor = in_color; does the geometry output correctly. Originally the shaders took in vec3 inputs, I only modified them to take vec4s while troubleshooting.

    Read the article

  • How to remove a Box2D body when a collision happens?

    - by Ayham
    I'm still new to Java and Android programming and I am having a lot of trouble removing an object when a collision happens. I looked around the web and found that I should never handle removing Box2D bodies during collision detection (a contact listener); instead I should add my objects to an ArrayList, set a variable in the user data section of the body marking whether to delete it, and handle the actual removal in an update handler. So I did this: First I define two ArrayLists, one for the faces and one for the bodies: ArrayList<Sprite> myFaces = new ArrayList<Sprite>(); ArrayList<Body> myBodies = new ArrayList<Body>(); Then when I create a face and connect that face to its body I add them to their ArrayLists like this: face = new AnimatedSprite(pX, pY, pWidth, pHeight, this.mBoxFaceTextureRegion); Body BoxBody = PhysicsFactory.createBoxBody(mPhysicsWorld, face, BodyType.DynamicBody, objectFixtureDef); mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(face, BoxBody, true, true)); myFaces.add(face); myBodies.add(BoxBody); Now I add a contact listener and an update handler in onLoadScene like this: this.mPhysicsWorld.setContactListener(new ContactListener() { private AnimatedSprite face2; @Override public void beginContact(final Contact pContact) { } @Override public void endContact(final Contact pContact) { } @Override public void preSolve(Contact contact,Manifold oldManifold) { } @Override public void postSolve(Contact contact,ContactImpulse impulse) { } }); scene.registerUpdateHandler(new IUpdateHandler() { @Override public void reset() { } @Override public void onUpdate(final float pSecondsElapsed) { } }); My plan is to detect which two bodies collided in the contact listener by checking a variable from the user data section of the body, get their indices in the ArrayList, and finally use the update handler to remove these bodies. The questions are: Am I using the ArrayList correctly? How do I add a variable to the user data (code, please)? I tried removing a body in this update handler but it still throws a NullPointerException, so what is the right way to add an update handler, and where should I add it? Any other advice on doing this would be great. Thanks in advance.

    Read the article

  • Path tables or real-time searching for AI?

    - by SirYakalot
    What is the more common practice in commercial games: path lookup tables or real-time searches? I've read that in many games path lookup tables are pre-calculated and baked into each map, so to speak, and then steering behaviour is used to handle dynamic obstacles. Or is it better practice to use optimised hierarchical A* searches? I understand the pros and cons of each; I'm just curious as to what is most often used in the industry.

    Read the article

  • How do I simulate the mouse and keyboard using C# or C++?

    - by Art
    I want to start developing for Kinect, but the hardest part is how to send keyboard and mouse input to any application. In a previous question I was advised to develop my own driver for these devices, but this will take a while. I imagine an application like a gateway that can translate SendMessage calls into system-wide input, or a driver application with an API for sending these inputs. So I wonder, are there drivers or simulators that can interact with C# or C++? Small edit: SendMessage, PostMessage and keybd_event will work only on Windows applications with a common message loop. So I need a driver application that will work at a low (kernel) level.
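    For reference, a hedged C# sketch of one option short of writing a driver: injecting input at the system level through the classic user32 functions keybd_event and mouse_event (both superseded by SendInput, but with simpler signatures). This feeds the normal Windows input stream rather than one window's message queue, though applications that read hardware directly may still need a true kernel-level driver. The constants are standard Win32 values:

        using System;
        using System.Runtime.InteropServices;

        // Sketch: system-wide keyboard/mouse injection via user32.dll.
        static class InputSimulator
        {
            [DllImport("user32.dll")]
            static extern void keybd_event(byte bVk, byte bScan, uint dwFlags, UIntPtr dwExtraInfo);

            [DllImport("user32.dll")]
            static extern void mouse_event(uint dwFlags, uint dx, uint dy, uint dwData, UIntPtr dwExtraInfo);

            const uint KEYEVENTF_KEYUP = 0x0002;
            const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
            const uint MOUSEEVENTF_LEFTUP = 0x0004;

            public static void PressKey(byte virtualKey)
            {
                keybd_event(virtualKey, 0, 0, UIntPtr.Zero);               // key down
                keybd_event(virtualKey, 0, KEYEVENTF_KEYUP, UIntPtr.Zero); // key up
            }

            public static void LeftClick()
            {
                mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, UIntPtr.Zero);
                mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, UIntPtr.Zero);
            }
        }

    Usage would be something like InputSimulator.PressKey(0x41); to type an 'A'.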

    Read the article

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp file. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is in 2D quads except the window frame. Is there a way to prevent this from happening, or are there any common causes of this? Creating the file object: int index = IndexAssigner(1, 1); //make a fileobject and store list and the index of that list in a c string ifstream file (list[index].c_str() ); //Make another string //string line; points.push_back(Point()); Point p; int face[4]; Model rendering code: int numfloats = 4; float* point=reinterpret_cast<float*>(&points[0]); int num_bytes=numfloats*sizeof(float); cout << "Size Of Point" << sizeof(Point) << endl; GLuint vertexbuffer; glGenVertexArrays(1, &vao[3]); glGenBuffers(1, &vertexbuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]); glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_FLOAT, points.size(), &points[0]); glEnableClientState(GL_INDEX_ARRAY); glIndexPointer(GL_FLOAT, faces.size(), faces.data()); glEnableVertexAttribArray(0); glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data()); glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

    Read the article

  • samplerCubeShadow and texture offset

    - by Irbis
    I use sampler2DShadow when accessing a single shadow map. I create PCF in this way: result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(-1,-1)); result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(-1,1)); result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(1,1)); result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(1,-1)); result = result * 0.25; For a cube map I use samplerCubeShadow: result = texture(ShadowCubeSampler, vec4(normalize(position), depth)); How do I adapt the above PCF when accessing a cube map?

    Read the article

  • Multithreading 2D gravity calculations

    - by Postman
    I'm building a space exploration game and I've currently started working on gravity ( In C# with XNA). The gravity still needs tweaking, but before I can do that, I need to address some performance issues with my physics calculations. This is using 100 objects, normally rendering 1000 of them with no physics calculations gets well over 300 FPS (which is my FPS cap), but any more than 10 or so objects brings the game (and the single thread it runs on) to its knees when doing physics calculations. I checked my thread usage and the first thread was killing itself from all the work, so I figured I just needed to do the physics calculation on another thread. However when I try to run the Gravity.cs class's Update method on another thread, even if Gravity's Update method has nothing in it, the game is still down to 2 FPS. Gravity.cs public void Update() { foreach (KeyValuePair<string, Entity> e in entityEngine.Entities) { Vector2 Force = new Vector2(); foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities) { if (e2.Key != e.Key) { float distance = Vector2.Distance(entityEngine.Entities[e.Key].Position, entityEngine.Entities[e2.Key].Position); if (distance > (entityEngine.Entities[e.Key].Texture.Width / 2 + entityEngine.Entities[e2.Key].Texture.Width / 2)) { double angle = Math.Atan2(entityEngine.Entities[e2.Key].Position.Y - entityEngine.Entities[e.Key].Position.Y, entityEngine.Entities[e2.Key].Position.X - entityEngine.Entities[e.Key].Position.X); float mult = 0.1f * (entityEngine.Entities[e.Key].Mass * entityEngine.Entities[e2.Key].Mass) / distance * distance; Vector2 VecForce = new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle)); VecForce.Normalize(); Force = Vector2.Add(Force, VecForce * mult); } } } entityEngine.Entities[e.Key].Position += Force; } } Yeah, I know. It's a nested foreach loop, but I don't know how else to do the gravity calculation, and this seems to work, it's just so intensive that it needs its own thread. (Even if someone knows a super efficient way to do these calculations, I'd still like to know how I COULD do it on multiple threads instead) EntityEngine.cs (manages an instance of Gravity.cs) public class EntityEngine { public Dictionary<string, Entity> Entities = new Dictionary<string, Entity>(); public Gravity gravity; private Thread T; public EntityEngine() { gravity = new Gravity(this); } public void Update() { foreach (KeyValuePair<string, Entity> e in Entities) { Entities[e.Key].Update(); } T = new Thread(new ThreadStart(gravity.Update)); T.IsBackground = true; T.Start(); } } EntityEngine is created in Game1.cs, and its Update() method is called within Game1.cs. I need my physics calculation in Gravity.cs to run every time the game updates, in a separate thread so that the calculation doesn't slow the game down to horribly low (0-2) FPS. How would I go about making this threading work? (any suggestions for an improved Planetary Gravity system are welcome if anyone has them) I'm also not looking for a lesson in why I shouldn't use threading or the dangers of using it incorrectly, I'm looking for a straight answer on how to do it. I've already spent an hour googling this very question with little results that I understood or were helpful. 
    I don't mean to come off as rude, but as a programming noob it always seems hard to get a straight, meaningful answer; I usually get either an answer so complex that I could easily solve my issue if only I understood it, or someone saying why I shouldn't do what I want to do while offering no (helpful) alternatives. Thank you for the help!
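    For what it's worth, a hedged sketch of one way to parallelise this kind of pairwise loop: snapshot the entities into a plain array, let Parallel.For compute one force per entity (workers only read shared data and write to their own slot), then apply the forces on the main thread. The Body type below is a stand-in rather than the poster's EntityEngine, and the 0.1f constant mirrors the multiplier used above:

        using System;
        using System.Threading.Tasks;
        using Microsoft.Xna.Framework;

        // Stand-in for whatever the entity class actually exposes.
        class Body
        {
            public Vector2 Position;
            public float Mass;
            public float Radius;
        }

        static class GravityStep
        {
            // One gravity step for all bodies, computed in parallel and applied serially.
            public static void Update(Body[] bodies)
            {
                var forces = new Vector2[bodies.Length];

                Parallel.For(0, bodies.Length, i =>
                {
                    Vector2 force = Vector2.Zero;
                    for (int j = 0; j < bodies.Length; j++)
                    {
                        if (i == j) continue;
                        Vector2 delta = bodies[j].Position - bodies[i].Position;
                        float distance = delta.Length();
                        if (distance < bodies[i].Radius + bodies[j].Radius) continue;

                        float strength = 0.1f * bodies[i].Mass * bodies[j].Mass / (distance * distance);
                        delta.Normalize();
                        force += delta * strength;
                    }
                    forces[i] = force;
                });

                // Apply on the calling (main) thread so rendering never sees half-updated state.
                for (int i = 0; i < bodies.Length; i++)
                    bodies[i].Position += forces[i];
            }
        }

    Running the step this way each frame also avoids creating and abandoning a new Thread in every Update call, which the code above currently does.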

    Read the article

  • Better way to load level content in XNA?

    - by user2002495
    Currently I loaded all my assets in XNA in the main Game class. What I want to achieve later is that I only load specific assets for specific levels (the game will consist of many levels). Here is how I load my main assets into the main class: protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); plane = new Player(Content.Load<Texture2D>(@"Player/playerSprite"), 6, 8); plane.animation = "down"; plane.pos = new Vector2(400, 500); plane.fps = 15; Global.currentPos = plane.pos; lvl1 = new Level1(Content.Load<Texture2D>(@"Levels/bgLvl1"), Content.Load<Texture2D>(@"Levels/bgLvl1-other"), new Vector2(0, 0), new Vector2(0, -600)); CommonBullet.LoadContent(Content); CommonEnemyBullet.LoadContent(Content); } protected override void UnloadContent() { } protected override void Update(GameTime gameTime) { if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); plane.Update(gameTime); lvl1.Update(gameTime); foreach (CommonEnemy ce in cel) { if (ce.CollidesWith(plane)) { ce.hasSpawn = false; } foreach (CommonBullet b in plane.commonBulletList) { if (b.CollidesWith(ce)) { ce.hasSpawn = false; } } ce.Update(gameTime); } LoadCommonEnemy(); base.Update(gameTime); } private void LoadCommonEnemy() { int randY = rand.Next(-600, -10); int randX = rand.Next(0, 750); if (cel.Count < 3) { cel.Add(new CommonEnemy(Content.Load<Texture2D>(@"Enemy/Common/commonEnemySprite"), 7, 2, "left", randX, randY)); } for (int i = 0; i < cel.Count; i++) { if (!cel[i].hasSpawn) { cel.RemoveAt(i); i--; } } } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Black); spriteBatch.Begin(); lvl1.Draw(spriteBatch); plane.Draw(spriteBatch); foreach (CommonEnemy ce in cel) { ce.Draw(spriteBatch); } spriteBatch.End(); base.Draw(gameTime); } I wish to load my players, enemies, all in Level1 class. However, when I move my player & enemy code into the Level1 class, the gameTime returns null. Here is my Level1 class: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Media; using Microsoft.Xna.Framework.Input; using SpaceShooter_Beta.Animation.PlayerCollection; using SpaceShooter_Beta.Animation.EnemyCollection.Common; namespace SpaceShooter_Beta.Levels { public class Level1 { public Texture2D bgTexture1, bgTexture2; public Vector2 bgPos1, bgPos2; public float speed = 5f; Player plane; public Level1(Texture2D texture1, Texture2D texture2, Vector2 pos1, Vector2 pos2) { this.bgTexture1 = texture1; this.bgTexture2 = texture2; this.bgPos1 = pos1; this.bgPos2 = pos2; } public void LoadContent(ContentManager cm) { plane = new Player(cm.Load<Texture2D>(@"Player/playerSprite"), 6, 8); plane.animation = "down"; plane.pos = new Vector2(400, 500); plane.fps = 15; Global.currentPos = plane.pos; } public void Draw(SpriteBatch sb) { sb.Draw(bgTexture1, bgPos1, Color.White); sb.Draw(bgTexture2, bgPos2, Color.White); plane.Draw(sb); } public void Update(GameTime gt) { bgPos1.Y += speed; bgPos2.Y += speed; if (bgPos1.Y >= 600) { bgPos1.Y = -600; } if (bgPos2.Y >= 600) { bgPos2.Y = -600; } plane.Update(gt); } } } Of course when I did this, I delete all my player's code in the main Game class. All of that works fine (no errors) except that the game cannot start. 
    The debugger says that plane.Update(gt); in the Level1 class has a null GameTime, and the same thing happens with the Draw method in the Level class. Please help; I appreciate your time. [EDIT] I know that using a switch in the main class can be a solution, but I would prefer a cleaner solution than that, since using a switch still means I need to load all the assets through the main class, and the code will become very large later on with each new level.
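    A null GameTime handed in by the framework would be unusual; the symptoms above are also consistent with Level1's own LoadContent never being called, which leaves plane null when plane.Update(gt) runs. Purely as a hedged sketch, reusing the names from the code above, the per-level hook-up would look something like this:

        // Sketch: let each level load its own assets; Game1 only forwards the calls.
        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);

            lvl1 = new Level1(
                Content.Load<Texture2D>(@"Levels/bgLvl1"),
                Content.Load<Texture2D>(@"Levels/bgLvl1-other"),
                new Vector2(0, 0), new Vector2(0, -600));

            // Without this call the level's Player is never created,
            // so plane.Update(gt) throws a NullReferenceException.
            lvl1.LoadContent(Content);
        }

        protected override void Update(GameTime gameTime)
        {
            lvl1.Update(gameTime);   // gameTime supplied by the framework is never null
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);
            spriteBatch.Begin();
            lvl1.Draw(spriteBatch);
            spriteBatch.End();
            base.Draw(gameTime);
        }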

    Read the article

  • LWJGL - Mixing 2D and 3D

    - by nathan
    I'm trying to mix 2D and 3D using LWJGL. I have written two little methods that allow me to easily switch between 2D and 3D. protected static void make2D() { glEnable(GL_BLEND); GL11.glMatrixMode(GL11.GL_PROJECTION); GL11.glLoadIdentity(); glOrtho(0.0f, SCREEN_WIDTH, SCREEN_HEIGHT, 0.0f, 0.0f, 1.0f); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glLoadIdentity(); } protected static void make3D() { glDisable(GL_BLEND); GL11.glMatrixMode(GL11.GL_PROJECTION); GL11.glLoadIdentity(); // Reset The Projection Matrix GLU.gluPerspective(45.0f, ((float) SCREEN_WIDTH / (float) SCREEN_HEIGHT), 0.1f, 100.0f); // Calculate The Aspect Ratio Of The Window GL11.glMatrixMode(GL11.GL_MODELVIEW); glLoadIdentity(); } Then in my rendering code I would do something like: make2D(); //draw 2D stuffs here make3D(); //draw 3D stuffs here What I'm trying to do is draw a 3D shape (in my case a quad) and a 2D image. I found this example and I took the code from TextureLoader, Texture and Sprite to load and render a 2D image. Here is how I load the image. TextureLoader loader = new TextureLoader(); Sprite s = new Sprite(loader, "player.png") And how I render it: make2D(); s.draw(0, 0); It works great. Here is how I render my quad: glTranslatef(0.0f, 0.0f, 30.0f); glScalef(12.0f, 9.0f, 1.0f); DrawUtils.drawQuad(); Once again, no problem, the quad is properly rendered. DrawUtils is a simple class I wrote containing utility methods to draw primitive shapes. Now my problem is when I want to mix both of the above, loading/rendering the 2D image and rendering the quad. When I try to load my 2D image with the following: s = new Sprite(loader, "player.png); My quad is not rendered anymore (I'm not even trying to render the 2D image at this point). Just creating the texture causes the issue. After looking a bit at the code of Sprite and TextureLoader, I found that the problem appears after the call to glTexImage2D. In the TextureLoader class: glTexImage2D(target, 0, dstPixelFormat, get2Fold(bufferedImage.getWidth()), get2Fold(bufferedImage.getHeight()), 0, srcPixelFormat, GL_UNSIGNED_BYTE, textureBuffer); Commenting this line makes the problem disappear. My question is then: why? Is there anything special to do after calling this function to do 3D? Does this function alter the render part, the projection matrix?

    Read the article

  • Optimized algorithm for line-sphere intersection in GLSL

    - by fernacolo
    Well, hello then! I need to find the intersection between a line and a sphere in GLSL. Right now my solution is based on Paul Bourke's page and was ported to GLSL this way: // The line passes through p1 and p2: vec3 p1 = (...); vec3 p2 = (...); // Sphere center is p3, radius is r: vec3 p3 = (...); float r = ...; float x1 = p1.x; float y1 = p1.y; float z1 = p1.z; float x2 = p2.x; float y2 = p2.y; float z2 = p2.z; float x3 = p3.x; float y3 = p3.y; float z3 = p3.z; float dx = x2 - x1; float dy = y2 - y1; float dz = z2 - z1; float a = dx*dx + dy*dy + dz*dz; float b = 2.0 * (dx * (x1 - x3) + dy * (y1 - y3) + dz * (z1 - z3)); float c = x3*x3 + y3*y3 + z3*z3 + x1*x1 + y1*y1 + z1*z1 - 2.0 * (x3*x1 + y3*y1 + z3*z1) - r*r; float test = b*b - 4.0*a*c; if (test >= 0.0) { // Hit (according to Treebeard, "a fine hit"). float u = (-b - sqrt(test)) / (2.0 * a); vec3 hitp = p1 + u * (p2 - p1); // Now use hitp. } It works perfectly! But it seems slow... I'm new at GLSL. You can answer this question in two ways: Tell me there is no solution, showing some proof or strong evidence. Tell me about GLSL features (vector APIs, primitive operations) that make the above algorithm faster, showing some example. Thanks a lot!

    Read the article

  • How can I get a rotation vector from a matrix4x4 in XNA?

    - by mr.Smyle
    I want to get a rotation vector from a matrix to implement a parent-child system for models. Matrix bonePos = link.Bone.Transform * World; Matrix m = Matrix.CreateTranslation(link.Offset) * Matrix.CreateScale(link.gameObj.Scale.X, link.gameObj.Scale.Y, link.gameObj.Scale.Z) * Matrix.CreateFromYawPitchRoll(MathHelper.ToRadians(link.gameObj.Rotation.Y), MathHelper.ToRadians(link.gameObj.Rotation.X), MathHelper.ToRadians(link.gameObj.Rotation.Z)) //need rotation vector from bone matrix here (now it's global model rotation vector) * Matrix.CreateFromYawPitchRoll(MathHelper.ToRadians(Rotation.Y), MathHelper.ToRadians(Rotation.X), MathHelper.ToRadians(Rotation.Z)) * Matrix.CreateTranslation(bonePos.Translation); link.gameObj.World = m; where: link is a struct with the child model's settings, like position, rotation, etc., and link.Bone is the parent bone.
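    For reference, XNA's Matrix can be decomposed into scale, rotation (as a quaternion) and translation, which avoids extracting Euler angles at all; the quaternion can be fed straight back into the matrix chain. A hedged sketch, where BuildChildWorld and its parameters are illustrative names rather than the poster's actual code:

        using Microsoft.Xna.Framework;

        static class BoneMath
        {
            // Sketch: reuse a bone's rotation when building a child's world matrix.
            public static Matrix BuildChildWorld(Matrix boneTransform, Vector3 offset, Vector3 scale)
            {
                Vector3 boneScale, boneTranslation;
                Quaternion boneRotation;

                // Decompose returns false if the matrix cannot be split cleanly
                // (e.g. it contains shear); typical bone transforms succeed.
                boneTransform.Decompose(out boneScale, out boneRotation, out boneTranslation);

                return Matrix.CreateTranslation(offset)
                     * Matrix.CreateScale(scale)
                     * Matrix.CreateFromQuaternion(boneRotation)   // the bone's rotation, no Euler angles needed
                     * Matrix.CreateTranslation(boneTranslation);
            }
        }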

    Read the article

  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self-education, I'm writing a 2D platformer engine in C++ using SDL / OpenGL. I began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode, but I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates are a right-hand system with values from 0 to 1 -- flipping the texture vertically before rendering (well, flipping the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice, or has done this themselves and can point out future pitfalls, that would be great, but really any thoughts would be appreciated.

    Read the article

  • 2D SAT Collision Detection not working when using certain polygons (With example)

    - by sFuller
    My SAT algorithm falsely reports that collision is occurring when using certain polygons. I believe this happens when using a polygon that does not contain a right angle. Here is a simple diagram of what is going wrong: Here is the problematic code: std::vector<vec2> axesB = polygonB->GetAxes(); //loop over axes B for(int i = 0; i < axesB.size(); i++) { float minA,minB,maxA,maxB; polygonA->Project(axesB[i],&minA,&maxA); polygonB->Project(axesB[i],&minB,&maxB); float intervalDistance = polygonA->GetIntervalDistance(minA, maxA, minB, maxB); if(intervalDistance >= 0) return false; //Collision not occurring } This function retrieves axes from the polygon: std::vector<vec2> Polygon::GetAxes() { std::vector<vec2> axes; for(int i = 0; i < verts.size(); i++) { vec2 a = verts[i]; vec2 b = verts[(i+1)%verts.size()]; vec2 edge = b-a; axes.push_back(vec2(-edge.y,edge.x).GetNormailzed()); } return axes; } This function returns the normalized vector: vec2 vec2::GetNormailzed() { float mag = sqrt( x*x + y*y ); return *this/mag; } This function projects a polygon onto an axis: void Polygon::Project(vec2* axis, float* min, float* max) { float d = axis->DotProduct(&verts[0]); float _min = d; float _max = d; for(int i = 1; i < verts.size(); i++) { d = axis->DotProduct(&verts[i]); _min = std::min(_min,d); _max = std::max(_max,d); } *min = _min; *max = _max; } This function returns the dot product of the vector with another vector. float vec2::DotProduct(vec2* other) { return (x*other->x + y*other->y); } Could anyone give me a pointer in the right direction to what could be causing this bug? Edit: I forgot this function, which gives me the interval distance: float Polygon::GetIntervalDistance(float minA, float maxA, float minB, float maxB) { float intervalDistance; if (minA < minB) { intervalDistance = minB - maxA; } else { intervalDistance = minA - maxB; } return intervalDistance; //A positive value indicates this axis can be separated. } Edit 2: I have recreated the problem in HTML5/Javascript: Demo

    Read the article

< Previous Page | 456 457 458 459 460 461 462 463 464 465 466 467  | Next Page >