Search Results


  • Static "LoD" hack opinions

    - by David Lively
    I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point of this is to generate small triangles close to the camera, and larger ones far away. Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster) to render a single "fan" or flat wedge/frustum-shaped planar mesh that is tessellated into small triangles close to the near or small end of the frustum and larger ones at the far end, sort of like this (overhead view). (Pardon the gap in the middle - I drew one side and mirrored it.) The triangle sizes are chosen so that all are approximately the same size when projected.

    That mesh would then be transformed to track the camera so that the Z axis (center vertical in this image) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of the mesh to match a height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain.

    Obviously this doesn't address terrain with overhangs, etc., but that could be handled to a certain extent by including a second mesh that defines a sort of "ceiling" via a different texture. The other LoD schemes I've seen aren't particularly difficult to implement and, in some cases, are a lot more flexible, but this seemed like a decent quick-and-dirty way to handle height-map-based terrain without getting into geometry manipulation. Has anyone tried this? Opinions?
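
    A minimal sketch of the camera-tracking step described above, in XNA-style C#. It assumes the wedge is modeled with its narrow end at the origin and its long axis along +Z, and that a camera object exposing Position and a forward vector exists - all illustrative assumptions, not details from the post:

        // Orient the static LoD wedge each frame so its +Z axis follows the
        // camera's view direction projected onto the XZ plane.
        Vector3 look = camera.Forward;   // 'camera' is an assumed helper object
        look.Y = 0f;
        look.Normalize();                // project the view direction into the XZ plane
        float yaw = (float)Math.Atan2(look.X, look.Z); // angle from +Z toward +X
        Matrix world = Matrix.CreateRotationY(yaw)
                     * Matrix.CreateTranslation(camera.Position.X, 0f, camera.Position.Z);
        basicEffect.World = world;       // the vertex shader then offsets Y from the height texture

    With this in place the mesh itself never changes; only its world transform does, which is what removes the per-frame index/vertex buffer work the post describes.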


  • C++ OpenGL wireframe cube rendering blank

    - by caleb.breckon
    I'm just trying to draw a bunch of lines that make up a "cube". I can't for the life of me figure out why this is producing a black screen. The debugger does not break at any point. I'm sure it's a problem with my pointers, as I'm only decent at them in regular C++ and in OpenGL it gets even worse.

        const char* vertexSource =
            "#version 150\n"
            "in vec3 position;"
            "void main() {"
            "   gl_Position = vec4(position, 1.0);"
            "}";
        const char* fragmentSource =
            "#version 150\n"
            "out vec4 outColor;"
            "void main() {"
            "   outColor = vec4(1.0, 1.0, 1.0, 1.0);"
            "}";

        int main() {
            initializeGLFW();

            // Initialize GLEW
            glewExperimental = GL_TRUE;
            glewInit();

            // Create Vertex Array Object
            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            // Create a Vertex Buffer Object and copy the vertex data to it
            GLuint vbo;
            glGenBuffers(1, &vbo);

            float vertices[] = {
                 1.0f,  1.0f,  1.0f, // Vertex 0 (X, Y, Z)
                -1.0f,  1.0f,  1.0f, // Vertex 1 (X, Y, Z)
                -1.0f, -1.0f,  1.0f, // Vertex 2 (X, Y, Z)
                 1.0f, -1.0f,  1.0f, // Vertex 3 (X, Y, Z)
                 1.0f,  1.0f, -1.0f, // Vertex 4 (X, Y, Z)
                -1.0f,  1.0f, -1.0f, // Vertex 5 (X, Y, Z)
                -1.0f, -1.0f, -1.0f, // Vertex 6 (X, Y, Z)
                 1.0f, -1.0f, -1.0f  // Vertex 7 (X, Y, Z)
            };

            GLuint indices[] = {
                0, 1, 1, 2, 2, 3, 3, 0,
                4, 5, 5, 6, 6, 7, 7, 4,
                0, 4, 1, 5, 2, 6, 3, 7
            };

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

            // Fix: a #version 150 (core profile) context cannot source indices from a
            // client-side array, so the indices need their own element buffer. Note it
            // gets a separate id; the original commented-out code reused 'vbo'.
            GLuint ebo;
            glGenBuffers(1, &ebo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

            // Create and compile the vertex shader
            GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(vertexShader, 1, &vertexSource, NULL);
            glCompileShader(vertexShader);

            // Create and compile the fragment shader
            GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
            glCompileShader(fragmentShader);

            // Link the vertex and fragment shader into a shader program
            GLuint shaderProgram = glCreateProgram();
            glAttachShader(shaderProgram, vertexShader);
            glAttachShader(shaderProgram, fragmentShader);
            glBindFragDataLocation(shaderProgram, 0, "outColor");
            glLinkProgram(shaderProgram);
            glUseProgram(shaderProgram);

            // Specify the layout of the vertex data
            GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
            glEnableVertexAttribArray(posAttrib);
            glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);

            // Main loop
            while (glfwGetWindowParam(GLFW_OPENED)) {
                // Clear the screen to black
                glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
                glClear(GL_COLOR_BUFFER_BIT);

                // Fix: the second argument is the number of indices to draw (24), not
                // the array's size in bytes (96, which read past the array); with an
                // element buffer bound, the last argument is a byte offset, not a pointer.
                glDrawElements(GL_LINES, sizeof(indices) / sizeof(indices[0]), GL_UNSIGNED_INT, 0);

                // Swap buffers
                glfwSwapBuffers();
            }

            // Clean up
            glDeleteProgram(shaderProgram);
            glDeleteShader(fragmentShader);
            glDeleteShader(vertexShader);
            glDeleteBuffers(1, &ebo);
            glDeleteBuffers(1, &vbo);
            glDeleteVertexArrays(1, &vao);
            glfwTerminate();
            exit(EXIT_SUCCESS);
        }


  • 3D/perspective Top down shooter bullet issues

    - by Tseng
    I'm developing a top-down shooter with multiple levels (ground for ground units, middle level for buildings, top level for air units). The problem is the collision. Though I can make the collider box of a bullet long enough to reach the ground (and collide with it), the real issue is optical. When the bullet is fired from an aircraft and collides with some object on the ground (building, ground unit), it is optically offset due to the perspective camera, because it looks like the shot "by-passed" the target, as seen below. Is there any way to make the bullets collide perspectively correctly? I'm using the Unity3D engine, and it offers only simple colliders (box, sphere, cylinder, mesh and wheel), though I don't think a cone-shaped collider would solve this issue. I'd need a (cheap) way to check whether the bullet is overlapping a destructible object. I thought of casting a ray from the camera through the bullet and, if it hits something destructible, triggering an action, but that's quite point-like and maybe too performance-heavy with a certain number of bullets.
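
    A minimal sketch of that camera-ray idea, written as a Unity C# MonoBehaviour. The "Destructible" tag and the maxDistance value are illustrative assumptions:

        using UnityEngine;

        // Hypothetical bullet script: cast a ray from the camera through the bullet,
        // so the hit test matches what the player actually sees under perspective.
        public class PerspectiveBullet : MonoBehaviour
        {
            public float maxDistance = 200f;

            void Update()
            {
                Vector3 origin = Camera.main.transform.position;
                Vector3 dir = (transform.position - origin).normalized;

                if (Physics.Raycast(origin, dir, out RaycastHit hit, maxDistance))
                {
                    if (hit.collider.CompareTag("Destructible")) // assumed tag
                    {
                        // trigger the destruction action here
                    }
                }
            }
        }

    To keep the per-frame cost down, the ray could be cast only when the bullet crosses the height band of a potential target rather than every frame.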


  • Java 2D Rectangle Collision? [on hold]

    - by Andreas Elia
    I just want to know another (longer or shorter) way of getting 100% effective collisions on a 2D platformer. The current collision system works from coordinates on the level and does not always work reliably. Thank you in advance for any help/support. The current system draws a rectangle and checks whether any two points collide. From testing, the system can sometimes "glitch" and allow the player to pass into walls, etc. Player class: http://pastebin.com/2zE8vz8R Main class: http://pastebin.com/A6Utb3ti


  • How to determine when the end of a list has been reached?

    - by Sweta Dwivedi
    I'm trying to animate my object according to a set of values recorded from the Kinect skeleton stream, by saving the (x, y, z) stream from the skeletal data into a list and then setting my object's X and Y position from the x, y of the list. However, once the end of the list has been reached, it starts to animate again from the start. I don't want that - I just want the model to keep moving in the positive X direction. Is there any way I can check whether the end of the list has been reached, and then just update the model position in the X direction? Or is there any other way to continue moving my sprite once the points in the list are over? I don't want it to start animating all the way from the beginning again.

        protected override void Update(GameTime gameTime)
        {
            //position += spriteSpeed * (float)gameTime.ElapsedGameTime.TotalSeconds;
            //// TODO: Add your update logic here
            using (StreamReader r = new StreamReader(f))
            {
                string line;
                Viewport view = graphics.GraphicsDevice.Viewport;
                int maxWidth = view.Width;
                int maxHeight = view.Height;
                while ((line = r.ReadLine()) != null)
                {
                    string[] temp = line.Split(',');
                    int x = (int)Math.Floor(((float.Parse(temp[0]) * 0.5f) + 0.5f) * maxWidth);
                    int y = (int)Math.Floor(((float.Parse(temp[1]) * -0.5f) + 0.5f) * maxHeight);
                    motion_2.Add(new Point(x, y));
                }
            }
            position.X = motion_2[i].X;
            position.Y = motion_2[i].Y;
            i++;
            a_butterfly_up.Update(gameTime);
            a_butterfly_side.Update(gameTime);
            G_vidPlayer.Play(mossV);
            base.Update(gameTime);
        }
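
    A minimal sketch of the end-of-list check, reusing the names from the code above. It assumes the file is read once during initialization rather than inside Update() (as written, the method re-reads the file and re-appends the points every frame, so the list grows without bound and an index test never becomes meaningful):

        // Inside Update(), after motion_2 has been filled once at load time:
        if (i < motion_2.Count)
        {
            // still replaying the recorded Kinect points
            position.X = motion_2[i].X;
            position.Y = motion_2[i].Y;
            i++;
        }
        else
        {
            // recording exhausted: keep drifting in +X at an assumed speed
            position.X += spriteSpeed.X * (float)gameTime.ElapsedGameTime.TotalSeconds;
        }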


  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (may be PNG, JPG, ...) and I want it decoded in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it converts it to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan E by now - this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.


  • How does Minecraft render its sunset and sky?

    - by Nick
    In Minecraft, the sunset looks really beautiful and I've always wanted to know how they do it. Do they use several skyboxes rendered over each other? That is, one for the sky (which can turn dark and light depending on the time of day), one for the sun and moon, and one for the orange horizon effect? I was hoping someone could enlighten me... I wish I could switch to wireframe or something like that, but as far as I know that is not possible.


  • Collision detection, stop gravity

    - by Scott Beeson
    I just started using GameMaker: Studio and so far it seems fairly intuitive. However, I set a room to "Room is Physics World" and set gravity to 10. I then enabled physics on my player object and created a block object to match a platform on my background sprite. I set up a collision event for the player and block objects that sets gravity to 0 (and even sets vspeed to 0). I also put a notification in the collision event, and I don't get that either. I have my key-down and key-up events working well, moving the player left and right and changing the sprites appropriately, so I think I understand the event system. I must just be missing something simple with the physics. I've tried making both of the objects "solid", and neither. Pretty frustrating, since it looks so easy. The player's starting point is directly above the block object in the grid, and the player does fall through the block. I even made the block sprite solid red so I could see it (initially it was invisible, obviously).


  • Physics System ignores collision in some rare cases

    - by Gajoo
    I've been developing a simple physics engine for my game. Since the game physics is very simple, I've decided to increase accuracy a little bit. Instead of formal integration methods like Euler or RK4, I'm directly computing the results after delta time "dt", based on the basic laws of motion:

        dx = 0.5 * a * dt^2 + v0 * dt
        dv = a * dt

    where a is the acceleration and v0 is the object's previous velocity. Also, to handle collisions I've used a method which is somewhat different from those I've seen so far. I'm detecting all the collisions in the given time frame, stepping the world forward to the nearest collision, resolving it and checking again for possible collisions. As I said, the world consists of very simple objects, so I'm not losing any performance due to multiple collision checking. First I'm checking if the ball collides with any walls around it (which is working perfectly), and then I'm checking if it collides with the edges of the walls (yellow points in the picture). The algorithm seems to work without any problem, except in some rare cases in which the collision with points is ignored. I've tested everything and all the variables seem to be what they should be, but after leaving the system running for a minute or two, the ball passes through one of those points. Here is the collision portion of my code; hopefully one of you can give me a hint where to look for a potential bug!

        void PhysicalWorld::checkForPointCollision(Vec2 acceleration, PhysicsComponent& ball, Vec2& collisionNormal, float& collisionTime, Vec2 target)
        {
            // this function checks if there will be any collision between a circle and a point
            // ball contains information about the circle (its current velocity, position and radius)
            // collisionNormal and collisionTime are output variables
            // target is the point I want to check for collisions
            Vec2 V = ball.mVelocity;
            Vec2 A = acceleration;
            Vec2 P = ball.mPosition - target;
            float wallWidth = mMap->getWallWidth() / (mMap->getWallWidth() + mMap->getHallWidth()) / 2;
            // r is the ball radius scaled to match the actual rendered object.
            float r = ball.mRadius / (mMap->getWallWidth() + mMap->getHallWidth());

            if (A.any()) // todo: I need to first correctly solve the collisions for the case where there is no acceleration
                return;

            if (V.any()) // if the object is not moving there will be no collisions!
            {
                float D = P.x * V.y - P.y * V.x;
                float Delta = r*r*V.length2() - D*D;
                if (Delta < eps)
                    return;
                Delta = sqrt(Delta);
                float sgnvy = V.y > 0 ? 1 : (V.y < 0 ? -1 : 0);
                Vec2 c1((D*V.y + sgnvy*V.x*Delta) / V.length2(), (-D*V.x + fabs(V.y)*Delta) / V.length2());
                Vec2 c2((D*V.y - sgnvy*V.x*Delta) / V.length2(), (-D*V.x - fabs(V.y)*Delta) / V.length2());
                float t1 = (c1.x - P.x) / V.x;
                float t2 = (c2.x - P.x) / V.x;
                if (t1 > eps && t1 <= collisionTime)
                {
                    collisionTime = t1;
                    collisionNormal = c1;
                }
                if (t2 > eps && t2 <= collisionTime)
                {
                    collisionTime = t2;
                    collisionNormal = c2;
                }
            }
        }

        // this function steps the world forward by dt. it doesn't check for collisions between any two balls (components);
        // it just checks if there is a collision between the current component and the 4 points forming a rectangle around it.
        void PhysicalWorld::step(float dt)
        {
            for (unsigned i = 0; i < mObjects.size(); i++)
            {
                PhysicsComponent& current = *mObjects[i];
                Vec2 acceleration = current.mForces * current.mInvMass;
                float rt = dt; // stores how much further the world should advance
                while (rt > eps)
                {
                    float collisionTime = rt;
                    Vec2 collisionNormal = Vec2(0, 0);
                    float halfWallWidth = mMap->getWallWidth() / (mMap->getWallWidth() + mMap->getHallWidth()) / 2;

                    // we check for a collision with any of the 4 points around the ball.
                    // if there is a collision, both collisionNormal and collisionTime change;
                    // afterwards collisionTime is exactly the time of the nearest collision (if any)
                    // and collisionNormal reports in which direction the ball should bounce back.
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2(floor(current.mPosition.x) + halfWallWidth, floor(current.mPosition.y) + halfWallWidth));
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2(floor(current.mPosition.x) + halfWallWidth, ceil(current.mPosition.y) - halfWallWidth));
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2(ceil(current.mPosition.x) - halfWallWidth, floor(current.mPosition.y) + halfWallWidth));
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2(ceil(current.mPosition.x) - halfWallWidth, ceil(current.mPosition.y) - halfWallWidth));

                    // whether or not there was a collision, we step forward, since we are sure
                    // there will be no collision before collisionTime
                    current.mPosition += collisionTime * (collisionTime * acceleration * 0.5 + current.mVelocity);
                    current.mVelocity += collisionTime * acceleration;

                    // if the ball collided with anything, collisionNormal is non-zero in at least one axis
                    if (collisionNormal.any())
                    {
                        collisionNormal *= Dot(collisionNormal, current.mVelocity) / collisionNormal.length2();
                        current.mVelocity -= 2 * collisionNormal; // simply reverse velocity along the collision normal direction
                    }
                    rt -= collisionTime;
                }
                // reset all forces for the current object so it's ready for later game events
                current.mForces.zero();
            }
        }


  • The Unity aspect ratio script looks good on the computer but not on Android phones

    - by Pooya Fayyaz
    I'm developing a game for Android devices, and I have a script that solves the aspect ratio problem, but there is a problem in this code and I don't know why. It looks perfect on the computer, even when resizing the game window, but on mobile phones it misbehaves. My game runs in landscape mode. This is the script:

        using UnityEngine;
        using System.Collections;
        using System.Collections.Generic;

        public class reso : MonoBehaviour
        {
            void Update()
            {
                // set the desired aspect ratio (the values in this example are
                // hard-coded for 16:9, but you could make them into public
                // variables instead so you can set them at design time)
                float targetaspect = 16.0f / 9.0f;

                // determine the game window's current aspect ratio
                float windowaspect = (float)Screen.width / (float)Screen.height;

                // current viewport height should be scaled by this amount
                float scaleheight = windowaspect / targetaspect;

                // obtain camera component so we can modify its viewport
                Camera camera = GetComponent<Camera>();

                // if scaled height is less than current height, add letterbox
                // (NB: the extra width check means the letterbox branch can never run
                // on screens wider than 490 px - likely why only phones misbehave)
                if (scaleheight < 1.0f && Screen.width <= 490)
                {
                    Rect rect = camera.rect;
                    rect.width = 1.0f;
                    rect.height = scaleheight;
                    rect.x = 0;
                    rect.y = (1.0f - scaleheight) / 2.0f;
                    camera.rect = rect;
                }
                else // add pillarbox
                {
                    float scalewidth = 1.0f / scaleheight;
                    Rect rect = camera.rect;
                    rect.width = scalewidth;
                    rect.height = 1.0f;
                    rect.x = (1.0f - scalewidth) / 2.0f;
                    rect.y = 0;
                    camera.rect = rect;
                }
            }
        }

    I figure that my problem occurs in this part of the script:

        if (scaleheight < 1.0f)
        {
            Rect rect = camera.rect;
            rect.width = 1.0f;
            rect.height = scaleheight;
            rect.x = 0;
            rect.y = (1.0f - scaleheight) / 2.0f;
            camera.rect = rect;
        }

    And it looks like this on my mobile phone in portrait: and in landscape mode:


  • Which Game Engine to Use for an Angry Birds-style game? [JAVA] [on hold]

    - by Arch1tect
    Our team is building an Angry Birds-style game, and we have only about ten days. The game is a little more complex than Angry Birds because there are two players; each has a castle with pigs to protect (not destroy :)), and the goal is to destroy the other player's pigs. I wonder which game engine would help us finish this game most efficiently. We at least need a physics engine, but I guess a game engine is more helpful, since it usually includes a physics engine. Correct me if I'm wrong. (So I'm wondering which game engine I should use; if it's just a physics engine, I'll use Box2D.) Networking may or may not be added later, depending on the time we have. Thanks in advance for any advice! EDIT: image looks small, I'll add one:


  • 2D platformers: why make the physics dependent on the framerate?

    - by Archagon
    "Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast. Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.) What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a patformer in that vein without having the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine? Thank you, and sorry if the question was confusing.


  • What are the pros/cons of Unity3D as a choice for making games?

    - by jokoon
    We are doing our school project with Unity3D, since they were using Shiva the previous year (which seems horrible to me), and I wanted to know your point of view on this tool.

    Pros:
      • multi-platform; I even heard Google is going to implement it in Chrome
      • everything you need is here
      • scripting languages make it a good choice for people who are not programming gurus

    Cons:
      • multiplayer?
      • proprietary; you are totally dependent on Unity and its limits and can't extend it
      • it's less "making a game from scratch"; C++ would have been a cool thing

    I really think this kind of tool is interesting, but is it worth it to use at school for a project that involves more than 3 programmers? What do we really learn in terms of programming from using this kind of tool (I'm OK with Python and JS, but I hate C#)? We could have used Ogre instead, even if we were only starting to learn DirectX in January...


  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store the specific information needed to correctly load a few things, such as a list of specified characters, scenes and music. In addition, I use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is called on to add functionality to a character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list that contains the names of each file (extensions omitted), zipped in a folder that gets encrypted. At first, using XML was working fine; however, as the skill list grows, I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything from XML to JSON, but I think MySQL would possibly be a better move. Can anyone inform me of the key differences, and the pros and cons of each? I guess I'm looking for the best way to store the data more conveniently, in a form that is easier to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have, I can just concat into a single field and parse later.


  • How to efficiently map tokens to code in a script interpreter?

    - by lithander
    I'm writing an interpreter for a simple scripting language where each line is a complete, executable command (like the instructions in assembler). When parsing a line I have to map the requested command to actual code. My current solution looks like this:

        std::string op, param1, param2;
        // parse line, identify op, param1, param2
        ...
        // call command-specific code
        if(op == "MOV") ExecuteMov(AsNumber(param1));
        else if(op == "ROT") ExecuteRot(AsNumber(param1));
        else if(op == "SZE") ExecuteSze(AsNumber(param1));
        else if(op == "POS") ExecutePos(AsNumber(param1), AsNumber(param2));
        else if(op == "DIR") ExecuteDir(AsNumber(param1), AsNumber(param2));
        else if(op == "SET") ExecuteSet(param1, AsNumber(param2));
        else if(op == "EVL") ...

    The more commands are supported, the more string comparisons I'll have to do to identify and call the associated method. Can you point me to a more efficient implementation for the described scenario?
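
    The usual answer is a dispatch table: build a hash map from opcode to handler once, then each line costs a single lookup instead of a chain of comparisons. Sketched below in C#, matching the XNA/Unity snippets elsewhere on this page; the C++ analogue is a std::unordered_map from std::string to std::function. The Execute* bodies are stubs standing in for the poster's real commands:

        using System;
        using System.Collections.Generic;

        class Interpreter
        {
            // Maps an opcode to the code that executes it; O(1) average lookup.
            readonly Dictionary<string, Action<string, string>> ops;

            public Interpreter()
            {
                ops = new Dictionary<string, Action<string, string>>
                {
                    ["MOV"] = (p1, p2) => ExecuteMov(AsNumber(p1)),
                    ["POS"] = (p1, p2) => ExecutePos(AsNumber(p1), AsNumber(p2)),
                    ["SET"] = (p1, p2) => ExecuteSet(p1, AsNumber(p2)),
                    // ... register the remaining commands once, at startup
                };
            }

            public void ExecuteLine(string op, string param1, string param2)
            {
                if (ops.TryGetValue(op, out var handler))
                    handler(param1, param2);
                else
                    throw new InvalidOperationException("Unknown opcode: " + op);
            }

            // Stubs standing in for the real command implementations.
            static double AsNumber(string s) => double.Parse(s);
            static void ExecuteMov(double a) { }
            static void ExecutePos(double a, double b) { }
            static void ExecuteSet(string name, double v) { }
        }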


  • Android Java rectangle collision detection not working

    - by Charlton Santana
    I had been hard-coding a collision detection system, which was buggy. Then I came across using rectangles for collision detection. So I put it all in and it does not work; I put a log statement in and it never logged. Note to Java programmers who are not Android programmers: Android uses the class name Rect instead of Rectangle.

    Code for Block.java:

        public Rect getBounds() {
            // Rect's constructor takes (left, top, right, bottom), not a width and
            // height; the original (this.x, this.y, 10, 20) produced an empty or
            // inverted rectangle whenever x > 10 or y > 20, so nothing intersected.
            return new Rect(this.x, this.y, this.x + 10, this.y + 20);
        }

    Code for Sprite.java:

        public Rect getBounds() {
            return new Rect(this.x, this.y, this.x + 20, this.y + 20);
        }

    Code for MainGame.java:

        for (Block block : BLOCKS) {
            block.draw(canvas);
            block.rigidbody();
            Rect spriter = sprite.getBounds();
            Rect blockr = block.getBounds();
            // Rect.intersects() tests the two rects without modifying them;
            // the instance method intersect() used originally also mutates
            // the rect it is called on.
            if (Rect.intersects(spriter, blockr)) {
                showgameover = 1;
                Log.d(TAG, "Game Over");
            }
        }

    Is anyone able to help?


  • As an indie game dev, what processes are the best for soliciting feedback on my design/spec/idea? [closed]

    - by Jess Telford
    Background I have worked in a professional environment where the process usually goes like the following: Brain storm idea Solidify the game mechanics / design Iterate on design/idea to create a more solid experience Spec out the details of the design/idea Build it Step 3. is generally done with the stakeholders of the game (developers, designers, investors, publishers, etc) to reach an 'agreement' which meets the goals of all involved. Due to this process involving a series of often opposing and unique view points, creative solutions can surface through discussion / iteration. This is backed up by a process for collating the changes / new ideas, as well as structured time for discussion. As a (now) indie developer, I have to play the role of all the stakeholders (developers, designers, investors, publishers, etc), and often find myself too close to the idea / design to do more than minor changes, which I feel to be local maxima when it comes to the best result (I'm looking for the global maxima, of course). I have read that ideas / game designs / unique mechanics are merely multipliers of execution, and that keeping them secret is just silly. In sharing the idea with others outside the realm of my own thinking, I hope to replicate the influence other stakeholders have. I am struggling with the collation of changes / new ideas, and any kind of structured method of receiving feedback. My question: As an indie game developer, how and where can I share my ideas/designs to receive meaningful / constructive feedback? How can I successfully collate the feedback into a new iteration of the design? Are there any specialized websites, etc?


  • GLSL Shader Effects: How to do motion blur, etc?

    - by DevilWithin
    I am not sure how appropriate it is to ask this question, but here it goes. I have a full 2D environment, with sprites going around as landscape, characters, etc., and to make it look more state-of-the-art, I want to implement a motion blur effect, similar to the blur in modern FPSs (e.g. Crysis) when moving the camera fast. In a sidescroller, the desired effect is a slight blur that appears while the camera is moving, to give the idea of fast movement. If anyone could give me some tips on doing this - I'm assuming in a pixel shader - I'd be grateful. Also, if anyone has other good tips on cool pixel shader effects for 2D games, that would be awesome, like some stylizing post FX such as the illustrative style of the previous Prince of Persia. Thanks


  • Learning to optimize with Assembly

    - by niktehpui
    I am a second-year student of Computer Games Technology. I recently finished the first prototype of my own "kind" of pathfinder (it doesn't use A*; instead it takes a geometrical approach / pattern recognition. The pathfinder only needs knowledge of the terrain in its view to make decisions, because I wanted an AI that could actually explore; if the terrain is already known, it will easily walk the shortest way, because the pathfinder has a memory of nodes). Anyway, my question is more general: how do I start optimizing algorithms/loops/for_each/etc. using assembly? General tips are welcome, but I am specifically looking for good books, because it is really hard to find good books on this topic. There are some small articles out there like this one, but that still isn't enough knowledge to optimize an algorithm/game... I hope there is a good modern book out there that I just couldn't find...


  • Actually utilizing relational databases for entity systems

    - by Marc Müller
    Recently I was researching several entity systems, and obviously I came across T=Machine's fantastic articles on the subject. In Part 5 of the series, the author uses a relational schema to explain how an entity system is built and works. Since reading this, I have been wondering whether actually using a compact SQL library would be fast enough for real-time usage in video games. Performance seems to be the main issue with a full-blown SQL database for management of all entities and components. However, as mentioned in T=Machine's post, basically all access to data inside the SQL DB is done sequentially by each system over each component. Additionally, using a library like SQLite, one could easily improve performance by storing the entity data exclusively in RAM to increase access speeds. Disregarding possible performance issues, using a SQL database would, in my opinion, allow for a very intuitive implementation of entity systems and bring along certain other benefits, like easy de/serialization of game states and consistency checks like the uniqueness of entity IDs. Edit for clarification: The main question was whether using a SQL database for the actual entity management (not just for storing the game state on disk) in a real-time game would still yield an appropriate framerate, or whether anyone is aware of projects that demonstrate SQL in a video game.
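
    A minimal sketch of the in-memory idea in C#, using SQLite via the Microsoft.Data.Sqlite package (an assumed choice; the post names no specific library). It shows one table per component type and a "movement system" expressed as a single set-based query over their join:

        using Microsoft.Data.Sqlite;

        // In-memory SQLite database acting as the component store.
        using var db = new SqliteConnection("Data Source=:memory:");
        db.Open();

        var create = db.CreateCommand();
        create.CommandText =
            "CREATE TABLE position (entity INTEGER PRIMARY KEY, x REAL, y REAL);" +
            "CREATE TABLE velocity (entity INTEGER PRIMARY KEY, dx REAL, dy REAL);";
        create.ExecuteNonQuery();

        // The 'movement system' is one sequential, set-based update per tick.
        var move = db.CreateCommand();
        move.CommandText =
            "UPDATE position SET " +
            "  x = x + (SELECT dx FROM velocity v WHERE v.entity = position.entity) * $dt, " +
            "  y = y + (SELECT dy FROM velocity v WHERE v.entity = position.entity) * $dt " +
            "WHERE entity IN (SELECT entity FROM velocity);";
        move.Parameters.AddWithValue("$dt", 1.0 / 60.0);
        move.ExecuteNonQuery();

    The PRIMARY KEY on entity also gives the uniqueness check the post mentions for free; whether the per-tick query overhead fits a frame budget is exactly the open performance question.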


  • How can I save state from script in a multithreaded engine?

    - by Peter Ren
    We are building a multithreaded game engine and we've encountered some problems, as described below. The engine has 3 threads in total: script, render, and audio. Each frame, we update these 3 threads simultaneously. As the threads update themselves, they produce tasks and put them into a public storage area. When all the threads have finished their update, each thread copies the tasks for itself one by one. After all the threads finish their task copying, we make the threads process those tasks and update the threads simultaneously, as described before. So this is the general idea of the task-scheduling part of our engine. The task scheduling itself works well, but here's the problem. Taking the camera as the simplest example:

        local oldPos = camera:getPosition() -- ( 0, 0, 0 )
        camera:setPosition( 1, 1, 1 )       -- Won't take effect yet, because the render
                                            -- thread will process the task at the
                                            -- beginning of the next frame
        local newPos = camera:getPosition() -- Still ( 0, 0, 0 )

    So that's the problem: if you intend to change a property of an object from another thread, you have to wait until that thread processes the property-changing message. As a result, what you get from the object is still the information from the last frame. So, is there a way to solve this problem? Or did we build the task-scheduling part the wrong way? Thanks for your answers :)
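
    One common remedy (an assumption offered here, not something from the post) is to keep the authoritative value on the owning thread but give the script thread a write-through shadow copy: the setter updates the local copy immediately and queues the real change, so later reads in the same frame see the new value. A C# sketch with illustrative names:

        using System;
        using System.Collections.Concurrent;
        using System.Numerics;

        // Hypothetical script-side proxy for a camera owned by the render thread.
        class CameraProxy
        {
            Vector3 shadowPosition;                       // script thread's local copy
            readonly ConcurrentQueue<Action> renderTasks; // drained by the render thread

            public CameraProxy(ConcurrentQueue<Action> queue) => renderTasks = queue;

            // Reads see this frame's writes immediately.
            public Vector3 GetPosition() => shadowPosition;

            public void SetPosition(Vector3 p)
            {
                shadowPosition = p;                                 // visible to script now
                renderTasks.Enqueue(() => ApplyToRenderCamera(p));  // applied next frame
            }

            void ApplyToRenderCamera(Vector3 p) { /* runs on the render thread */ }
        }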


  • Questions about game states

    - by MrPlow
    I'm trying to make a framework for a game I've wanted to make for quite a while. The first thing I decided to implement was a state system for game states. When my "original" idea of having a doubly linked list of game states failed, I found this blog and liked the idea of a stack-based game state manager. However, there were a few things I found weird:

      • Instead of RAII, two class methods are used to initialize and destroy the state
      • Every game state class is a singleton (and singletons are bad, aren't they?)
      • Every GameState object is static

    So I took the idea, altered a few things, and got this:

    GameState.h

        class GameState
        {
        private:
            bool m_paused;
        protected:
            StateManager& m_manager;
        public:
            GameState(StateManager& manager) : m_manager(manager), m_paused(false) {}
            virtual ~GameState() {}

            virtual void update() = 0;
            virtual void draw() = 0;
            virtual void handleEvents() = 0;

            void pause() { m_paused = true; }
            void resume() { m_paused = false; }

            void changeState(std::unique_ptr<GameState> state)
            {
                m_manager.changeState(std::move(state));
            }
        };

    StateManager.h

        class GameState;

        class StateManager
        {
        private:
            std::vector< std::unique_ptr<GameState> > m_gameStates;
        public:
            StateManager();
            void changeState(std::unique_ptr<GameState> state);
            void pushState(std::unique_ptr<GameState> state);
            void popState();
            void update();
            void draw();
            void handleEvents();
        };

    StateManager.cpp

        StateManager::StateManager() {}

        void StateManager::changeState(std::unique_ptr<GameState> state)
        {
            if (!m_gameStates.empty())
            {
                m_gameStates.pop_back();
            }
            m_gameStates.push_back(std::move(state));
        }

        void StateManager::pushState(std::unique_ptr<GameState> state)
        {
            if (!m_gameStates.empty())
            {
                m_gameStates.back()->pause();
            }
            m_gameStates.push_back(std::move(state));
        }

        void StateManager::popState()
        {
            if (!m_gameStates.empty())
                m_gameStates.pop_back();
        }

        void StateManager::update()
        {
            if (!m_gameStates.empty())
                m_gameStates.back()->update();
        }

        void StateManager::draw()
        {
            if (!m_gameStates.empty())
                m_gameStates.back()->draw();
        }

        void StateManager::handleEvents()
        {
            if (!m_gameStates.empty())
                m_gameStates.back()->handleEvents();
        }

    And it's used like this:

    main.cpp

        StateManager states;
        states.changeState(std::unique_ptr<GameState>(new GameStateIntro(states)));

        while (gamewindow::gameWindow.isOpen())
        {
            states.handleEvents();
            states.update();
            states.draw();
        }

    Constructors/destructors are used to create/destroy states instead of specialized class methods, state objects are no longer static but


  • GLSL: Strange light reflections [Solved]

    - by Tom
    According to this tutorial I'm trying to do normal mapping using GLSL, but something is wrong and I can't find the solution. The output render is in this image: Image1. In this image is a plane with two triangles, and each of them is illuminated differently (which is bad). The plane has 6 vertices. In the upper left side of this plane are 2 identical vertices (same in the lower right). Here are some vectors that are the same for each vertex:

        normal vector    =  0, 1,  0 (red lines in the image)
        tangent vector   =  0, 0, -1 (green lines in the image)
        bitangent vector = -1, 0,  0 (blue lines in the image)

    Here I have one question: do the two identical vertices need to have the same tangent and bitangent? I have tried giving the tangents other values, but the effect was still similar. Here are my shaders.

    Vertex shader:

        #version 130

        // Input vertex data, different for all executions of this shader.
        in vec3 vertexPosition_modelspace;
        in vec2 vertexUV;
        in vec3 vertexNormal_modelspace;
        in vec3 vertexTangent_modelspace;
        in vec3 vertexBitangent_modelspace;

        // Output data; will be interpolated for each fragment.
        out vec2 UV;
        out vec3 Position_worldspace;
        out vec3 EyeDirection_cameraspace;
        out vec3 LightDirection_cameraspace;
        out vec3 LightDirection_tangentspace;
        out vec3 EyeDirection_tangentspace;

        // Values that stay constant for the whole mesh.
        uniform mat4 MVP;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main() {
            // Output position of the vertex, in clip space : MVP * position
            gl_Position = MVP * vec4(vertexPosition_modelspace, 1);

            // Position of the vertex, in worldspace : M * position
            Position_worldspace = (M * vec4(vertexPosition_modelspace, 1)).xyz;

            // Vector that goes from the vertex to the camera, in camera space.
            // In camera space, the camera is at the origin (0,0,0).
            vec3 vertexPosition_cameraspace = (V * M * vec4(vertexPosition_modelspace, 1)).xyz;
            EyeDirection_cameraspace = vec3(0, 0, 0) - vertexPosition_cameraspace;

            // Vector that goes from the vertex to the light, in camera space. M is omitted because it's identity.
            vec3 LightPosition_cameraspace = (V * vec4(LightPosition_worldspace, 1)).xyz;
            LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;

            // UV of the vertex. No special space for this one.
            UV = vertexUV;

            // model to camera = ModelView
            vec3 vertexTangent_cameraspace = MV3x3 * vertexTangent_modelspace;
            vec3 vertexBitangent_cameraspace = MV3x3 * vertexBitangent_modelspace;
            vec3 vertexNormal_cameraspace = MV3x3 * vertexNormal_modelspace;

            mat3 TBN = transpose(mat3(
                vertexTangent_cameraspace,
                vertexBitangent_cameraspace,
                vertexNormal_cameraspace
            )); // You can use dot products instead of building this matrix and transposing it. See References for details.

            LightDirection_tangentspace = TBN * LightDirection_cameraspace;
            EyeDirection_tangentspace = TBN * EyeDirection_cameraspace;
        }

    Fragment shader:

        #version 130

        // Interpolated values from the vertex shaders
        in vec2 UV;
        in vec3 Position_worldspace;
        in vec3 EyeDirection_cameraspace;
        in vec3 LightDirection_cameraspace;
        in vec3 LightDirection_tangentspace;
        in vec3 EyeDirection_tangentspace;

        // Output data
        out vec3 color;

        // Values that stay constant for the whole mesh.
        uniform sampler2D DiffuseTextureSampler;
        uniform sampler2D NormalTextureSampler;
        uniform sampler2D SpecularTextureSampler;
        uniform mat4 V;
        uniform mat4 M;
        uniform mat3 MV3x3;
        uniform vec3 LightPosition_worldspace;

        void main() {
            // Light emission properties
            // You probably want to put them as uniforms
            vec3 LightColor = vec3(1, 1, 1);
            float LightPower = 40.0;

            // Material properties
            vec3 MaterialDiffuseColor = texture2D(DiffuseTextureSampler, vec2(UV.x, -UV.y)).rgb;
            vec3 MaterialAmbientColor = vec3(0.1, 0.1, 0.1) * MaterialDiffuseColor;
            //vec3 MaterialSpecularColor = texture2D(SpecularTextureSampler, UV).rgb * 0.3;
            vec3 MaterialSpecularColor = vec3(0.5, 0.5, 0.5);

            // Local normal, in tangent space. V tex coordinate is inverted because normal map is in TGA (not in DDS) for better quality
            vec3 TextureNormal_tangentspace = normalize(texture2D(NormalTextureSampler, vec2(UV.x, -UV.y)).rgb * 2.0 - 1.0);

            // Distance to the light
            float distance = length(LightPosition_worldspace - Position_worldspace);

            // Normal of the computed fragment, in camera space
            vec3 n = TextureNormal_tangentspace;
            // Direction of the light (from the fragment to the light)
            vec3 l = normalize(LightDirection_tangentspace);
            // Cosine of the angle between the normal and the light direction,
            // clamped above 0
            //  - light is at the vertical of the triangle -> 1
            //  - light is perpendicular to the triangle -> 0
            //  - light is behind the triangle -> 0
            float cosTheta = clamp(dot(n, l), 0, 1);

            // Eye vector (towards the camera)
            vec3 E = normalize(EyeDirection_tangentspace);
            // Direction in which the triangle reflects the light
            vec3 R = reflect(-l, n);
            // Cosine of the angle between the Eye vector and the Reflect vector,
            // clamped to 0
            //  - Looking into the reflection -> 1
            //  - Looking elsewhere -> < 1
            float cosAlpha = clamp(dot(E, R), 0, 1);

            color =
                // Ambient : simulates indirect lighting
                MaterialAmbientColor +
                // Diffuse : "color" of the object
                MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance * distance) +
                // Specular : reflective highlight, like a mirror
                MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha, 5) / (distance * distance);

            //color.xyz = E;
            //color.xyz = LightDirection_tangentspace;
            //color.xyz = EyeDirection_tangentspace;
        }

    I have replaced the original color value with the EyeDirection_tangentspace vector and then I got another strange effect, but I cannot link the image (not enough reputation). Is it possible that something is wrong with these shaders, or maybe somewhere else in my code, e.g. with my matrices?


  • Is it possible to calculate or mathematically prove if a game is balanced / fair?

    - by Lurca
    This question is not focussed on video games but on games in general. I went to a board game trade fair yesterday and asked myself whether there is a way to calculate the fairness of a game. Sure, some of them require a good portion of luck, but it might be possible to calculate whether some character is overpowered - especially in role-playing games and trading card games. How, for example, can the creators of "Magic: The Gathering" make sure that there isn't the "one card that beats them all", given the impressive number of available cards?
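
    One formal handle on "fair" (a standard game-theory sketch, not something from the post): model the matchup as a two-player zero-sum game with payoff matrix A, where entry a_ij is player 1's expected payoff when player 1 picks pure strategy i and player 2 picks pure strategy j, and let x and y range over mixed strategies (probability distributions over each player's pure strategies). Von Neumann's minimax theorem guarantees the game has a well-defined value

        \[
            v \;=\; \max_{x} \min_{y} \; x^{\mathsf{T}} A \, y ,
        \]

    and the game is fair exactly when v = 0, i.e. neither side can force an advantage with best play. A symmetric game (both players draw from the same pool under the same rules) has v = 0 automatically; imbalance appears when a particular card, deck, or character makes the game asymmetric and pushes v away from zero. For anything the size of Magic: The Gathering the strategy space is astronomically large, so in practice designers approximate this with playtesting and win-rate statistics over many real matches rather than solving the game.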


  • How to simulate pressure with particles?

    - by BeachRunnerJoe
    I'm trying to simulate pressure with a collection of spherical particles in a Unity game I'm building. A couple of notes about the problem:

      • The goal is to fill a constantly changing 2D space/void with small, frictionless spheres. The game is trying to simulate the ever-growing pressure of more objects being shoved into this space.
      • The level itself is constantly scrolling from left to right, meaning that if the space's dimensions are not changed by the user, it will automatically get smaller (the leftmost part of the space will continually scroll off-screen).

    I'm wondering what approaches I can take to tackle these problems:

    1. Knowing when to detect that there is space to fill, and then adding spheres to the space.
    2. Removing spheres from the space when it is shrinking.
    3. Simulating pressure on the spheres such that they "explode outwards" when more space is created.

    The current approach I am contemplating is using a constantly moving wall that is off screen and moves with the screen, as this image illustrates. This moving wall will push and trap the spheres in the space. As for adding new spheres, I was going to have either (1) spheres replicate themselves upon detecting free space, OR (2) new spheres spawn at the left side of the space (where the wall is), pushing the rest of the spheres in to fill the space. I foresee problems with idea #1 because it likely wouldn't really create/simulate pressure; idea #2 seems more promising (see the sketch below), but raises the question of how to provide a location for these new sphere particles to spawn (and the ramifications of spawning them when there IS no space). Thanks so much in advance for your wisdom!
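
    A minimal sketch of the spawn-at-the-wall check (idea #2 above) as a Unity C# script; the prefab reference, radius and layer are illustrative assumptions. Physics.CheckSphere reports whether any collider overlaps a test sphere, so an empty spot just inside the wall is a valid spawn location:

        using UnityEngine;

        // Hypothetical spawner attached to the moving off-screen wall.
        public class WallSpawner : MonoBehaviour
        {
            public GameObject spherePrefab;  // assumed prefab reference
            public float radius = 0.5f;
            public LayerMask sphereLayer;    // layer the pressure spheres live on

            void FixedUpdate()
            {
                // Test a point just inside the play space, next to the wall.
                Vector3 candidate = transform.position + transform.right * (radius * 2f);

                // If nothing occupies that spot there is room: add a sphere, which
                // shoves the others outward and builds up the "pressure".
                if (!Physics.CheckSphere(candidate, radius, sphereLayer))
                {
                    Instantiate(spherePrefab, candidate, Quaternion.identity);
                }
            }
        }

    Removing spheres as the space shrinks can use the mirror-image test: anything the wall has pushed past the left edge gets destroyed.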

