Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts about how to implement them correctly using a scheduler. A quick recap:

    Standard behavior tree:

    - Each execution tick the tree is traversed from the root in depth-first order.
    - The execution order is implicitly expressed by the tree structure, so in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.

    Event-driven BT:

    - During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
    - The first traversal implicitly produces a depth-first-ordered queue in the scheduler.
    - Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status), its parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler.
    - Without parallel nodes in the tree there will be at most one task running in the scheduler.
    - Without parallel nodes, the tasks in the queue (excluding dynamic-priority implementations) will always be in depth-first order (is this right?).

    From my understanding of a possible implementation, there are two requirements I think a correct implementation must guarantee (I'm not sure though):

    1. The result of the traversal should be independent of which implementation strategy is used.
    2. The traversal result must be deterministic.

    I'm struggling to guarantee both in the case of parallel nodes. Here's an example:

        Parallel_1
        -->Sequence_1
        ---->leaf_A
        ---->leaf_B
        -->leaf_C

    Considering a FIFO policy in the scheduler, before the leaf_A node terminates, the tasks in the scheduler are:

        P1(suspended), S1(suspended), leaf_A(running), leaf_C(running)

    When leaf_A terminates, leaf_B is scheduled (at the end of the queue), so the queue becomes:

        P1(suspended), S1(suspended), leaf_C(running), leaf_B(running)

    In this case leaf_B will be executed after leaf_C on every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: have I understood correctly how event-driven BTs work? How can I guarantee the depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?
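
    One way to satisfy both requirements is to make the scheduler order-aware instead of strictly FIFO: give every task the index it had in the initial depth-first traversal and always update running tasks in that order, so re-enqueuing leaf_B can never push it behind leaf_C. A minimal sketch in Java (Task, dfIndex, and the rest are illustrative names, not from any particular BT library):

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        // Sketch: each task keeps the depth-first index assigned during the first
        // traversal; the scheduler sorts running tasks by that key before each
        // update, so update order always matches a full depth-first traversal.
        class Task {
            final int dfIndex;          // position in the initial depth-first walk
            boolean finished = false;
            Task(int dfIndex) { this.dfIndex = dfIndex; }
            void update() { /* tick this behavior */ }
        }

        class Scheduler {
            private final List<Task> running = new ArrayList<>();

            void schedule(Task t) { running.add(t); }  // insertion order is irrelevant

            void updateAll() {
                running.sort(Comparator.comparingInt(t -> t.dfIndex)); // restore DF order
                for (Task t : new ArrayList<>(running)) t.update();    // copy: update() may schedule
                running.removeIf(t -> t.finished);
            }
        }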


  • Is there a cross-platform special directory I can use for game save files?

    - by Suds
    I'm developing with LWJGL and Java on a Windows 7 laptop. I've successfully set up saving to the %appdata%\gamename\saves\ folder (long form: c:\users\user\appdata\roaming\gamename\saves\) using:

        File dir = new File(System.getenv("APPDATA") + "\\gamename\\saves\\");

    I have hobbyist-level experience with Linux and zero experience with OS X. My game will be fully cross-platform. Is System.getenv("APPDATA") cross-platform? If so, where does it point to on Linux or OS X? Is there a best-practice alternative that I should use?
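
    For what it's worth, APPDATA is a Windows-only environment variable, so System.getenv("APPDATA") returns null on Linux and OS X. A common convention (sketched below; this is not an official API, just the usual per-platform folders) is to branch on the os.name property: %APPDATA% on Windows, ~/Library/Application Support on OS X, and $XDG_DATA_HOME (falling back to ~/.local/share, per the XDG Base Directory spec) on Linux:

        import java.io.File;

        // Sketch of the usual per-platform save locations; adjust to taste.
        public final class SaveDirs {
            public static File saveDir(String gameName) {
                String os = System.getProperty("os.name").toLowerCase();
                String home = System.getProperty("user.home");
                String base;
                if (os.contains("win")) {
                    base = System.getenv("APPDATA");          // only set on Windows
                    if (base == null) base = home;
                } else if (os.contains("mac")) {
                    base = home + "/Library/Application Support";
                } else {
                    base = System.getenv("XDG_DATA_HOME");    // Linux/BSD
                    if (base == null || base.isEmpty()) base = home + "/.local/share";
                }
                File dir = new File(base, gameName + File.separator + "saves");
                dir.mkdirs();
                return dir;
            }
        }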


  • C++ and SDL resource management for 2D game

    - by KuruptedMagi
    My first question is about state managers. I don't use the singleton pattern (I've read many random posts with various reasons not to use it); I have a gameStateManager which calls the pointer cCurrentGameState->render(), etc. I want to make a transitioning game. This engine should ideally cover both a platformer and a bird's-eye RPG (with some recoding, I just mean the base engine), both of which will load different levels and events, such as world map, dungeon, shops, etc. So I then thought, rather than having to store all this data within all the states, I would break the engine into gameStates and playStates... when gameState reaches gameStatePlay(), gameStatePlay simply runs the usual handleInput, logic, and render for the playStates, just as the low-level gameStateManager does. This lets me store all the player data within the base playState class without storing useless data in the gameStates. Now I have added a separate mapEditor, which uses editorStates from gameStateEditor. Is this too much usage of the gameState concept? It seems to work pretty well for me, so I was wondering if I am too far off a common implementation of this.

    My second question is about image resources. I have my sprite class with nothing but static members, mainly loadImage, applySurface, and my screen pointer. I also have a map pairing imageName enums with actual SDL_Surface pointers, and one pairing clipNumber enums with a wrapper class for a vector of clips, so that each reference in the map can have different amounts of clips with different sizes. I thought it would be better to store all these images and the screen within one static body, since 20 different goblins all use the same sprite sheet and all need to print to the same screen, and of course this way I do not need to pass my screen reference to every little entity. The imageMap seems to work very well; I can even search through the map at entity creation to see if a particular image already exists, creating it if it doesn't, and destroying the image when the last entity that needs it is destroyed. The vectored clip map, however, seems to take too long to initialize, so if I run past the state that initializes them too fast, the game crashes. Plus, the clip map call is half of this line =P

        SPRITE::applySurface( cEditorMap.cTiles[x][y].iX, cEditorMap.cTiles[x][y].iY,
                              SPRITE::mImages[ IMAGE_TILEMAP ], SPRITE::screen,
                              SPRITE::mImageClips[ IMAGE_TILEMAP ]->clips.at( cEditorMap.cTiles[x][y].iTileType ) );

    Again, do I have the right idea? I like the imageMap, but am I better off with each entity storing its own clips?

    My last question is about collision detection. I only grasp the basics, and will look at per-pixel and circular detection soon, but how can I determine which side a collision comes from with just basic rectangle collision detection? I tried breaking each entity into four collision zones, but that just gave me problems with walking through walls and the like. Also, is per-pixel color collision a good way to decide what collision just occurred, or is checking multiple colors for multiple entities too taxing each cycle?
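
    On the collision-side question: with plain rectangle tests, one common alternative to four zones is to compare how deeply the two boxes overlap on each axis and resolve along the axis with the smaller overlap. A hedged sketch of that idea, written in Java for brevity (the class and method names are made up; the same few lines translate directly to C++):

        // Sketch: infer the collision side from two overlapping AABBs by comparing
        // penetration depths; the axis with the smaller overlap is the contact axis.
        final class Aabb {
            float x, y, w, h; // top-left corner plus size
            Aabb(float x, float y, float w, float h) { this.x = x; this.y = y; this.w = w; this.h = h; }
        }

        enum Side { LEFT, RIGHT, TOP, BOTTOM, NONE }

        final class Collision {
            // Which face of 'fixed' did 'moving' hit?
            static Side sideOfCollision(Aabb moving, Aabb fixed) {
                float dx = (moving.x + moving.w / 2) - (fixed.x + fixed.w / 2);
                float dy = (moving.y + moving.h / 2) - (fixed.y + fixed.h / 2);
                float overlapX = moving.w / 2 + fixed.w / 2 - Math.abs(dx);
                float overlapY = moving.h / 2 + fixed.h / 2 - Math.abs(dy);
                if (overlapX <= 0 || overlapY <= 0) return Side.NONE;  // not intersecting
                if (overlapX < overlapY)                     // shallower on x: a side hit
                    return dx > 0 ? Side.RIGHT : Side.LEFT;  // moving sits right/left of fixed
                return dy > 0 ? Side.BOTTOM : Side.TOP;      // y-down screen coordinates
            }
        }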


  • Complete Math Library for use in OpenGL ES 2.0 Game?

    - by Bunkai.Satori
    Are you aware of a complete (or almost complete) cross-platform math library for use in OpenGL ES 2.0 games? The library should contain:

    - Matrix2x2, Matrix3x3, Matrix4x4 classes
    - Quaternions
    - Vector2, Vector3, Vector4 classes
    - Euler angle class
    - Operations among the above-mentioned classes, conversions, etc.
    - Commonly used math operations in 3D graphics (dot product, cross product, SLERP, etc.)

    Is such a math API available, either standalone or as part of any package? Programming language: Visual C++, but planned to be ported to OS X and Android.


  • Hydraulics in game

    - by Mungoid
    I'm not completely sure if this would be better on the Physics site or not, as this question is more about how hydraulics should work in a game as opposed to how they really work (although that is taken into account), so I apologize if this is in the wrong place. In a project we are working on, we have a machine with hydraulics that are powered (they don't just look like they move something; they are the only thing moving/turning/lifting something). However, the hydraulic extends at the same speed no matter what it is pushing. Say there is a 10-ton object attached to one end of the hydraulic and the other end is attached to a plate on the ground. In real life it takes a few seconds to build up pressure, depending on how heavy the object is, but in our project the hydraulics don't care about that: they will lift a 100-ton object at the same speed as a 10-ton object. We have a way to fake the hydraulic pressurizing by reducing the 'drive amount' (how fast or slow the hydraulic extends) when we sense that it is touching the ground, and that does a relatively decent job, but we would like to take other things into account, like engine speed, ratios, loads, etc., and we aren't too sure what we need to think about. Does anyone here have experience with this and could offer some suggestions on what to take into account?
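
    As a starting point, a toy model that ignores real fluid dynamics but still reacts to load and available power may be enough: cap extension speed by pump power divided by load force (from P = F·v), and ramp an internal pressure term up over time. Everything below is an invented illustration, not machinery data:

        // Toy sketch: extension speed limited by power/load, with a first-order
        // pressure ramp so heavy loads visibly "wait" for pressure to build.
        final class HydraulicCylinder {
            double pumpPowerWatts = 5000.0;  // assumed engine-driven pump output
            double maxSpeed = 0.5;           // m/s, unloaded extension speed
            double pressure = 0.0;           // 0..1, builds up over time
            double rampTime = 2.0;           // seconds to reach ~63% pressure

            // loadForceNewtons ~ mass * 9.81 for a dead lift; dt in seconds
            double extensionSpeed(double loadForceNewtons, double dt) {
                pressure += (1.0 - pressure) * (dt / rampTime);     // first-order ramp
                double powerLimited = pumpPowerWatts / Math.max(loadForceNewtons, 1.0);
                return Math.min(maxSpeed, powerLimited) * pressure; // v = P / F, capped
            }
        }

    Engine speed and gear ratios can then scale pumpPowerWatts rather than the speed directly, which keeps the load coupling intact.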


  • How does a collision engine work?

    - by JXPheonix
    How exactly does a collision engine work? (This is an extremely broad question.) What code keeps things bouncing against each other? What code makes the player walk into a wall instead of through the wall? How does the code constantly refresh the player's position and object positions to keep gravity and collision working as they should? If you don't know what a collision engine is: basically, it's generally used in platform games to make the player actually hit walls and the like. There's the 2D type and the 3D type, but they all accomplish the same thing: collision. So, what keeps a collision engine ticking?
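
    In outline, most 2D collision engines repeat the same steps every tick: integrate velocities (gravity), move the bodies, then detect overlaps and undo the penetration. A bare-bones sketch in Java, resolving only the "landed on top of something" case (all names illustrative):

        // Minimal sketch of one physics tick: gravity, move, then push the player
        // back out of any solid it sank into.
        final class Player {
            float x, y, w = 32, h = 48, vy;
        }

        final class World {
            static final float GRAVITY = 2000f; // px/s^2, arbitrary

            void step(Player p, java.util.List<float[]> solids, float dt) {
                p.vy += GRAVITY * dt;           // integrate velocity
                p.y  += p.vy * dt;              // move
                for (float[] s : solids) {      // s = {x, y, w, h}
                    boolean overlap = p.x < s[0] + s[2] && p.x + p.w > s[0]
                                   && p.y < s[1] + s[3] && p.y + p.h > s[1];
                    if (overlap && p.vy > 0) {  // falling into a floor top
                        p.y = s[1] - p.h;       // snap back to the surface
                        p.vy = 0;               // stop falling: "standing" on it
                    }
                }
            }
        }

    Walls, ceilings, and bouncing are the same test resolved along other axes (or with the velocity reflected instead of zeroed).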


  • Java keyboard input [on hold]

    - by dØd
    I'm trying to implement an input system that can detect whether a certain key was held or only pressed briefly. So far I have this:

        private static final long KEY_INTERACTION_THRESHOLD = 400; // ms

        // inside a constructor
        shouldMeasure = true;

        @Override
        public void keyPressed(KeyEvent e) {
            if (shouldMeasure) {
                startTime = System.currentTimeMillis();
                shouldMeasure = false;
                return;
            }
            System.out.println("Button is held down");
            e.consume();
        }

        @Override
        public void keyReleased(KeyEvent e) {
            if (System.currentTimeMillis() - startTime < KEY_INTERACTION_THRESHOLD) {
                System.out.println("Button was only pressed briefly");
            }
            startTime = 0;
            shouldMeasure = true;
            e.consume();
        }

    Now this works, but the problem is that there is a delay between when I press a key to hold and when the message 'Button is held down' gets displayed. I understand why this delay occurs (for example, when you press and hold a letter there will be a similar delay between the first and the second letter printed out), but I would like to somehow avoid it. I'm using only the Java API.
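
    The delay is the operating system's keyboard auto-repeat kicking in; Swing simply forwards the repeated events. The usual workaround is to track key state yourself, ignore the repeats, and let a game loop poll that state every frame. A sketch with illustrative names (caveat: on some Linux setups keyReleased also fires during auto-repeat and needs extra filtering):

        import java.awt.event.KeyAdapter;
        import java.awt.event.KeyEvent;
        import java.util.HashSet;
        import java.util.Set;

        // Sketch: remember which keys are physically down and since when.
        // Set.add() returns false for auto-repeated keyPressed events, so
        // repeats are ignored; a game loop polls isDown()/heldMillis().
        final class KeyState extends KeyAdapter {
            private final Set<Integer> down = new HashSet<>();
            private final long[] pressedAt = new long[1024];

            @Override public synchronized void keyPressed(KeyEvent e) {
                int code = e.getKeyCode();
                if (down.add(code) && code < pressedAt.length)
                    pressedAt[code] = System.currentTimeMillis();
            }

            @Override public synchronized void keyReleased(KeyEvent e) {
                down.remove(e.getKeyCode());
            }

            synchronized boolean isDown(int keyCode) { return down.contains(keyCode); }

            synchronized long heldMillis(int keyCode) {
                return isDown(keyCode) && keyCode < pressedAt.length
                     ? System.currentTimeMillis() - pressedAt[keyCode] : 0;
            }
        }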


  • How does one specify raster operations in XNA?

    - by Corey Ogburn
    I'm looking for a way to draw a sprite using a particular logic operation (like XOR). I can't find anything on Google and I'm not sure where to look in the documentation. I've looked into SpriteBatch.Begin(...) and its Draw method, and at several options in the GraphicsDevice class, but I don't recognize anything capable of this. I'm still pretty new to XNA, so I may just not have recognized the terminology for it.


  • Which purpose do armor points serve?

    - by Bane
    I have seen a mechanic which I call "armor points" in many games: Quake, Counter-Strike, etc. Generally, while the player has these armor points, he takes less damage. However, they act in a similar fashion to health points: you lose them by taking said damage. Why would you design such a feature? Is this just health 2.0, or am I missing something? To me, armor only makes sense in, for example, RPGs, where it is a constant that determines your resistance. But I don't see why it would need to be reducible during combat.


  • Change players state and controls in-game

    - by Samurai Fox
    I'm using Unity 3D. Let's say the player is an ice cube; you control it like a normal player. On the press of a button, the ice transforms (with an animation) into water, which you control completely differently than the ice cube. Another good example: the player is a human being with normal FPS controls; on the press of a button the human transforms into a bird and now has completely different controls. Now, my question is, which would be easier and better:

    - make one object with an animation transition, and stay in that animation state until the button is pressed again, or
    - make two objects, ice and water, where the ice has an animation of turning into water, so the ice object (with its animation) is replaced by the water object?

    And if anyone knows this one too: how do I switch between two different types of player controls?
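
    On the controls question, a structure that works with either object setup is to hide each control scheme behind a common interface and swap which one the player delegates to. Plain Java below to show the shape of it, not Unity API (in Unity the same idea is typically two script components, enabling one and disabling the other):

        // Sketch of swapping control schemes at runtime (state/strategy pattern).
        interface ControlScheme {
            void handleInput(float dt);
        }

        final class IceCubeControls implements ControlScheme {
            public void handleInput(float dt) { /* slide: low friction, momentum */ }
        }

        final class WaterControls implements ControlScheme {
            public void handleInput(float dt) { /* flow: completely different feel */ }
        }

        final class Player {
            private ControlScheme controls = new IceCubeControls();

            void onTransformPressed() {
                controls = (controls instanceof IceCubeControls)
                         ? new WaterControls() : new IceCubeControls();
                // trigger the melt/freeze animation here as well
            }

            void update(float dt) { controls.handleInput(dt); }
        }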


  • Posting to facebook from unity3d on iOS and android

    - by Guye Incognito
    I've made a game in Unity3D for iOS and Android. We have our own server to manage high scores and things like that. We'd also like the ability to post high scores to Facebook, and also to do things like this: if you and your friend have both posted a score for our game to Facebook and you post a better score, then you can send them a notification. I'm reading around about this now, but I'm wondering what the normal way to do this is. Possible approaches:

    - Use the Unity Facebook SDK. Looks like it would work, but there are different versions for iOS and Android.
    - Call the Facebook Graph API directly from our server. This would unify the iOS and Android versions, and it also makes sense since our server holds and deals with all the high-score info, but I can imagine difficulties with logging in / authentication.


  • Why is my model's scale changing after rotating it?

    - by justnS
    I have just started a simple flight simulator and have implemented roll and pitch. In the beginning, testing went very well; however, after about 15-20 seconds of constantly moving the thumbsticks in a random or circular motion, my model's scale begins to grow. At first I thought the model was moving closer to the camera, but I set breakpoints when it was happening and can confirm the translation of my orientation matrix remains 0,0,0. Is this a result of gimbal lock? Does anyone see an obvious error in my code below?

        public override void Draw( Matrix view, Matrix projection )
        {
            Matrix[] transforms = new Matrix[Model.Bones.Count];
            Model.CopyAbsoluteBoneTransformsTo( transforms );

            // note: this is a rotation delta, despite the name
            Matrix translateMatrix = Matrix.Identity
                * Matrix.CreateFromAxisAngle( _orientation.Right, MathHelper.ToRadians( pitch ) )
                * Matrix.CreateFromAxisAngle( _orientation.Down, MathHelper.ToRadians( roll ) );
            _orientation *= translateMatrix;

            foreach ( ModelMesh mesh in Model.Meshes )
            {
                foreach ( BasicEffect effect in mesh.Effects )
                {
                    effect.World = _orientation * transforms[mesh.ParentBone.Index];
                    effect.View = view;
                    effect.Projection = projection;
                    effect.EnableDefaultLighting();
                }
                mesh.Draw();
            }
        }

        public void Update( GamePadState gpState )
        {
            roll = 5 * gpState.ThumbSticks.Left.X;
            pitch = 5 * gpState.ThumbSticks.Left.Y;
        }
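
    For reference, this kind of slow scale/skew creep usually isn't gimbal lock but accumulated floating-point error from multiplying a small rotation into _orientation every frame. Two standard fixes: keep the orientation as a quaternion and normalize it each frame, or re-orthonormalize the matrix's basis vectors. A generic Java sketch of the latter (plain arrays; the XNA equivalents would be Vector3.Cross/Normalize, or Quaternion.Normalize):

        // Sketch: rebuild a strictly orthonormal rotation basis from drifting
        // forward/up vectors (Gram-Schmidt via cross products). The cross-product
        // order assumes a right-handed basis; flip it if yours differs.
        final class Basis {
            double[] right = {1, 0, 0}, up = {0, 1, 0}, forward = {0, 0, 1};

            static double[] cross(double[] a, double[] b) {
                return new double[] { a[1]*b[2] - a[2]*b[1],
                                      a[2]*b[0] - a[0]*b[2],
                                      a[0]*b[1] - a[1]*b[0] };
            }

            static double[] normalize(double[] v) {
                double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
                return new double[] { v[0]/len, v[1]/len, v[2]/len };
            }

            // Call once per frame after applying the incremental rotation.
            void orthonormalize() {
                forward = normalize(forward);
                right   = normalize(cross(up, forward));
                up      = cross(forward, right);   // already unit length
            }
        }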


  • Enemy Spawning method in a Top-Down Shooter

    - by Chris Waters
    I'm working on a top-down shooter akin to DoDonPachi, Ikaruga, etc. The camera's movement through the world is handled automatically, with the player able to move inside the camera's visible region. Along the way, enemies are scripted to spawn at particular points along the path. While this sounds straightforward, I can see two ways to define these points:

    - Camera position: 'trigger' spawning as the camera passes by the points
    - Time along the path: "30 seconds in, spawn 2 enemies"

    In both cases, the camera-relative positions would be defined, as well as the behavior of the enemy. The way I see it, how you define these points will directly affect how the 'level editor', or what have you, will work. Would there be any benefits of one approach over the other?
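
    One practical difference worth noting: if the camera's scroll speed ever varies (slowdowns, boss gates), position triggers stay glued to the level geometry while time triggers drift relative to it. A minimal sketch of the camera-position flavor (names invented):

        import java.util.ArrayDeque;

        // Sketch: spawn points pre-sorted by the scroll distance at which they
        // fire; each frame, pop every point the camera has already passed.
        final class SpawnPoint {
            final float triggerDistance;  // how far the camera has scrolled
            final String enemyType;       // illustrative payload
            SpawnPoint(float d, String t) { triggerDistance = d; enemyType = t; }
        }

        final class Spawner {
            private final ArrayDeque<SpawnPoint> pending; // sorted by triggerDistance

            Spawner(ArrayDeque<SpawnPoint> sortedPoints) { pending = sortedPoints; }

            void update(float cameraDistance) {
                while (!pending.isEmpty()
                        && pending.peek().triggerDistance <= cameraDistance) {
                    SpawnPoint p = pending.poll();
                    System.out.println("spawn " + p.enemyType); // hand off to the game
                }
            }
        }

    A time-based editor can still be layered on top by converting timestamps to distances with the camera's speed curve.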


  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library being used (OpenGL/DirectX).

    First question: the model should know nothing about the renderer implementation, but in that case where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model
        {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object?

    Second question: what is the best way for a model to tell the renderer HOW it must be rendered (for example with a texture, bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    with a check for each flag in the Renderer::DrawModel method. But it looks like that will become unwieldy as the number of options grows...
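
    For the first question, one common arrangement is what you hint at: the model stays API-agnostic and the renderer owns a cache of API-specific resources keyed by the model. A structural sketch, written in Java for brevity (GpuMesh and the other names are invented; the same shape maps directly onto the C++ above):

        import java.util.HashMap;
        import java.util.Map;

        // Sketch: renderer-owned cache of GPU resources, created lazily per model.
        interface GpuMesh { /* wraps a DX11/GL vertex buffer */ }

        final class Model {
            final float[] vertices;
            Model(float[] v) { vertices = v; }
        }

        final class Dx11Renderer {
            private final Map<Model, GpuMesh> cache = new HashMap<>();

            void drawModel(Model m) {
                GpuMesh mesh = cache.computeIfAbsent(m, this::upload);
                // ... bind mesh and issue the draw call ...
            }

            private GpuMesh upload(Model m) {
                return new GpuMesh() { }; // create the API-specific vertex buffer here
            }
        }

    For the second question, a small material/render-state description object that the renderer interprets tends to age better than bit flags: new options become data fields instead of new enum values and ever-longer if-chains.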


  • libgdx collision detection / bounding the object

    - by johnny-b
    I'm trying to get collision detection working, so I'm drawing a red rectangle to see if it is correct. When I do the code below in the update method, the position is not in the right place: the red rectangle starts from the middle and not at the x and y point, so it draws it wrong. (I also have a getter method, so nothing is wrong there.)

        bullet.set(getX(), getY(), getOriginX(), getOriginY());

    This is the render code:

        shapeRenderer.begin(ShapeType.Filled);
        shapeRenderer.setColor(Color.RED);
        shapeRenderer.rect(bullet.getX(), bullet.getY(),
                           bullet.getOriginX(), bullet.getOriginY(),
                           15, 5, bullet.getRotation());
        shapeRenderer.end();

    I have also tried it with a circle, but the circle draws in the middle, and I want it to be at the tip of the bullet (at the front of the bullet, the x, y point):

        boundingCircle.set(getX() + getOriginX(), getY() + getOriginY(), 4.0f);

        shapeRenderer.begin(ShapeType.Filled);
        shapeRenderer.setColor(Color.RED);
        shapeRenderer.circle(bullet.getBoundingCircle().x,
                             bullet.getBoundingCircle().y,
                             bullet.getBoundingCircle().radius);
        shapeRenderer.end();

    Thanks. I need it at the bullet's x and y; the bullet is drawn centered on the sprite.
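
    If the goal is a circle at the bullet's tip, one approach is to offset from the sprite's centre along the bullet's facing by half its length, then feed the result into boundingCircle.set(...). A plain-Java sketch (names invented):

        // Sketch: compute the tip of a rotated, centre-drawn sprite.
        final class BulletTip {
            static float[] tip(float centerX, float centerY,
                               float halfLength, float rotationDegrees) {
                double r = Math.toRadians(rotationDegrees);
                return new float[] {
                    centerX + (float) Math.cos(r) * halfLength,
                    centerY + (float) Math.sin(r) * halfLength
                };
            }
        }

    The same offset explains the rectangle: ShapeRenderer's rect() is positioned by a corner rather than the centre, so a centre-drawn sprite needs half the size subtracted before drawing the debug shape.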


  • Continuous Collision Detection Techniques

    - by Griffin
    I know there are quite a few continuous collision detection algorithms out there, but I can't find a list or summary of different 2D techniques; only tutorials on specific algorithms. What techniques are out there for calculating when different 2D bodies will collide, and what are the advantages/disadvantages of each? I say techniques and not algorithms because I have not yet decided how I will store different polygons, which might be concave or even have holes. I plan to make a decision on this based on what the algorithm requires (for instance, if an algorithm breaks a polygon down into triangles or convex shapes, I will simply store the polygon data in that form).
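
    For a concrete starting point, the technique most summaries open with is swept AABB: work in the moving body's relative motion, compute per-axis entry/exit times of overlap, and intersect the intervals. A self-contained sketch (illustrative; axis-aligned boxes only, which sidesteps the concave/holes question entirely):

        // Sketch: time of impact of a moving AABB against a static one.
        // Returns the first time of overlap in [0,1], or -1 if none this step.
        final class SweptAabb {
            static float toi(float ax, float ay, float aw, float ah,   // moving box
                             float vx, float vy,                       // motion this step
                             float bx, float by, float bw, float bh) { // static box
                float[] x = axis(ax, aw, vx, bx, bw);
                float[] y = axis(ay, ah, vy, by, bh);
                if (x == null || y == null) return -1;
                float entry = Math.max(x[0], y[0]);
                float exit  = Math.min(x[1], y[1]);
                return (entry <= exit && entry >= 0 && entry <= 1) ? entry : -1;
            }

            // Per-axis [entry, exit] overlap interval, or null if never overlapping.
            private static float[] axis(float a, float aSize, float v, float b, float bSize) {
                if (v == 0)
                    return (a < b + bSize && a + aSize > b)
                         ? new float[] { Float.NEGATIVE_INFINITY, Float.POSITIVE_INFINITY }
                         : null;
                float t1 = (b - (a + aSize)) / v;
                float t2 = ((b + bSize) - a) / v;
                return t1 <= t2 ? new float[] { t1, t2 } : new float[] { t2, t1 };
            }
        }

    Against arbitrary polygons the same interval idea generalizes to swept separating-axis tests or conservative advancement, which is where the pros and cons mostly diverge.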


  • Render 3d object to 2d surface (embedded system)

    - by Martin Berger
    I'm working on an embedded system of a sort, and in some free time I would like to test its drawing capabilities. The system in question is an ARM Cortex-M3 microcontroller attached to an EasyMX Stellaris board, and I have a small 320x240 TFT screen :) Now, I have some free time each day and I want to create a rotating cube. Micro C PRO for ARM doesn't have 3D drawing capabilities, which means it must be done in software. From the book Introduction to 3D Game Programming with DirectX 10 I know the matrix algebra for transformations, but that is cool when you have DirectX to set the camera up right. I guess I could make a 2D object rotate, but how would I go about a 3D one? Any ideas and examples are welcome, although I would prefer advice; I'd like to understand this.
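
    For a single cube there's no need for a full matrix stack: rotate the eight vertices with two sine/cosine pairs, perspective-divide, and draw the twelve edges with the display's line primitive. A sketch of just the math, in Java for readability (it ports nearly line-for-line to C on the Cortex-M3; the screen centre and focal length constants are assumptions for a 320x240 panel):

        // Sketch: rotate a unit cube around Y then X, then perspective-project
        // each vertex to 320x240 screen coordinates.
        final class Cube {
            static final double[][] VERTS = {
                {-1,-1,-1},{ 1,-1,-1},{ 1, 1,-1},{-1, 1,-1},
                {-1,-1, 1},{ 1,-1, 1},{ 1, 1, 1},{-1, 1, 1}
            };

            static int[][] project(double angleY, double angleX,
                                   double dist, double focal) {
                int[][] screen = new int[8][2];
                double cy = Math.cos(angleY), sy = Math.sin(angleY);
                double cx = Math.cos(angleX), sx = Math.sin(angleX);
                for (int i = 0; i < 8; i++) {
                    double x = VERTS[i][0], y = VERTS[i][1], z = VERTS[i][2];
                    double x1 = x * cy + z * sy,  z1 = -x * sy + z * cy;  // yaw
                    double y1 = y * cx - z1 * sx, z2 = y * sx + z1 * cx;  // pitch
                    double zc = z2 + dist;                 // push in front of the camera
                    screen[i][0] = (int) (160 + focal * x1 / zc);  // 320 / 2
                    screen[i][1] = (int) (120 - focal * y1 / zc);  // 240 / 2
                }
                return screen;  // then draw the 12 edges with the TFT's line routine
            }
        }

    On a Cortex-M3 with no FPU, the doubles would normally become fixed-point math or precomputed sine tables, but the structure is identical.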


  • Resolving a collision between point and moving line

    - by Conundrumer
    I am designing a 2D physics engine that uses Verlet integration for moving points (the velocities mentioned below can be derived), constraints to represent moving line segments, and continuous collision detection to resolve collisions between moving points and static lines, and collisions between moving/static points and moving lines. I already know how to calculate the time of impact for both types of collision events, and how to resolve moving-point/static-line collisions. However, I can't figure out how to resolve moving/static-point vs. moving-line collisions.

    Here are the initial conditions in a point and moving-line collision event. We have a line segment joined by two points, A and B. At this instant, point P is touching/colliding with line AB. These points have unit mass and some might have an initial velocity, unless point P is static. The line is massless and has no explicit rotational component, since points A and B can move freely, extending or contracting the line as a result (which will be fixed later by the constraint solver). The collision is inelastic. What are the final velocities of the points after the collision?
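
    For what it's worth, one standard Verlet-style treatment (in the spirit of Jakobsen's "Advanced Character Physics"; stated here as an assumption, not the one canonical answer) parameterizes the contact point along the segment and distributes the impulse barycentrically. Sketched in LaTeX:

        % contact point at parameter t along AB, and its velocity
        C = (1 - t)\,A + t\,B, \qquad v_C = (1 - t)\,v_A + t\,v_B

        % perfectly inelastic impulse along the contact normal \hat{n}, unit masses
        j = \frac{-\,(v_P - v_C)\cdot\hat{n}}{1 + (1 - t)^2 + t^2}

        % apply to P; distribute to the endpoints by the barycentric weights
        v_P \leftarrow v_P + j\,\hat{n}, \qquad
        v_A \leftarrow v_A - (1 - t)\,j\,\hat{n}, \qquad
        v_B \leftarrow v_B - t\,j\,\hat{n}

    Here (1-t)^2 + t^2 acts as the line's effective inverse mass at C; the endpoint impulses sum to -j\,\hat{n}, so momentum is conserved, and the post-impulse relative normal velocity at C is zero (inelastic). If P is static, drop the leading 1 from the denominator and leave v_P unchanged.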


  • Defining the track in a 2D racing game

    - by Ivan
    I am designing a top-down racing game using canvas (HTML5) which takes a lot of inspiration from Micro Machines. In MM, cars can move off the track, but they are reset/destroyed if they go too far. My maths knowledge isn't great, so I'm finding it hard to separate 3D/complex concepts from those which are directly relevant to my situation. For example, I have seen "splines" mentioned; is this something I should read up on, or is that overkill for a 2D game? Could I use a single path which defines the centre of the track and check a car's distance from this line? A second path might be required as a "racing line" for AI. Any advice on methods/techniques/terms to read up on would be greatly appreciated.
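
    Splines are probably overkill here: they only give a smoother centreline, and the off-track check works just as well against a plain polyline. A sketch of the single-path idea, using the minimum distance from the car to each centreline segment (clamped projection):

        // Sketch: distance from a point to a polyline centreline; if it exceeds
        // the track half-width, the car is off the track.
        final class Track {
            static double distToSegment(double px, double py,
                                        double ax, double ay, double bx, double by) {
                double abx = bx - ax, aby = by - ay;
                double lenSq = abx * abx + aby * aby;
                double t = lenSq == 0 ? 0
                         : ((px - ax) * abx + (py - ay) * aby) / lenSq;
                t = Math.max(0, Math.min(1, t));      // clamp onto the segment
                double cx = ax + t * abx, cy = ay + t * aby;
                return Math.hypot(px - cx, py - cy);
            }

            static boolean offTrack(double px, double py,
                                    double[][] centreline, double halfWidth) {
                double best = Double.POSITIVE_INFINITY;
                for (int i = 0; i + 1 < centreline.length; i++)
                    best = Math.min(best, distToSegment(px, py,
                            centreline[i][0], centreline[i][1],
                            centreline[i + 1][0], centreline[i + 1][1]));
                return best > halfWidth;
            }
        }

    The AI racing line can be a second polyline of the same form, with each AI car steering toward a projection point slightly ahead of itself.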


  • Question about JPanel "transition" for Java Swing

    - by user16778
    I want to make a sort of main menu (in a GUI). When the user clicks the start button, the screen transitions into another "screen" (JPanel). This image will make it easier to understand: http://i.imgur.com/Cfdry.png. Currently, I have a MainMenu extends JPanel which gets added to a driver class with a JFrame. I can't figure out how to switch to another class like Game extends JPanel. So when the user clicks the start button in MainMenu, I want it to somehow hide itself and the Game to show itself. Thanks.
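
    The stock Swing mechanism for exactly this is CardLayout: both panels live in one container, and show() flips which one is visible. A minimal sketch (panel names follow the question; the sizes and wiring are placeholders):

        import java.awt.CardLayout;
        import javax.swing.JButton;
        import javax.swing.JFrame;
        import javax.swing.JPanel;

        // Sketch: CardLayout flips between "screens" inside one root panel.
        public final class Screens {
            public static void main(String[] args) {
                JFrame frame = new JFrame("Game");
                CardLayout cards = new CardLayout();
                JPanel root = new JPanel(cards);

                JPanel menu = new JPanel();  // stands in for MainMenu extends JPanel
                JPanel game = new JPanel();  // stands in for Game extends JPanel
                JButton start = new JButton("Start");
                start.addActionListener(e -> cards.show(root, "game"));
                menu.add(start);

                root.add(menu, "menu");
                root.add(game, "game");

                frame.setContentPane(root);
                frame.setSize(640, 480);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            }
        }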


  • what's wrong with this Lua code (creating text inside listener in Corona)

    - by Greg
    If you double/triple click on myObject here, the text does NOT disappear. Why is this not working when there are multiple events being fired? That is, are there actually multiple "text" objects, with some existing but no longer having a reference to them held by the local "myText" variable? Do I have to manually call removeSelf() on the local "myText" field before assigning it another display.newText(...)?

        display.setStatusBar( display.HiddenStatusBar )

        local myText

        local function hideMyText(event)
            print("hideMyText")
            myText.isVisible = false
        end

        local function showTextListener(event)
            if event.phase == "began" then
                print("showTextListener")
                myText = display.newText("Hello World!", 0, 0, native.systemFont, 30)
                timer.performWithDelay(1000, hideMyText, 1)
            end
        end

        -- Display object to press to show text
        local myObject = display.newImage( "inventory_button.png",
                                           display.contentWidth/2, display.contentHeight/2 )
        myObject:addEventListener("touch", showTextListener)

    Question 2: Also, why is it the case that if I add a line BEFORE "myText = ..." of:

    a) "if myText then myText:removeSelf() end" = THIS FIXES THINGS, whereas
    b) "if myText then myText = nil end" = DOES NOT FIX THINGS

    Interested in hearing how Lua works here in the answer...


  • 3d trajectory - calculate initial velocity

    - by Skoder
    Hey, I've got a 2D projectile code sample working, but I would like to extend it to 3D. How would I calculate the initial velocity along the Z axis? At the moment I've got:

        initVel.X = (float)Math.Cos(45.0);   // note: Math.Cos/Sin take radians, not degrees
        initVel.Y = (float)Math.Sin(45.0);

    How would I convert this to work in 3D (and add the initial velocity for the Z axis)? In my example, X is across, Y is up/down, and Z is going into the screen. I also normalize the vector and multiply it by the speed. Thanks.
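
    A common way to extend this is to treat the existing angle as pitch (elevation) and add a yaw (heading) angle; the unit direction is then (cos p · cos y, sin p, cos p · sin y), which collapses back to the 2D case when yaw is zero. A Java sketch (names invented):

        // Sketch: 3D launch direction from yaw and pitch, scaled by speed.
        // Axes as in the question: X across, Y up, Z into the screen.
        final class Launch {
            static float[] initialVelocity(double yawDeg, double pitchDeg, float speed) {
                double yaw = Math.toRadians(yawDeg), pitch = Math.toRadians(pitchDeg);
                float x = (float) (Math.cos(pitch) * Math.cos(yaw));
                float y = (float)  Math.sin(pitch);
                float z = (float) (Math.cos(pitch) * Math.sin(yaw));
                // (x, y, z) is already unit length: cos^2(p)(cos^2(y) + sin^2(y)) + sin^2(p) = 1
                return new float[] { x * speed, y * speed, z * speed };
            }
        }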


  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement: Time based movement vs frame rate based movement, and Fixed time step vs variable time step, but I think I'm lacking a basic understanding of frame-independent movement, because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame-independent lesson: http://lazyfoo.net/SDL_tutorials/lesson32/index.php. I'm not sure what the movement part of the code is trying to say, but I think it's this (please correct me if I'm wrong): in order to have frame-independent movement, we need to find out how far an object (e.g. a sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel". But... I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a little more detailed explanation of how frame-independent movement works. Thank you.
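
    The two numbers come from the same formula, distance = velocity × elapsed seconds: the 1/1000 appears when converting elapsed milliseconds to seconds, while the 1/200 is simply the time between frames when the program happens to run at 200 frames per second. A runnable sketch of the usual delta-time loop:

        // Sketch: measure seconds since the last frame and scale velocity by it,
        // so the dot covers 200 px/s whether the loop runs at 60 or 200 FPS.
        final class Loop {
            public static void main(String[] args) throws InterruptedException {
                double x = 0;               // position in pixels
                final double speed = 200;   // pixels per second
                long last = System.nanoTime();
                while (x < 1000) {
                    long now = System.nanoTime();
                    double dt = (now - last) / 1_000_000_000.0; // seconds since last frame
                    last = now;
                    x += speed * dt;        // at 200 FPS, dt ~ 1/200 s => ~1 px per frame
                    Thread.sleep(5);        // stand-in for rendering / vsync
                }
                System.out.println("reached " + x);
            }
        }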


  • Game has noticeable frame drops but when through a profiler it always runs smooth

    - by felipedrl
    I'm trying to optimize my PC game, but I can't find the bottleneck, since every time I run it through a profiler (gDEBugger) it runs smoothly. When running outside gDEBugger I get these annoying hiccups. It's not just the graphics; the sound also gets choppy. The drops are inconsistent across runs, i.e., sometimes I run the same scenario and get no drops at all, sometimes I get a few drops, and other times the game is consistently slow. The only constant is: when running through gDEBugger I ALWAYS get a smooth run. I suspect something outside my game is interfering and causing these drops, but what in the hell does gDEBugger do that nullifies them? A higher process priority? Any ideas? Thanks in advance.


  • How can I improve my isometric tile-picking algorithm?

    - by Cypher
    I've spent the last few days researching isometric tile-picking algorithms (converting screen coordinates to tile coordinates), and have obviously found a lot of the math beyond my grasp. I have come fairly close, and what I have is workable, but I would like to improve on this algorithm, as it's a little off and seems to pick down and to the right of the mouse pointer. I've uploaded a video to help visualize the current implementation: http://youtu.be/EqwWcq1zuaM

    My isometric rendering algorithm is based on what is found at this stackoverflow question's answer, with the exception that my x and y axes are inverted (x increases down-right, while y increases up-right). Here is where I am converting from screen to tiles:

        // these next few lines convert the mouse pointer position from screen
        // coordinates to tile-grid coordinates. cameraOffset captures the current
        // mouse location and takes into consideration the camera's position on screen.
        System.Drawing.Point cameraOffset = new System.Drawing.Point( 0, 0 );
        cameraOffset.X = mouseLocation.X + (int)camera.Left;
        cameraOffset.Y = ( mouseLocation.Y + (int)camera.Top );

        // the camera-aware mouse coordinates are then further converted in an attempt
        // to select only the "tile" portion of the grid tiles, instead of the entire
        // rectangle. this algorithm gets close, but could use improvement.
        mouseTileLocation.X = ( cameraOffset.X + 2 * cameraOffset.Y ) / Global.TileWidth;
        mouseTileLocation.Y = -( ( 2 * cameraOffset.Y - cameraOffset.X ) / Global.TileWidth );

    Things to make note of:

    - mouseLocation is a System.Drawing.Point that represents the screen coordinates of the mouse pointer.
    - cameraOffset is the screen position of the mouse pointer that includes the position of the game camera.
    - mouseTileLocation is a System.Drawing.Point that is supposed to represent the tile coordinates of the mouse pointer.

    If you check out the above link to YouTube, you'll notice that the picking algorithm is off a bit. How can I improve on this?
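
    A plausible cause of the down-right bias is integer division truncating toward zero instead of flooring, plus the missing half-tile offset that centres the pick on each diamond. A hedged Java sketch of the usual diamond-map conversion (standard axes assumed, i.e. x down-right and y down-left; with the question's inverted axes the signs flip, and the exact offsets depend on where tile (0,0) is drawn):

        // Sketch: screen to tile for a diamond isometric map where tile (x, y)
        // is drawn at screenX = (x - y) * tileW/2, screenY = (x + y) * tileH/2.
        final class IsoPick {
            static int[] screenToTile(float sx, float sy, float tileW, float tileH) {
                float a = sx / (tileW / 2f);   // = tileX - tileY
                float b = sy / (tileH / 2f);   // = tileX + tileY
                int tileX = (int) Math.floor((a + b) / 2f + 0.5f); // +0.5: round to the
                int tileY = (int) Math.floor((b - a) / 2f + 0.5f); // nearest diamond centre
                return new int[] { tileX, tileY };
            }
        }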

