Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall avoidance feature. My walls are in 3D and defined using two points, P1 and P2 (opposite corners of the wall). My agents have a velocity, a position, etc. Could you tell me how to add wall avoidance to my agents?

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();
            Vector2D velocity = self()->velocity();
            for (unsigned i = 0; i < wallsList.size(); i++)
            {
                // TODO
            }
            return force;
        }

    I then sum the forces returned by my boid functions and apply the result to my agent. I just need to know how to do the same for my walls. Thanks for your help. (A rough sketch of one possible approach follows this entry.)

    Read the article
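
    A minimal sketch of one common approach (not from the article): for each wall, find the closest point on the wall segment to the agent and, if it is within some detection radius, add a force pushing the agent away, scaled by how close it is. The sketch is C# with System.Numerics; the names and the tuple-based wall list are illustrative assumptions, not the poster's API.

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        static class WallAvoidance
        {
            // Closest point to p on the segment a-b.
            static Vector2 ClosestPointOnSegment(Vector2 a, Vector2 b, Vector2 p)
            {
                Vector2 ab = b - a;
                float t = Vector2.Dot(p - a, ab) / Vector2.Dot(ab, ab);
                if (t < 0f) t = 0f; else if (t > 1f) t = 1f;
                return a + ab * t;
            }

            // Sum a repulsion force from every wall closer than 'radius'.
            public static Vector2 RepulsionFromWalls(
                Vector2 pos, List<(Vector2 p1, Vector2 p2)> walls, float radius)
            {
                Vector2 force = Vector2.Zero;
                foreach (var (p1, p2) in walls)
                {
                    Vector2 closest = ClosestPointOnSegment(p1, p2, pos);
                    Vector2 away = pos - closest;
                    float dist = away.Length();
                    if (dist > 0f && dist < radius)
                    {
                        // Push harder the closer the agent is to the wall.
                        force += Vector2.Normalize(away) * (1f - dist / radius);
                    }
                }
                return force;
            }
        }

    The returned vector can then be weighted and summed with the other steering forces, exactly like the existing boid behaviours.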

  • How are vertex shader outs sent as inputs to the fragment shader?

    - by Jeffrey
    I'm learning the OpenGL 3.2 way of doing things and I think it's quite great; I've actually understood more about shaders and the non-fixed pipeline in 1 week than in the 2 years I spent trying to learn the fixed-function pipeline. But here's my question: from what I think I've understood, the vertex shader is run for each vertex in the VBO, but the fragment shader is run for each pixel (is that right?), which is a huge number compared to, say, the 3 vertices of a triangle. Now it seems that the out variables of the vertex shader (like colors and such) are passed 1-to-1 to the fragment shader. But let's say I pass the position of the vertex from the vertex shader to the fragment shader. How is this all executed? Which vertex (A, B or C of the hypothetical triangle) is passed to each fragment, and why? (A short note on interpolation follows this entry.)

    Read the article
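
    A short note on what happens between the two stages (standard rasterizer behaviour, not something specific to this code): no single vertex is chosen. For each fragment the rasterizer computes the barycentric coordinates (λA, λB, λC) of the fragment's position inside triangle ABC, and every vertex-shader output v is interpolated as

        v = λA·vA + λB·vB + λC·vC

    with perspective correction applied by default, so the weights are adjusted by the vertices' clip-space w. Outputs declared with the flat qualifier are not interpolated; they take their value from the provoking vertex.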

  • android shut-down errors / thread problems

    - by iQue
    I'm starting to deal with some stuff in my game that I thought were "minor problems", and one of them is that I get an error every time I shut down my game that looks like this:

        09-05 21:40:58.320: E/AndroidRuntime(30401): FATAL EXCEPTION: Thread-4898
        09-05 21:40:58.320: E/AndroidRuntime(30401): java.lang.NullPointerException
        09-05 21:40:58.320: E/AndroidRuntime(30401):   at nielsen.happy.shooter.MainGamePanel.render(MainGamePanel.java:94)
        09-05 21:40:58.320: E/AndroidRuntime(30401):   at nielsen.happy.shooter.MainThread.run(MainThread.java:101)

    These lines are my GameView's rendering method and the line in my thread that calls that rendering method. I'm guessing something in my surfaceDestroyed(SurfaceHolder holder) is wrong, or maybe I'm not ending the thread in the right place. My surfaceDestroyed method:

        public void surfaceDestroyed(SurfaceHolder holder) {
            Log.d(TAG, "Surface is being destroyed");
            boolean retry = true;
            ((MainThread) thread).setRunning(false);
            while (retry) {
                try {
                    ((MainThread) thread).join();
                    retry = false;
                } catch (InterruptedException e) {
                }
            }
            Log.d(TAG, "Thread was shut down cleanly");
        }

    Also, in my activity for this view, onPause, onDestroy and onStop are empty; do I maybe need to add something there? The crash occurs when I press the home button on the phone, or any other button that makes the game stop. But as I understand it, onPause is called when you press the home button. This is really new territory for me, so I'm sorry if I'm missing something obvious or something really big. Adding my surfaceCreated method as well, since that is where I start the thread:

        public void surfaceCreated(SurfaceHolder holder) {
            controls = new GameControls(this);
            setOnTouchListener(controls);
            timer1.schedule(new Task(this), 0, delay1);
            thread.setRunning(true);
            thread.start();
        }

    As long as this is running, my game is rendering and updating.

    Read the article

  • How to calculate continuous motion with angular velocity in 2d

    - by Rulk
    I'm really new to physics. Maybe someone can help me solve the following problem: I need to calculate the position of an agent on the plane (2D) at the next time step, where the time step is large (20+ seconds). What I know about the agent's motion:

        Initial position
        Direction (normalised vector)
        Velocity (a linear function of time); the object always moves along its direction
        Angular velocity (a linear function of time)
        Optional: external force direction, and external force (a linear function of time)

    Running a discrete simulation with a time step approaching 0 is not an option. (A sketch of the closed form for the constant-rate case follows this entry.)

    Read the article
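
    For the simpler case where the speed v and the angular velocity ω are constant over the step, the agent moves along a circular arc and the new position has a closed form, so no discrete stepping is needed. The sketch below (plain C#; the names are illustrative) covers only that constant-rate case, offered as a starting point rather than the full linear-in-time version described above: when v(t) and ω(t) vary linearly, the heading θ(t) = θ0 + ∫ω(t)dt becomes quadratic in t, and the position integral generally needs Fresnel-type integrals or a few analytic substeps.

        using System;

        static class ArcMotion
        {
            // Position and heading after time t for constant speed v and constant
            // turn rate omega, starting at (x0, y0) with heading theta0 (radians).
            public static (double x, double y, double theta) Step(
                double x0, double y0, double theta0, double v, double omega, double t)
            {
                double theta = theta0 + omega * t;
                if (Math.Abs(omega) < 1e-9)
                {
                    // Effectively no rotation: straight-line motion.
                    return (x0 + v * t * Math.Cos(theta0), y0 + v * t * Math.Sin(theta0), theta);
                }
                // Circular arc of radius v / omega.
                double x = x0 + (v / omega) * (Math.Sin(theta) - Math.Sin(theta0));
                double y = y0 - (v / omega) * (Math.Cos(theta) - Math.Cos(theta0));
                return (x, y, theta);
            }
        }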

  • How many VBOs should I use and should I keep a copy of their data?

    - by CSharpie
    First of all, I am sorry if my question is too broad. I am developing a tile-based game and switched from gl.Begin calls to using VBOs. This is kind of working already; I managed to render a hexagonal polygon with a simple shader applied. What I am not sure about is how to implement the "whole" tile concept. Concretely, the questions are:

        Is it better to create one VBO for a single tile and render it n times at different positions, or to build one huge VBO that represents the whole "world"?
        Depending on the answer above, what is the best way to draw a line grid? Overlay with the same VBO using the respective polygon mode, or is there a way to let the shader do this?
        How would frustum culling or mouse picking work then: do I need to keep the VBO data in CPU memory?

    Read the article

  • Spherical harmonics lighting interpolation

    - by TravisG
    I want to use hardware filtering to smooth out colors in texels of a texture when I'm accessing texels at coordinates that are not directly at the center of the texel, the catch being that the texels store 2 bands of spherical harmonics coefficients (= 4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR with and without mip mapping) without any considerations? In other words: if I were to first convert the coefficients back to intensity representations, then manually interpolate between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to intensities? (A short note follows this entry.)

    Read the article
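
    A short note on why the two orders agree (standard spherical harmonics algebra, not anything specific to the poster's renderer): reconstructing intensity from SH coefficients is a linear operation, I(d) = Σi ci·Yi(d), so for a blend factor t,

        Σi ((1-t)·ai + t·bi)·Yi(d) = (1-t)·Σi ai·Yi(d) + t·Σi bi·Yi(d)

    i.e. interpolating the coefficient vectors and then reconstructing gives exactly the same intensity as reconstructing first and then interpolating. That is the property GL_LINEAR (and trilinear mip) filtering relies on; only a nonlinear step, such as clamping negative reconstructed values or applying gamma, would break the equivalence, and such steps should happen after filtering.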

  • Rendering 2D Tile World (With Player In The Middle)

    - by Mick
    What I have at the moment is a series of data structures I'm using, and I would like to render the world onto the screen (just the visible parts). I've actually already done this several times (lots of rewrites), but it's a bit buggy (rounding seems to make the screen jump ever so slightly every x tiles the player walks past). Basically I've been confusing myself heavily over what I feel should be a pretty simple problem... so here I am asking for some help! OK! So I have a 50x50 array holding the tiles of the world. I have the player position as 2 floats, x ([0, 49]) and y ([0, 49]), in that array. I have the application size exactly in pixels (x and y). I have an arbitrary TILE_SIZE static int (in screen pixels). What I think is heavily confusing me is using a 2D orthographic projection in OpenGL which maps (0,0) to the top left of the screen and (SCREEN_SIZE_X, SCREEN_SIZE_Y) to the bottom right of the screen.

        gl.glMatrixMode(GL.GL_PROJECTION);
        gl.glLoadIdentity();
        glu.gluOrtho2D(0, getActualWidth(), getActualHeight(), 0);
        gl.glMatrixMode(GL.GL_MODELVIEW);
        gl.glLoadIdentity();

    The map tiles are set up so that (0,0) in the array is the bottom left, and the player has to be in the middle of the screen (SCREEN_SIZE_X/2, SCREEN_SIZE_Y/2). What I've been doing so far is trying to render 1-2 extra tiles all around what would be displayed on the screen, so that I don't have to worry about rendering half a tile from the top left, depending on where the player is. It seems like such an easy problem, but after spending 40+ hours on it and rewriting it many times I think I'm at a point where I just can't think clearly anymore... Any help would be appreciated. It would be great if someone could provide some very basic pseudocode for keeping the player in the middle when your projection is mapped to screen coordinates and only rendering the tiles you would actually see. Thanks! (A small sketch of one way to do this follows this entry.)

    Read the article
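
    A minimal sketch of the usual approach, in plain C# with made-up names (the y-flip matches the setup above, where array row 0 is the bottom of the world and screen y grows downward): compute a floating-point camera offset from the player position once per frame, keep everything in floats, and round only at the final draw position, which is what removes the per-tile jump.

        using System;

        static class TileCamera
        {
            const int TILE_SIZE = 32;              // assumed tile size in pixels
            const int WORLD_W = 50, WORLD_H = 50;

            // Screen position of tile (tx, ty) such that the player stays centered.
            public static (float sx, float sy) TileToScreen(
                int tx, int ty, float playerX, float playerY, int screenW, int screenH)
            {
                // Camera offset in pixels, kept as float so scrolling stays smooth.
                float camX = playerX * TILE_SIZE - screenW / 2f;
                float camY = playerY * TILE_SIZE - screenH / 2f;

                float sx = tx * TILE_SIZE - camX;
                // Flip y because tile row 0 is at the bottom of the world.
                float sy = screenH - ((ty + 1) * TILE_SIZE - camY);
                return (sx, sy);
            }

            // Range of tiles (with a small border) to iterate over when drawing.
            public static (int x0, int x1, int y0, int y1) VisibleRange(
                float playerX, float playerY, int screenW, int screenH)
            {
                int halfW = screenW / (2 * TILE_SIZE) + 2;
                int halfH = screenH / (2 * TILE_SIZE) + 2;
                int x0 = Math.Max(0, (int)playerX - halfW);
                int x1 = Math.Min(WORLD_W - 1, (int)playerX + halfW);
                int y0 = Math.Max(0, (int)playerY - halfH);
                int y1 = Math.Min(WORLD_H - 1, (int)playerY + halfH);
                return (x0, x1, y0, y1);
            }
        }

    Each visible tile is then drawn at TileToScreen(tx, ty, ...), rounding the result to whole pixels just before the draw call.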

  • Java 2D World question

    - by Munkybunky
    I have a 2D world background made up of a grid of graphics, which I display on screen with a viewport (800x600), and it all works. My question: I have the following code to convert the mouse co-ordinates to world co-ordinates, then world co-ordinates to grid co-ordinates, then grid co-ordinates to screen co-ordinates.

        // Add camerax to mouse screen co-ords to convert to world co-ords.
        int cursorx_world = (int) camerax + (int) GameInput.mousex;
        // World co-ords / blocksize gives grid co-ords.
        int cursorx_grid = cursorx_world / blocksize;
        int cursorx_screen = -(int) camerax + (cursorx_grid * blocksize);

    So is there any way I can convert straight from mouse screen co-ords to screen co-ordinates? (A short note follows this entry.)

    Read the article
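
    A short note rather than the article's answer: the three lines simply compose, so they collapse into a single expression that snaps the mouse position to the grid in screen space. Using the question's own variables (with the casts to int kept, so the division stays integer division):

        int cursorx_screen = (((int) camerax + (int) GameInput.mousex) / blocksize) * blocksize - (int) camerax;

    The camera offset still has to appear twice, once to get into world space and once to get back, because the snapping has to happen in world space; it can only be dropped entirely when camerax is itself a multiple of blocksize.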

  • How to add reflection definition to read json files on web game

    - by user3728735
    I have a game which I deployed for desktop and Android. I can read JSON data and create my levels, but the problem is that when it comes to reading JSON files from the web (GWT) app, I get an error logging that it cannot read the JSON file. I researched a lot and found out that I should add my JSON config class to the reflection configuration, so I added this line to gameName.gwt.xml, which is in the core folder:

        <extend-configuration-property name="gdx.reflect.include" value="com.las.get.level.LevelConfig"/>

    but it did not work either. I have no idea where I should place this line, or what I should change to make my web app work, so I can read JSON files.

    Read the article

  • Climbing boxes in box2D

    - by Rothens
    I've just stepped into the world of Box2D with libgdx. I've already made a stack of boxes: they are dropped randomly on top of each other. What I'd like to achieve is a character that can freely climb on the boxes (he can grip the boxes anywhere, not just on the side or top of a box), but whose weight affects the stack as well, so the boxes can fall down. My google-fu failed me... Is there any way to make this possible?

    Read the article

  • What should I do if my text exceeds my text render target boundaries?

    - by user1423893
    I have a method for drawing strings in 3D that does the following:

        Set a render target
        Draw each character as a quadrangle to the render target using an orthographic projection
        Unset the render target
        Draw the render target texture using a perspective projection and a world transform

    My problem is how to deal with strings whose length in characters exceeds the render target dimensions. For example, if I have the string "This is a reallllllllllly long string" and the render target can't accommodate it, it will only capture "This is a realllll". The render target (and its size) could be recreated each frame, but wouldn't that be far too costly? (A small sketch of one alternative follows this entry.)

    Read the article
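
    A small sketch of one way to avoid resizing every frame, assuming XNA/MonoGame-style types (SpriteFont, RenderTarget2D); the caching policy and names are illustrative, not from the article. Measure the string first with SpriteFont.MeasureString, and only allocate a new render target when the required size outgrows the cached one:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        class TextTargetCache
        {
            RenderTarget2D target;

            // Returns a target big enough for 'text'; reuses the old one when possible.
            public RenderTarget2D GetTarget(GraphicsDevice device, SpriteFont font, string text)
            {
                Vector2 size = font.MeasureString(text);
                int w = (int)size.X + 1;
                int h = (int)size.Y + 1;

                if (target == null || target.Width < w || target.Height < h)
                {
                    target?.Dispose();
                    // Grow past the requested size so small changes do not reallocate.
                    target = new RenderTarget2D(device, w * 2, h * 2);
                }
                return target;
            }
        }

    The other common option is to keep a fixed-size target and scale the orthographic projection down so the measured string always fits, trading resolution for a constant memory footprint.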

  • Implementing AS 2.0 with FSM?

    - by Up2u
    I have seen many references on AI and FSMs, like http://www.richardlord.net/blog/fini...n-actionscript, and sadly I still can't understand the point of an FSM in AS 2.0. Is it a must to create a class for each state? I have a game project with an AI, and the AI has 3 states: distance check, chase target, and hit the target. The game I'm creating is an FPS played with the mouse. What I mean is: I have already created the AI (and it works), but I want to convert it to the FSM approach. I created a CheckDistanceState() function, in which I lock onto a target from an array sorted by nearest distance; it then triggers the ChaseState() function, and in ChaseState() I call the Hit() function to destroy the enemy. I call these 3 functions from AI_cursor.onEnterFrame (it's an FPS game that only has a cursor on the stage). Is there any chance to implement an FSM in my code without creating a class? From what I've read, creating a class means creating external code outside of the frame (I'm used to coding in the frame), and I still don't understand it. Sorry if my explanation isn't clear... (A class-free sketch of the pattern follows this entry.)

    Read the article
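
    A finite state machine does not require one class per state; the minimal version is a state variable plus a switch in the per-frame update. The sketch below shows that shape in C# (not AS2; every name is invented for illustration). The same structure can live entirely inside a frame script as a string variable and if/else branches in onEnterFrame.

        using System;

        enum AiState { CheckDistance, Chase, Hit }

        class EnemyAi
        {
            AiState state = AiState.CheckDistance;

            // Called once per frame, like the onEnterFrame handler in the question.
            public void Update(float distanceToTarget)
            {
                switch (state)
                {
                    case AiState.CheckDistance:
                        // Scan for the nearest target, then start chasing it.
                        if (distanceToTarget < 200f) state = AiState.Chase;
                        break;
                    case AiState.Chase:
                        // Move toward the target; switch state on the way in or out.
                        if (distanceToTarget < 10f) state = AiState.Hit;
                        else if (distanceToTarget > 300f) state = AiState.CheckDistance;
                        break;
                    case AiState.Hit:
                        // Apply damage, then go back to scanning.
                        state = AiState.CheckDistance;
                        break;
                }
            }
        }

    The one-class-per-state designs in the linked article buy cleaner per-state enter/exit code, but they are an organisational choice, not a requirement of the pattern.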

  • Scene transitions

    - by Mars
    It's my first time working with actual scenes/states, aka DrawableGameComponents, which work separately from one another. I'm now wondering what's the best way to make transitions between them, and how to affect them from other scenes. Let's say I wanted to "push" one screen to the right, with another one coming in at the same time. Naturally I'd have to keep drawing both until the transition is complete, and I'd have to adjust the coordinates I'm drawing at while doing it. Is there a way around specifically handling this special case in every single scene? Or if I wanted to fade one into the other: basically the question stays the same, how would you do that without having to handle it in every single scene? While writing this I'm realizing it will be the same thing for all kinds of transitions. Maybe a central Draw method in the manager could be a solution, where parameters and effects are applied when necessary. But this wouldn't work if objects that are drawn have their own method and aren't drawn within the scene, or if an effect has to be applied to the whole scene. That means maybe scenes have to be drawn to their own render target? That way one call to the base class after the normal drawing could be enough to apply the effects while drawing it to the main render target. But I once heard there are problems when switching from target to target, back and forth. So is that even a viable option? As you can see, I have some basic ideas how it might work... but nothing specific. I'd like to learn what's the common way to achieve such things, a general way to apply all kinds of transitions. (A render-target sketch follows this entry.)

    Read the article
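
    A minimal sketch of the render-target idea described above, in XNA-style C# (the manager shape, parameter names and the particular slide/fade are illustrative assumptions): each scene keeps drawing itself into its own RenderTarget2D exactly as it would draw to the screen, and only the manager knows a transition is happening when it composites the two textures.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        class SceneTransitionManager
        {
            // 'progress' runs from 0 to 1 over the course of the transition.
            public void DrawTransition(
                GraphicsDevice device, SpriteBatch batch,
                RenderTarget2D outgoing, RenderTarget2D incoming, float progress)
            {
                device.SetRenderTarget(null);      // back to the backbuffer
                device.Clear(Color.Black);

                int w = device.Viewport.Width;
                float slide = progress * w;

                batch.Begin();
                // Outgoing scene slides out to the right and fades as it goes.
                batch.Draw(outgoing, new Vector2(slide, 0), Color.White * (1f - progress));
                // Incoming scene slides in from the left.
                batch.Draw(incoming, new Vector2(slide - w, 0), Color.White);
                batch.End();
            }
        }

    Switching render targets a couple of times per frame is normal; the XNA-specific caveat is that a target's contents may be discarded when it is unset unless it was created with RenderTargetUsage.PreserveContents, so each scene should fully redraw its target every frame, which this scheme does anyway.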

  • Drawing flaming letters in 3D with OpenGL ES 2.0

    - by Chiquis
    I am a bit confused about how to achieve this. What I want is to "draw with flames". I have achieved this with textures successfully, but now my concern is about doing this with particles to achieve the flaming effect. Am I supposed to create a path along which I should add many particle emitters that will be emitting flame particles? I understand the concept for 2D, but for 3D are the particles always supposed to be facing the user? Something else I'm worried about is the performance hit that will occur by having that many particle emitters, because there can be many letters and drawings at the same time, and each of these elements will have many particle emitters. More detailed explanation: I have a path of points, which is my model. Imagine a dotted letter "S", for example. I want to make the "S" be on fire. The "S" is just an example; it can be a circle, a triangle, a line, pretty much any path described by my set of points. For achieving this fire effect I thought about using particles. So I am using a program called "Particle Designer" to create a fire-style particle emitter. This emitter looks perfect in 2D at the iPhone screen dimensions. So then I thought that I could probably draw an S or any other figure if I place many particle emitters next to each other following the path described. To move from the 2D version to the 3D version I thought about scaling the emitter (with a scale matrix multiplication in its model matrix) and then moving it to a point in my 3D world. I did this and it works. So now I have 1 particle emitter in the 3D world. My question is: is this how you would achieve a flaming letter? Is this too inefficient if I expect to have many flaming paths in my world? Am I supposed to rotate the particle's quad so that it's always looking at the user? (The last one is because I noticed that if you look at it from the side the particles start to flatten out.) (A note on billboarding follows this entry.)

    Read the article
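
    On the last sub-question, a general note rather than anything from the article: yes, flat particle quads are normally billboarded, i.e. rebuilt (or rotated in the vertex shader) every frame so they face the camera. A common way is to take the camera's right and up vectors in world space (the first two rows or columns of the view matrix's rotation part, depending on convention) and build each quad as center ± right·halfWidth ± up·halfHeight; that is exactly what stops particles flattening out when seen from the side. The performance worry is usually handled by merging every emitter that shares a texture into one vertex buffer and issuing a single draw call, so "many emitters" becomes a bookkeeping concept rather than many draw calls.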

  • Synchronise graphics and logic code

    - by Skeith
    I have a procedural approach to the game loop that runs various classes. It looks like this:

        continue any in-progress animations
        check for user input
        apply AI
        move things
        resolve events such as collisions
        draw it all to screen

    I have seen a lot of posts about how drawing should run separately, as fast as it can, possibly in another thread. My problem is: if the drawing runs as fast as it can, what happens if it tries to draw while I'm still applying the AI or resolving a collision? It could draw the wrong thing on screen. This seems to be a well-established idea, so there must be an explanation for this problem; I just can't get my head around it. The only solution I have is to update the screen so fast that any errors like that get refreshed before we see them, but that sounds hacky. So how does this work, and how would you implement it so that they are in sync but running at different speeds? (A fixed-timestep sketch follows this entry.)

    Read the article
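
    The usual single-threaded answer is a fixed logic timestep with the renderer free-running and interpolating between the last two logic states, so drawing never observes a half-updated world; in a truly multi-threaded setup the same idea becomes "logic publishes an immutable snapshot, the renderer reads the most recent complete snapshot". A compact sketch of the single-threaded version in C# (UpdateLogic and Render are placeholders, not from the article):

        using System.Diagnostics;

        class GameLoop
        {
            const double Step = 1.0 / 60.0;        // fixed logic rate

            public void Run()
            {
                var clock = Stopwatch.StartNew();
                double previous = clock.Elapsed.TotalSeconds;
                double accumulator = 0;

                while (true)
                {
                    double now = clock.Elapsed.TotalSeconds;
                    accumulator += now - previous;
                    previous = now;

                    // Logic runs in whole, fixed steps and is never interrupted mid-update.
                    while (accumulator >= Step)
                    {
                        UpdateLogic(Step);         // input, AI, movement, collisions
                        accumulator -= Step;
                    }

                    // Rendering runs as often as it likes and blends the last two states.
                    double alpha = accumulator / Step;
                    Render(alpha);
                }
            }

            void UpdateLogic(double dt) { /* ... */ }
            void Render(double alpha) { /* ... */ }
        }

    Render(alpha) draws each object at lerp(previousState, currentState, alpha), which is why it can run faster than the logic without ever showing an inconsistent frame.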

  • What is the recommended library for using Lua from C++?

    - by DevilWithin
    I am currently planning how to integrate Lua scripting into my 2D game engine, and I would like to go straight to the most suitable solution for exposing C++ classes and objects. I've read this (if it helps you help): http://lua-users.org/wiki/BindingCodeToLua. If you have a better scripting language to recommend, go for it ;D All help is welcome; I need to pick the best solution to start implementing. Thanks

    Read the article

  • Unable to create suitable graphics device?

    - by kraze
    I've been following the Eye of the Dragon tutorial, which is basically a guide to making a 2D RPG game. I recently finished the tutorial about making pop-up screens in the menu and changed the screen to load as full screen. Whenever I try to load the game it just goes black and my mouse sits there; I cannot get out of it other than with Ctrl+Alt+Del. Once I do that, it says "No suitable graphics card found, unable to create graphics device". I read somewhere about XNA not allowing more than one screen when any one of them is full screen, but it wasn't very informative. Anyone have any idea what's going on and/or how to fix this? In case it helps, this is the code for the graphics device:

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 900;
            graphics.PreferredBackBufferHeight = 768;
            graphics.IsFullScreen = true;
            this.Window.Title = "Eyes of the Dragon";
            Content.RootDirectory = "Content";
        }

    (A note on back-buffer sizes follows this entry.)

    Read the article
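
    One likely culprit, offered as a guess rather than a diagnosis from the article: 900x768 is not a display mode most adapters expose, and in full-screen mode the back buffer has to match a supported mode or device creation can fail. A small sketch of picking a safe size with the standard XNA GraphicsAdapter API (everything else is as in the question):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class Game1 : Game
        {
            GraphicsDeviceManager graphics;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);

                // Use the desktop's current resolution for full screen instead of 900x768.
                DisplayMode mode = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode;
                graphics.PreferredBackBufferWidth = mode.Width;
                graphics.PreferredBackBufferHeight = mode.Height;
                graphics.IsFullScreen = true;

                Window.Title = "Eyes of the Dragon";
                Content.RootDirectory = "Content";
            }
        }

    Running windowed at 900x768, or enumerating GraphicsAdapter.DefaultAdapter.SupportedDisplayModes and picking the closest match, are the other common ways out.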

  • Generating random tunnels

    - by IVlad
    What methods could we use to generate a random tunnel, similar to the one in this classic helicopter game? Other than being smooth, navigable, and as natural-looking as possible (not too symmetric but not overly distorted either), it should also:

        Most importantly, be infinite and allow me to control its thickness over time (make it narrower or wider as I see fit, when I see fit);
        Ideally, be efficiently generated with smooth curves, not rectangles as in the above game;
        Let me know its bounds in advance, so I can detect collisions and generate powerups inside the tunnel;
        Any other properties that give more control over it or offer optimization possibilities are welcome.

    Note: I'm not asking which is best or what that game uses, which could spark extended discussion and would be subjective; I'm just asking for some methods that others know about, have used before, or even think might work. That is all; I can take it from there. Also asked on Stack Overflow, where someone suggested I should ask here too. I think it fits in both places, since it's as much an algorithm question as it is a gamedev question, IMO. (A value-noise sketch follows this entry.)

    Read the article
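
    One family of methods, sketched rather than prescribed (it is not necessarily what the original game uses): drive the tunnel's centre height and half-width from smooth 1D value noise sampled at the tunnel's x coordinate. That makes the tunnel infinite and deterministic, lets the thickness be steered over time by biasing the half-width term, and gives exact bounds at any x for collision tests and powerup placement. All constants and the cosine smoothing below are illustrative.

        using System;

        class TunnelGenerator
        {
            readonly int seed;
            public TunnelGenerator(int seed) { this.seed = seed; }

            // Deterministic pseudo-random value in [0,1) for integer coordinate i.
            double Hash(int i)
            {
                unchecked
                {
                    uint h = (uint)(i * 374761393 + seed * 668265263);
                    h = (h ^ (h >> 13)) * 1274126177u;
                    return (h ^ (h >> 16)) / 4294967296.0;
                }
            }

            // Smooth 1D value noise: cosine-interpolate between hashed lattice values.
            double Noise(double x)
            {
                int i = (int)Math.Floor(x);
                double t = x - i;
                double s = (1 - Math.Cos(t * Math.PI)) * 0.5;
                return Hash(i) * (1 - s) + Hash(i + 1) * s;
            }

            // Tunnel bounds at world coordinate x: smooth ceiling and floor heights.
            public (double ceiling, double floor) BoundsAt(double x)
            {
                double centre = 300 + (Noise(x * 0.002) - 0.5) * 400;     // slow vertical drift
                double halfWidth = 60 + Noise(1000 + x * 0.001) * 80;     // thickness over "time"
                return (centre - halfWidth, centre + halfWidth);
            }
        }

    Summing two or three noise terms at different frequencies makes the walls look more natural, and replacing the halfWidth line with any function of x (or of elapsed time) gives direct control over narrowing and widening.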

  • When dealing with a static game board, what are some methods to make it more interesting?

    - by Ólafur Waage
    Let's say you have a game board that you look at. It does not move, but there is some action going on: for example chess, checkers, or solitaire. The game I'm working on is not one of these, but they're a good reference. What are some methods you can apply to the game or its design that increase its appeal to the user? Of course you can make it prettier, but what are some other methods you can use? For example: visual cues, game design changes, user interface arrangement, etc.

    Read the article

  • Yet another frustum culling question

    - by Christian Frantz
    This one is kinda specific. If I'm to implement frustum culling in my game, each one of my cubes would need a bounding sphere. My first question is: can I make the sphere so close to the edge of the cube that it's still easily clickable for destroying and building? Frustum culling is easily done in XNA, as I've recently learned; I just need to figure out where to place the culling code. I'm guessing in the method that draws all my cubes, but I could be wrong. My camera class currently implements a bounding frustum, which is set in the update method like so:

        frustum.Matrix = (view * proj);

    Simple enough, as I can call that when I have a camera object in my class. This works for now, as I only have a camera in my main game class. The problem comes when I decide to move my camera to my player class, but I can worry about that later.

        ContainmentType CurrentContainmentType = ContainmentType.Disjoint;
        CurrentContainmentType = CamerasFrustrum.Contains(cubes.CollisionSphere);

    Can it really be as easy as adding those two lines to my foreach loop in my draw method? Or am I missing something bigger here? UPDATE: I have added the lines to my draw method and it works great!! So great, in fact, that just moving a little bit removes the whole map. Many factors could have caused this, so I'll try to break it down.

        cubeBoundingSphere = new BoundingSphere(cubePosition, 0.5f);

    This is in my cube constructor. cubePosition is stored in an array. The vertices that define my cube are factors of 1, i.e. (1,0,1), so the radius should be 0.5. At least I think it should. The spheres are created every time a cube is created, of course.

        ContainmentType CurrentContainmentType = ContainmentType.Disjoint;
        foreach (Cube block in cube.cubes)
        {
            CurrentContainmentType = cam.frustum.Contains(cube.cubeBoundingSphere);
            /// more code here
            if (CurrentContainmentType != ContainmentType.Disjoint)
            {
                cube.Draw(effect);
            }
        }

    This is within my draw method. Now I know this works, because the map disappears; it's just working wrong. Any idea what I'm doing wrong? (A guess at the bug follows this entry.)

    Read the article
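
    A guess at the most likely bug, based only on the snippet above rather than on the article: the loop iterates over block but then tests and draws cube, so every cube is culled against one and the same bounding sphere, and as soon as that single sphere leaves the frustum the whole map disappears. The corrected shape, using the question's own names (so this is a fragment, not a standalone program):

        foreach (Cube block in cube.cubes)
        {
            // Test each cube against its *own* sphere, and draw that same cube.
            ContainmentType containment = cam.frustum.Contains(block.cubeBoundingSphere);
            if (containment != ContainmentType.Disjoint)
            {
                block.Draw(effect);
            }
        }

    Also worth checking: if cubePosition is a cube's corner rather than its centre, a radius of 0.5 does not enclose the whole cube; for a unit cube the safe sphere is centred on the cube's centre with radius sqrt(3)/2, roughly 0.87, which is still small enough not to interfere with picking.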

  • World to Pixel Transformation

    - by D00d
    My objects have a location in world coordinates (basically 1.0f is a meter). If I simply draw my objects using their world coordinates, each meter will correspond to a pixel. Obviously that's not what I want. Now, I don't want to have to apply a transformation to each and every object's position when I draw them. As I happen to be using XNA, and SpriteBatch allows a Matrix to be passed as an argument to its Begin method, I was wondering if there is a way to pass the world-to-pixel transformation in there. Any suggestions? So far Matrix.CreateScale(new Vector3(zoom, zoom, 1)) puts the objects in their proper spots, but it also scales up the sprites. Is there a way to transform the position without enlarging the sprite? Thanks. (A small sketch follows this entry.)

    Read the article
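
    Two common ways around this, sketched in XNA-style C# (the zoom value, camera variable and helper names are illustrative): the matrix passed to SpriteBatch.Begin inevitably scales everything the batch draws, so either counter-scale each sprite by 1/zoom, or skip the matrix and convert positions with a tiny helper at draw time.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class WorldDraw
        {
            // Option A: keep the scale matrix in Begin(), but cancel it per sprite
            // so the texture is drawn at its native pixel size.
            public static void DrawCounterScaled(
                SpriteBatch batch, Texture2D tex, Vector2 worldPos, float zoom)
            {
                batch.Draw(tex, worldPos, null, Color.White, 0f, Vector2.Zero,
                           1f / zoom, SpriteEffects.None, 0f);
            }

            // Option B: no matrix at all; convert world metres to pixels when drawing.
            public static Vector2 WorldToScreen(Vector2 worldPos, Vector2 cameraPos, float pixelsPerMeter)
            {
                return (worldPos - cameraPos) * pixelsPerMeter;
            }
        }

    Option A keeps the convenience of passing world positions straight to SpriteBatch; option B keeps sprites completely untouched and tends to be easier to reason about once a camera is involved.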

  • XNA WP7 Texture memory and ContentManager

    - by jlongstreet
    I'm trying to get my WP7 XNA game's memory under control and under the 90 MB limit for submission. One target I identified was UI textures, especially fullscreen ones, that would never get unloaded, so I moved UI texture loads to their own ContentManager so I can unload them. However, this doesn't seem to affect the value of Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"), so it doesn't look like the memory is actually being released. Example: splash screens. In Game.LoadContent():

        Application.Instance.SetContentManager("UI"); // set the current content manager
        for (int i = 0; i < mSplashTextures.Length; ++i)
        {
            // load the splash textures
            mSplashTextures[i] = Application.Instance.Content.Load<Texture2D>(msSplashTextureNames[i]);
        }
        // set the content manager back to the global one
        Application.Instance.SetContentManager("Global");

    When the initial load is done and the title screen starts, I do:

        Application.Instance.GetContentManager("UI").Unload();

    The three textures take about 6.5 MB of memory. Before unloading the UI ContentManager, I see that ApplicationCurrentMemoryUsage is at 34.29 MB. After unloading the ContentManager (and doing a full GC.Collect()), it's still at 34.29 MB. But after that, I load another fullscreen texture (for the title screen background) and memory usage still doesn't change. Could it be keeping the memory for these textures allocated and reusing it? Edit: a very simple test:

        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);
            PrintMemUsage("Before texture load: ");
            // TODO: use this.Content to load your game content here
            red = this.Content.Load<Texture2D>("Untitled");
            PrintMemUsage("After texture load: ");
        }

        private void PrintMemUsage(string tag)
        {
            Debug.WriteLine(tag + Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"));
        }

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            // TODO: Add your update logic here
            if (count++ == 100)
            {
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("Before CM unload: ");
                this.Content.Dispose();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("After CM unload: ");
                red = null;
                spriteBatch.Dispose();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("After SpriteBatch Dispose(): ");
            }

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            // TODO: Add your drawing code here
            if (red != null)
            {
                spriteBatch.Begin();
                spriteBatch.Draw(red, new Vector2(0, 0), Color.White);
                spriteBatch.End();
            }

            base.Draw(gameTime);
        }

    This prints something like (it changes every time):

        Before texture load: 7532544
        After texture load: 10727424
        Before CM unload: 9875456
        After CM unload: 9953280
        After SpriteBatch Dispose(): 9953280

    Read the article

  • How do I create a big multiplayer world in UDK?

    - by Dorpe
    I want to create a big multiplayer world in UDK and I'm having a few difficulties. I created the biggest terrain possible, but then any terrain-related action I do takes forever. However, I've seen videos of people making terrain of the same size and working without a problem. My PC is strong enough, so maybe someone can tell me what I'm doing wrong. I want to make it even bigger than the biggest terrain size, so I was thinking of doing level streaming, but then I read that streaming works server-side, which means that if I have a player on every terrain, all terrains will still be loaded; I want to save as much memory as possible so it will work well online. Thanks for any help you can give.

    Read the article

  • How to offset particles from point of origin

    - by Sun
    Hi, I'm having trouble offsetting particles from a point of origin. I want my particles to spread out only after a certain radius from the point of origin. For example, this is what I have right now: all particles are emitted from a point of origin. What I want is this: particles are offset from the point of origin by some amount, i.e. they start after the circle. What is the best way to achieve this? At the moment, I have the point of origin, the position of each particle and its rotation angle. Sorry for the poor illustrations. Edit: I was mistaken; when a particle is created, I have only the point of origin. After the particle has moved to a new location I am able to calculate its rotation in the update method using atan2(). This is how I create/manage particles: a new particle is created at the enemy ship's death location; for every new particle added to the list, Update and Draw are called to update its position, calculate the new angle and draw it. (A small sketch follows this entry.)

    Read the article
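
    A minimal sketch of one way to do this (plain C# with System.Numerics; the random direction and the names are illustrative): choose the particle's outward direction when it is spawned, place it at origin + direction * startRadius instead of exactly at the origin, and keep the direction so the particle keeps moving outward from there.

        using System;
        using System.Numerics;

        static class ParticleSpawner
        {
            static readonly Random rng = new Random();

            // Spawn position and outward direction for a particle that should start
            // 'startRadius' away from the emitter instead of right on top of it.
            public static (Vector2 position, Vector2 direction) Spawn(Vector2 origin, float startRadius)
            {
                double angle = rng.NextDouble() * Math.PI * 2;
                var direction = new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle));
                return (origin + direction * startRadius, direction);
            }
        }

    Because the direction is fixed at spawn time, the rotation no longer needs to be recovered with atan2 after the first update; it is simply the angle rolled here.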

  • XNA When to call LoadContent

    - by Peteyslatts
    I have an enum in my game that denotes the game state, i.e. MainMenu, InGame, GameOver, Exit, and I was wondering if it would be advisable to add a new one for PrepGame, in which the game creates viewports for however many players there are, creates the battlefield, etc. I feel like this is a good idea except for one thing: should I make a call back to LoadContent() in this state? I could just put a switch statement on my currentGameState in LoadContent: if it equals PrepGame, load things like the skybox, ship models, textures, HUD graphics, etc. Or is it a better idea to create an asset manager class in the first call to LoadContent() and load everything then? I feel like both approaches have different benefits: a faster start but more loading later, versus a slower initial load, but then all my objects reference the same variables, so I only have to load each asset once. Any help is greatly appreciated. Thanks, Peter (A per-state ContentManager sketch follows this entry.)

    Read the article
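
    A third option worth knowing about, sketched rather than taken from the tutorial: XNA allows additional ContentManager instances, so each game state can own one, load what it needs when the state is entered, and call Unload() when it is left. The state names and structure below are illustrative.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        enum GameState { MainMenu, PrepGame, InGame, GameOver, Exit }

        class StateContent
        {
            readonly Dictionary<GameState, ContentManager> managers =
                new Dictionary<GameState, ContentManager>();
            readonly Game game;

            public StateContent(Game game) { this.game = game; }

            // Get (or lazily create) the ContentManager belonging to one state.
            public ContentManager For(GameState state)
            {
                if (!managers.TryGetValue(state, out ContentManager cm))
                {
                    cm = new ContentManager(game.Services, "Content");
                    managers[state] = cm;
                }
                return cm;
            }

            // Example: PrepGame loads its own assets...
            public Texture2D LoadSkybox() => For(GameState.PrepGame).Load<Texture2D>("skybox");

            // ...and everything it loaded is released in one call when the state ends.
            public void UnloadState(GameState state)
            {
                if (managers.TryGetValue(state, out ContentManager cm)) cm.Unload();
            }
        }

    This keeps the "load once, share the reference" behaviour inside a state while still letting per-state content be dropped, a middle ground between the two approaches in the question.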
