Search Results

Search found 25518 results on 1021 pages for 'iterative development'.

Page 560/1021

  • How can I plot a radius of all reachable points with pathfinding for a Mob (XNA)?

    - by PugWrath
    I am designing a tactical turn-based game. The maps are 2D, but have varying level layers and blocking objects/terrain. I'm looking for a pathfinding algorithm that will let me show a filled shape covering every pixel a mob can reach, given the mob's maximum movement distance in pixels. Any thoughts on this, or do I just need to write a good pathfinding algorithm and use it to find the cutoff points in every direction in which an obstacle exists?
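
    A common way to get exactly this shape is a uniform-cost flood fill (Dijkstra) over the walkable grid, keeping every cell whose path cost fits the movement budget; the boundary of that set is the outline to draw. A C# sketch, assuming the map supplies a walkability test and per-step costs (PriorityQueue is .NET 6+; any binary heap works in its place):

        // Sketch: Dijkstra flood fill returning every cell reachable within
        // moveBudget. isWalkable and stepCost are assumed to come from the map.
        public static HashSet<Point> ReachableCells(Point start, float moveBudget,
            Func<Point, bool> isWalkable, Func<Point, Point, float> stepCost)
        {
            var best = new Dictionary<Point, float> { [start] = 0f };
            var open = new PriorityQueue<Point, float>();
            open.Enqueue(start, 0f);
            var dirs = new[] { new Point(1, 0), new Point(-1, 0), new Point(0, 1), new Point(0, -1) };
            while (open.Count > 0)
            {
                var cell = open.Dequeue();
                foreach (var d in dirs)
                {
                    var next = new Point(cell.X + d.X, cell.Y + d.Y);
                    if (!isWalkable(next)) continue;
                    float cost = best[cell] + stepCost(cell, next);
                    if (cost > moveBudget) continue;                           // outside the radius
                    if (best.TryGetValue(next, out float old) && old <= cost) continue;
                    best[next] = cost;                                         // found a cheaper path
                    open.Enqueue(next, cost);
                }
            }
            return new HashSet<Point>(best.Keys);   // draw the outline of this set
        }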

    Read the article

  • Targeting a vehicle with complex movement?

    - by e100
    Targeting a vehicle with known constant velocity is simple, and a hit is guaranteed; imprecise AI can be modeled by adding a small error factor. But how would one go about targeting a vehicle whose movements are more complex, perhaps because it is evading the AI or another game object? I've been thinking about how I'd do it myself in an FPS (in which bullets have finite speed), and I think there might need to be at least a couple of targeting modes based on the target's movement over the previous second or so: if it's near-linear (peak acceleration within a certain range), target with the linear model; if it's highly irregular (perhaps the size of the bounding box of recent positions could be used?), target an average position. For now, assume 2D space, a stationary AI, and a projectile that moves linearly.
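
    For the near-linear mode, the classic closed form is first-order intercept: solve |d + v*t| = s*t for the flight time t, which is a quadratic. A C# sketch (Vector2 as in XNA; for the irregular mode, feed it an averaged velocity instead of the instantaneous one):

        // Sketch: time for a projectile of speed s to intercept a target moving
        // linearly. Returns null when no positive-time solution exists.
        public static float? InterceptTime(Vector2 shooter, Vector2 target,
            Vector2 targetVel, float s)
        {
            Vector2 d = target - shooter;
            float a = Vector2.Dot(targetVel, targetVel) - s * s;
            float b = 2f * Vector2.Dot(d, targetVel);
            float c = Vector2.Dot(d, d);
            if (Math.Abs(a) < 1e-6f)                       // speeds nearly equal: linear case
                return b < 0f ? -c / b : (float?)null;
            float disc = b * b - 4f * a * c;
            if (disc < 0f) return null;                    // target outruns the projectile
            float r = (float)Math.Sqrt(disc);
            float t1 = (-b - r) / (2f * a), t2 = (-b + r) / (2f * a);
            float t = Math.Min(t1, t2) > 0f ? Math.Min(t1, t2) : Math.Max(t1, t2);
            return t > 0f ? t : (float?)null;
        }
        // Aim at: target + targetVel * t. Add a small error to t or to the aim
        // point to model imprecise AI.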

    Read the article

  • How to Use Text in Unity3d

    - by ZiG-ZaG
    How can I create text in Unity3D? I use "3D Text", but it always renders on top of everything. Can you suggest anything? I'm making a 2D game, so it doesn't necessarily have to be 3D text. Edit: because the game is 2D, my scene is full of planes in front of the camera, and I want my text to sit over one of the planes and go behind it when the plane moves in front. But when I use "3D Text", it is always drawn in front of everything.
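
    A hedged workaround sketch, not a definitive fix: 3D Text renders in the transparent queue, so its order relative to other transparent objects (such as sprite planes) can be biased with Material.renderQueue, a real Unity property; the queue value below is an assumption to tune:

        // Sketch: bias the 3D Text's draw order within the transparent queue.
        // 3000 is Unity's default transparent queue; 2900 is an assumed value
        // that makes the text draw before (and so behind) the planes.
        var textRenderer = textObject.GetComponent<MeshRenderer>();
        textRenderer.material.renderQueue = 2900;

    If the planes are opaque rather than transparent, the commonly suggested deeper fix is a copy of the font shader that performs normal depth testing.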

    Read the article

  • (Android) How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture through a framebuffer using OpenGL ES 1.1 on Android (Java). Afterwards I want to overlay this texture full-screen over my game. In theory this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer:

        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];
        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer
        gl.glGenTextures(1, renderTex, 0);           // generate texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
        texBuffer = ByteBuffer.allocateDirect(buf.length * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0,
                        GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);
        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES,
                                        GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);

    Before I draw, I do this:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);
        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
            GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);
        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
            GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);

    I set texW = 1024 and texH = 512. I then render this texture full-screen, with a light mask (size 200x200) placed at (0, 0) and at (texW/2, texH/2). It seems the coordinate system doesn't start at (0,0), as that light overlaps the screen edge, and the images are not drawn as squares (my light-cone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
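
    One thing the setup code never does is resize the viewport: glViewport stays at the screen's dimensions, so drawing into the 1024x512 target uses a skewed coordinate system, which matches the stretched, offset symptoms. A hedged sketch of the usual order, in OpenTK-flavoured C# (on Android the equivalent calls are gl.glViewport and glBindFramebufferOES):

        // Sketch: match the viewport to the offscreen target while it is bound,
        // then restore it. fb, texW, texH, screenW, screenH as in the question.
        GL.BindFramebuffer(FramebufferTarget.Framebuffer, fb[0]);
        GL.Viewport(0, 0, texW, texH);         // coordinate system of the texture
        DrawScene();                           // hypothetical render call
        GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
        GL.Viewport(0, 0, screenW, screenH);   // back to the screen's system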

    Read the article

  • Complex, yet simple crafting system model

    - by KatShot
    I'm working on an arcade shooter/slasher whose main logline is "Kick'em with everything you want". There aren't many enemies in the GDD; the main focus is on tons of weapons and gadgets to cause mayhem. To get a weapon you need to craft it, and right now the crafting system is simple:

        1) You have three slots for weapon parts (A, B, C).
        2) You collect miscellaneous weapon parts, and once you have at least one part for every
           slot you can craft a weapon (for example, with A1, B1, B2, B3 and C1 you can craft the
           models A1B1C1, A1B2C1 and A1B3C1).

    To me this crafting system is too simple, because weapon parts will just fall from the top of the screen, often enough. That's why I'm thinking about adding more levels to the crafting system, such as resources (collect 10 scrap pieces to make part A1 or C3), etc. My question is: how can I add more complex, yet still simple and transparent, levels to the crafting system? Update: for example, in Minecraft or Terraria the first 5-10 crafting recipes are quite transparent and simple, IMHO, but then it turns into a huge mess to understand how to craft this or that (a fishing rod, say).
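
    One structure that keeps a resource tier transparent is to make everything, parts and weapons alike, a recipe over ingredients, so the player learns a single rule. A minimal C# sketch; all names are placeholders:

        // Sketch: a two-level crafting model. Parts are crafted from resources,
        // weapons from one part per slot; all names are hypothetical.
        public sealed class Recipe
        {
            public string Output;                          // e.g. "PartA1" or "WeaponA1B2C1"
            public Dictionary<string, int> Ingredients;    // e.g. "Scrap" -> 10
        }

        public static bool CanCraft(Recipe r, Dictionary<string, int> inventory) =>
            r.Ingredients.All(i => inventory.GetValueOrDefault(i.Key) >= i.Value);

    A weapon is then just another recipe ("PartA1" -> 1, "PartB2" -> 1, "PartC1" -> 1), so the UI and the logic stay uniform at every tier you add.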

    Read the article

  • Random generation of interesting puzzle levels?

    - by monsterfarm
    I'm making a Sokoban-like game, i.e. there's a grid with some crates on it that you can push, and you have to get the crates onto crosses to win the level (though I'm going to add some extra elements). Are there any general algorithms or reading material I can look at for how to generate interesting (as in, not trivial to solve) levels for this style of game? I'm aware that random level generators exist for Sokoban, but I'm having trouble finding descriptions of the algorithms. I'm interested in making a game where the machine can generate lots of levels for me, sorted by difficulty. I'm even willing to constrain the rules of the game to make level generation easier (e.g. I'll probably limit the grid size to about 7x7). I suspect there are some general approaches here, as I've seen e.g. Traffic Jam-like games (where you have to move blocks around to free some block) with thousands of levels, each with a unique solution. One idea I had was to generate a random map in its final state (i.e. with all crates on their crosses) and then have the computer pull (instead of push) the crates around to create the level. The nice property here is that we know the level is solvable. However, I'd need some heuristics to ensure the level is interesting.
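
    The pull-based idea in the last paragraph is essentially how reverse-mode Sokoban generators are usually described: start from a solved state, apply random pulls (the inverse of pushes), and score the result. A skeleton sketch; Board, TryPull, Crates, Directions and the difficulty heuristic are all assumptions:

        // Sketch: build a level by pulling crates away from their goal squares.
        public static Board GenerateLevel(Board solved, int reverseMoves, Random rng)
        {
            var board = solved.Clone();
            int work = 0;                              // crude difficulty score
            for (int i = 0; i < reverseMoves; i++)
            {
                var crate = board.Crates[rng.Next(board.Crates.Count)];
                var dir = Directions[rng.Next(4)];     // up/down/left/right
                if (board.TryPull(crate, dir))         // a pull is the inverse of a push
                    work++;
            }
            board.Difficulty = work;                   // sort generated levels by this
            return board;                              // the goals stay where the crates began
        }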

    Read the article

  • Unity, Unrealistic Sphere On Inclined Plane

    - by user1086516
    So I am trying to model a ball rolling down an inclined surface in Unity, based on what I observe in real life, but the result is still quite off. In Unity it takes the ball about 3 seconds to travel between two reference points, where in real life it only takes 1 second. The ball doesn't react to the incline as quickly as in real life (even though I have tried giving the ball and surface low or zero friction values), and it doesn't accelerate nearly as fast as it does in real life. What do I do to give the ball more realistic behavior? I have tried playing with mass, physics materials, drag, and angular drag on the ball and surface, but it doesn't seem to help.
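
    One cause worth ruling out, offered as a hedged sketch rather than a definitive fix: Unity's physics treats one world unit as one meter, with gravity fixed at -9.81 units/s^2 by default, so a scene modelled larger than life falls and rolls in what looks like slow motion. Physics.gravity and Time.fixedDeltaTime are real Unity settings; the scale factor k is an assumption to adjust for your scene:

        // Sketch: if the scene is built at k times real-world scale, scaling
        // gravity by k restores the real-world look. k = 3 is an assumption.
        void Start()
        {
            const float k = 3f;
            Physics.gravity = new Vector3(0f, -9.81f * k, 0f);
            Time.fixedDeltaTime = 0.01f;   // finer physics steps help rolling contact
        }

    This goes in any MonoBehaviour; the alternative is to rebuild the scene at meter scale and leave gravity alone.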

    Read the article

  • How does the process of updating code with Continuous Integration work?

    - by BleakCabalist
    I want to draw a model of the process of updating source code when Continuous Integration is used. The main issue is that I don't really understand how it works when several programmers are working on various parts of the code at the same time; I can't visualize it. Here's what I know, though I might be wrong: new code is sent to the repository; the Continuous Integration server asks the version control system whether there is new code in the repository; if there is, the CI server runs tests on it; if the tests show problems, the CI server orders the VCS to revert to the last working version of the code and reports this to the programmer; if the tests pass, it compiles the repository code and makes a new build of the game? And a new build is made not after every single change, but at the end of the day, I believe? Are my assumptions correct? If so, does it also work when several programmers update the repository at once? Is this enough to draw a model of the process, or did I miss something? Also, what software would I need for this? Can you give examples of CI server software and VCS software, and whatever else is required? Does the CI software perform the code tests itself, or do I need another tool integrated with it? Is there repository software?
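
    To make the moving parts concrete, here is a deliberately toy sketch of the loop a CI server runs internally; real servers such as Jenkins or TeamCity are configured rather than hand-written, and Vcs, TestRunner, Builder and Notifier below are stand-ins, not real APIs:

        // Toy sketch of the CI loop; Vcs, TestRunner, Builder and Notifier are
        // stand-ins for a real VCS client, test runner and build tool.
        while (true)
        {
            string latest = Vcs.LatestRevision();
            if (latest != lastBuilt)
            {
                var workingCopy = Vcs.Checkout(latest);   // the repository itself is not rewritten
                if (TestRunner.AllPass(workingCopy))
                    Builder.MakeBuild(workingCopy);       // build on green: per commit, or nightly
                else
                    Notifier.Warn(latest);                // a red build is typically flagged, not reverted
                lastBuilt = latest;
            }
            Thread.Sleep(pollInterval);
        }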

    Read the article

  • Transition Player Position

    - by Lycrios
    I'm currently working on a Java MMO with a pretty solid start, but I've come across an issue I need a little help with: player positions, meaning their X/Y on the screen. If PlayerA has a higher FPS (frames per second) than the other players, the result is that PlayerA moves faster than everyone else. I know the reason for this: when the game draws, I just use x++. What would a better method be?
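
    The standard fix is to make movement a function of elapsed time, so speed is expressed in units per second rather than units per frame. A sketch (C#-style; the Java version is structurally identical):

        // Sketch: frame-rate independent movement; speed is in units per second.
        float speed = 120f;

        void Update(float elapsedSeconds)       // time since the previous frame
        {
            x += speed * elapsedSeconds;        // same distance per second at any FPS
        }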

    Read the article

  • If I were to start an Android or iPhone app or game, what program should I use?

    - by John
    I don't really know a lot about programming; the only thing I've done is use code with GameMaker, but I've read that it is too basic and can't be used for iPhone or Android. Is there anything free that I can use to make games for those platforms? If not, any suggestions for engines or anything else? I was also wondering about Unity, for example: is that a good investment for making games?

    Read the article

  • Why do I have to divide the origin of a quad by 4 instead of 2?

    - by vinzBad
    I'm currently transitioning from C#/XNA to C#/OpenTK, but I'm getting stuck on the basics. I have this Sprite class:

        public static bool EnableDebugDraw = true;
        public float X;
        public float Y;
        public float OriginX = 0;
        public float OriginY = 0;
        public float Width = 0.1f;
        public float Height = 0.1f;
        public Color TintColor = Color.Red;
        float _layerDepth = 0f;

        public void Render()
        {
            Vector2[] corners = {
                new Vector2(X - OriginX, Y - OriginY),                  // top left
                new Vector2(X + Width - OriginX, Y - OriginY),          // top right
                new Vector2(X + Width - OriginX, Y + Height - OriginY), // bottom right
                new Vector2(X - OriginX, Y + Height - OriginY)          // bottom left
            };
            GL.Color3(TintColor);
            GL.Begin(BeginMode.Quads);
            {
                for (int i = 0; i < 4; i++)
                    GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth);
            }
            GL.End();
            if (EnableDebugDraw)
            {
                GL.Color3(Color.Violet);
                GL.PointSize(3);
                GL.Begin(BeginMode.Points);
                {
                    for (int i = 0; i < 4; i++)
                        GL.Vertex2(corners[i]);
                }
                GL.End();
                GL.Color3(Color.Green);
                GL.Begin(BeginMode.Points);
                GL.Vertex2(X + OriginX, Y + OriginY);
                GL.End();
            }
        }

    With the following setup I try to set the origin of the quad to its middle:

        _sprite.OriginX = _sprite.Width / 2;
        _sprite.OriginY = _sprite.Height / 2;

    but this appears to set the origin to the upper-right corner of the quad, so I have to use

        _sprite.OriginX = _sprite.Width / 4;
        _sprite.OriginY = _sprite.Height / 4;

    However, this is not the intended behaviour. Could you advise me on how to fix this?
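
    For what it's worth, the corner math and the debug marker disagree: the corners subtract the origin while the green marker adds it, so with OriginX = Width/2 the quad is already centered on (X, Y) and only the marker is off (dividing by 4 just happens to make the marker and the quad's visual center coincide at X + Width/4). A sketch of the consistent version:

        // Sketch: with corners built from (X - OriginX, Y - OriginY), the pivot
        // the origin refers to sits exactly at (X, Y), so draw the marker there:
        GL.Color3(Color.Green);
        GL.Begin(BeginMode.Points);
        GL.Vertex2(X, Y);   // not (X + OriginX, Y + OriginY)
        GL.End();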

    Read the article

  • I need help with 2D collision response (of stacking rotating polygons, with friction and gravity, for a game)

    - by Register Sole
    Hi, I am looking for suggestions on how to write a collision response for game programming purposes (so not a scientific simulation). I am dealing with 2D polygons that rotate, and I want them to be able to stack. I also want friction and gravity. I have a detection mechanism that returns the separating axis, how far the polygons overlap, and up to two contact points. For the response I am currently using an impulse-based method, whose main idea is: find the separating axis, overlap length, and contact point (if there are two, pick a random point between them to simulate an averaged force; I believe there are better ways than this); separate the objects (modifying their positions, taking their masses into account; I do not separate them completely, though, so they stay marked as colliding, which reduces jitter); calculate the normal force from the coefficient of restitution as if there were no friction; calculate friction as if there were no normal force (I also assume the direction of friction stays the same throughout the collision); then apply the two forces (which gives a rather inaccurate result, since each force is calculated as if the other were absent; for non-rotating bodies, though, this method is exact). I am aware this method requires the coefficient of friction to be sufficiently small, due to the assumption that the friction direction stays constant during a collision. The result is visually satisfying while gravity is absent. However, when gravity is present, objects on the ground jitter and drift (even with zero coefficient of restitution)! The same happens with stacked objects, and larger restitution and gravity increase the jittering. I hope you can help me with this. Things I would like to know more about: how to handle a collision with two contact points (how do I end up with an object sitting still on the ground?); how to reduce, and if possible prevent, jitter and drift (do people use the most accurate method possible, or is there a trick to overcome this?); and how to handle multi-object collisions (for example, with stacked objects, how do I check collisions between all of them and keep them all stable every frame so they don't jitter?). A total reformulation of my algorithm is also welcome, as long as it works. Note that I am not making a physics game, so I only need a visually satisfying response (though a realistic response is preferable, if it is not performance-heavy), but jittering, drifting objects on flat ground are not at all acceptable. In addition, I am a physics student, so feel free to talk about impulses and whatever else is needed. Finally, sorry for the long post; I tried to be as concise as I could. Thank you for reading it! EDIT: It seems what I had failed to come up with all this time was treating resting contact as a class of its own, and how to solve it. Currently reading the paper suggested by Jedediah; more suggestions on the topic are welcome :) CASE CLOSED: After reading various papers referenced in that paper, I successfully implemented the simultaneous impulse method (following the original paper by Erin Catto, [Catt05]). Thanks! The paper is wonderful, and the current system is visibly much better than the previous one. I still haven't separated resting contact into its own class, though, which brings me to my next question.
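
    For reference, the simultaneous-impulse direction the EDIT settled on looks roughly like this per contact: a normal impulse with restitution, then a friction impulse clamped to the Coulomb cone. A C# sketch in the spirit of [Catt05]; Body (with InvMass and InvInertia), VelocityAt (point velocity), Cross (2D scalar cross product), Sq (square) and ApplyImpulse are all assumed helpers, and everything is 2D:

        // Sketch of one velocity-level contact impulse: normal impulse with
        // restitution, then friction clamped by the Coulomb cone.
        static void ContactImpulse(Body a, Body b, Vector2 n, Vector2 rA, Vector2 rB,
                                   float restitution, float mu)
        {
            Vector2 relVel = b.VelocityAt(rB) - a.VelocityAt(rA);
            float vn = Vector2.Dot(relVel, n);
            if (vn > 0f) return;                            // already separating

            float kn = a.InvMass + b.InvMass
                     + a.InvInertia * Sq(Cross(rA, n)) + b.InvInertia * Sq(Cross(rB, n));
            float jn = -(1f + restitution) * vn / kn;       // normal impulse magnitude
            ApplyImpulse(a, b, n * jn, rA, rB);

            Vector2 t = new Vector2(-n.Y, n.X);             // contact tangent
            relVel = b.VelocityAt(rB) - a.VelocityAt(rA);   // recompute after the normal impulse
            float kt = a.InvMass + b.InvMass
                     + a.InvInertia * Sq(Cross(rA, t)) + b.InvInertia * Sq(Cross(rB, t));
            float jt = -Vector2.Dot(relVel, t) / kt;
            jt = Math.Clamp(jt, -mu * jn, mu * jn);         // Coulomb friction cone
            ApplyImpulse(a, b, t * jt, rA, rB);
        }

    Running this over both contact points of each pair, iterating a handful of times per frame across all contacts, and clamping the accumulated (not per-iteration) normal impulse to stay non-negative, as Catto describes, is what settles resting and stacked boxes.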

    Read the article

  • Modular spaceship control

    - by SSS
    I am developing a physics-based game with spaceships. A spaceship is constructed from circles connected by joints; some of the circles have engines attached, and each engine can rotate around the center of its circle and create thrust. I want to be able to move the ship in a direction, or rotate it around a point, by setting the rotation and thrust of each of the ship's engines. How can I find the rotation and thrust needed for each engine to achieve this?
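
    Each engine contributes a force F and a torque r x F about the center of mass, so together the engines must satisfy sum(F_i) = desired force and sum(r_i x F_i) = desired torque. A naive C# sketch that splits the force evenly and adds a tangential component for torque; it ignores the net force those tangential components add (they cancel only for symmetric layouts), so treat it as a starting point, with a least-squares solve as the robust upgrade:

        // Naive sketch: equal share of the net force per engine, plus a
        // tangential component sized so the engines jointly supply the torque.
        // Engine.Offset is the engine's position relative to the center of mass.
        static void Allocate(Engine[] engines, Vector2 netForce, float netTorque)
        {
            foreach (var e in engines)
            {
                Vector2 share = netForce / engines.Length;
                Vector2 tangent = new Vector2(-e.Offset.Y, e.Offset.X);  // length |r|
                float r2 = e.Offset.LengthSquared();
                Vector2 spin = r2 > 1e-6f
                    ? tangent * (netTorque / (engines.Length * r2))      // torque share = netTorque / N
                    : Vector2.Zero;
                Vector2 thrust = share + spin;
                e.Rotation = (float)Math.Atan2(thrust.Y, thrust.X);      // point the engine along it
                e.Thrust = thrust.Length();
            }
        }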

    Read the article

  • How to update a mesh position based on a pressed key?

    - by steven166
    I have a mesh loaded from a file, such as a tiger mesh. Initially it is located at position A; if I press the left key it moves to position B, and if I press the left key again it moves from B to C. In other words, the amount I move the mesh should be based on its current position, not on the position at which it was first rendered. I could do this if I had an array of vertices, since I would just update the vertex buffer, but a mesh loaded from a file does not expose an array of vertices, so how do I do it? Can anybody help, please?
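
    One standard approach, sketched here in XNA/MonoGame style (HandleInput and the speed constant are placeholders): keep a position vector that the key presses update, and rebuild the mesh's world matrix from it every frame, rather than editing vertices:

        // Sketch: keep a position, update it on input, and bake it into the
        // world matrix each frame; the vertex buffer is never touched.
        Vector3 position = Vector3.Zero;

        void HandleInput(KeyboardState ks, float dt)
        {
            if (ks.IsKeyDown(Keys.Left))  position.X -= 5f * dt;   // moves relative to where it is now
            if (ks.IsKeyDown(Keys.Right)) position.X += 5f * dt;
        }

        void Draw(Model tiger)
        {
            Matrix world = Matrix.CreateTranslation(position);
            foreach (var mesh in tiger.Meshes)
            {
                foreach (BasicEffect fx in mesh.Effects)
                    fx.World = world;                              // position applied here
                mesh.Draw();
            }
        }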

    Read the article

  • Game with changing logic

    - by rsprat
    I'm planning to develop a puzzle-like mobile game (Android/iOS) with a different logic for each puzzle. Say, for example, one puzzle could be a Rubik's cube and another a ball maze. Many more new puzzles will appear during the life of the game, and I want users to be able to play them. The standard way of managing this would be through application updates: each time a new puzzle or bunch of puzzles appears, release a new update for the app that the user can download. However, I would like to do it more transparently: when a new puzzle appears, its basic info would be displayed in the app menu, and the user could play it just by clicking it. What comes to mind is that the app would automatically download a .dll or .jar and inject it into the application at runtime. Is that even possible? Are there any restrictions from the OS? Is there a better way to solve this? Thanks a lot
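
    On the platform question: Android can load downloaded code (e.g. via DexClassLoader), but Apple's App Store policy forbids executing downloaded native code, so cross-platform games usually ship an interpreter or make puzzles data-driven instead. A hedged C# sketch of the plugin-interface idea for platforms that do allow it; IPuzzle and the loading strategy are assumptions:

        // Sketch: puzzles as plugins behind a fixed interface, loaded by
        // reflection where the platform allows downloaded assemblies.
        public interface IPuzzle
        {
            string Title { get; }
            void Update(float dt);
            bool IsSolved();
        }

        static IPuzzle LoadPuzzle(byte[] downloadedAssembly)
        {
            var asm = System.Reflection.Assembly.Load(downloadedAssembly);
            var type = asm.GetTypes()
                          .First(t => typeof(IPuzzle).IsAssignableFrom(t) && !t.IsAbstract);
            return (IPuzzle)Activator.CreateInstance(type);
        }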

    Read the article

  • Moving in the wrong direction

    - by Will
    Solution (to move a unit forward):

        forward = Quaternion(0, 0, 0, 1)
        rotation.normalize()  # occasionally
        ...
        pos += ((rotation * forward) * rotation.conjugated()).xyz().normalized() * speed

    I think the trouble stemmed from how the Euclid math library does Quaternion * Vector3 multiplication, although I can't see where. Original question: I have a vec3 position, a quaternion for rotation, and a speed. I compute the player position like this:

        rot *= Quaternion().rotate_euler(0., roll_speed, pitch_speed)
        rot.normalize()
        pos += rot.conjugated() * Vector3(0., 0., -speed)

    However, printing pos to the console, I can see that I only ever seem to travel on the x-axis, even though drawing the scene with the rot quaternion gives a proper camera orientation. What am I doing wrong? Here's an example: you start off with rotation being the identity quaternion (w=1, x=0, y=0, z=0). You move forward; the code correctly decrements Z. You then pitch right over to face the other way; if you spin only 175 degrees it goes in the right direction, but you have to spin past 180 degrees, and it doesn't matter which direction you spin, up or down. Your quaternion can then be something like w=0.1, x=0.1, y=0, z=0, and moving forward, you actually move backward?! (I am using the euclid Python module, but its conjugate is the same as every other.) The code can be tried online at http://williame.github.com/ludum_dare_24_evolution/ (the only keys that adjust the speed are W and S; the arrow keys only adjust pitch/roll). At first you can fly OK, but after a bit of weaving around you end up getting sucked towards one of the sides. The code is https://github.com/williame/ludum_dare_24_evolution/blob/cbacf61a7159d2c83a2187af5f2015b2dde28687/tiny1web.py#L102
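
    As a cross-check on the solution above: rotating a vector by a unit quaternion is the sandwich product v' = q v q*, which expands to the vector form below. A C#-style sketch (Vector3/Quaternion are XNA-style value types; the expansion sidesteps disagreements between libraries about multiplication order):

        // Sketch: v' = q * (0, v) * conj(q), expanded into vector form.
        // Assumes q is normalized.
        static Vector3 Rotate(Quaternion q, Vector3 v)
        {
            Vector3 u = new Vector3(q.X, q.Y, q.Z);   // vector part
            float s = q.W;                            // scalar part
            return 2f * Vector3.Dot(u, v) * u
                 + (s * s - Vector3.Dot(u, u)) * v
                 + 2f * s * Vector3.Cross(u, v);
        }
        // forward motion: pos += Rotate(rotation, new Vector3(0, 0, -1)) * speed;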

    Read the article

  • GPU optimization question: pre-computed or procedural?

    - by Jay
    Good morning, I'm learning shader programming and need some general direction. I want to add noise to my laser beam (like this). Which is the best way to handle it? I could pre-compute an image and pass it to the shader, then use the image to modulate the opacity and easily animate the smoke by changing the offset of the texture lookup. Alternatively, I could generate the noise in the shader and use it the same way the texture would be used. Is it generally better to avoid I/O to the graphics card, or the opposite? Thanks!
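
    As a rule of thumb, a small noise texture uploaded once at load time costs nothing per frame, so the bus-traffic concern mostly disappears; procedural noise in the shader trades that memory for ALU work and stays crisp at any resolution. A C# sketch of the precomputed side; the upload call depends on your graphics API:

        // Sketch: precompute a small noise texture once at load time; per-frame
        // cost is then a single texture lookup and no CPU-GPU traffic.
        static byte[] MakeNoise(int size, int seed)
        {
            var rng = new Random(seed);
            var data = new byte[size * size];
            for (int i = 0; i < data.Length; i++)
                data[i] = (byte)rng.Next(256);   // white noise; blur it for smoother smoke
            return data;                         // upload once with your texture API
        }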

    Read the article

  • What's the difference between Pygame's Sound and Music classes?

    - by Southpaw Hare
    What are the key differences between the Sound and Music classes in Pygame? What are the limitations of each? In what situations would one use one or the other? Is there a benefit to using them in an unintuitive way, such as using Sound objects to play music files or vice versa? Are there specific issues with channel limitations, and do one or both risk being dropped from their channel unreliably? What are the risks of playing music as a Sound?

    Read the article

  • How do audio based games such as Audiosurf and Beat Hazard work?

    - by The Communist Duck
    Note: I am not asking how to make a clone of one of these; I am asking how they work. I'm sure everyone's seen the games where you use your own music files (or provided ones) and the game produces levels based on them, such as Audiosurf and Beat Hazard. Here is a video of Audiosurf in action, to show what I mean. If you provide a heavy metal song, you get a completely different set of obstacles, enemies, and game experience than with something like Vivaldi. What interests me is how these games work. I do not know much about audio on the data side, but how do they process the song to understand when it is settling down or speeding up? I guess they could just feed pitch values (assuming those sorts of things exist in audio files) into forming a level, but that wouldn't fully explain it. I'm looking for an explanation, some links to articles about this sort of thing (I'm sure there's a term or terms for it), or even an open-source implementation of this kind of thing ;-) EDIT: After some searching and a little help, I found out about the FFT (Fast Fourier Transform). This may be a step in the right direction, but it is something that does not yet make sense to me or fit with my physics knowledge of waves.
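
    The usual pipeline in these games: decode the track to PCM samples, run an FFT over short windows to get per-band energy, and flag beats/onsets where a band's energy spikes well above its recent average; level geometry is then generated from those events ahead of the playhead. A minimal C# sketch of the FFT-free variant (plain energy-based beat detection); the sample rate and threshold are assumptions:

        // Sketch: energy-based beat detection. A window counts as a "beat" if
        // its energy spikes well above the average of the last ~1 second of
        // windows. samples are mono PCM floats at an assumed 44.1 kHz.
        static List<int> DetectBeats(float[] samples, int windowSize, float threshold)
        {
            var beats = new List<int>();
            var history = new Queue<float>();          // ~1 second of window energies
            int historyLen = 44100 / windowSize;

            for (int start = 0; start + windowSize <= samples.Length; start += windowSize)
            {
                float energy = 0f;
                for (int i = 0; i < windowSize; i++)
                    energy += samples[start + i] * samples[start + i];

                if (history.Count == historyLen)
                {
                    float avg = history.Average();
                    if (energy > threshold * avg)      // e.g. threshold around 1.4
                        beats.Add(start);              // sample index of the beat
                    history.Dequeue();
                }
                history.Enqueue(energy);
            }
            return beats;
        }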

    Read the article

  • Zelda-style top-down RPG. How to store tile and collision data?

    - by Delerat
    I'm looking to build a Zelda: LTTP style top-down RPG. I've read a lot on the subject and am currently going back and forth between a few solutions. I'm using C#, MonoGame, and Tiled. For my tile maps, these are the choices I can see in front of me: (1) store each tile as its own object, each having 3-4 layers, texture/animation, depth, flags, and maybe collision (depending on how I do it). I've read warnings about memory issues going this route, and my biggest map will probably be 160x120 tiles; my average map, however, will be about 40x30. The number of tiles might be cut in half if I decide to double my tile size, which is currently 16x16. This is the most appealing approach for me, as I feel I would know how to save maps, make changes, and separate them into chunks for collision checks. (2) Store the static parts of my tile map in multiple arrays acting as the different layers, then use entities for anything that isn't static. All of the other tile data, such as collisions and depth, would be stored in their own layers as well, I guess? This way just seems messy to me, though. Regardless of which one I choose, I'm also unsure how to plan all of that other tile data. I could write a bunch of code that knows which integer represents which tile and its data, but if I changed a tileset in Tiled and exported it again, all of those integers could potentially change and I'd have to adjust a whole bunch of code. My other issue is how to do collision. I want to at least support angled collision that slides you around the corners of objects, like LTTP does, if not more oddball shapes as well. So do I: (a) store collision as a flag for binary collision (could I get this to support angles?); (b) store collision as an integer, with each number representing a certain angle of collision; or (c) store a list of rectangles or other shapes and do collision that way? Sorry for the large two-part (three-part?) question. I felt these needed to be asked together, as I believe each choice influences the others.
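
    On the memory worry: 160x120 is only 19,200 tiles, so even a few dozen bytes per tile stays around a megabyte; it is unlikely to be the deciding factor. A hedged sketch of option (1), with game data keyed off Tiled properties so re-exports don't shuffle meaning (all type names are placeholders):

        // Sketch: one struct per tile, plus collision as shapes instead of flags.
        // Keying game data off Tiled properties (strings set in the editor) rather
        // than raw tile IDs keeps it stable when the tileset is re-exported.
        public struct Tile
        {
            public ushort[] Layers;    // texture/animation index per layer (3-4)
            public byte Depth;
            public byte Flags;
        }

        public abstract class CollisionShape { }   // rect, right triangle, polygon...

        public sealed class TileMap
        {
            public Tile[,] Tiles;                    // 160x120 of these is tiny
            public List<CollisionShape> Collision;   // from Tiled's object layer,
        }                                            // including angled edges

    Keeping collision as a list of shapes (option c) rather than per-tile flags is what makes the angled, corner-sliding collision straightforward to test against.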

    Read the article

  • How do I convert screen coordinates to the range -1 to 1?

    - by bbdude95
    I'm writing a function that lets me click on my tiles. The origin for my tiles is the center, but the mouse's origin is the top left, so I need a way to transform my mouse coordinates into my tile coordinates. Here is what I already have (but it is not working):

        void mouseClick(int button, int state, int x, int y)
        {
            x -= 400;
            y -= 300;
            float xx = x / 100; // This gets me close, but the number is still too high.
            float yy = y / 100; // It needs to be between -1 and 1.
        }
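
    Two things are biting here: the divisor should be half the screen size (400 and 300, for the 800x600 implied by the offsets), and x / 100 with int x is integer division, which truncates before the float assignment. A sketch:

        // Sketch: map pixels to [-1, 1]. Note the float literals: with int x,
        // "x / 100" truncates to an integer before being stored in a float.
        void mouseClick(int button, int state, int x, int y)
        {
            float xx = x / 400f - 1f;      // 0..800 maps to -1..1
            float yy = 1f - y / 300f;      // 0..600 maps to 1..-1 (flip so +y is up)
        }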

    Read the article

  • How to pause and unpause the animation of a sprite?

    - by user1609578
    My game has a sprite representing a character. When the character picks up an item, the sprite should stop moving for a period of time. I use CCBezier to make the sprite move, like this: sprite->runAction(x). Now I want the sprite to stop its current (movement) action and resume it later. I can make the sprite stop with sprite->stopAction(x), but if I do that, I can't resume the movement. How can I do that?

    Read the article

  • OpenGL : Keeping alpha in a render buffer

    - by Cyan
    In my current task, I need to render a texture into a render buffer in order to work on it there (applying special filters). The result is then treated as a "new texture", which is displayed later. This works fine, except when the texture contains transparent or semi-transparent parts. My current guess is that, within the render buffer, the texture is "merged" with a kind of grey background, which obviously contaminates the R, G, B components of transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi-transparent pixels, whose RGB values are "tainted" by the grey background.
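
    One hedged explanation: with the standard glBlendFunc, blending into the offscreen target also scales the destination alpha by (1 - source alpha), which is what bakes a background colour into transparent texels. A sketch of the usual counter-measure in OpenTK-flavoured C# (the same call exists in plain GL as glBlendFuncSeparate); premultiplied alpha is the other common route:

        // Sketch: blend colour normally but keep destination alpha accumulating,
        // so the offscreen target's alpha channel survives the pass.
        GL.Enable(EnableCap.Blend);
        GL.BlendFuncSeparate(
            BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha,  // colour
            BlendingFactorSrc.One,      BlendingFactorDest.OneMinusSrcAlpha); // alpha
        // Also clear the offscreen target with glClearColor(0, 0, 0, 0),
        // not an opaque grey, before drawing into it.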

    Read the article

  • Elastic Collision Formula in Java

    - by Shijima
    I'm trying to write a Java formula based on this tutorial: 2-D elastic collisions without Trigonometry. I am in the section "Elastic Collisions in 2 Dimensions". Step 1 says: "Next, find the unit vector of n, which we will call un. This is done by dividing by the magnitude of n." My code below builds the normal vector of the two objects (using a simple array to represent it), but I am not really sure what the tutorial means by dividing by the magnitude of n to get un.

        int[] normal = new int[2];
        normal[0] = ball2.x - ball1.x;
        normal[1] = ball2.y - ball1.y;

    Can anyone please explain what un is, and how I can calculate it with my array in Java?
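
    "Dividing by the magnitude" means scaling n to length 1: un = n / |n|, where |n| = sqrt(nx*nx + ny*ny). A sketch (C#-style; the Java version is line-for-line the same, with Math.sqrt). One caveat grounded in the snippet above: an int[] will truncate the division, so use doubles:

        // Sketch: the unit normal. Note the doubles: with the int[] above,
        // the divisions would truncate toward zero.
        double nx = ball2.x - ball1.x, ny = ball2.y - ball1.y;
        double mag = Math.Sqrt(nx * nx + ny * ny);   // |n|, the magnitude (length) of n
        double[] un = { nx / mag, ny / mag };        // same direction as n, length exactly 1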

    Read the article

  • How do I find a unit vector of another in Java?

    - by Shijima
    I'm writing a Java formula based on this tutorial: 2-D elastic collisions without Trigonometry. I am in the section "Elastic Collisions in 2 Dimensions". Part of step 1 says: "Next, find the unit vector of n, which we will call un. This is done by dividing by the magnitude of n." My code below builds the normal vector of the two objects (using a simple array to represent it):

        int[] normal = new int[2];
        normal[0] = ball2.x - ball1.x;
        normal[1] = ball2.y - ball1.y;

    I am unsure what the tutorial means by dividing by the magnitude of n to get un. What is un? How can I calculate it with my Java array?

    Read the article
