Search Results

Search found 25550 results on 1022 pages for 'mere development'.

Page 484/1022 | < Previous Page | 480 481 482 483 484 485 486 487 488 489 490 491  | Next Page >

  • How to shift a vector based on the rotation of another vector?

    - by bpierre
    I’m learning 2D programming, so excuse my approximations, and please, don’t hesitate to correct me. I am just trying to fire a bullet from a player. I’m using HTML canvas (top left origin). Here is a representation of my problem: The black vector represents the position of the player (the grey square). The green vector represents its direction. The red disc represents the target. The red vector represents the direction of a bullet, which will move in the direction of the target (red and dotted line). The blue cross represents the point from where I really want to fire the bullet (and the blue and dotted line represents its movement). This is how I draw the player (this is the player object. Position, direction and dimensions are 2D vectors): ctx.save(); ctx.translate(this.position.x, this.position.y); ctx.rotate(this.direction.getAngle()); ctx.drawImage(this.image, Math.round(-this.dimensions.x/2), Math.round(-this.dimensions.y/2), this.dimensions.x, this.dimensions.y); ctx.restore(); This is how I instantiate a new bullet: var bulletPosition = playerPosition.clone(); // Copy of the player position var bulletDirection = Vector2D.substract(targetPosition, playerPosition).normalize(); // Difference between the player and the target, normalized new Bullet(bulletPosition, bulletDirection); This is how I move the bullet (this is the bullet object): var speed = 5; this.position.add(Vector2D.multiply(this.direction, speed)); And this is how I draw the bullet (this is the bullet object): ctx.save(); ctx.translate(this.position.x, this.position.y); ctx.rotate(this.direction.getAngle()); ctx.fillRect(0, 0, 3, 3); ctx.restore(); How can I change the direction and position vectors of the bullet to ensure it is on the blue dotted line? I think I should represent the shift with a vector, but I can’t see how to use it.
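
    One common approach (not taken from the question's own code) is to store the muzzle point as an offset in the player's local space and rotate that offset by the player's angle whenever a bullet is spawned; the bullet keeps its direction toward the target, only its start position shifts. A minimal, framework-agnostic Python sketch; the offset values and function names are illustrative assumptions:

        import math

        def rotate(x, y, angle):
            """Rotate the vector (x, y) by `angle` radians around the origin."""
            c, s = math.cos(angle), math.sin(angle)
            return (x * c - y * s, x * s + y * c)

        def bullet_spawn(player_pos, player_angle, muzzle_offset=(10, -6)):
            """World-space point marked by the blue cross: the local muzzle
            offset rotated by the player's angle, added to the player position."""
            ox, oy = rotate(muzzle_offset[0], muzzle_offset[1], player_angle)
            return (player_pos[0] + ox, player_pos[1] + oy)

        # The bullet's direction stays aimed at the target; only its start
        # position is shifted to the rotated muzzle point:
        print(bullet_spawn((100.0, 80.0), math.radians(30)))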

    Read the article

  • Is there a good book or articles to learn about 2D Game Design and Effects?

    - by user28015
    I am not looking for a read on how to develop games and how to implement them. I am looking for a general overview of possible effects in 2D games and of the general design of modern 2D gaming. I have programmed several smaller games over the years and also read books like "Game Programming Golden Rules" by Martin Brownlow, so I know how to implement games. What I am looking for are 2 things: Finishing touches such as effects like explosions, particles etc. Not how to make them, but how to design them so they look right and cool. How to make a 2D game feel "more right" so that users get a satisfying gaming experience. I have played a lot of 2D games but I could use some more advice.

    Read the article

  • How do I find the angle required to point to another object?

    - by Ginamin
    I am making an air combat game, where you can fly a ship in a 3D space. There is an opponent that flies around as well. When the opponent is not on screen, I want to display an arrow pointing in the direction the user should turn, as such: So, I took the camera location and the opponent location and did this: double newDirection = atan2(activeCamera.location.y-ship_wrap.location.y, activeCamera.location.x-ship_wrap.location.x); After which, I get the position on the circumference of a circle which surrounds my crosshairs, like such: trackingArrow.position = point((60*sin(angle)+240),60*cos(angle)+160); It all works fine, except it's the wrong angle! I assume my calculation for the new direction is incorrect. Can anyone help?
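
    For reference, the usual 2D convention is atan2(dy, dx) with the components taken as target minus origin, and cos paired with x and sin paired with y when projecting back onto the circle. A small Python sketch under those assumptions (coordinates are illustrative, not taken from the game):

        import math

        def angle_to(origin, target):
            """Planar angle from origin to target, measured from the +x axis.
            Note the order: target minus origin, and atan2(dy, dx)."""
            return math.atan2(target[1] - origin[1], target[0] - origin[0])

        def arrow_position(center, angle, radius=60):
            """Point on the circle of `radius` around `center`;
            cos pairs with x and sin pairs with y."""
            return (center[0] + radius * math.cos(angle),
                    center[1] + radius * math.sin(angle))

        a = angle_to((240, 160), (900, -300))   # crosshair centre -> opponent
        print(arrow_position((240, 160), a))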

    Read the article

  • How do the rectangle bounds (x, y, width, height) in libgdx work?

    - by JG22
    I can't work out how to use the rectangle bounds in libgdx. I am currently using the superJumper example, which has 2 or 3 examples such as pauseBounds = new Rectangle(320 - 64, 480 - 64, 64, 64); this is the pause button in the top right corner. resumeBounds = new Rectangle(160 - 96, 240, 192, 36); this is a rectangular resume button in the middle of the page, in the menu that comes up when the pause button is pressed. Basically my question is aimed at the 320 - 64 and 160 - 96, because I don't know why this is used. I need to create a rectangle that covers the left side of the screen and the same on the right, because I want to create on-screen buttons. I have already created the code for these buttons and I have managed to get them to work, but I can't move the rectangles to where I want. Thank you if you can help.
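
    Those "320 - 64"-style expressions are just "screen edge minus button size", so the rectangle's lower-left corner lands where the button is drawn (a Rectangle is x, y, width, height, with y measured upwards in that sample's 320x480 camera). A sketch of how left-half and right-half touch bounds could be computed, written in plain Python for clarity; the screen size values are the SuperJumper sample's assumption:

        # Rectangle as (x, y, width, height); origin at the bottom-left,
        # matching the SuperJumper sample's 320x480 camera.
        SCREEN_W, SCREEN_H = 320, 480

        pause_bounds = (SCREEN_W - 64, SCREEN_H - 64, 64, 64)      # top-right corner
        left_bounds  = (0,            0, SCREEN_W / 2, SCREEN_H)   # whole left half
        right_bounds = (SCREEN_W / 2, 0, SCREEN_W / 2, SCREEN_H)   # whole right half

        def contains(rect, px, py):
            """Same test Rectangle.contains() performs: is the point inside the box?"""
            x, y, w, h = rect
            return x <= px <= x + w and y <= py <= y + h

        print(contains(left_bounds, 50, 200))   # True: a touch on the left side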

    Read the article

  • Estimate angle to launch missile, maths question

    - by Jonathan
    I've been working on this for an hour or two now and maths really isn't my strong suit, which is definitely not a good thing for a game programmer, but that shouldn't stop me enjoying a hobby, surely? After a few failed attempts I was hoping someone else out there could help, so here's the situation. I'm trying to implement a bit of faked intelligence when the AI fires its missiles at a target in a 2D game world, by predicting the likely position the target will be in given its current velocity and the time it will take the missile to reach it. I created an image to demonstrate my thinking: http://i.imgur.com/SFmU3.png which also contains the logic I use for accelerating the missile after launch. The ship that fires the missile can fire within a total angle of 40 degrees, 20 either side of itself, but this could likely become variable. My current attempt was to break the space between the two lines into segments which match the target's width, then calculate the time it would take the missile to get to that location using the formula. So for each iteration of this we total up the values and that tells us the distance travelled, and it would then just need to be compared to the distance to the segment. startVelocity * ((1 + acceleration)^(currentFrame - 1)) So for example, if we start at a velocity of 1f/frame with an acceleration of 0.1f the formula, at frame 4, would be 1 * (1.1^3) = 1.331. But I quickly realized I was getting lost when trying to put this into practice. Does this seem like a correct starting point or am I going completely the wrong way about it? Any pointers would help me greatly. Maths really isn't my strong suit so I get easily lost in these matters and don't even really know a good phrase to search for with this. So I guess in summary my question is more about the correct way to approach this problem; any additional code samples on top of that would be great, but I'm not averse to working out the complete code from helpful pointers.
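
    One way to firm up the segment idea is to iterate: guess where the target is, work out how many frames the missile needs to cover that distance, move the guess to where the target will be after that many frames, and repeat until it settles. A rough Python sketch under the question's "velocity multiplied by (1 + acceleration) each frame" model; all numbers and names are illustrative, and the 40-degree firing-cone check is left out:

        def frames_to_reach(distance, start_velocity=1.0, acceleration=0.1):
            """Frames until the missile has covered `distance`, with the
            velocity multiplied by (1 + acceleration) every frame."""
            travelled, velocity, frames = 0.0, start_velocity, 0
            while travelled < distance:
                travelled += velocity
                velocity *= 1.0 + acceleration
                frames += 1
            return frames

        def predict_aim_point(target_pos, target_vel, launcher_pos, iterations=5):
            """Refine the aim point: where will the target be after the
            time the missile needs to get there?"""
            aim = target_pos
            for _ in range(iterations):
                dist = ((aim[0] - launcher_pos[0]) ** 2 +
                        (aim[1] - launcher_pos[1]) ** 2) ** 0.5
                t = frames_to_reach(dist)
                aim = (target_pos[0] + target_vel[0] * t,
                       target_pos[1] + target_vel[1] * t)
            return aim

        print(predict_aim_point((200.0, 50.0), (2.0, 0.5), (0.0, 0.0)))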

    Read the article

  • Using elapsed time for SlowMo in XNA

    - by Dave Voyles
    I'm trying to create a slow-mo effect in my pong game so that when a player presses a button the paddles and ball will suddenly move at a far slower speed. I believe I understand the concepts of adjusting the timing in XNA, but I'm not sure how to incorporate it into my design exactly. The updates for my bats (paddles) are done in my Bat.cs class: /// Controls the bat moving up the screen /// </summary> public void MoveUp() { SetPosition(Position + new Vector2(0, -moveSpeed)); } /// <summary> /// Controls the bat moving down the screen /// </summary> public void MoveDown() { SetPosition(Position + new Vector2(0, moveSpeed)); } /// <summary> /// Updates the position of the AI bat, in order to track the ball /// </summary> /// <param name="ball"></param> public virtual void UpdatePosition(Ball ball) { size.X = (int)Position.X; size.Y = (int)Position.Y; } While the rest of my game updates are done in my GameplayScreen.cs class (I'm using the XNA game state management sample): Class GameplayScreen { ........... bool slow; .......... public override void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen) base.Update(gameTime, otherScreenHasFocus, false); if (IsActive) { // SlowMo Stuff Elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds; if (Slowmo) Elapsed *= .8f; MoveTimer += Elapsed; double elapsedTime = gameTime.ElapsedGameTime.TotalMilliseconds; if (Keyboard.GetState().IsKeyDown(Keys.Up)) slow = true; else if (Keyboard.GetState().IsKeyDown(Keys.Down)) slow = false; if (slow == true) elapsedTime *= .1f; // Updating bat position leftBat.UpdatePosition(ball); rightBat.UpdatePosition(ball); // Updating the ball position ball.UpdatePosition(); And finally my fixed time step is declared in the constructor of my Game1.cs class: /// <summary> /// The main game constructor. /// </summary> public Game1() { IsFixedTimeStep = slow = false; } So my question is: where do I place the MoveTimer or elapsedTime, so that my bat will slow down accordingly?
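
    The usual pattern is to compute one scaled elapsed-time value per frame and multiply every movement (bats and ball alike) by it, rather than keeping separate timers per object. A minimal Python sketch of that idea; the speeds are placeholders, and the 0.1 factor simply mirrors the question's elapsedTime *= .1f:

        def update(elapsed_seconds, slow, bat_pos, ball_pos, ball_vel,
                   bat_speed=300.0, slow_factor=0.1):
            """Scale the frame's elapsed time once, then use it everywhere."""
            dt = elapsed_seconds * (slow_factor if slow else 1.0)

            # Bat movement uses the same dt, so it slows down with the ball.
            bat_pos = bat_pos + bat_speed * dt

            # Ball movement: position += velocity * dt.
            ball_pos = (ball_pos[0] + ball_vel[0] * dt,
                        ball_pos[1] + ball_vel[1] * dt)
            return bat_pos, ball_pos

        print(update(1 / 60, slow=True, bat_pos=100.0,
                     ball_pos=(50.0, 50.0), ball_vel=(180.0, -120.0)))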

    Read the article

  • Best way to mask 2D sprites in XNA?

    - by electroflame
    I am currently trying to mask some sprites. Rather than explaining it in words, I've made up some example pictures: The area to mask (in white) Now, the red sprite that needs to be cropped. The final result. Now, I'm aware that in XNA you can do two things to accomplish this: Use the Stencil Buffer. Use a Pixel Shader. I have tried to do a pixel shader, which essentially did this: float4 main(float2 texCoord : TEXCOORD0) : COLOR0 { float4 tex = tex2D(BaseTexture, texCoord); float4 bitMask = tex2D(MaskTexture, texCoord); if (bitMask.a > 0) { return float4(tex.r, tex.g, tex.b, tex.a); } else { return float4(0, 0, 0, 0); } } This seems to crop the images (albeit not correctly once the image starts to move), but my problem is that the images are constantly moving (they aren't static), so this cropping needs to be dynamic. Is there a way I could alter the shader code to take into account its position? Alternatively, I've read about using the Stencil Buffer, but most of the samples seem to hinge on using a rendertarget, which I really don't want to do. (I'm already using 3 or 4 for the rest of the game, and adding another one on top of it seems overkill) The only tutorial I've found that doesn't use Rendertargets is one from Shawn Hargreaves' blog over here. The issue with that one, though, is that it's for XNA 3.1, and doesn't seem to translate well to XNA 4.0. It seems to me that the pixel shader is the way to go, but I'm unsure of how to get the positioning correct. I believe I would have to change my onscreen coordinates (something like 500, 500) to be between 0 and 1 for the shader coordinates. My only problem is trying to work out how to correctly use the transformed coordinates. Thanks in advance for any help!
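
    On the coordinate question only: a screen-covering mask is usually sampled with the pixel's position normalized by the screen (or mask) size into 0-1 space, so the moving sprite's on-screen position has to go through that mapping before the mask lookup. A small Python sketch of the mapping and the keep/discard decision; the screen sizes are made up:

        def to_mask_uv(screen_x, screen_y, screen_w=800, screen_h=600):
            """Map an on-screen pixel position into 0-1 texture coordinates
            of a mask that covers the whole screen."""
            return screen_x / screen_w, screen_y / screen_h

        def masked_alpha(sprite_alpha, mask_alpha):
            """Keep the sprite's pixel only where the mask is opaque,
            mirroring the `if (bitMask.a > 0)` branch in the shader."""
            return sprite_alpha if mask_alpha > 0.0 else 0.0

        u, v = to_mask_uv(500, 500)    # e.g. the sprite's current pixel
        print(u, v)                    # -> 0.625, 0.833...: sample the mask here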

    Read the article

  • How to perform game object smoothing in multiplayer games

    - by spaceOwl
    We're developing an infrastructure to support multiplayer games for our game engine. In simple terms, each client (player) engine sends some pieces of data regarding the relevant game objects at a given time interval. On the receiving end, we step the incoming data to the current time (to compensate for latency), followed by a smoothing step (which is the subject of this question). I was wondering how smoothing should be performed? Currently the algorithm is similar to this: Receive incoming state for an object (position, velocity, acceleration, rotation, custom data like visual properties, etc). Calculate a diff between the local object position and the position we have after previous prediction steps. If the diff doesn't exceed some threshold value, start a smoothing step: Mark the object's CURRENT POSITION and the TARGET POSITION. Linearly interpolate between these values for 0.3 seconds. I wonder if this scheme is any good, or if there is any other common implementation or algorithm that should be used? (For example, should I only smooth out the position, or other values as well, such as speed, etc.) Any help will be appreciated.
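
    For the smoothing step itself, a plain linear interpolation over the 0.3-second window looks like the sketch below (Python, illustrative names and thresholds). The same pattern can be applied to velocity or rotation as well as position, and very large errors are typically snapped rather than smoothed:

        def lerp(a, b, t):
            """Linear interpolation between a and b for t in [0, 1]."""
            return a + (b - a) * t

        def smooth_position(start_pos, target_pos, elapsed, duration=0.3,
                            snap_threshold=5.0):
            """Blend from the locally predicted position toward the corrected
            one over `duration` seconds; snap if the error is too large."""
            error = ((target_pos[0] - start_pos[0]) ** 2 +
                     (target_pos[1] - start_pos[1]) ** 2) ** 0.5
            if error > snap_threshold:
                return target_pos                    # too far off: just snap
            t = min(elapsed / duration, 1.0)         # progress through the window
            return (lerp(start_pos[0], target_pos[0], t),
                    lerp(start_pos[1], target_pos[1], t))

        print(smooth_position((10.0, 10.0), (11.0, 12.0), elapsed=0.15))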

    Read the article

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadowmapping shaders. The shadow map rendering shader is fine, and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position. struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; The reason that I used the semantic 'TEXCOORD2' on the light's pixel position is because I believe that the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolations. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code: struct Vertex { float3 position : POSITION; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; cbuffer Camera : register (b0) { matrix world; matrix view; matrix projection; }; cbuffer Light : register (b1) { matrix light_world; matrix light_view; matrix light_projection; }; Pixel RenderVertexShader(Vertex input) { Pixel output; output.position = mul(float4(input.position, 1.0f), world); output.position = mul(output.position, view); output.position = mul(output.position, projection); output.light_pos = mul(float4(input.position, 1.0f), world); output.light_pos = mul(output.light_pos, light_view); output.light_pos = mul(output.light_pos, light_projection); output.texcoord = input.texcoord; output.normal = input.normal; return output; } I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader, and had the same problem. The problem is evident as both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the matrices' validity, the cbuffers' validity, and the multiplicative validity. I'm very stumped and have been trying to solve this for quite some time. Misc info: The light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f. Any ideas on what is happening? (This is a repost from my question on Stack Overflow)
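
    For context on the "z is always just below w" symptom: light-space clip coordinates still need the perspective divide by w (and a remap of x/y into the shadow map's 0-1 range) in the pixel shader before depths can be compared. The Python sketch below is a generic shadow-mapping recipe under those standard assumptions, not a claim about where this particular shader goes wrong:

        def shadow_coords(light_clip_pos):
            """light_clip_pos is the interpolated (x, y, z, w) from the vertex
            shader. Divide by w, then map x/y from [-1, 1] into [0, 1] UVs."""
            x, y, z, w = light_clip_pos
            ndc = (x / w, y / w, z / w)          # perspective divide
            u = ndc[0] * 0.5 + 0.5
            v = -ndc[1] * 0.5 + 0.5              # flip y for texture space
            depth = ndc[2]                       # compare against the stored map
            return u, v, depth

        def in_shadow(light_clip_pos, stored_depth, bias=0.001):
            u, v, depth = shadow_coords(light_clip_pos)
            return depth - bias > stored_depth   # farther than the occluder

        print(shadow_coords((0.2, -0.4, 9.8, 10.0)))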

    Read the article

  • How do I do random isometric paths?

    - by user406470
    I'm working on an isometric city generator, and I am looking for a little push in the right direction. I'm looking to randomly generate roads on an isometric plane. I have never done pathfinding before, and I've googled it and didn't find any articles relating to what I am trying to do. Basically, my program generates a random isometric city, and I am hoping to add roads to that. Any help is appreciated!
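
    Since this is generation rather than pathfinding, one very small starting point is a biased random walk over the tile grid: start at an edge cell, keep heading the same way most of the time, occasionally turn, and mark every visited cell as road. A throwaway Python sketch of that idea; the grid size and probabilities are arbitrary, and the grid maps straight onto isometric tiles:

        import random

        def generate_road(width=20, height=20, turn_chance=0.2):
            """Mark a winding road across a grid of '.' cells with 'R'."""
            grid = [['.'] * width for _ in range(height)]
            x, y = 0, random.randrange(height)      # start on the left edge
            dx, dy = 1, 0                           # initial heading: east
            while 0 <= x < width and 0 <= y < height:
                grid[y][x] = 'R'
                if random.random() < turn_chance:   # occasionally turn 90 degrees
                    dx, dy = (0, random.choice((-1, 1))) if dx else (1, 0)
                x, y = x + dx, y + dy
            return grid

        random.seed(1)
        for row in generate_road():
            print(''.join(row))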

    Read the article

  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices: The first matrix (I) is the one that transforms from object space into inertial space, but since my object is not rotated or translated in any way inside the object space, this matrix is the 4x4 identity matrix. The second matrix (W) is the one that transforms from inertial space into world space, which is just a scale transform matrix of factor a = 14.1 on all coordinates, since the inertial space origin coincides with the world space origin. /a 0 0 0\ W = |0 a 0 0| |0 0 a 0| \0 0 0 1/ The third matrix (C) is the one that transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units into the z axis. /1 0 0 0\ C = |0 1 0 0| |0 0 1 10| \0 0 0 1/ And finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of the world space and the projection plane is defined by z = 1, the projection matrix is: /1 0 0 0\ P = |0 1 0 0| |0 0 1 0| \0 0 1/d 0/ where d is the distance from the eye to the projection plane, so d = 1. I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column vector form: /x\ V = |y| |z| \1/ After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently, I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon: Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.
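
    A quick way to sanity-check a pipeline like this is to push one vertex through it numerically. A Python/NumPy sketch of exactly the matrices described above (a = 14.1, camera translation (0, 0, 10), d = 1), using the same row layout as the question so column vectors multiply on the right; the test vertex is made up:

        import numpy as np

        a, d = 14.1, 1.0
        I = np.eye(4)                                    # object -> inertial
        W = np.diag([a, a, a, 1.0])                      # inertial -> world (scale)
        C = np.eye(4); C[2, 3] = 10.0                    # world -> camera (z + 10)
        P = np.eye(4); P[3, 2] = 1.0 / d; P[3, 3] = 0.0  # camera -> clip, w = z/d

        M = P @ C @ W @ I                                # (((P x C) x W) x I)

        v = np.array([0.1, 0.2, 0.3, 1.0])               # some object-space vertex
        clip = M @ v
        screen = clip[:2] / clip[3]                      # divide x, y by w
        print(clip, screen)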

    Read the article

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking... The framebuffer objects: FBO 1 has a color attachment and a depth attachment. FBO 2 has a color attachment. The render passes: Render g-buffer: normals and depth (used by outline & DoF blur shaders); output to FBO no. 1. Render solid geometry, bold outlines (as in toon shader), and fog; output to FBO no. 2. (can all render via a single fragment shader -- I think.) (optional) DoF blur the scene; output to the default frame buffer OR ELSE render FBO2 directly to default frame buffer. (optional) Mesh wireframes; composite over what's already in the default framebuffer. Does this order seem viable? Any obvious mistakes?

    Read the article

  • How to shade a texture two different colors?

    - by Venesectrix
    To give an example of what I'm asking about, I'll use Saints Row 3 since I've been playing that lately. In that game you can customize your looks and your car's appearance a lot. Your coat can have a primary color and a trim color. Your car can have a primary color and a stripe color, etc. Is there just a single coat texture that is being shaded two different colors somehow or are they overlaying a transparent second texture for the trim/stripes that gets shaded differently? If it's just one texture I'd like to know how it's done. If it's two different textures it seems like it's a waste of space. The second texture would be the same size as the first one but mostly transparent if you just wanted to lay it on top of the first one. Or are they just carefully positioning a second, smaller texture so that it aligns properly with the first one?
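
    A common way games handle this with a single texture is to author the base texture in greyscale plus a mask channel (or a tiny separate mask) that says which region takes the primary colour and which takes the trim colour, then blend per pixel. A minimal per-pixel Python sketch of that blend, purely illustrative and not tied to any particular engine:

        def tint_pixel(grey, mask, primary, trim):
            """grey: 0-1 luminance from the base texture.
            mask: 0-1, where 0 means 'primary area' and 1 means 'trim area'.
            Blend the two tint colours by the mask, then shade by the greyscale."""
            tint = tuple(p * (1.0 - mask) + t * mask for p, t in zip(primary, trim))
            return tuple(grey * channel for channel in tint)

        # A coat pixel in the trim region (mask = 1): gold trim over a red coat.
        print(tint_pixel(0.8, 1.0, primary=(0.7, 0.1, 0.1), trim=(0.9, 0.8, 0.2)))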

    Read the article

  • Optimizing hierarchical transform

    - by Geotarget
    I'm transforming objects in 3D space by transforming each vector with the object's 4x4 transform matrix. In order to achieve hierarchical transform, I transform the child by its own matrix, and then the child by the parent matrix. This becomes costly because objects deeper in the display tree have to be transformed by all the parent objects. This is what's happening, in summary: Root -- transform its verts by Root matrix Parent -- transform its verts by Parent, Root matrix Child -- transform its verts by Child, Parent, Root matrix Is there a faster way to transform vertices to achieve hierarchical transform? What if I first concatenated each transform matrix with the parent matrices, and then transformed the verts by that final resulting matrix; would that work, and wouldn't that be faster? Root -- transform its verts by Root matrix Parent -- concat Parent, Root matrices, transform its verts by Concated matrix Child -- concat Child, Parent, Root matrices, transform its verts by Concated matrix
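
    Concatenating once per node and reusing the parent's already-concatenated matrix is the standard optimisation: each object does one matrix-matrix multiply per frame instead of running its vertices through every ancestor. A NumPy sketch of the idea, assuming the same convention of column vectors multiplied on the right; all matrices and vertices are illustrative:

        import numpy as np

        def world_matrix(local, parent_world=None):
            """Combine this node's local matrix with the parent's cached
            world matrix; computed once per node per frame."""
            return local if parent_world is None else parent_world @ local

        def transform_verts(world, verts):
            """Transform Nx4 homogeneous vertices by one final matrix."""
            return (world @ verts.T).T

        root_local   = np.eye(4)
        parent_local = np.eye(4); parent_local[0, 3] = 5.0   # translate x by 5
        child_local  = np.eye(4); child_local[1, 3]  = 2.0   # translate y by 2

        root_world   = world_matrix(root_local)
        parent_world = world_matrix(parent_local, root_world)
        child_world  = world_matrix(child_local, parent_world)

        verts = np.array([[0.0, 0.0, 0.0, 1.0], [1.0, 0.0, 0.0, 1.0]])
        print(transform_verts(child_world, verts))   # both parent and child offsets applied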

    Read the article

  • Trouble with speed and vectors

    - by Eegabooga
    I'm working on adding bullets to my game. Right now I can shoot bullets in the direction that I would like from a ship by getting the ship's angle: int speed = 5; int dx = -(cos(degreesToRadians(ship.angle)) * speed); // rate of change in the x direction int dy = -(sin(degreesToRadians(ship.angle)) * speed); // rate of change in the y direction bulletPosition.addX(dx); // addX(dx) is simply bulletPosition.x += dx bulletPosition.addY(dy); The ship is pretty much the exact same thing, except I use the += operator: int dx += -(cos(degreesToRadians(angle)) * 0.15) int dy += -(sin(degreesToRadians(angle)) * 0.15); shipPosition.addX(dx); shipPosition.addY(dy); I would like to be able to add the ship's velocity to the bullet's velocity, but I'm a little confused as to how I should get the speed from the ship's vector. I thought that adding the ship's dx to the bullet's dx like int dx = -(cos(degreesToRadians(ship.angle)) * speed * dx) would work because I'm adding the rate of change of the ship to the rate of change of the bullet, but that doesn't work. So here's the final question: How can I get the speed of my ship and apply it to my bullet's speed? Thanks in advance for all help :)
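
    The piece usually missing here is to treat the ship's per-frame (dx, dy) as a velocity vector and add it to the bullet's own muzzle velocity once, at the moment of firing, rather than folding it into the cosine term. A small Python sketch with illustrative numbers, keeping the question's negated sin/cos convention:

        import math

        def bullet_velocity(ship_angle_deg, ship_velocity, bullet_speed=5.0):
            """Bullet velocity = its own muzzle velocity along the ship's facing,
            plus the ship's current velocity (inherited momentum)."""
            angle = math.radians(ship_angle_deg)
            muzzle = (-math.cos(angle) * bullet_speed,   # same sign convention
                      -math.sin(angle) * bullet_speed)   # as the question's code
            return (muzzle[0] + ship_velocity[0],
                    muzzle[1] + ship_velocity[1])

        ship_vel = (-1.2, 0.4)                  # the ship's accumulated (dx, dy)
        print(bullet_velocity(90.0, ship_vel))  # add this to the bullet every frame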

    Read the article

  • How can I clear explosions in my function?

    - by hustlerinc
    Hi, I have a function to place bombs, and a for loop that places explosions on the tiles where possible. My problem is that I can't remove the explosions after a while. I've tried everything I can come up with, so now I turn here as a last resort. The function looks like this: function Bomb(){ var placeBomb = false; if(placeBomb && player.bombs != 0){ map[player.Y][player.X].object = 2; var bombX = player.X; var bombY = player.Y; placeBomb = false; player.bombs--; setTimeout(explode, 3000); } function explode(){ var explodeNorth = true; var explodeEast = true; var explodeSouth = true; var explodeWest = true; map[bombY][bombX].explosion = 1; delete map[bombY][bombX].object; for(i=0;i<=player.bombRadius;i++){ if(explodeNorth && map[bombY-i][bombX]){ if(!map[bombY-i][bombX].wall){ if(!map[bombY-i][bombX].object){ map[bombY-i][bombX].explosion = 1; } else var explodeNorth = false; delete map[bombY-i][bombX].object; map[bombY-i][bombX].explosion = 1; } else var explodeNorth = false; } if(explodeEast && map[bombY][bombX+i]){ if(!map[bombY][bombX+i].wall){ if(!map[bombY][bombX+i].object){ map[bombY][bombX+i].explosion = 1; } else var explodeEast = false; delete map[bombY][bombX+i].object; map[bombY][bombX+i].explosion = 1; } else var explodeEast = false; } if(explodeSouth && map[bombY+i][bombX]){ if(!map[bombY+i][bombX].wall){ if(!map[bombY+i][bombX].object){ map[bombY+i][bombX].explosion = 1; } else var explodeSouth = false; delete map[bombY+i][bombX].object; map[bombY+i][bombX].explosion = 1; } else var explodeSouth = false; } if(explodeWest && map[bombY][bombX-i]){ if(!map[bombY][bombX-i].wall){ if(!map[bombY][bombX-i].object){ map[bombY][bombX-i].explosion = 1; } else var explodeWest = false; delete map[bombY][bombX-i].object; map[bombY][bombX-i].explosion = 1; } else var explodeWest = false; } } player.bombs++; } } If anyone can think of a good way to remove the explosion after a delay please help.
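
    One workable pattern is to remember exactly which tiles the explosion touched and schedule a second callback that clears just those tiles. Sketched below in Python with a toy grid, and threading.Timer standing in for setTimeout; the structure (record the cells, then clear them after a delay) is the point, not the specific API:

        import threading

        def explode(grid, y, x, radius=2, duration=0.5):
            """Mark explosion cells, remember them, and clear them later."""
            cells = [(y, x)]
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):    # four directions
                for i in range(1, radius + 1):
                    ny, nx = y + dy * i, x + dx * i
                    if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                        break
                    if grid[ny][nx] == '#':                      # a wall stops it
                        break
                    cells.append((ny, nx))
            for cy, cx in cells:
                grid[cy][cx] = '*'                               # explosion on

            def clear():
                for cy, cx in cells:
                    grid[cy][cx] = '.'                           # explosion off
            threading.Timer(duration, clear).start()

        grid = [list('.....'), list('..#..'), list('.....')]
        explode(grid, 1, 1)
        print(grid)          # shows '*' tiles; they revert to '.' after the delay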

    Read the article

  • How should I structure moving from overworld to menu system / combat?

    - by persepolis
    I'm making a text-based "Arena" game where the player is the owner of 5 creatures that battle other teams for loot, experience and glory. The game is very simple, using Python and a curses emulator. I have a static ASCII map of an "overworld" of sorts. My character, represented by a glyph, can move about this static map. There are locations all over the map that the character can visit, and they break down into two types: 1) Towns, which are a series of menus that will allow the player to buy equipment for his team, hire new recruits or do other things. 2) Arenas, where the player's team will have a "battle" interface with actions he can perform, messages about the fight, etc. Maybe later, an ASCII representation of the fight, but for now just screens of information with action prompts. My main problem is what kind of design or structure I should use to implement this. Right now, the game goes through a master loop which waits for keyboard input and then moves the player about the screen. My current thinking is this: 1) Upon keyboard input, the Player coordinates are checked against a list of Location objects and if the Player coords match the Location coords then... 2) ??? I'm not sure if I should then call a separate function to initiate a "menu" or "combat" mode. Or should I create some kind of new GameMode object that contains a method itself for drawing the screen, printing the necessary info? How do I pass my player's team data into this object? My main concern is passing the program flow around into all these objects. Should I be calling straight functions for different parts of my game, and objects to represent "things" within my game? I was reading about the MVC pattern and how this kind of problem might benefit from it - decoupling the GUI from the game logic and user input - but I have no idea how this applies to my game.
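
    A lightweight way to keep the master loop clean is a stack of "mode" objects, each owning its own input handling and drawing, with the overworld pushing a town or arena mode when the player steps onto a matching location and the shared team data passed in through the constructor. A rough Python sketch of the shape of it; every class and method name here is invented for illustration, not a prescription:

        class GameMode:
            """Base interface: the main loop only talks to the top of the stack."""
            def handle_input(self, game, key): pass
            def draw(self, game): pass

        class TownMode(GameMode):
            def __init__(self, team):
                self.team = team                 # shared team data passed in

        class ArenaMode(GameMode):
            def __init__(self, team):
                self.team = team

        class OverworldMode(GameMode):
            def handle_input(self, game, key):
                game.move_player(key)
                here = game.location_at(game.player_pos)
                if here == 'town':
                    game.modes.append(TownMode(game.team))
                elif here == 'arena':
                    game.modes.append(ArenaMode(game.team))

        class Game:
            def __init__(self):
                self.team = ['creature'] * 5
                self.player_pos = (0, 0)
                self.locations = {(1, 0): 'town', (2, 0): 'arena'}
                self.modes = [OverworldMode()]
            def move_player(self, key):
                dx = {'h': -1, 'l': 1}.get(key, 0)
                self.player_pos = (self.player_pos[0] + dx, self.player_pos[1])
            def location_at(self, pos):
                return self.locations.get(pos)

        game = Game()
        game.modes[-1].handle_input(game, 'l')   # step east onto the town
        print(type(game.modes[-1]).__name__)     # -> TownMode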

    Read the article

  • Passing an object's rotation down through its children

    - by MintyAnt
    In my top-down 2D game you have a player with a sword, like an old Zelda game. The sword is a separate entity, and its collision box "rotates" around the player like an orbit, but always follows the player wherever he goes. The player and sword both have a vector2 heading. The sword is a weapon object that is attached to the character. In order to allow swinging in a direction, I have the following property inside sword (RotateCopy returns a copy of the mHeading after rotation) public Vector2 Heading { get { return mHeading.RotateCopy(mOwner.Rotation); } } This seems a bit messy to me, and slower than it could be. Is there a better way to "translate" the base/owner component rotations through to whatever component I am using, like this sword? Would using a rotation MATRIX be better? (Currently rotates by sin/cos) If so, how can I "add" up the matrices? Thank you.
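
    The getter pattern itself is fine; the usual refinement is to compute the combined rotation once when the owner actually turns and cache the rotated heading, rather than rotating on every read. In 2D, composing rotations is just adding the angles (or multiplying 2x2 matrices). A small Python sketch of the caching idea; the class and field names are made up:

        import math

        class Sword:
            def __init__(self, local_heading, local_angle=0.0):
                self.local_heading = local_heading   # heading in the owner's space
                self.local_angle = local_angle
                self._cached_heading = local_heading
                self._cached_owner_rot = None

            def heading(self, owner_rotation):
                """World-space heading; recomputed only when the owner turns."""
                if owner_rotation != self._cached_owner_rot:
                    total = self.local_angle + owner_rotation   # compose rotations
                    c, s = math.cos(total), math.sin(total)
                    x, y = self.local_heading
                    self._cached_heading = (x * c - y * s, x * s + y * c)
                    self._cached_owner_rot = owner_rotation
                return self._cached_heading

        sword = Sword((1.0, 0.0))
        print(sword.heading(math.pi / 2))   # points "up" when the owner faces up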

    Read the article

  • Generating maps

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure as to some of the implementation. I have done work doing fileIO maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think height maps might be a good approach, but don't know how to represent them effectively. For a height map, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item1 is the key reference, item2 is the elevation), or another approach that I have not thought of yet? When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make the elevation noticeable at about a 60 degree angle while not really affecting gravity-driven physics (assuming some effect while moving up/down hill)? I am thinking of maybe going to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing to thin walls and open spaces is better? This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls. EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation the player has to negotiate. Elevations will have effects on "combat", vision, and movement.
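
    On the file-format part only: a CSV (or any plain grid text) version loads straight into a 2D array of (tile key, elevation) pairs and stays hand-editable, while XML buys more structure at the cost of verbosity. A tiny Python sketch of the CSV route, with an invented "key:elevation" cell format as the assumption:

        import io, csv

        # Each cell is "tileKey:elevation"; W = wall, F = floor (invented keys).
        LEVEL = """\
        W:0,W:0,W:0,W:0
        W:0,F:0,F:1,W:0
        W:0,F:0,F:2,W:0
        W:0,W:0,W:0,W:0
        """

        def load_heightmap(text):
            """Return a grid of (key, elevation) tuples, row by row."""
            grid = []
            for row in csv.reader(io.StringIO(text)):
                key_height = [cell.strip().split(':') for cell in row]
                grid.append([(k, int(h)) for k, h in key_height])
            return grid

        grid = load_heightmap(LEVEL)
        print(grid[2][2])    # -> ('F', 2): a floor tile two units up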

    Read the article

  • Interaction using Kinect in XNA

    - by Sweta Dwivedi
    So I have written a program to play a sound file whenever my RightHand.Joint touches the 3D model. It goes like this: even though the code works somehow, it is not very accurate; for example, it will play the sound when my hand is slightly under my 3D object, not exactly on my 3D object. How do I make it more accurate? Here is the code (handX and handY are the values coming from the Skeleton data, RightHand.Joint.X etc.), and also this calculation doesn't work with animated sprites, which I need to do: foreach (_3DModel s in Solar) { float x = (float)Math.Floor(((handX * 0.5f) + 0.5f) * (resolution.X)); float y = (float)Math.Floor(((handY * -0.5f) + 0.5f) * (resolution.Y)); float z = (float)Math.Floor((handZ) / 4 * 20000); if (Math.Sqrt(Math.Pow(x - s.modelPosition.X, 2) + Math.Pow(y - s.modelPosition.Y, 2)) < 15) { //Exit(); PlaySound("hyperspace_activate"); Console.WriteLine("1" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " " + "handY:" + y + "modelPos.Y:" + s.modelPosition.Y); } else { Console.WriteLine("2" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " " + "handY:" + y + "modelPos.Y:" + s.modelPosition.Y); } }

    Read the article

  • Numbers not adding up? (What am I not understanding here?) [closed]

    - by Milo
    I have the following output: Short version: The last numbers on the S= lines increase by H and SHOULD theoretically be changing linearly, ex: -285, -290, -295... but the fourth one jumps to -252. Yet every other number is changing linearly. Why is that, and how could I fix it? To explain the numbers: they come from the slider's value-changed handler. I have a slider whose value is used to generate the float on the next line. Everything should be growing linearly here. This value is used to determine the size of a flow layout and it is also used in conjunction with a scrollbar. But basically I have a background for the flow layout and that number is the start location for rendering it. The numbers should change linearly to create a smooth transition, but when that one jumps, it looks weird on screen and I don't understand why the numbers jump every X slider value changes. Mathematically, what could be causing this? Here is the code for rendering the background and the function that is called when the value changes: void LobbyTableManager::renderBG( GraphicsContext* g, agui::Rectangle& absRect, agui::Rectangle& childRect ) { float scale = 0.35f; int w = m_bgSprite->getWidth() * getTableScale() * scale; int h = m_bgSprite->getHeight() * getTableScale() * scale; int numX = ceil(absRect.getWidth() / (float)w) + 2; int numY = ceil(absRect.getHeight() / (float)h) + 2; int startY = childRect.getY(); int numAttempts = 0; while(startY + h < absRect.getY() && numAttempts < 1000) { startY += h; if(moo) { std::cout << startY << ","; } numAttempts++; } g->holdDrawing(); for(int i = 0; i < numX; ++i) { for(int j = 0; j < numY; ++j) { g->drawScaledSprite(m_bgSprite,0,0,m_bgSprite->getWidth(),m_bgSprite->getHeight(), absRect.getX() + (i * w) + (offsetX),absRect.getY() + (j * h) + startY,w,h,0); } } g->unholdDrawing(); g->setClippingRect(cx,cy,cw,ch); } void LobbyTableManager::setTableScale( float scale ) { scale += 0.3f; scale *= 2.0f; float scrollRel = m_vScroll->getRelativeValue(); setScale(scale); rescaleTables(); resizeFlow(); updateScrollBars(); float newVal = scrollRel * m_vScroll->getMaxValue(); m_vScroll->setValue(newVal); } void LobbyTableManager::valueChanged( agui::VScrollBar* source,int val ) { m_flow->setLocation(0,-val); } Any insight on why, mathematically, the anomaly might happen every Nth time would be helpful. I just don't understand why, if every number changes linearly, it jumps from -295 to -252! Thanks
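
    One mathematical suspect worth checking (offered as a hypothesis, not a diagnosis of this exact code) is integer truncation: `int h = sprite_h * scale * 0.35f` floors the tile height, and the while loop then advances startY in whole-tile steps, so whenever the floored height crosses a boundary the number of steps taken changes and the accumulated start position jumps by far more than the usual increment. A few lines of Python make the effect visible with made-up numbers:

        def start_y(child_y, abs_y, true_h):
            """Mimic the loop: step a floored tile height until we pass abs_y."""
            h = int(true_h)                 # truncation, as in `int h = ...`
            y = child_y
            while y + h < abs_y:
                y += h
            return y

        # Slide the scale smoothly; the floored height changes in whole-pixel
        # steps, so the resulting start position jumps instead of moving linearly.
        for scale in (1.00, 1.01, 1.02, 1.03, 1.04):
            true_h = 97.6 * scale           # illustrative sprite height * scale
            print(round(true_h, 2), start_y(child_y=-400, abs_y=0, true_h=true_h))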

    Read the article

  • Tic-Tac-Toe game AI

    - by David Jones
    I'm looking into creating a simple tic tac toe/noughts and crosses game in ActionScript 3 and am trying to understand the ideas behind the AI used in a game like this. I've seen some simplistic examples online, but from what I've read a game tree or something like minimax is the best way to go about this. Can anyone help explain or reference any good examples of this? I've seen that there is a library called as3ds (data structures for game developers) which has a number of classes that might help tie this together. Any info/examples or help is much appreciated.
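
    For reference, the core of minimax for tic-tac-toe fits in a screen of code: score terminal boards, otherwise recurse over every empty square and take the best score for the side to move. A compact Python version of the idea; translating it to ActionScript 3 is mostly mechanical:

        WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

        def winner(board):
            for a, b, c in WIN_LINES:
                if board[a] != ' ' and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        def minimax(board, player):
            """Return (score, move): +1 if X wins, -1 if O wins, 0 for a draw.
            X maximises the score, O minimises it."""
            w = winner(board)
            if w:
                return (1 if w == 'X' else -1), None
            moves = [i for i, cell in enumerate(board) if cell == ' ']
            if not moves:
                return 0, None                       # board full: draw
            best = None
            for m in moves:
                board[m] = player                    # try the move
                score, _ = minimax(board, 'O' if player == 'X' else 'X')
                board[m] = ' '                       # undo it
                if best is None or (player == 'X' and score > best[0]) \
                                or (player == 'O' and score < best[0]):
                    best = (score, m)
            return best

        board = list('XOX'
                     ' X '
                     'O O')
        print(minimax(board, 'O'))   # -> (-1, 7): O wins by completing the bottom row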

    Read the article

  • Previewing a Demo Level in Mobile for UDK?

    - by Reno Yeo
    I've already clicked on "Emulate Mobile Features" and everything has been compiled. I've also set the mobile previewer settings to the iPhone 4's dimensions and features. However, when I click on the mobile previewer, a new window pops up but it goes into a "Not Responding" mode after a while. Is there anything I'm doing wrong? To be honest, I'm afraid of the difficulty curve required in learning UDK, but I am interested in developing a game for it.

    Read the article

  • Find Nearest Object

    - by ultifinitus
    I have a fairly sizable game engine created, and I'm adding some needed features, such as this one: how do I find the nearest object from a list of points? In this case, I could simply use the Pythagorean theorem to find the distance and check the results. I know I can't simply add x and y, because that's the distance to the object only if you took right-angle turns. However, I'm wondering if there's something else I could do. I also have a collision system, where essentially I turn objects into smaller objects on a smaller grid, kind of like a minimap, and only if objects exist in the same gridspace do I check for collisions. I could do the same thing, only make the gridspace larger to check for closeness (rather than checking every. single. object); however, that would take additional setup in my base class and clutter up the already cluttered object. TL;DR Question: Is there something efficient and accurate that I can use to detect which object is closest, based on a list of points and sizes?
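
    Two standard tricks apply: compare squared distances so there is no square root inside the loop (the square root is monotonic, so the winner is the same), and only reach for the grid buckets once the object count actually makes the linear scan too slow. A short Python sketch of the squared-distance version over a hypothetical point list:

        def nearest(origin, points):
            """Return the point closest to `origin`, comparing squared distances."""
            ox, oy = origin
            best, best_d2 = None, float('inf')
            for p in points:
                d2 = (p[0] - ox) ** 2 + (p[1] - oy) ** 2
                if d2 < best_d2:
                    best, best_d2 = p, d2
            return best

        objects = [(10, 4), (-3, 7), (2, 2), (40, -9)]
        print(nearest((0, 0), objects))     # -> (2, 2)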

    Read the article

  • NVidia control panel SSAO not working

    - by János Turánszki
    I am just about to implement screen space ambient occlusion in my game, but first I wanted to try enabling it from the NVidia control panel, only to find out that it is greyed out so that I cannot enable it. With the control panel I could enable SSAO for some other games, but not for every one. I know this technique requires the depth buffer and (optionally) a normal map texture to sample information from, which I already have access to given that I have a deferred renderer working. After that I actually thought to roll back to a previous version of my game which still uses forward rendering, so the depth buffer is actually bound to the backbuffer which I render to from the get-go, hoping that maybe the NVidia control panel would somehow make use of it. It was not working with forward rendering either. (I also tried FXAA in the control panel and that works - but it doesn't need any depth or normal texture.) So my question is: how can I make this function work, so that it can be enabled in the NVidia control panel?

    Read the article
