Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.

  • A* PathFinding Not Consistent

    - by RedShft
    I just started trying to implement a basic A* algorithm in my 2D tile-based game. All of the nodes are tiles on the map, represented by a struct. I believe I understand A* on paper, as I've gone through some pseudocode, but I'm running into problems with the actual implementation. I've double- and triple-checked my node graph, and it is correct, so I believe the issue to be with my algorithm. The issue is that with the enemy standing still and the player moving around, the pathfinding function writes "No Path" an astounding number of times and only every so often writes "Path Found", which seems inconsistent. This is the node struct for reference: struct Node { bool walkable; //Whether this node is blocked or open vect2 position; //The tile's position on the map in pixels int xIndex, yIndex; //The index values of the tile in the array Node*[4] connections; //An array of pointers to nodes this current node connects to Node* parent; int gScore; int hScore; int fScore; } Here is the rest: http://pastebin.com/cCHfqKTY This is my first attempt at A* so any help would be greatly appreciated.
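
    For reference, below is a minimal C# sketch of the A* main loop, assuming a Node class with the same fields as the struct above (the Heuristic helper and the list-based open set are illustrative, not the poster's code). One thing worth checking in an implementation like this is whether per-node state (gScore, parent, open/closed membership) is reset for every new search; stale values from a previous search are a common cause of intermittent "No Path" results.

      using System;
      using System.Collections.Generic;

      class Node
      {
          public bool Walkable = true;
          public int XIndex, YIndex;               // tile indices in the map array
          public Node[] Connections = new Node[4]; // up/down/left/right neighbours
          public Node Parent;
          public int GScore, HScore, FScore;
      }

      static class PathFinder
      {
          static int Heuristic(Node a, Node b)
          {
              // Manhattan distance in tiles.
              return Math.Abs(a.XIndex - b.XIndex) + Math.Abs(a.YIndex - b.YIndex);
          }

          public static bool FindPath(Node start, Node goal, List<Node> path)
          {
              var open = new List<Node> { start };
              var closed = new HashSet<Node>();

              start.Parent = null;
              start.GScore = 0;
              start.HScore = Heuristic(start, goal);
              start.FScore = start.HScore;

              while (open.Count > 0)
              {
                  // Take the open node with the lowest F score (a priority queue is faster).
                  Node current = open[0];
                  foreach (var n in open)
                      if (n.FScore < current.FScore) current = n;

                  if (current == goal)
                  {
                      for (Node n = goal; n != null; n = n.Parent) path.Add(n);
                      path.Reverse();
                      return true;                         // path found
                  }

                  open.Remove(current);
                  closed.Add(current);

                  foreach (var neighbour in current.Connections)
                  {
                      if (neighbour == null || !neighbour.Walkable || closed.Contains(neighbour))
                          continue;

                      int tentativeG = current.GScore + 1; // uniform tile cost
                      if (!open.Contains(neighbour) || tentativeG < neighbour.GScore)
                      {
                          neighbour.Parent = current;
                          neighbour.GScore = tentativeG;
                          neighbour.HScore = Heuristic(neighbour, goal);
                          neighbour.FScore = neighbour.GScore + neighbour.HScore;
                          if (!open.Contains(neighbour)) open.Add(neighbour);
                      }
                  }
              }
              return false;                                // no path
          }
      }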

  • Developing Games for Samsung Smart TV

    - by Caner Öncü
    We are planning to develop a game for Samsung Smart TVs. Although those TVs support Flash and HTML5, the rest of the specs fall short of supporting a game engine. For example, using an engine that needs a GPU is not possible on the default Samsung Smart TV set. Or: WebGL is supported with Samsung SDK 4.1, but we don't know whether SDK 4.1 is available for the Smart TV series between 7000 and 9000. We have tried to contact Samsung, but they don't really seem to respond. Has anyone here developed a game for Samsung Smart TVs? If so, can you name the game engines that work with those TVs?

  • What exactly does an installer do and why might I need one?

    - by Jan
    This is probably the noob question of the day: so I've written this game. Now there's the .exe file that does the work, a folder with my beautiful, beautiful assets, and a bunch of .dll files and other stuff that I probably shouldn't touch. To run the game, I copy the whole lot to the desired computer, double-click the .exe file, and start shooting some dudes. Yay! But what exactly is the difference between that and using an installer? What else does an installer do besides copying files and looking more professional than a .zip file? Is there generally a lot of patching/configuring involved when trying to make a game run on a different computer? I tested my game on all the Windows computers I could get my greedy fingers on and it works great. Thanks for your time.

  • Creating an interactive grid for a puzzle game

    - by Noupoi
    I am trying to make a Slitherlink game and am not too sure how to approach creating it, more specifically the grid structure on which the puzzle will be played. This is what an empty and a completed Slitherlink grid look like; the numbers in the squares are clues, and the areas between the dots need to be clickable. http://i.stack.imgur.com/U1kXn.gif http://i.stack.imgur.com/RMwiv.gif I would like to create the game in VB .NET. What data structures should I try to use, and would it be beneficial to use a framework such as XNA?
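
    One way to model the board, sketched here in C# rather than VB.NET (the type and field names are illustrative): keep the printed clues and the state of every edge between two dots in separate 2D arrays, so each clickable segment has its own cell.

      // Slitherlink board sketch: clues per square plus horizontal/vertical edge states.
      enum EdgeState { Unknown, Line, Cross }

      class SlitherlinkBoard
      {
          public readonly int Rows, Cols;
          public int?[,] Clues;                 // null = blank square, 0..3 = printed clue
          public EdgeState[,] HorizontalEdges;  // (Rows + 1) x Cols: edges above/below squares
          public EdgeState[,] VerticalEdges;    // Rows x (Cols + 1): edges left/right of squares

          public SlitherlinkBoard(int rows, int cols)
          {
              Rows = rows; Cols = cols;
              Clues = new int?[rows, cols];
              HorizontalEdges = new EdgeState[rows + 1, cols];
              VerticalEdges = new EdgeState[rows, cols + 1];
          }

          // Number of drawn line segments around a square, for checking against its clue.
          public int LinesAround(int r, int c)
          {
              int n = 0;
              if (HorizontalEdges[r, c] == EdgeState.Line) n++;      // top
              if (HorizontalEdges[r + 1, c] == EdgeState.Line) n++;  // bottom
              if (VerticalEdges[r, c] == EdgeState.Line) n++;        // left
              if (VerticalEdges[r, c + 1] == EdgeState.Line) n++;    // right
              return n;
          }
      }

    A click can then be hit-tested by converting the mouse position to board coordinates and toggling whichever edge midpoint is nearest. For a static puzzle like this, plain WinForms/GDI+ drawing is enough; a framework like XNA only pays off if you want animation or effects.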

  • A separate solution for types, etc?

    - by hayer
    I'm currently updating some engine code (which does not work, so it is more like creating an engine). I've decided to swap over to SFML (instead of my own crappy renderer, window manager, and audio), Box2D (since I need physics but have none), and some small utils I've built myself. The problem is that each of the projects mentioned above uses different types for things like Vector2, etc. So, to the question: is it a good idea to replace the Box2D and SFML vectors with my own vector class (which is one of my better implementations)? My idea then was to have a separate .lib with all the classes that should be shared between all the projects in the solution.

  • Changing location after CommitAnimations

    - by Will Youmans
    I'm using the following code to move a UIImageView: shootImg.image = [UIImage imageNamed:@"Projectile Left 1.png"]; [UIView beginAnimations:nil context:nil]; shootImg.center = CGPointMake(shootImg.center.x+1000, shootImg.center.y); [UIView commitAnimations]; This works, but what I want to do is set the location of shootImg using CGPointMake after [UIView commitAnimations]. If I just put it right after commitAnimations, the animation doesn't fully complete. Any suggestions? I'm not using any frameworks like cocos2d, and if you need to see any more code, just ask.

  • Spritegroups and colorkeys

    - by Fristi
    I have a problem using sprite groups in pygame. In my situation I have two sprite groups, one for humans, one for "infected". A human is represented by a blue circle: image = pygame.Surface((32,32)) image.fill((255,255,255)) pygame.draw.circle(image,(0,0,255),(16,16),16) image = image.convert() image.set_colorkey((255,255,255)) An infected is represented by a red one (same code, different color). I update my sprite groups as follows: self.humans.clear(self.screen, self.bg) self.humans.update(time_passed) self.humans.draw(self.screen) self.infected.clear(self.screen, self.bg) self.infected.update(time_passed) self.infected.draw(self.screen) self.bg is defined as: self.bg = pygame.Surface((SCREEN_WIDTH, SCREEN_HEIGHT)) self.bg.fill((255,255,255)) self.bg.convert() This all works, except that when a red circle overlaps a blue one, you can see the white corners of the bounding box around the actual circle; this does not happen with overlapping blue circles or overlapping red circles. Within a single sprite group it works, thanks to the set_colorkey call. I tried adding a colorkey to self.bg, but that did not work. Same for adding a colorkey to self.screen.

  • Syntax error in Maya Python Script [on hold]

    - by Enchanter
    OK, this error is immensely frustrating, as it is obviously a simple syntax issue. Basically I've written two lines of Maya script in Python designed to create a list of the names of all the joints of the model currently selected in the model viewer. Here are the two lines of script: import maya.cmds joints = ls(selection = true, type = 'joint') Upon running the code, the script editor says there is a syntax error in the second line, but I do not see any reason why this code should not execute.

  • Touching a CGRect

    - by Coder404
    In my cocos2d app I am trying to determine when a CCSprite is touched. Here is what I have: -(BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event{ NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init]; for (CCSprite *target in _targets) { CGRect targetRect = CGRectMake(target.position.x - (target.contentSize.width/2), target.position.y - (target.contentSize.height/2), 27, 40); CGPoint touchLocation = [self convertTouchToNodeSpace:touch]; if (CGRectContainsPoint(targetRect, touchLocation)) { NSLog(@"Moo cheese!"); } } return YES; } For some reason it does not work. Can someone help me?

  • Pre-rendered fire. Where to find? [on hold]

    - by Vladivarius
    I'm studying game programming. I haven't yet implemented generated fire rendering in my "engine", so I'm looking for some pre-rendered fire textures for early demo scenes, but they seem strangely difficult to find. I'm currently using some that I ripped from DMC, but I want to try out different ones. Does anyone know where to find these? Software that could generate them would also be OK. Thanks :)

  • How to make other semantics behave like SV_Position?

    - by object
    I'm having a lot of trouble with shadow mapping, and I believe I've found the problem. When passing vectors from the vertex shader to the pixel shader, does the hardware automatically change any of the values based on the semantic? I've compiled a barebones pair of shaders which should illustrate the problem. Vertex shader: struct Vertex { float3 position : POSITION; }; struct Pixel { float4 position : SV_Position; float4 light_position : POSITION; }; cbuffer Matrices { matrix projection; }; Pixel RenderVertexShader(Vertex input) { Pixel output; output.position = mul(float4(input.position, 1.0f), projection); output.light_position = output.position; // We simply pass the same vector in screenspace through different semantics. return output; } And a simple pixel shader to go along with it: struct Pixel { float4 position : SV_Position; float4 light_position : POSITION; }; float4 RenderPixelShader(Pixel input) : SV_Target { // At this point, (input.position.z / input.position.w) is a normal depth value. // However, (input.light_position.z / input.light_position.w) is 0.999f or similar. // If the primitive is touching the near plane, it very quickly goes to 0. return (0.0f).rrrr; } How is it possible to make the hardware treat light_position in the same way that position is treated between the vertex and pixel shaders? EDIT: Aha! (input.position.z) without dividing by w is the same as (input.light_position.z / input.light_position.w). Not sure why this is.

  • What is the standard way of using Q15 values?

    - by Alex
    To process 8-bit pixels and do things like gamma correction without losing information, we normally upsample the values, work in 16 bits or whatever, and then downsample them back to 8 bits. Now, this is a somewhat new area for me, so please excuse incorrect terminology, etc. For my needs I have chosen to work in a "non-standard" Q15, where I only use the upper half of the range (0.0-1.0) and 0x8000 represents 1.0 instead of -1.0. This makes it much easier to calculate things in C. But I ran into a problem with SSSE3. It has the PMULHRSW instruction, which multiplies Q15 numbers, but it uses the "standard" Q15 range of [-1, 1 - 2^-15], so multiplying (my) 0x8000 (1.0) by 0x4000 (0.5) gives 0xC000 (-0.5), because it thinks 0x8000 is -1. This is quite annoying. What am I doing wrong? Should I keep my pixel values in the 0000-7FFF range? This kind of defeats the purpose of it being a fixed-point format. Is there a way around this? Maybe some trick? Is there some kind of definitive treatise on Q15 which discusses all this?
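
    For the unsigned convention described above (0x8000 representing 1.0), the multiply can also be done in plain integer code instead of relying on PMULHRSW; a small C# sketch of the idea (the rounding constant and shift follow directly from treating the values as unsigned 1.15 fixed point):

      // Fixed-point multiply for an unsigned Q1.15-style format where 0x8000 is 1.0.
      // a and b are in [0x0000 .. 0x8000]; result = round(a * b / 0x8000).
      static class Q15
      {
          public static ushort Mul(ushort a, ushort b)
          {
              uint product = (uint)a * b;                 // at most 0x40000000, fits in 32 bits
              return (ushort)((product + 0x4000) >> 15);  // add half an LSB, then scale down
          }
      }

      // Example: Q15.Mul(0x8000, 0x4000) == 0x4000 (1.0 * 0.5 = 0.5), instead of the
      // 0xC000 (-0.5) that signed PMULHRSW gives when it interprets 0x8000 as -1.0.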

  • Create Math Game with PHP, Ajax, jQuery

    - by Sambucasun
    I am developing a website where users can create their own game, which other users can then join. It's a simple maths game which will shoot out equations based on a specified time or count. I want that the moment a user creates a game, it is listed in the "Current Games" section. Other users can check out the list and select a game to join. After the game is created, the creator should have a screen showing his name and display pic. Then, gradually, as others join the game, the list should update automatically. Once enough users are there, I will start the game. The same list should be displayed to the other users who join the game. Once the game is over, everyone will be shown a summary list. I have gone through a couple of threads but could not get a clear idea: do I need to use Comet or another technology to create such a game, or will plain PHP, Ajax, and jQuery suffice? Also, I want my website to be mobile compatible, so I am designing it in HTML5. If I create this game using just Ajax, will there be any performance issues when playing on mobile? I am not very experienced, so I just need guidance on what is appropriate for my requirement.

  • Game Asset Size Over Time

    - by jterrace
    The size (in bytes) of games has been growing over time. There are probably many factors contributing to this: trailer/cut-scene videos being bundled with the game, more and higher-quality audio, multiple levels of detail being used, etc. What I'd really like to know is how the size of the 3D models and textures that games ship with has changed over time. For example, if one were to look at the size of meshes and textures for Quake I (1996), Quake II (1997), Quake III: Arena (1999), Quake 4 (2005), and Enemy Territory: Quake Wars (2007), I'd imagine a steady increase in file size. Does anyone know of a data source for numbers like this?

  • Help understanding directions of sprites in XNA

    - by 3Dkreativ
    If I want to move a sprite to the right and upwards at a 45-degree angle, I use Vector2 direction = new Vector2(1,-1); And if the sprite should move straight to the right: Vector2 direction = new Vector2(1,0); But what do I do if I want other directions, let's say somewhere between these values? I have tested some other values, but then the sprite either moves too fast and/or in a different direction than I expected. I suspect it has to do with normalization? Another thing I have trouble understanding: I'm doing a simple asteroids game as a task for a class in C# and XNA. When the asteroids hit the window borders, I want them to bounce back in a random direction, but I can't do this before I understand how directions and Vector2 work. Help is appreciated! Thanks!
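
    A sketch of the usual XNA pattern, with illustrative field values: normalize the direction so it always has length 1, then scale it by speed and the elapsed frame time, which makes any in-between direction behave the same as the axis-aligned ones. Bouncing off a border is a reflection of the direction about that border's normal (or a newly chosen random angle, if that is the behaviour you want for the asteroids).

      using System;
      using Microsoft.Xna.Framework;

      class Asteroid
      {
          Vector2 position = new Vector2(400, 240);
          Vector2 direction = new Vector2(1f, -0.5f);     // any "in-between" direction
          float speed = 120f;                             // pixels per second
          int viewportWidth = 800, viewportHeight = 480;

          public void Update(GameTime gameTime)
          {
              float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

              // Normalizing keeps the length at 1, so the sprite moves at the same
              // speed no matter which way it is heading.
              if (direction != Vector2.Zero)
                  direction.Normalize();

              position += direction * speed * dt;

              // To build a direction straight from an angle in radians:
              // direction = new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle));

              // Bounce off the window borders by reflecting about the border normal.
              if (position.X < 0 || position.X > viewportWidth)
                  direction = Vector2.Reflect(direction, Vector2.UnitX);
              if (position.Y < 0 || position.Y > viewportHeight)
                  direction = Vector2.Reflect(direction, Vector2.UnitY);
          }
      }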

  • Changing a Sprite When Hit in GameMaker

    - by Pixels_
    I am making a simple little Galaga-style game. I want an object's sprite to change whenever it is hit. For example, if a laser hits an alien, the alien loses 1 of its 4 health points (HP). I want the sprite to change from green to yellow after 1 hit, yellow to orange after 2 hits, orange to red after 3 hits, and red to a pixel explosion after all 4 HP are lost. That way you can easily distinguish the amount of health each alien has left. How can I do this? Preferably explain it in code.

  • How can you procedurally place objects in a non-gridded game?

    - by nickbadal
    This is a follow-up question to this question. I mistakenly worded the question but got a good answer before I could correct myself, so I didn't want to delete it. Sorry! Now that I know that it is possible, I'd like to implement procedural world generation, but I don't want it to look gridded or blocky, where everything is obviously placed on an integer grid. I know that you can do this in gridded worlds by inputting a square's x and y into a noise function, or similar, but how can I generate more natural-looking object placement using procedural methods? This is in the context of an adventure game, if it matters.
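
    One common trick, sketched below in C# with illustrative names: still generate candidates from a coarse grid (so the world stays deterministic and easy to chunk), but offset each object inside its cell by a pseudo-random jitter derived from the cell coordinates. The underlying grid then disappears visually, and the same cell always produces the same placement.

      static class Scatter
      {
          // Deterministic hash of integer cell coordinates to a value in [0, 1).
          static float Hash01(int x, int y, int salt)
          {
              unchecked
              {
                  uint h = (uint)x * 374761393u + (uint)y * 668265263u + (uint)salt * 974634611u;
                  h = (h ^ (h >> 13)) * 1274126177u;
                  h ^= h >> 16;
                  return (h & 0xFFFFFF) / 16777216f;      // 24 bits mapped into [0, 1)
              }
          }

          // Decide whether grid cell (cx, cy) contains an object at all, so density varies too.
          public static bool CellHasObject(int cx, int cy, float density)
          {
              return Hash01(cx, cy, 0) < density;
          }

          // World position for the object in cell (cx, cy), jittered inside the cell
          // so the placement no longer looks gridded.
          public static (float X, float Y) ObjectPosition(int cx, int cy, float cellSize)
          {
              float jx = Hash01(cx, cy, 1) * cellSize;
              float jy = Hash01(cx, cy, 2) * cellSize;
              return (cx * cellSize + jx, cy * cellSize + jy);
          }
      }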

  • Assigning a colour to imported .obj files that use the default material

    - by Salino
    I am having a problem with assigning a colour to the different meshes that I have on one object. The technique that I have used is the first approach in "Is it possible to export a simulation (animation) from Blender to Unity?". So what I would like to do is the following: I have about 107 meshes that are different frames from my shape key animation of my Blender model. What I would like is for the first mesh to be bright green, with the colour turning white/greyish by around the 40th mesh. The best would be if I could assign every mesh a colour by hand; however, they all use the default material, and if I assign the object a colour, the whole "animation" ends up in that single colour.
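
    If the frame meshes end up as child renderers under one parent in Unity, a short C# script along these lines can tint them from bright green toward white without touching 107 materials by hand. This assumes the material exposes a colour property (the Standard shader's _Color does) and that GetComponentsInChildren returns the frames in the intended order; if it does not, sort them by name first.

      using UnityEngine;

      // Attach to the parent object that holds the frame meshes as children.
      public class FrameTint : MonoBehaviour
      {
          public Color startColor = Color.green;
          public Color endColor = Color.white;

          void Start()
          {
              MeshRenderer[] frames = GetComponentsInChildren<MeshRenderer>();
              for (int i = 0; i < frames.Length; i++)
              {
                  // 0 at the first frame, 1 at the last; Lerp blends green -> white.
                  float t = frames.Length > 1 ? (float)i / (frames.Length - 1) : 0f;

                  // Accessing .material creates a per-renderer material instance, so each
                  // frame gets its own colour even though they all started with the same
                  // default material.
                  frames[i].material.color = Color.Lerp(startColor, endColor, t);
              }
          }
      }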

  • Why is my animation getting aborted?

    - by Homer_Simpson
    I have a class named Animation which handles my animations. The animation class can be called from multiple other classes. For example, the class Player.cs can call the animation class like this:

      Animation Playeranimation;
      Playeranimation = new Animation(TimeSpan.FromSeconds(2.5f), 80, 40, Animation.Sequences.forwards, 0, 5, false, true);

      //updating the animation
      public void Update(GameTime gametime)
      {
          Playeranimation.Update(gametime);
      }

      //drawing the animation
      public void Draw(SpriteBatch batch)
      {
          playeranimation.Draw(batch, PlayerAnimationSpritesheet, PosX, PosY, 0, SpriteEffects.None);
      }

    The class Lion.cs can call the animation class with the same code; only the animation parameters change, because it's another animation that should be played:

      Animation Lionanimation;
      Lionanimation = new Animation(TimeSpan.FromSeconds(2.5f), 100, 60, Animation.Sequences.forwards, 0, 8, false, true);

    Other classes can call the animation class with the same code as the Player class. But sometimes I have trouble with the animations. If an animation is running and shortly afterwards another class calls the animation class too, the second animation starts but the first animation gets aborted. In this case, the first animation couldn't run until its end because another class started a new instance of the animation class. Why does an animation sometimes get aborted when another animation starts? How can I solve this problem? My animation class:

      public class Animation
      {
          private int _animIndex, framewidth, frameheight, start, end;
          private TimeSpan PassedTime;
          private List<Rectangle> SourceRects = new List<Rectangle>();
          private TimeSpan Duration;
          private Sequences Sequence;
          public bool Remove;
          private bool DeleteAfterOneIteration;

          public enum Sequences { forwards, backwards, forwards_backwards, backwards_forwards }

          private void forwards()
          {
              for (int i = start; i < end; i++)
                  SourceRects.Add(new Rectangle(i * framewidth, 0, framewidth, frameheight));
          }

          private void backwards()
          {
              for (int i = start; i < end; i++)
                  SourceRects.Add(new Rectangle((end - 1 - i) * framewidth, 0, framewidth, frameheight));
          }

          private void forwards_backwards()
          {
              for (int i = start; i < end - 1; i++)
                  SourceRects.Add(new Rectangle(i * framewidth, 0, framewidth, frameheight));
              for (int i = start; i < end; i++)
                  SourceRects.Add(new Rectangle((end - 1 - i) * framewidth, 0, framewidth, frameheight));
          }

          private void backwards_forwards()
          {
              for (int i = start; i < end - 1; i++)
                  SourceRects.Add(new Rectangle((end - 1 - i) * framewidth, 0, framewidth, frameheight));
              for (int i = start; i < end; i++)
                  SourceRects.Add(new Rectangle(i * framewidth, 0, framewidth, frameheight));
          }

          public Animation(TimeSpan duration, int frame_width, int frame_height, Sequences sequences,
                           int start_interval, int end_interval, bool remove, bool deleteafteroneiteration)
          {
              Remove = remove;
              DeleteAfterOneIteration = deleteafteroneiteration;
              framewidth = frame_width;
              frameheight = frame_height;
              start = start_interval;
              end = end_interval;
              switch (sequences)
              {
                  case Sequences.forwards: { forwards(); break; }
                  case Sequences.backwards: { backwards(); break; }
                  case Sequences.forwards_backwards: { forwards_backwards(); break; }
                  case Sequences.backwards_forwards: { backwards_forwards(); break; }
              }
              Duration = duration;
              Sequence = sequences;
          }

          public void Update(GameTime dt)
          {
              PassedTime += dt.ElapsedGameTime;
              if (PassedTime > Duration)
              {
                  PassedTime -= Duration;
              }
              var percent = PassedTime.TotalSeconds / Duration.TotalSeconds;
              if (DeleteAfterOneIteration == true)
              {
                  if (_animIndex >= SourceRects.Count) Remove = true;
                  _animIndex = (int)Math.Round(percent * (SourceRects.Count));
              }
              else
              {
                  _animIndex = (int)Math.Round(percent * (SourceRects.Count - 1));
              }
          }

          public void Draw(SpriteBatch batch, Texture2D Textures, float PositionX, float PositionY, float Rotation, SpriteEffects Flip)
          {
              if (DeleteAfterOneIteration == true)
              {
                  if (_animIndex >= SourceRects.Count) return;
              }
              batch.Draw(Textures, new Rectangle((int)PositionX, (int)PositionY, framewidth, frameheight),
                         SourceRects[_animIndex], Color.White, Rotation,
                         new Vector2(framewidth / 2.0f, frameheight / 2.0f), Flip, 0f);
          }
      }

  • Multithreaded game fails on SwapBuffers in render thread at exit

    - by user782220
    The render loop and the Windows message loop run on separate threads. The way the program exits is that after PostQuitMessage is called in WM_DESTROY, the message-loop thread signals the render-loop thread to exit. As far as I can tell, before the render-loop thread can even process the signal, it tries SwapBuffers and that fails. My question: is there something about how Windows processes WM_DESTROY and WM_QUIT, maybe in DefWindowProc, that causes various objects associated with rendering to go away even though I haven't explicitly deleted anything? That would explain why the rendering thread is making bad calls at exit.

  • Particle system lifetimes in OpenGL ES 2

    - by user16547
    I don't know how to work with my particles' lifetimes. My design is simple: each particle has a position, a speed and a lifetime. At each frame, each particle should update its position like this: position.y = position.y + INCREMENT * speed.y However, I'm having difficulties choosing my INCREMENT. If I set it to some sort of FRAME_COUNT, it looks fine until FRAME_COUNT has to be set back to 0. The effect is that all particles start over at the same time, which I don't want to happen; I want my particles to live more or less independently of each other. That's the reason I need a lifetime, but I don't know how to make use of it. I added a lifetime for each particle in the particle buffer, but I also need an individual increment that's updated on each frame, so that when PARTICLE_INCREMENT = PARTICLE_LIFETIME, each increment goes back to 0. How can I achieve something like that?
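
    A sketch of the usual per-particle bookkeeping, written in C# for brevity (the GL side only consumes the resulting positions): every particle carries its own age and lifetime, ages by the frame's delta time, and respawns on its own schedule, so the particles never restart in lockstep and no global frame counter is needed.

      using System;

      class Particle
      {
          public float X, Y;            // current position
          public float StartX, StartY;  // where this particle (re)spawned
          public float VelX, VelY;      // units per second
          public float Age;             // seconds since the last respawn
          public float Lifetime;        // seconds this particle lives
      }

      class ParticleSystem
      {
          readonly Particle[] particles;
          readonly Random rng = new Random();

          public ParticleSystem(int count)
          {
              particles = new Particle[count];
              for (int i = 0; i < count; i++)
              {
                  particles[i] = new Particle();
                  Respawn(particles[i]);
                  // Random initial age so the particles are out of phase from the start.
                  particles[i].Age = (float)rng.NextDouble() * particles[i].Lifetime;
              }
          }

          void Respawn(Particle p)
          {
              p.StartX = 0f; p.StartY = 0f;                   // emitter position
              p.VelX = (float)(rng.NextDouble() - 0.5) * 2f;  // sideways spread
              p.VelY = 1f + (float)rng.NextDouble();          // mostly upwards
              p.Lifetime = 1f + (float)rng.NextDouble() * 2f; // 1 to 3 seconds
              p.Age = 0f;
          }

          // dt = seconds elapsed since the previous frame.
          public void Update(float dt)
          {
              foreach (var p in particles)
              {
                  p.Age += dt;
                  if (p.Age >= p.Lifetime)
                      Respawn(p);            // only this particle restarts

                  // Position comes from the particle's own age, not a global frame count.
                  p.X = p.StartX + p.VelX * p.Age;
                  p.Y = p.StartY + p.VelY * p.Age;
              }
          }
      }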

  • Removing/Adding a specific variable from an object inside a JavaScript array? [migrated]

    - by hustlerinc
    I have a map array with objects stuffed with variables, looking like this: var map = [ [{ground:0, object:1}, {ground:0, item:2}, {ground:0, object:1, item:2}], [{ground:0, object:1}, {ground:0, item:2}, {ground:0, object:1, item:2}] ]; Now I would like to be able to delete and add one of the variables, like item:2. 1) What would I use to delete specific variables? 2) What would I use to add specific variables? I just need two short lines of code; the rest, like detecting if and where to execute, I've figured out. I've tried delete map[i][j].item; with no results. Help appreciated.

  • How do I make cars on a one-dimensional track avoid collisions?

    - by user990827
    Using three.js, I use a simple spline to represent a road. Cars can only move forward on the spline. A car should be able to slow down behind a slow-moving car. I know how to calculate the distance between two cars, but how do I calculate the proper speed in each game update? At the moment I simply do something like this: this.speed += (this.maxSpeed - this.speed) * 0.02; // linear interpolation to maxSpeed // the position on the spline (0.0 - 1.0) this.position += this.speed / this.road.spline.getLength(); This works. But how do I implement the slow-down part? // transform from floats (0.0 - 1.0) into actual units var carInFrontPosition = carInFront.position * this.road.spline.getLength(); var myPosition = this.position * this.road.spline.getLength(); var distance = carInFrontPosition - myPosition; // WHAT TO DO HERE WITH THE DISTANCE? // HOW TO CALCULATE MY NEW SPEED? Obviously I have to somehow take the current speed of the cars into account in the calculation. Besides different maxSpeeds, I want each car to also have a different mass (causing it to accelerate slower or faster), but this mass then also has to be taken into account for braking (slowing down) so the cars don't crash into each other.
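
    One simple way to close that loop, sketched in C# rather than JavaScript and with illustrative constants and fields: derive a target speed from the gap to the car in front (the leader's speed when the gap is tight, the follower's own maxSpeed when the road ahead is clear), then let mass limit how much the speed may change per update, which covers both acceleration and braking.

      class Car
      {
          public float Position;   // distance along the spline, in world units
          public float Speed;      // current speed, units per second
          public float MaxSpeed;
          public float Mass;       // heavier cars accelerate and brake more slowly
      }

      static class Following
      {
          const float SafeGap = 8f;      // desired bumper-to-bumper distance
          const float SlowingGap = 30f;  // start adapting speed inside this distance
          const float EngineForce = 40f; // tuning constant for acceleration/braking

          // carInFront may be null if nothing is ahead on the spline.
          public static void Update(Car car, Car carInFront, float dt)
          {
              float targetSpeed = car.MaxSpeed;

              if (carInFront != null)
              {
                  float gap = carInFront.Position - car.Position;
                  if (gap < SafeGap)
                  {
                      targetSpeed = 0f;    // dangerously close: aim to stop
                  }
                  else if (gap < SlowingGap)
                  {
                      // Blend from the leader's speed (at SafeGap) up to our own
                      // maximum (at SlowingGap), so we settle in behind slower cars.
                      float t = (gap - SafeGap) / (SlowingGap - SafeGap);
                      targetSpeed = carInFront.Speed + t * (car.MaxSpeed - carInFront.Speed);
                      if (targetSpeed > car.MaxSpeed) targetSpeed = car.MaxSpeed;
                  }
              }

              // Mass limits how quickly the speed can change (a = F / m).
              float maxDelta = (EngineForce / car.Mass) * dt;
              float diff = targetSpeed - car.Speed;
              if (diff > maxDelta) diff = maxDelta;
              else if (diff < -maxDelta) diff = -maxDelta;
              car.Speed += diff;

              car.Position += car.Speed * dt;
          }
      }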

  • Can someone explain the (reasons for the) implications of column vs row major in multiplication/concatenation?

    - by sebf
    I am trying to learn how to construct view and projection matrices, and keep running into difficulties in my implementation owing to my confusion about the two standards for matrices. I know how to multiply a matrix, and I can see that transposing before multiplication would completely change the result, hence the need to multiply in a different order. What I don't understand, though, is what's meant by only a 'notational convention': from the articles here and here the authors appear to assert that it makes no difference to how the matrix is stored or transferred to the GPU, but on the second page that matrix is clearly not equivalent to how it would be laid out in memory for row-major; and if I look at a populated matrix in my program, I see the translation components occupying the 4th, 8th and 12th elements. Given that "post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices", why, in the following snippet of code: Matrix4 r = t3 * t2 * t1; Matrix4 r2 = t1.Transpose() * t2.Transpose() * t3.Transpose(); does r != r2, and why does pos3 != pos for: Vector4 pos = wvpM * new Vector4(0f, 15f, 15f, 1); Vector4 pos3 = wvpM.Transpose() * new Vector4(0f, 15f, 15f, 1); Does the multiplication process change depending on whether the matrices are row- or column-major, or is it just the order (for an equivalent effect)? One thing that isn't helping this become any clearer is that, when provided to DirectX, my column-major WVP matrix is used successfully to transform vertices with the HLSL call mul(vector,matrix), which should result in the vector being treated as row-major, so how can the column-major matrix provided by my math library work?
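
    On the first snippet: the transpose identity (A * B * C)^T = C^T * B^T * A^T means that r2, which transposes each factor and reverses their order, is exactly the transpose of r, and a product of transforms is generally not symmetric, so r2 != r. A quick check in C# using System.Numerics (not the poster's Matrix4 library) that illustrates this:

      using System;
      using System.Numerics;

      class TransposeOrderDemo
      {
          static void Main()
          {
              Matrix4x4 t1 = Matrix4x4.CreateTranslation(1f, 2f, 3f);
              Matrix4x4 t2 = Matrix4x4.CreateRotationY(0.7f);
              Matrix4x4 t3 = Matrix4x4.CreateScale(2f, 1f, 0.5f);

              Matrix4x4 r = t3 * t2 * t1;

              // Transposing every factor and reversing the order gives the transpose
              // of the whole product: (t3 * t2 * t1)^T == t1^T * t2^T * t3^T.
              Matrix4x4 r2 = Matrix4x4.Transpose(t1) * Matrix4x4.Transpose(t2)
                           * Matrix4x4.Transpose(t3);

              Console.WriteLine(r);                        // the original product
              Console.WriteLine(r2);                       // differs from r: it is r transposed
              Console.WriteLine(Matrix4x4.Transpose(r2));  // matches r again, up to rounding
          }
      }

    The same reasoning applies to the vectors: wvpM.Transpose() * v applied to a column vector has the same components as treating v as a row vector on the other side of wvpM, so pos and pos3 only agree if the side the vector is multiplied on is swapped along with the transpose.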

  • Best way to generate pieces in match-3 games, and then track them?

    - by JonLim
    I've been working on a match-3 style game in ActionScript using Flixel, and so far I've been able to build the core mechanics of the game, including board generation, piece generation, piece swapping and movement, and checking algorithms. However, I am now running into issues with clearing out pieces, letting the pieces above fall down, and generating new pieces. The reason I'm running into these issues is that when all of the pieces are generated, the pertinent values (position, sprite ID, and sprite object) are pushed into an array that helps me track everything, all the time. When pieces are moved, I swap the values of the corresponding arrays and life goes on. And that array is the core of my problem: if a row in the middle of the board clears out, ideally all of the pieces above the cleared pieces should fall down to take their place, and new pieces should be generated at the top and also fall into place. Except if I try to do that now, all the pieces can fall down, but then I'd have to bump all of their values into the right arrays (oh god my head) and then generate new pieces and fit THOSE into the correct place in the array. Am I overthinking this? Or is there a far better way to track these pieces? Thanks guys!
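
    A sketch of the usual approach, in C# rather than ActionScript (names illustrative): treat the grid array as the single source of truth, collapse each column in place after a clear, and let the sprites be simple views that animate toward the position implied by their current cell, so there is no second set of bookkeeping to keep in sync.

      // Match-3 grid collapse. board[col, row] holds a piece id, row 0 at the top;
      // EMPTY marks a cell that was just cleared.
      class Board
      {
          const int EMPTY = -1;
          readonly int[,] board;
          readonly int cols, rows;
          readonly System.Random rng = new System.Random();

          public Board(int cols, int rows)
          {
              this.cols = cols;
              this.rows = rows;
              board = new int[cols, rows];
          }

          // After matches have been marked EMPTY, slide every column down and refill
          // the holes left at the top with newly generated pieces.
          public void Collapse()
          {
              for (int c = 0; c < cols; c++)
              {
                  int write = rows - 1;                     // next row to fill, bottom up
                  for (int r = rows - 1; r >= 0; r--)
                  {
                      if (board[c, r] != EMPTY)
                          board[c, write--] = board[c, r];  // piece falls to lowest free row
                  }
                  while (write >= 0)
                      board[c, write--] = rng.Next(0, 5);   // new pieces enter from the top
              }
          }
      }

    After Collapse runs, a sprite sitting in cell (c, r) just tweens toward (c * tileWidth, r * tileHeight); the array itself never has to be patched up around the sprites.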
