Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • How should bots be recognised in a game?

    - by Bane
    I'm interested in how bots are usually written. Here's my situation: I plan to make an online 2D mecha game in HTML5, with the server side done in Node. It is intended to be multiplayer, but I also want to make bots in case there aren't enough players. How does my game logic see them: as players or as bots? Is there a standard by which I should make them? Any general tips and hints are also welcome.
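
    One common answer, sketched in Java for illustration (an editorial example with hypothetical names; the post's server would be JavaScript under Node): the game logic sees only Players, and a bot is just a Player whose commands come from an AI routine instead of a network socket.

        // Editorial sketch: bots and humans share one Player type; only the input source differs.
        interface InputSource {
            String nextCommand(long tick); // e.g. "left", "right", "fire"
        }

        // A human player's commands arrive over the network (stubbed here).
        class NetworkInput implements InputSource {
            public String nextCommand(long tick) { return "idle"; }
        }

        // A bot computes its commands from game state instead.
        class BotInput implements InputSource {
            public String nextCommand(long tick) {
                return (tick % 2 == 0) ? "left" : "fire"; // trivial placeholder AI
            }
        }

        class Player {
            final InputSource input;
            Player(InputSource input) { this.input = input; }
        }

        public class BotDemo {
            public static void main(String[] args) {
                Player human = new Player(new NetworkInput());
                Player bot = new Player(new BotInput());
                for (long tick = 0; tick < 3; tick++) {
                    System.out.println("human: " + human.input.nextCommand(tick)
                            + ", bot: " + bot.input.nextCommand(tick));
                }
            }
        }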

    Read the article

  • 2D Side scroller collision detection

    - by Shanon Simmonds
    I am trying to do collision detection between objects and tiles, but the tiles do not have their own x and y positions; they are just rendered at the x and y they are given. There is an array of integers holding the ids of the tiles to use (the ids come from an image, where different colors are assigned to different tiles):

        int x0 = camera.x / 16;
        int y0 = camera.y / 16;
        int x1 = (camera.x + screen.width) / 16;
        int y1 = (camera.y + screen.height) / 16;
        for(int y = y0; y < y1; y++) {
            if(y < 0 || y >= height) continue; // height is the height of the level
            for(int x = x0; x < x1; x++) {
                if(x < 0 || x >= width) continue; // width is the width of the level
                getTile(x, y).render(screen, x * 16, y * 16);
            }
        }

    I tried using the level's getTile method to check the tile the object was about to advance into, but it seems to only work in some directions: it doesn't collide properly in every direction. This is how I tested for a collision in the object's class:

        if(!level.getTile((x + xa) / 16, (y + ya) / 16).isSolid()) {
            x += xa;
            y += ya;
        }

    EDIT: xa and ya represent the direction as well as the movement: if xa is negative the object is moving left, if positive it is moving right, and likewise ya is negative for up and positive for down. Any ideas on what I'm doing wrong, and fixes, would be greatly appreciated.
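
    An editorial sketch of the usual fix (an assumption, not from the post): Java's integer division truncates toward zero, so (x + xa) / 16 maps, say, -5 to tile 0 instead of tile -1, which breaks exactly the leftward/upward cases, and a single-point test also ignores the object's width and height. Floor division plus testing the corners of the bounding box addresses both; Level and Tile below are minimal stand-ins for the post's types.

        // Editorial sketch: floor division plus corner tests for tile collision.
        class Tile {
            private final boolean solid;
            Tile(boolean solid) { this.solid = solid; }
            boolean isSolid() { return solid; }
        }

        class Level {
            final int width = 64, height = 64; // level size in tiles
            Tile getTile(int tx, int ty) {
                // Treat out-of-bounds as solid so objects cannot leave the level.
                if (tx < 0 || tx >= width || ty < 0 || ty >= height) return new Tile(true);
                return new Tile(false);
            }
        }

        public class TileCollision {
            // True if a w-by-h object at (x, y) can move by (xa, ya).
            static boolean canMove(Level level, int x, int y, int w, int h, int xa, int ya) {
                int x0 = Math.floorDiv(x + xa, 16);          // floorDiv rounds toward
                int y0 = Math.floorDiv(y + ya, 16);          // negative infinity, so
                int x1 = Math.floorDiv(x + xa + w - 1, 16);  // negative coordinates map
                int y1 = Math.floorDiv(y + ya + h - 1, 16);  // to the correct tile
                for (int ty = y0; ty <= y1; ty++)
                    for (int tx = x0; tx <= x1; tx++)
                        if (level.getTile(tx, ty).isSolid()) return false;
                return true;
            }

            public static void main(String[] args) {
                System.out.println(canMove(new Level(), 30, 30, 16, 16, -2, 0)); // true
            }
        }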

    Read the article

  • Issue with DFS implementation in Objective-C

    - by Hemant
    I am trying to do something like this. Below is my code:

        -(id) init{
            if( (self=[super init]) ) {
                bubbles_Arr = [[NSMutableArray alloc] initWithCapacity: 9];
                [bubbles_Arr insertObject:[NSMutableArray arrayWithObjects:@"1",@"1",@"1",@"1",@"1",nil] atIndex:0];
                [bubbles_Arr insertObject:[NSMutableArray arrayWithObjects:@"3",@"3",@"5",@"5",@"1",nil] atIndex:1];
                [bubbles_Arr insertObject:[NSMutableArray arrayWithObjects:@"5",@"3",@"5",@"3",@"1",nil] atIndex:2];
                [bubbles_Arr insertObject:[NSMutableArray arrayWithObjects:@"5",@"3",@"5",@"3",@"1",nil] atIndex:3];
                [bubbles_Arr insertObject:[NSMutableArray arrayWithObjects:@"1",@"1",@"1",@"1",@"1",nil] atIndex:4];
                [bubbles_Arr insertObject:[NSMutableArray arrayWithObjects:@"5",@"5",@"3",@"5",@"1",nil] atIndex:5];
                [bubbles_Arr insertObject:[NSMutableArray arrayWithObjects:@"5",@"5",@"5",@"5",@"5",nil] atIndex:6];
                [bubbles_Arr insertObject:[NSMutableArray arrayWithObjects:@"5",@"5",@"5",@"5",@"5",nil] atIndex:7];
                [bubbles_Arr insertObject:[NSMutableArray arrayWithObjects:@"5",@"5",@"5",@"5",@"5",nil] atIndex:8];
                NOCOLOR = @"-1";
                R = 9;
                C = 5;
                [NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(testting) userInfo:Nil repeats:NO];
            }
            return self;
        }

        -(void)testting{
            // NSLog(@"dataArray---- %@",dataArray.description);
            int startR = 0;
            int startC = 0;
            int color = 1; // red
            // NSString *color = @"5";

            // reset visited matrix to false.
            for(int i = 0; i < R; i++)
                for(int j = 0; j < C; j++)
                    visited[i][j] = FALSE;

            // reset count
            count = 0;
            [self dfs:startR :startC :color :false];
            NSLog(@"count--- %d", count);
            NSLog(@"test--- %@", bubbles_Arr);
        }

        -(void)dfs:(int)ro:(int)co:(int)colori:(BOOL)set{
            for(int dr = -1; dr <= 1; dr++)
                for(int dc = -1; dc <= 1; dc++)
                    if((dr == 0 ^ dc == 0) && [self ok:ro+dr :co+dc]) // 4 neighbors
                    {
                        int nr = ro+dr;
                        int nc = co+dc;
                        NSLog(@"-- %d ---- %d", [[[bubbles_Arr objectAtIndex:nr] objectAtIndex:nc] integerValue], colori);
                        if ((([[[bubbles_Arr objectAtIndex:nr] objectAtIndex:nc] integerValue] == 1
                              || [[[bubbles_Arr objectAtIndex:nr] objectAtIndex:nc] isEqualToString:@"1"])
                             && !visited[nr][nc])) {
                            visited[nr][nc] = true;
                            count++;
                            [self dfs:nr :nc :colori :set];
                            if(count > 2) {
                                [[bubbles_Arr objectAtIndex:nr] replaceObjectAtIndex:nc withObject:NOCOLOR];
                                [bubbles[nc+1][nr+1] setTexture:[[CCTextureCache sharedTextureCache] addImage:@"gray_tiger.png"]];
                            }
                        }
                    }
        }

        -(BOOL)ok:(int)r:(int)c{
            return r >= 0 && r < R && c >= 0 && c < C;
        }

    But it is only working from left to right, not from right to left, and it is also skipping the first object. Thanks in advance.
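
    An editorial sketch of the underlying algorithm (in Java for illustration; it is not the post's code): two likely causes of "skipping the first object" and direction-dependent behavior are that the start cell itself is never visited or counted before recursing (the method only ever touches neighbors), and that the match test hard-codes the value 1 rather than comparing against the colori parameter. A flood fill that handles both looks like this:

        public class FloodFill {
            static int count;
            static boolean[][] visited;
            static int[][] grid;

            static void dfs(int r, int c, int color) {
                if (r < 0 || r >= grid.length || c < 0 || c >= grid[0].length) return; // bounds
                if (visited[r][c] || grid[r][c] != color) return; // seen already, or wrong color
                visited[r][c] = true; // mark the cell itself, including the start cell
                count++;
                dfs(r - 1, c, color); // up
                dfs(r + 1, c, color); // down
                dfs(r, c - 1, color); // left
                dfs(r, c + 1, color); // right
            }

            public static void main(String[] args) {
                grid = new int[][]{{1, 1, 3}, {3, 1, 3}, {1, 1, 1}};
                visited = new boolean[grid.length][grid[0].length];
                dfs(0, 0, 1);
                System.out.println(count); // 6 connected cells of color 1
            }
        }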

    Read the article

  • Jittery Movement, Uncontrollably Rotating + Front of Sprite?

    - by Vipar
    So I've been looking around to try and figure out how I make my sprite face my mouse. So far the sprite moves to where my mouse is by some vector math, and now I'd like it to rotate and face the mouse as it moves. From what I've found, this calculation is the one that keeps reappearing: Sprite Rotation = Atan2(Direction Vector's Y, Direction Vector's X). I express it like so: sp.Rotation = (float)Math.Atan2(directionV.Y, directionV.X); If I just go with the above, the sprite jitters left and right ever so slightly but never rotates out of that position. Since Atan2 returns the rotation in radians, I found another piece of calculation to add to the above which turns it into degrees: sp.Rotation = (float)Math.Atan2(directionV.Y, directionV.X) * 180 / PI; Now the sprite rotates. The problem is that it spins uncontrollably the closer it comes to the mouse. One of the problems with the above calculation is that it assumes +y goes up rather than down on the screen. I recorded both behaviors in these two videos: the first part is the slightly jittery movement (a lot more visible when not recording) and then with the added rotation: Jittery Movement. So my questions are: How do I fix that weird jittery movement when the sprite stands still? Some have suggested making some kind of "snap" where I set the position of the sprite directly to the mouse position when it's really close, but no matter what I do the snapping is noticeable. How do I make the sprite stop spinning uncontrollably? Is it possible to simply define the front of the sprite and use that to make it "face" the right way?
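
    An editorial note with a sketch (Java for illustration; the post uses XNA): XNA's sprite Rotation is specified in radians, so the degree conversion is itself a likely cause of the uncontrollable spinning, and the radians version should be kept. The residual jitter when the sprite sits on the mouse comes from computing atan2 of a near-zero direction vector, whose angle flips wildly; a small dead zone avoids it. This assumes y grows downward on screen, as in XNA:

        // Editorial sketch: keep atan2's radians and hold the last rotation
        // when the target is too close to give a stable direction.
        public class Aim {
            static double aimRotation(double dx, double dy, double current) {
                double distSq = dx * dx + dy * dy;
                if (distSq < 4.0 * 4.0) return current; // dead zone of 4 pixels
                return Math.atan2(dy, dx);              // radians, y-down coordinates
            }

            public static void main(String[] args) {
                System.out.println(aimRotation(100, 100, 0)); // ~0.785 (45 degrees down-right)
                System.out.println(aimRotation(1, 1, 0.5));   // inside dead zone: keeps 0.5
            }
        }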

    Read the article

  • Creating Rectangle-based buttons with OnClick events

    - by Djentleman
    As the title implies, I want a Button class with an OnClick event handler. It should fire off connected events when it is clicked. This is as far as I've made it:

        public class Button
        {
            public event EventHandler OnClick;

            public Rectangle Rec { get; set; }
            public string Text { get; set; }

            public Button(Rectangle rec, string text)
            {
                this.Rec = rec;
                this.Text = text;
            }
        }

    I have no clue what I'm doing with regards to events. I know how to use them, but creating them myself is another matter entirely. I've also made buttons without using events that work on a case-by-case basis. So basically, I want to be able to attach methods to the OnClick EventHandler that will fire when the Button is clicked (i.e., the mouse intersects Rec and the left mouse button is clicked).
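
    A sketch of the pattern in Java for illustration (the post is C#, where the raise step would be checking OnClick for null and calling OnClick(this, EventArgs.Empty) from an update method; all names below are hypothetical): the button stores registered handlers, checks the pointer against its rectangle once per frame, and invokes the handlers on a press.

        import java.awt.Rectangle;
        import java.util.ArrayList;
        import java.util.List;

        class Button {
            final Rectangle rec;
            final String text;
            private final List<Runnable> clickHandlers = new ArrayList<>();

            Button(Rectangle rec, String text) { this.rec = rec; this.text = text; }

            void onClick(Runnable handler) { clickHandlers.add(handler); }

            // Call once per frame with the current mouse state.
            void update(int mouseX, int mouseY, boolean justPressed) {
                if (justPressed && rec.contains(mouseX, mouseY)) {
                    for (Runnable h : clickHandlers) h.run(); // fire connected events
                }
            }
        }

        public class ButtonDemo {
            public static void main(String[] args) {
                Button b = new Button(new Rectangle(10, 10, 100, 30), "Fire");
                b.onClick(() -> System.out.println("clicked!"));
                b.update(50, 20, true); // simulated click inside the rectangle
            }
        }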

    Read the article

  • 2D scene graph not transforming relative to parent

    - by Dr.Denis McCracleJizz
    I am currently in the process of coding my own 2D scene graph, which is basically a port of Flash's render engine. The problem I have right now is that my rendering doesn't seem to be working properly. This code creates the localTransform property for each DisplayObject:

        Matrix m_transform = Matrix.CreateRotationZ(rotation)
            * Matrix.CreateScale(scaleX, scaleY, 1)
            * Matrix.CreateTranslation(new Vector3(x, y, z));

    This is my render code:

        float dRotation;
        Vector2 dPosition, dScale;
        Matrix transform;

        transform = this.localTransform;
        if (parent != null)
            transform = localTransform * parent.localTransform;

        DecomposeMatrix(ref transform, out dPosition, out dRotation, out dScale);
        spriteBatch.Draw(this.texture, dPosition, null, Color.White, dRotation,
            new Vector2(originX, originY), dScale, SpriteEffects.None, 0.0f);

    Here is the result when I add the Stage, then a first DisplayObjectContainer to it, and then a second one. It may look fine, but the problem lies in the fact that I add the first DisplayObjectContainer at (400, 400) and the second one (the smallest one) inside it at position (0, 0). The child should be rendered right over its parent, but instead it gets rendered within the parent at the same offset the parent has, (400, 400), for some reason. It's just as if I doubled the parent's local matrix and then rendered the second cat there. This is the code I use to loop through the children:

        base.Draw(spriteBatch);
        foreach (DisplayObject childs in _childs)
        {
            childs.Draw(spriteBatch);
        }
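
    An editorial sketch of the usual fix (Java, hypothetical names, not the post's code): a node's world transform has to chain through every ancestor up to the root rather than multiplying by only the immediate parent's local transform once, otherwise deeper nodes inherit the wrong frame. The recursion looks like this (the multiplication order follows XNA's row-vector convention; swap the operands for column vectors):

        public class SceneGraph {
            static double[][] identity() {
                return new double[][]{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
            }
            static double[][] mul(double[][] a, double[][] b) {
                double[][] r = new double[3][3];
                for (int i = 0; i < 3; i++)
                    for (int j = 0; j < 3; j++)
                        for (int k = 0; k < 3; k++)
                            r[i][j] += a[i][k] * b[k][j];
                return r;
            }
            static double[][] translation(double x, double y) {
                double[][] m = identity();
                m[2][0] = x; m[2][1] = y; // row-vector convention
                return m;
            }

            static class Node {
                Node parent;
                double[][] local = identity();
                // World transform chains through every ancestor, not one level.
                double[][] world() {
                    return parent == null ? local : mul(local, parent.world());
                }
            }

            public static void main(String[] args) {
                Node stage = new Node();
                Node container = new Node();
                container.parent = stage;
                container.local = translation(400, 400);
                Node child = new Node(); // local (0, 0) inside the container
                child.parent = container;
                double[][] w = child.world();
                System.out.println(w[2][0] + ", " + w[2][1]); // 400.0, 400.0: directly over its parent
            }
        }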

    Read the article

  • Quaternion based rotation and pivot position

    - by Michael IV
    I can't figure out how to perform matrix rotation using a quaternion while taking the pivot position into account in OpenGL. What I currently get is rotation of the object around some point in space, not around its local pivot, which is what I want. Here is the code (using Java). The quaternion rotation method:

        public void rotateTo3(float xr, float yr, float zr) {
            _rotation.x = xr;
            _rotation.y = yr;
            _rotation.z = zr;
            Quaternion xrotQ = Glm.angleAxis((xr), Vec3.X_AXIS);
            Quaternion yrotQ = Glm.angleAxis((yr), Vec3.Y_AXIS);
            Quaternion zrotQ = Glm.angleAxis((zr), Vec3.Z_AXIS);
            xrotQ = Glm.normalize(xrotQ);
            yrotQ = Glm.normalize(yrotQ);
            zrotQ = Glm.normalize(zrotQ);
            Quaternion acumQuat;
            acumQuat = Quaternion.mul(xrotQ, yrotQ);
            acumQuat = Quaternion.mul(acumQuat, zrotQ);
            Mat4 rotMat = Glm.matCast(acumQuat);
            _model = new Mat4(1);
            scaleTo(_scaleX, _scaleY, _scaleZ);
            _model = Glm.translate(_model, new Vec3(_pivot.x, _pivot.y, 0));
            _model = rotMat.mul(_model); // _model.mul(rotMat); // rotMat.mul(_model);
            _model = Glm.translate(_model, new Vec3(-_pivot.x, -_pivot.y, 0));
            translateTo(_x, _y, _z);
            notifyTranformChange();
        }

    Model matrix scale method:

        public void scaleTo(float x, float y, float z) {
            _model.set(0, x);
            _model.set(5, y);
            _model.set(10, z);
            _scaleX = x;
            _scaleY = y;
            _scaleZ = z;
            notifyTranformChange();
        }

    Translate method:

        public void translateTo(float x, float y, float z) {
            _x = x - _pivot.x;
            _y = y - _pivot.y;
            _z = z;
            _position.x = _x;
            _position.y = _y;
            _position.z = _z;
            _model.set(12, _x);
            _model.set(13, _y);
            _model.set(14, _z);
            notifyTranformChange();
        }

    But this method, in which I don't use a quaternion, works fine:

        public void rotate(Vec3 axis, float angleDegr) {
            _rotation.add(axis.scale(angleDegr));
            // change to GLM:
            Mat4 backTr = new Mat4(1.0f);
            backTr = Glm.translate(backTr, new Vec3(_pivot.x, _pivot.y, 0));
            backTr = Glm.rotate(backTr, angleDegr, axis);
            backTr = Glm.translate(backTr, new Vec3(-_pivot.x, -_pivot.y, 0));
            _model = _model.mul(backTr); // backTr.mul(_model);
            notifyTranformChange();
        }
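
    An editorial observation with a sketch, using only calls that appear in the post's own code (this is an assumption about the fix, not a verified one): the working rotate() builds the whole pivot sandwich translate(pivot) * rotation * translate(-pivot) inside one matrix and only then applies it to the model, whereas rotateTo3() multiplies rotMat against the entire model between the two translations, which breaks the sandwich. Rebuilding the quaternion path the same way as rotate() would look like:

        Mat4 backTr = new Mat4(1.0f);
        backTr = Glm.translate(backTr, new Vec3(_pivot.x, _pivot.y, 0));
        backTr = backTr.mul(Glm.matCast(acumQuat)); // the quaternion's matrix in the slot Glm.rotate occupied
        backTr = Glm.translate(backTr, new Vec3(-_pivot.x, -_pivot.y, 0));
        _model = _model.mul(backTr);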

    Read the article

  • Confusion about Rotation matrices from Euler Angles

    - by xEnOn
    I am trying to learn more about Euler angles so as to help myself understand how I can control my camera better in the game. I came across the following formula that converts Euler angles to rotation matrices. In the equation, I can see that the first matrix from the left is the rotation matrix about the x-axis, the second is about the y-axis, and the third is about the z-axis. From my understanding of ordinary matrix transformations, the later transformation is always applied on the right-hand side. If I'm right about this, then the above equation should have a rotation order that starts by rotating about the z-axis, then the y-axis, then finally the x-axis. But from the symbols it seems that the rotation order starts with the x-axis, then the y-axis, then finally the z-axis. What should the actual order of the rotation be? Also, I am confused about whether the input vector, in this case, would be a row vector on the left or a column vector on the right.
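
    For reference, a reconstruction of the composition the post describes (the original formula image is not reproduced in this listing, so the convention below is an editorial assumption):

        R = R_x(\alpha) \, R_y(\beta) \, R_z(\gamma), \qquad v' = R \, v

    With column vectors multiplied on the right, the matrix nearest the vector acts first, so this product rotates about z, then y, then finally x; with row vectors on the left (v' = v R), the same product applies x first. Both readings are self-consistent, which is exactly the ambiguity the post runs into.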

    Read the article

  • Adaptive Characters: AI Solution Needs a Problem

    - by Roger F. Gay
    Have sophisticated adaptive programming, will travel - so to speak. I'm part of a group that developed sophisticated learning/adaptive software for robotics. The system "thinks" via its simulator, building and adapting code on its own, and then carries out the best solution. The software can also adapt to new situations, etc. http://mensnewsdaily.com/2007/05/16/robobusiness-robots-with-imagination/ It's easy to imagine using it with automated game characters that adapt to the player's moves and style; the easiest example would be fighting. The more the simulated fighter fights with the human player, the more it learns to counter that player's fighting skills. But there should be more. Anyone have any ideas as to how adaptive characters might be interesting in games?

    Read the article

  • Many sources of movement in an entity system

    - by Sticky
    I'm fairly new to the idea of entity systems, having read a bunch of stuff (most usefully, this great blog and this answer). Though I'm having a little trouble understanding how something as simple as manipulating the position of an object by an undefined number of sources should work. That is, I have my entity, which has a position component. I then have some event in the game which tells this entity to move a given distance, in a given time. These events can happen at any time, and will have different values for position and time. The result is that they'd be compounded together. In a traditional OO solution, I'd have some sort of MoveBy class that contains the distance/time, and an array of those inside my game object class. Each frame, I'd iterate through all the MoveBys and apply each to the position; if a MoveBy has reached its finish time, I'd remove it from the array. With the entity system, I'm a little confused as to how I should replicate this sort of behavior. If there were just one of these at a time, instead of being able to compound them together, it'd be fairly straightforward (I believe) and look something like this:

        PositionComponent containing x, y
        MoveByComponent containing x, y, time
        Entity which has both a PositionComponent and a MoveByComponent
        MoveBySystem that looks for an entity with both these components, and adds the value of the MoveByComponent to the PositionComponent; when the time is reached, it removes the component from that entity

    I'm a bit confused as to how I'd do the same thing with many move-bys. My initial thoughts are that I would have:

        PositionComponent, MoveByComponent the same as above
        MoveByCollectionComponent which contains an array of MoveByComponents
        MoveByCollectionSystem that looks for an entity with a PositionComponent and a MoveByCollectionComponent, iterating through the MoveByComponents inside it, applying/removing as necessary

    I guess this is a more general problem of having many of the same component and wanting a corresponding system to act on each one. My entities store their components in a hash of component type to component, so each entity strictly has only one component of a particular type. Is this the right way to be looking at this? Should an entity only ever have one component of a given type at all times?
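
    A sketch of the MoveByCollection idea in Java for illustration (all names hypothetical, mirroring the post's own proposal): the collection component owns the list, and the system advances, applies, and expires each entry every frame.

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        class Position { float x, y; }

        class MoveBy {
            float dx, dy;      // total displacement to apply
            float duration;    // seconds over which to apply it
            float elapsed = 0;
            MoveBy(float dx, float dy, float duration) {
                this.dx = dx; this.dy = dy; this.duration = duration;
            }
        }

        class MoveByCollection { final List<MoveBy> list = new ArrayList<>(); }

        class MoveBySystem {
            void update(Position pos, MoveByCollection moves, float dt) {
                for (Iterator<MoveBy> it = moves.list.iterator(); it.hasNext(); ) {
                    MoveBy m = it.next();
                    float step = Math.min(dt, m.duration - m.elapsed);
                    pos.x += m.dx * step / m.duration; // proportional share of the displacement
                    pos.y += m.dy * step / m.duration;
                    m.elapsed += step;
                    if (m.elapsed >= m.duration) it.remove(); // finished: drop it
                }
            }
        }

        public class MoveByDemo {
            public static void main(String[] args) {
                Position p = new Position();
                MoveByCollection c = new MoveByCollection();
                c.list.add(new MoveBy(10, 0, 1.0f)); // two compounded movements
                c.list.add(new MoveBy(0, -4, 0.5f));
                MoveBySystem sys = new MoveBySystem();
                for (int frame = 0; frame < 60; frame++) sys.update(p, c, 1f / 60f);
                System.out.println(p.x + ", " + p.y); // ~10.0, -4.0 after one second
            }
        }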

    Read the article

  • What cars on roads game engines are there?

    - by David Thielen
    What game engines are there that support laying out a map of roads and handling vehicle movement on the roads? Something similar to the basic functionality in Transport Tycoon/Locomotion. I don't care about looks (although prettier is better), and top-down or isometric is fine. I just need a simple way to create maps and move cars on them. Preferably the cars take time to speed up and slow down as they go from stopped to full speed. I prefer Windows (any API on Windows). I also prefer a free engine, as this is just for internal use. I have found CarDriving 2D - does anyone know if it works well?

    Read the article

  • Triple buffering causes input lag?

    - by user782220
    Consider some time in between two vsyncs. Suppose the first display buffer is being used to display the current image, and suppose the game was really fast and computed and rendered the next image to the second display buffer, and the one after that to the third display buffer. That is, the rendering to the second and third display buffers happens so fast that it occurs before the next vsync. Suppose input from the user comes in now. What you would like is for the results of the input to show up on the next vsync, or (probably more typically) the vsync after that. However, with the third display buffer already rendered, the input can only affect the image after that, meaning the input will take effect at best 3 vsyncs later. I wish I had an image to show the exact timings of what I mean.

    Read the article

  • What are the different ways to texture a terrain?

    - by ApocKalipsS
    I'm working with XNA on a 3D game, and I'm trying to build a proper and nice environment. I followed a tutorial to create a terrain from a heightmap, and to texture it I just apply a grass texture and tile it a number of times. But what I want is really realistic texturing that is also generated automatically (for example, if I want to use Perlin noise to generate a terrain and then texture it). I have already learnt about multi-texturing, loading a map file with different colors for different textures, but I don't think this is really efficient; for instance, for cliffs or very steep areas it will tile a texture badly, as it's a view from the top. Also, I don't know how I'll draw roads or dirt paths with that. I hope you understood me despite my English! If you don't, basically, here is what I want to do: how do I texture a randomly generated terrain? :) Thank you for your answers!
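
    One common answer, sketched in Java for illustration (an editorial example, not from the post): instead of a hand-painted color map, derive per-vertex texture weights from height and slope, so cliffs automatically pick a rock texture no matter how the terrain was generated.

        public class SplatWeights {
            // Returns blend weights for {grass, rock, snow}; height and slope are in 0..1.
            static float[] weights(float height, float slope) {
                float rock = clamp01((slope - 0.6f) / 0.2f);               // steep faces become rock
                float snow = clamp01((height - 0.8f) / 0.1f) * (1 - rock); // high, flat areas become snow
                float grass = Math.max(0f, 1f - rock - snow);              // everything else is grass
                return new float[]{grass, rock, snow};
            }
            static float clamp01(float v) { return Math.max(0f, Math.min(1f, v)); }

            public static void main(String[] args) {
                float[] w = weights(0.3f, 0.9f); // low but very steep: mostly rock
                System.out.println(w[0] + " " + w[1] + " " + w[2]); // 0.0 1.0 0.0
            }
        }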

    Read the article

  • How to properly implement alpha blending in a complex 3D scene

    - by Gajet
    I know this question might sound easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far. First, I thought about object sorting by depth; this simply fails because objects are not simple shapes: they might have curves and might loop inside each other, so I can't always tell which one is closer to the camera. Then I thought about sorting triangles, but this might also fail. Though I'm not sure how to implement it, there is a rare case that might again cause a problem, in which two triangles pass through each other; again, no one can tell which one is nearer. The next thing was using the depth buffer; at least the main reason we have a depth buffer is the sorting problems I mentioned, but now we get another problem. Since objects might be transparent, in a single pixel there might be more than one object visible. So for which object should I store the pixel depth? I then thought maybe I could store only the front-most object's depth, and use that to determine how I should blend the next draw calls at that pixel. But again there is a problem: think about two semi-transparent planes with a solid plane in the middle of them. If I render the solid plane at the end, one can still see the most distant plane through it. Note that I was going to merge every two planes until there is only one color left for that pixel. Obviously I could fall back to sorting methods here too, but they fail for the same reasons I explained above. Finally, the only thing I imagine being able to work is to render all objects into different render targets, then sort those layers and display the final output. But this time I don't know how to implement this algorithm.

    Read the article

  • OpenGL 3 and the Radeon HD 4850x2

    - by rotard
    A while ago, I picked up a copy of the OpenGL SuperBible, fifth edition, and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. Making things more challenging, I am primarily a .NET developer, so I was working in Mono with the OpenTK OpenGL wrapper. On my laptop, I put together a program that let the user walk around a simple landscape using a couple of shaders that implemented per-vertex coloring, lighting, and texture mapping. Everything was working brilliantly until I ran the same program on my desktop. Disaster! Nothing would render! I have chopped my program down to the point where the camera sits near the origin, pointing at the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop, coloring, lighting, texturing and all, but the desktop renders a small distorted non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured. I suspect the graphics card is at fault, because I get the same result whether I am booted into Ubuntu 10.10 or Win XP. I did find that if I pare the vertex shader down to ONLY outputting the positional data, and the fragment shader to ONLY outputting a solid color (white), the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader), the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out, so you can get an idea what I was trying to do; I'm a noob at GLSL, so the code could probably be a lot better. My laptop is an old Lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card running Ubuntu 10.10; my desktop has an i7 with a Radeon HD 4850 x2 (single card, dual GPU) from Sapphire, dual booting into Ubuntu 10.10 and Windows XP. The problem occurs in both XP and Ubuntu. Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2?

        string vertexShaderSource = @"
        #version 330
        precision highp float;

        uniform mat4 projection_matrix;
        uniform mat4 modelview_matrix;
        //uniform mat4 normal_matrix;
        //uniform mat4 cmv_matrix;       //Camera modelview. Light sources are transformed by this matrix.
        //uniform vec3 ambient_color;
        //uniform vec3 diffuse_color;
        //uniform vec3 diffuse_direction;

        in vec4 in_position;
        in vec4 in_color;
        //in vec3 in_normal;
        //in vec3 in_tex_coords;

        out vec4 varyingColor;
        //out vec3 varyingTexCoords;

        void main(void)
        {
            //Get surface normal in eye coordinates
            //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0);
            //Get vertex position in eye coordinates
            //vec4 vPosition4 = modelview_matrix * vec4(in_position, 0);
            //vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
            //Get vector to light source in eye coordinates
            //vec3 lightVecNormalized = normalize(diffuse_direction);
            //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz);
            //Dot product gives us diffuse intensity
            //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir.xyz));
            //Multiply intensity by diffuse color, force alpha to 1.0
            //varyingColor.xyz = in_color * diff * diffuse_color.xyz;
            varyingColor = in_color;
            //varyingTexCoords = in_tex_coords;
            gl_Position = projection_matrix * modelview_matrix * in_position;
        }";

        string fragmentShaderSource = @"
        #version 330
        //#extension GL_EXT_gpu_shader4 : enable
        precision highp float;

        //uniform sampler2DArray colorMap;

        //in vec4 varyingColor;
        //in vec3 varyingTexCoords;

        out vec4 out_frag_color;

        void main(void)
        {
            out_frag_color = vec4(1,1,1,1);
            //out_frag_color = varyingColor;
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, varyingTexCoords.st);
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, vec3(varyingTexCoords.st, 0));
            //out_frag_color = vec4(varyingColor, 1) * texture2DArray(colorMap, varyingTexCoords);
        }";

    Note that in this code the color data is accepted but not actually used. The geometry is output the same (wrong) way whether the fragment shader uses varyingColor or not. Only if I comment out the line varyingColor = in_color; does the geometry output correctly. Originally the shaders took in vec3 inputs; I only modified them to take vec4s while troubleshooting.

    Read the article

  • What is the best type of C# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously, with one GameObject containing one timer. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. Each object is a prefab with all the necessary materials included; an attached script handles the timer and all the code needed to change the materials and play any sound effects. Once the timer has expired, the user clicks on the object again, the object is destroyed, and the user's inventory is adjusted. If the user wants to save or end the game before all the timers are done, the start values of the still-running timers are to be saved to an XML file, so that when the game is started again, any still-running timers can be checked to see if they have expired, and the objects' materials changed appropriately. I am still trying to figure out what type of timer to use, and I would also welcome suggestions for saving and calculating times that span several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
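
    A sketch of one approach in Java for illustration (the post is Unity/C#, where System.DateTime would play the role of Instant; the names here are hypothetical): for durations of minutes to days, store the absolute end time rather than ticking a countdown every frame; then a single comparison against the clock answers "expired?" both during play and when reloading a save.

        import java.time.Instant;

        public class PersistentTimer {
            final long endEpochMillis; // absolute end time; serialize this to the save file

            PersistentTimer(long durationMillis) {
                this.endEpochMillis = Instant.now().toEpochMilli() + durationMillis;
            }

            boolean expired() {
                return Instant.now().toEpochMilli() >= endEpochMillis;
            }

            public static void main(String[] args) {
                PersistentTimer fourDays = new PersistentTimer(4L * 24 * 60 * 60 * 1000);
                System.out.println(fourDays.expired()); // false until the wall clock passes the end time
            }
        }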

    Read the article

  • Space invaders clone not moving properly

    - by ThePlan
    I'm trying to make a basic Space Invaders clone in Allegro 5. I've got my game set up, basic events and such; here is the code:

        #include <allegro5/allegro.h>
        #include <allegro5/allegro_image.h>
        #include <allegro5/allegro_primitives.h>
        #include <allegro5/allegro_font.h>
        #include <allegro5/allegro_ttf.h>
        #include "Entity.h"

        // GLOBALS ==========================================
        const int width = 500;
        const int height = 500;
        const int imgsize = 3;
        bool key[5] = {false, false, false, false, false};
        bool running = true;
        bool draw = true;

        // FUNCTIONS ========================================
        void initSpaceship(Spaceship &ship);
        void moveSpaceshipRight(Spaceship &ship);
        void moveSpaceshipLeft(Spaceship &ship);
        void initInvader(Invader &invader);
        void moveInvaderRight(Invader &invader);
        void moveInvaderLeft(Invader &invader);
        void initBullet(Bullet &bullet);
        void fireBullet();
        void doCollision();
        void updateInvaders();
        void drawText();

        enum key_t { UP, DOWN, LEFT, RIGHT, SPACE };
        enum source_t { INVADER, DEFENDER };

        int main(void)
        {
            if(!al_init()) {
                return -1;
            }

            Spaceship ship;
            Invader invader;
            Bullet bullet;

            al_init_image_addon();
            al_install_keyboard();
            al_init_font_addon();
            al_init_ttf_addon();

            ALLEGRO_DISPLAY *display = al_create_display(width, height);
            ALLEGRO_EVENT_QUEUE *event_queue = al_create_event_queue();
            ALLEGRO_TIMER *timer = al_create_timer(1.0 / 60);
            ALLEGRO_BITMAP *images[imgsize];
            ALLEGRO_FONT *font1 = al_load_font("arial.ttf", 20, 0);

            al_register_event_source(event_queue, al_get_keyboard_event_source());
            al_register_event_source(event_queue, al_get_display_event_source(display));
            al_register_event_source(event_queue, al_get_timer_event_source(timer));

            images[0] = al_load_bitmap("defender.bmp");
            images[1] = al_load_bitmap("invader.bmp");
            images[2] = al_load_bitmap("explosion.bmp");
            al_convert_mask_to_alpha(images[0], al_map_rgb(0, 0, 0));
            al_convert_mask_to_alpha(images[1], al_map_rgb(0, 0, 0));
            al_convert_mask_to_alpha(images[2], al_map_rgb(0, 0, 0));

            initSpaceship(ship);
            initBullet(bullet);
            initInvader(invader);

            al_start_timer(timer);
            while(running) {
                ALLEGRO_EVENT ev;
                al_wait_for_event(event_queue, &ev);

                if(ev.type == ALLEGRO_EVENT_TIMER) {
                    draw = true;
                    if(key[RIGHT] == true) moveSpaceshipRight(ship);
                    if(key[LEFT] == true) moveSpaceshipLeft(ship);
                }
                else if(ev.type == ALLEGRO_EVENT_DISPLAY_CLOSE)
                    running = false;
                else if(ev.type == ALLEGRO_EVENT_KEY_DOWN) {
                    switch(ev.keyboard.keycode) {
                        case ALLEGRO_KEY_ESCAPE: running = false; break;
                        case ALLEGRO_KEY_LEFT: key[LEFT] = true; break;
                        case ALLEGRO_KEY_RIGHT: key[RIGHT] = true; break;
                        case ALLEGRO_KEY_SPACE: key[SPACE] = true; break;
                    }
                }
                else if(ev.type == ALLEGRO_KEY_UP) {
                    switch(ev.keyboard.keycode) {
                        case ALLEGRO_KEY_LEFT: key[LEFT] = false; break;
                        case ALLEGRO_KEY_RIGHT: key[RIGHT] = false; break;
                        case ALLEGRO_KEY_SPACE: key[SPACE] = false; break;
                    }
                }

                if(draw && al_is_event_queue_empty(event_queue)) {
                    draw = false;
                    al_draw_bitmap(images[0], ship.pos_x, ship.pos_y, 0);
                    al_flip_display();
                    al_clear_to_color(al_map_rgb(0, 0, 0));
                }
            }

            al_destroy_font(font1);
            al_destroy_event_queue(event_queue);
            al_destroy_timer(timer);
            for(int i = 0; i < imgsize; i++)
                al_destroy_bitmap(images[i]);
            al_destroy_display(display);
        }

        // FUNCTION LOGIC ======================================
        void initSpaceship(Spaceship &ship)
        {
            ship.lives = 3;
            ship.speed = 2;
            ship.pos_x = width / 2;
            ship.pos_y = height - 20;
        }

        void initInvader(Invader &invader)
        {
            invader.health = 100;
            invader.count = 40;
            invader.speed = 0.5;
            invader.pos_x = 300;
            invader.pos_y = 300;
        }

        void initBullet(Bullet &bullet)
        {
            bullet.speed = 10;
        }

        void moveSpaceshipRight(Spaceship &ship)
        {
            ship.pos_x += ship.speed;
            if(ship.pos_x >= width) ship.pos_x = width - 30;
        }

        void moveSpaceshipLeft(Spaceship &ship)
        {
            ship.pos_x -= ship.speed;
            if(ship.pos_x <= 0) ship.pos_x = 0 + 30;
        }

    However, it's not behaving the way I want it to: the ship's movement is abnormal. I specified that the ship only moves when the right/left key is down, yet the ship moves constantly in the direction of the key pressed and never stops, even though it should only move while my key is down. Even weirder: when I press the opposite key, the ship completely stops, no matter what else I press. What's wrong with the code? Why does the ship move constantly even after I specified that it only moves while a key is down?
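
    An editorial note, not part of the post: a likely culprit is the key-release branch, which tests ev.type == ALLEGRO_KEY_UP. In Allegro 5, ALLEGRO_KEY_UP is the keycode for the up-arrow key, while the event type for a key release is ALLEGRO_EVENT_KEY_UP, so the comparison never matches a release and the key[] flags stay true once set. That would explain both symptoms: the ship keeps moving because its flag is never cleared, and pressing the opposite key sets both flags, so the two moves cancel each other out each tick and the ship appears to stop.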

    Read the article

  • How should I account for the GC when building games with Unity?

    - by Eonil
    As far as I know, Unity3D for iOS is based on the Mono runtime, and Mono has only a generational mark & sweep GC. This GC system can't avoid collection pauses, which stop the game. Instance pooling can reduce this, but not completely, because we can't control the instantiation that happens inside the CLR's base class library. Those hidden small and frequent allocations will eventually cause non-deterministic GC pauses. Forcing a complete GC periodically would degrade performance greatly (can Mono force a complete GC, actually?). So, how can I avoid this GC time when using Unity3D without a huge performance degrade?
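
    A sketch of the instance pooling the post mentions, in Java for illustration (an editorial example; a Unity script would do the same in C#): objects are recycled through a free list instead of allocated fresh, so steady-state play generates no new garbage.

        import java.util.ArrayDeque;

        public class BulletPool {
            // A pooled "bullet" is just a reusable record here: x, y, vx, vy.
            private final ArrayDeque<float[]> free = new ArrayDeque<>();

            float[] obtain() {
                float[] b = free.poll();
                return (b != null) ? b : new float[4]; // allocate only when the pool is empty
            }

            void release(float[] b) {
                free.push(b); // return to the pool instead of letting it become garbage
            }

            public static void main(String[] args) {
                BulletPool pool = new BulletPool();
                float[] b = pool.obtain();
                pool.release(b);
                System.out.println(pool.obtain() == b); // true: the instance was reused
            }
        }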

    Read the article

  • Switching songs - MediaPlayer lags the game

    - by Fibericon
    When the player encounters a boss in the game I'm working on, I want the music to change. It seems simple enough with the MediaPlayer class to fade out the current song, switch to another, and then fade the new song in. However, at the point where the second song starts, the game freezes for a split second. The songs in question aren't particularly large, either: the first song is 1.7 MB and the second is 3.1 MB, both in mp3 format. This is the code I'm using to do it:

        protected void switchSong(GameTime gameTime)
        {
            if (!bossSongPlaying)
            {
                MediaPlayer.Volume -= ((float)gameTime.ElapsedGameTime.TotalSeconds / 10);
                if (MediaPlayer.Volume < 0.05f)
                {
                    MediaPlayer.Play(bossSong);
                    MediaPlayer.Volume = 1.0f;
                    bossSongPlaying = true;
                }
            }
        }

    What can I do to eliminate that momentary hang?

    Read the article

  • Jumping a sprite while moving in a Bezier action

    - by marcg11
    I'm creating a game and I need the sprite to jump (move up and down, basically) while it's moving on a bezier path, so it moves vertically while still following the path. If I do this while it's moving along the bezier path:

        [mySprite runAction:[CCJumpBy actionWithDuration:0.1 position:ccp(0,0) height:10 jumps:1]];

    it jumps vertically but instantly returns to its position on the path. What I want is for the jump to be relative to the path. Does anyone know something about this? It would look something like this: (the curve is a sequence of CCBezierBy's, by the way). Thanks.
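
    An editorial aside, not from the post: one common way to make the jump relative to the path is to split the two motions across the scene graph - run the CCBezierBy sequence on an invisible parent node that follows the path, and run the CCJumpBy on the sprite added as a child of that node, so the child's jump offset composes with the parent's position every frame.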

    Read the article

  • How can I mark a pixel in the stencil buffer?

    - by János Turánszki
    I have never used the stencil buffer for anything until now, but I want to change this. I have an idea of how it should work: the GPU discards or keeps rasterized pixels before the pixel shader, based on the stencil buffer's value at the given position and some stencil operation. What I don't know is how I would mark a pixel in the stencil buffer with a specific value. For example, I draw my scene and want to mark everything drawn with a specific material (this material could be looked up from a texture, so ideally I would mark the pixel in the pixel shader), so that later, when I do some post-processing on the scene, I would only apply it to the marked pixels. I didn't find anything on the internet besides how to set up a stencil buffer and explanations of the different stencil operations. I was expecting to find a system-value semantic like SV_Depth to write to in the pixel shader (because the stencil buffer shares the same resource with the depth buffer in D3D11), but there is no such thing on MSDN. So how should I do this? If I am misunderstanding something, please help me clear it up.
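
    An editorial note, not from the post: in D3D11 the stencil value is not written from the pixel shader at all; it is written by the output-merger stage. The usual pattern is to draw the geometry to be marked in its own pass with a depth-stencil state whose pass operation is D3D11_STENCIL_OP_REPLACE, supplying the value to write as the StencilRef argument of OMSetDepthStencilState; the post-processing pass then uses a stencil comparison against the same reference value so it only touches the marked pixels. (An SV_StencilRef-style shader output only arrived in later Direct3D versions.)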

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2D game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference: it's an action RPG using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon.

    At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707): the Y and Z axes are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal, the perspective of the physics engine didn't match the simplistic way I was converting the character's aim direction to a screen rotation.

    OK, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix built from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45-degree rotation on the x axis). My question is: is there a way to get not just rotation from a series of matrix transformations, but also a Vector2 scale that would give the aimer the appearance of being a 3D object warped by perspective? Orthographic perspective is what I'm going for, I think. So the aimer arrow would get longer when facing sideways and shorter when facing north or south because of the perspective, while at the same time getting wider when facing north or south and less wide when facing right or left.

    I'd like to avoid actually drawing the aimer texture in 3D because I'm still using spritebatch's layerDepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3D object within the depth-sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on Stack Exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because spritebatch's Vector2 scaling argument doesn't allow an object to be skewed the way it actually should be. What I'm really interested in is: is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane), which should be tilted at a 45-degree angle (around the X axis) from the viewing perspective. Alex
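
    An editorial sketch of one way to fake it (Java for illustration, and only an approximation: spritebatch's axis-aligned x/y scale cannot reproduce the skew, as the post's own note says): treat the arrow as lying on the ground plane with the depth axis foreshortened by the post's factor k = 0.707. A unit arrow facing angle a then has on-screen length sqrt(cos^2 a + (k sin a)^2) and width sqrt(sin^2 a + (k cos a)^2), which matches the longer-sideways / wider-north-south behavior described.

        public class AimerScale {
            static final float K = 0.707f; // the post's perspective factor

            // Returns {lengthScale, widthScale} for an arrow facing 'facing' radians
            // (0 = screen right, pi/2 = "north" into the screen).
            static float[] aimerScale(float facing) {
                float c = (float) Math.cos(facing), s = (float) Math.sin(facing);
                float len = (float) Math.sqrt(c * c + K * K * s * s); // along the arrow
                float wid = (float) Math.sqrt(s * s + K * K * c * c); // across the arrow
                return new float[]{len, wid};
            }

            public static void main(String[] args) {
                float[] east = aimerScale(0f);                   // {1.0, 0.707}: long and narrow
                float[] north = aimerScale((float) Math.PI / 2); // {0.707, 1.0}: short and wide
                System.out.println(east[0] + " " + east[1] + " / " + north[0] + " " + north[1]);
            }
        }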

    Read the article

  • Calc direction vector based on destination vector and distance from enemy in AS3

    - by Phil
    I'm working on a zombie game in AS3 where I want a character to be able to move away from a zombie depending on how close the zombie is. The character also has a destination it's trying to get to on the screen. OK, so I have two vectors: one pointing to my destination, and one pointing to the zombie, which I then invert to get my "away" vector. I also turn the distance between my character and the zombie into a value between 0 and 1. And then I'm stuck on how to get a resultant vector for my character. How would I use my 0-1 value to calculate how much of the away vector is used and how much of the original destination vector is still left, if that makes sense, so I end up with one direction vector to move my character? So if the zombie is right where my character is, then my direction vector = away vector, and if I'm far away from the zombie, then my direction vector = destination vector. But how do I calculate the in-between? Ideally I need the answer in AS3.
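
    A sketch of the in-between in Java for illustration (the post asks for AS3, but the math is identical; this assumes the 0-1 value t is 1 when the zombie is on top of the character and 0 when it is far away): linearly blend the two unit vectors by t and renormalize.

        public class FleeBlend {
            // dest and away are unit vectors {x, y}; t is the 0..1 threat value.
            static float[] blend(float[] dest, float[] away, float t) {
                float x = dest[0] * (1 - t) + away[0] * t;
                float y = dest[1] * (1 - t) + away[1] * t;
                float len = (float) Math.sqrt(x * x + y * y);
                if (len < 1e-6f) return away.clone(); // opposite vectors cancelled out: just flee
                return new float[]{x / len, y / len};
            }

            public static void main(String[] args) {
                float[] dest = {1, 0};  // heading right toward the destination
                float[] away = {0, -1}; // fleeing upward from the zombie
                float[] dir = blend(dest, away, 0.5f);
                System.out.println(dir[0] + ", " + dir[1]); // ~0.707, -0.707
            }
        }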

    Read the article

  • How to detect whether an Object came to sleep at a specific position?

    - by Nils Riedemann
    I'm currently writing a small game with box2dweb and I need some direction for this: I'm throwing a box and have to hit a specific place, then trigger an event when the object that's been thrown isn't moving anymore, "fell asleep" so to speak. What's the proper way / best practice for this? I'm currently thinking of asking the b2World whether an object is within a specific AABB, then waiting a few seconds, checking if it's still there, and then triggering the event. But this seems to me like a roundabout way, and the object might still be moving inside that AABB, or eventually even drop out of it.

    Read the article

  • Cocos2d v2.0 and OpenGL 2.0/1.0: where to start

    - by mm24
    I started developing my very first game 3 months ago using Cocos2d 2.0 for iPhone. I am now at the stage where I'd like to add some cool effects to the bullets and some special weapons (see my waveforms question here). I got a good answer in the cocos2d-iphone forum (see this one). Unfortunately I am a bit paralyzed now: I don't know if I would be overdoing it by learning OpenGL 2.0, or if I should just stick to the old 1.0. There is a good intro to the various tutorials written on Steffen Itterheim's blog (see this post). I would like to add to my game:

        - a blur effect on the bullets (here is a tutorial for OpenGL 1.0)
        - a waveform (see above)
        - some realistic water ripples (here is a nice sample code)

    So now, given that I don't want to overdo things but at the same time want to achieve those effects, where should I start? Should I discard the OpenGL 1.0 tutorials? Or should I use only OpenGL 1.0 code? How can I avoid confusion? I mean, it seems that the compiler recognizes both, but there are some conflicting calls in some circumstances. I am fairly sure this has some explanation; is there a reference to this somewhere?

    Read the article
