Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.


  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description

    I'm creating (designing) an entity system and I've run into many problems. I'm trying to keep it Data-Oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor - it just contains the component name, field types and field names. An Entity is just a pointer to an array of components (where the address acts as the entity ID). An EntityPrototype contains the entity name and an array of component names. Finally there is the Subsystem (System or Processor), which works on the component pools.

    Actual problem

    The problem is that some components depend on others (Model, Sprite, PhysicalBody and Animation depend on the Transform component), which causes a lot of problems when it comes to processing them. For example, let's define some entities using [S]prite, [P]hysicalBody and [H]ealth: Tank: Transform, Sprite, PhysicalBody; BgTree: Transform, Sprite; House: Transform, Sprite, Health. If I create 4 Tanks, 5 BgTrees and 2 Houses, my pools will look like this:

        TTTTTTTTTTT // Transform pool
        SSSSSSSSSSS // Sprite pool
        PPPP        // PhysicalBody pool
        HH          // Health pool

    There is no way to process them using a single shared index. I spent 3 days working on it and I still don't have any ideas. In previous designs the TransformComponent was bound to the entity - but that wasn't a good idea. Can you give me some advice on how to process them? Or maybe I should change the overall design? Maybe I should create pools of entities (pools of component pools) - but I guess that would be a nightmare for CPU caches. Thanks
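    One well-known way out, sketched below as an illustration rather than the post's design: give each pool a sparse-set index, so a subsystem can iterate its own pool densely (cache-friendly) and join on the Transform pool by entity ID in O(1). All names here (ComponentPool, owners_, integrate) are invented for the sketch.

        #include <cassert>
        #include <cstdint>
        #include <vector>

        using EntityId = std::uint32_t;
        static constexpr std::uint32_t kInvalid = 0xFFFFFFFFu;

        template <typename T>
        class ComponentPool {
        public:
            void add(EntityId e, const T& value) {
                if (e >= sparse_.size()) sparse_.resize(e + 1, kInvalid);
                sparse_[e] = static_cast<std::uint32_t>(dense_.size());
                dense_.push_back(value);
                owners_.push_back(e);
            }
            bool has(EntityId e) const { return e < sparse_.size() && sparse_[e] != kInvalid; }
            T& get(EntityId e) { assert(has(e)); return dense_[sparse_[e]]; }
            std::size_t size() const { return dense_.size(); }     // dense iteration
            T& at(std::size_t i) { return dense_[i]; }
            EntityId ownerAt(std::size_t i) const { return owners_[i]; }
        private:
            std::vector<T> dense_;              // tightly packed components
            std::vector<EntityId> owners_;      // owners_[i] owns dense_[i]
            std::vector<std::uint32_t> sparse_; // entity ID -> dense index
        };

        struct Transform    { float x = 0, y = 0; };
        struct PhysicalBody { float vx = 0, vy = 0; };

        // A physics system walks its own pool densely and joins on Transform by ID.
        void integrate(ComponentPool<PhysicalBody>& bodies,
                       ComponentPool<Transform>& transforms, float dt) {
            for (std::size_t i = 0; i < bodies.size(); ++i) {
                Transform& t = transforms.get(bodies.ownerAt(i));
                t.x += bodies.at(i).vx * dt;
                t.y += bodies.at(i).vy * dt;
            }
        }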


  • Radiosity using a hemisphere

    - by P. Avery
    I'm working on a radiosity processor. I'm projecting scene geometry onto a hemisphere at a high order of tessellation during a visibility pass, rendering onto a 1024x1024 render target. The problem is that the edges of certain triangles are not being rendered to the item buffer (the render target)... so when I test certain edges (or pixels, during the pixel shader) for visibility during a reconstruction pass, visible edges are not identified, and as a result the pixel for that edge is discarded. One solution was to increase the resolution of the item buffer (up to 4096x4096)... this helped and more edges were visible; however, it was not foolproof. How do I increase visibility? Here is a screenshot of a scene after radiosity is applied: the seams are edges along a triangle face that were not visible due to the resolution of the item buffer. I fixed the problem by sampling the item buffer with 8 points.
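    For readers landing here, the 8-point fix might look like the following sketch: instead of trusting a single texel of the item buffer, also accept a match in any of the eight neighbouring texels. The ItemBuffer type below is a stand-in for the actual render-target read-back, not the poster's code:

        #include <array>
        #include <cstdint>
        #include <utility>
        #include <vector>

        // Hypothetical CPU-side copy of the item buffer: one ID per texel.
        struct ItemBuffer {
            int width = 1024, height = 1024;
            std::vector<std::uint32_t> ids;                       // size = width * height
            std::uint32_t at(int x, int y) const { return ids[y * width + x]; }
        };

        // True if 'edgeId' appears at (x, y) or at any of its 8 neighbours.
        bool isEdgeVisible(const ItemBuffer& buf, int x, int y, std::uint32_t edgeId) {
            static const std::array<std::pair<int,int>, 9> offsets = {{
                {0,0}, {-1,-1}, {0,-1}, {1,-1}, {-1,0}, {1,0}, {-1,1}, {0,1}, {1,1}
            }};
            for (auto [dx, dy] : offsets) {
                int sx = x + dx, sy = y + dy;
                if (sx < 0 || sy < 0 || sx >= buf.width || sy >= buf.height) continue;
                if (buf.at(sx, sy) == edgeId) return true;        // neighbour hit counts
            }
            return false;
        }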


  • Circular motion on low powered hardware

    - by Akroy
    I was thinking about platforms and enemies moving in circles in old 2D games, and I was wondering how that was done. I understand parametric equations, and it's trivial to use sin and cos to do it, but could an NES or SNES make real-time trig calls? I admit heavy ignorance, but I thought those were expensive operations. Is there some clever way to calculate that motion more cheaply? I've been working on deriving an algorithm from the trig sum identities that would only use precalculated trig, but that seems convoluted.
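    For reference, the angle-sum route the poster mentions works out very cheaply: precompute sin and cos of the fixed per-frame step once, then advance the point with two multiplies and two adds per axis, with no runtime trig at all. On an NES this would be done in fixed point with a small sin/cos table; the floats below are for clarity only:

        #include <cstdio>

        int main() {
            // Precomputed once (could come from a table / fixed-point constants):
            const float cosStep = 0.99802673f; // cos(2*pi/100)
            const float sinStep = 0.06279052f; // sin(2*pi/100)

            float x = 40.0f, y = 0.0f;         // start on a circle of radius 40
            for (int frame = 0; frame < 100; ++frame) {
                // Rotate by the fixed step using only multiplies and adds:
                // x' = x*cos - y*sin,  y' = x*sin + y*cos
                float nx = x * cosStep - y * sinStep;
                y = x * sinStep + y * cosStep;
                x = nx;
                std::printf("frame %3d: (%.2f, %.2f)\n", frame, x, y);
            }
            // In fixed point the radius drifts slowly; renormalize occasionally
            // if the drift ever becomes visible.
        }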


  • How can I implement an Iris Wipe effect?

    - by Vandell
    For those who don't know: an iris wipe is a wipe that takes the shape of a growing or shrinking circle. It has been frequently used in animated short subjects, such as those in the Looney Tunes and Merrie Melodies cartoon series, to signify the end of a story. When used in this manner, the iris wipe may be centered around a certain focal point and may be used as a device for a "parting shot" joke, a fourth-wall-breaching wink by a character, or other purposes. Example from flasheff.com. Your answer may or may not include a code sample; a language-agnostic explanation is considered enough.
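    A language-agnostic sketch of the usual approach: animate a circle's radius from one that covers the whole screen down to zero around the focal point, and discard pixels outside it. In a real game the per-pixel test would live in a shader or a stencil mask; the software loop below just shows the math (the 0xAARRGGBB pixel layout is an assumption):

        #include <algorithm>
        #include <cmath>
        #include <cstdint>
        #include <vector>

        // Applies an iris wipe to a 32-bit frame: pixels outside the circle go black.
        // t runs from 0 (fully open) to 1 (fully closed); (cx, cy) is the focal point.
        void irisWipe(std::vector<std::uint32_t>& pixels, int w, int h,
                      float cx, float cy, float t) {
            // Radius that reaches the farthest screen corner when t == 0.
            float dx = std::max(cx, w - cx);
            float dy = std::max(cy, h - cy);
            float maxR = std::sqrt(dx * dx + dy * dy);
            float r  = maxR * (1.0f - t);      // shrink linearly as t grows
            float r2 = r * r;
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x) {
                    float ddx = x - cx, ddy = y - cy;
                    if (ddx * ddx + ddy * ddy > r2)
                        pixels[y * w + x] = 0xFF000000u; // opaque black outside the iris
                }
        }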


  • Making a perfect map (not tile-based)

    - by Sri Harsha Chilakapati
    I would like to make a map system like the one in GameMaker; the latest code is here. I've searched a lot on Google and all the results were tutorials about tile maps. As tile maps don't fit every type of game, and GameMaker uses tiles for a different purpose, I want to make a "sprite based" map. The major problem I experienced was collision detection being slow for large maps, so I wrote a QuadTree class (here), and collision detection is now fine up to 50000 objects in the map without pixel-perfect collision detection, and 30000 objects with pixel-perfect collisions enabled. Now I need to improve the method "isObjectCollisionFree(float x, float y, boolean solid, GObject obj)". The existing implementation is becoming slow in platformer games and I need suggestions for improvement. The current implementation:

        /**
         * Checks if a specific position is collision free in the map.
         *
         * @param x The x-position of the object
         * @param y The y-position of the object
         * @param solid Whether to check only for solid objects
         * @param object The object (used for width and height)
         * @return True if no collision and false if it collides.
         */
        public static boolean isObjectCollisionFree(float x, float y, boolean solid, GObject object) {
            boolean bool = true;
            Rectangle bounds = new Rectangle(Math.round(x), Math.round(y),
                                             object.getWidth(), object.getHeight());
            ArrayList<GObject> collidables = quad.retrieve(bounds);
            for (int i = 0; i < collidables.size(); i++) {
                GObject obj = collidables.get(i);
                if (obj.isSolid() == solid && obj != object) {
                    if (obj.isAlive()) {
                        if (bounds.intersects(obj.getBounds())) {
                            bool = false;
                            if (Global.USE_PIXELPERFECT_COLLISION) {
                                bool = !GUtil.isPixelPerfectCollision(
                                        x, y, object.getAnimation().getBufferedImage(),
                                        obj.getX(), obj.getY(),
                                        obj.getAnimation().getBufferedImage());
                            }
                            break;
                        }
                    }
                }
            }
            return bool;
        }

    Thanks.


  • View matrix question (rotate by 180 degrees)

    - by King Snail
    I am using a third-party rendering API on top of my OpenGL code and I cannot get my matrices correct. The API states this: "We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default." In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third-party rendering API? I don't understand what they mean by a 180 degree rotation around Z: does that go into the view matrix itself, or is it something applied to the camera before building the view matrix? Any code would be helpful, thanks.
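    A hedged illustration of the matrix itself: a 180 degree rotation about Z simply negates the X and Y axes. One reading of the API's note is to combine this with the existing view matrix before handing it over; whether it multiplies on the left or the right depends on the library's row/column-vector convention, so both are worth trying. The helper names below are invented:

        #include <array>

        using Mat4 = std::array<float, 16>; // column-major, OpenGL style

        // Rz(180 deg): cos = -1, sin = 0, so it negates the x and y axes.
        constexpr Mat4 kRotZ180 = {
            -1,  0, 0, 0,
             0, -1, 0, 0,
             0,  0, 1, 0,
             0,  0, 0, 1,
        };

        Mat4 mul(const Mat4& a, const Mat4& b) { // c = a * b, column-major storage
            Mat4 c{};
            for (int col = 0; col < 4; ++col)
                for (int row = 0; row < 4; ++row)
                    for (int k = 0; k < 4; ++k)
                        c[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
            return c;
        }

        // View matrix with the 180-degree Z flip baked in (try the other order
        // too if the library expects row vectors).
        Mat4 adjustView(const Mat4& view) { return mul(kRotZ180, view); }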


  • Simple question about a cocos2d based game template

    - by Zishan
    I am learning from a cocos2d-based game template tutorial from here and now I am at this point of the tutorial. My question is: how can I run 30 different scenes for the 30 levels of my 5 chapters? Right now I am using this

        switch (gameData.selectedLevel) {
            case 1: [SceneManager goChapter1Level1Scene]; break;
            (... snip a whole lot of lines...)
            case 30: [SceneManager goChapter5Level6Scene]; break;
            default: break;
        }

    in the "- (void) onPlay: (CCMenuItemImage*) sender" method. But it only works for the 6 level scenes of chapter 1. The levels of the other 4 chapters show the same scenes as the chapter 1 levels; they do not show their own level scenes. Can anyone please teach me how to do this with this game template?
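    The symptom (every chapter showing chapter 1's scenes) suggests the chapter never makes it into the lookup, so one data-driven option is a registry keyed on (chapter, level) instead of a 30-case switch. Sketched in C++ for brevity; the cocos2d version would map the same keys to selectors or factory blocks, and all names here are illustrative:

        #include <cstdio>
        #include <functional>
        #include <map>
        #include <utility>

        using SceneFactory = std::function<void()>;

        // Hypothetical registry: (chapter, level) -> function that pushes the scene.
        std::map<std::pair<int,int>, SceneFactory> g_levels = {
            {{1, 1}, [] { std::puts("go Chapter 1 Level 1 scene"); }},
            {{1, 2}, [] { std::puts("go Chapter 1 Level 2 scene"); }},
            // ... one entry per (chapter, level) instead of one switch case each
            {{5, 6}, [] { std::puts("go Chapter 5 Level 6 scene"); }},
        };

        void onPlay(int chapter, int level) {
            auto it = g_levels.find({chapter, level});
            if (it != g_levels.end())
                it->second();                 // launch the matching scene
            else
                std::puts("unknown level");   // fail loudly instead of silently
        }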


  • Split a 2D scene in layers or have a z coordinate

    - by Bane
    I am in the process of writing a 2D game engine, and a dilemma has emerged. Let me explain the situation... I have a Scene class to which various objects can be added (Drawable, ParticleEmitter, Light2D, etc.), and as this is a 2D scene, things will obviously be drawn over each other. My first thought was that I could have basic add and remove methods, but I soon realized that then there would be no way for the programmer to control the order in which things were drawn. So I came up with two options, each with its pros and cons. A) Split the scene into layers. By that I mean instead of having the scene be a container of objects, have it be a container of layers, which are in turn containers of objects. B) Have some kind of z-coordinate, and have the scene sorted so objects with a lower z get drawn first. Option A is pretty solid, but the problem is with the lights. In what layer do I add a light? Does it work across layers? On all lower layers? And I still need the z-coordinate to calculate the shadow! Option B would require me to change all my code from having Vector2D positions to some kind of class that inherits from Vector2D and adds a z-coordinate to it (I don't want it to be a Vector3D, because I still need all the same methods the 2D kind has, just with .z tacked on). Am I missing something? Is there an alternative to these methods? I'm working in JavaScript, if that makes a difference.
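    The core of option B is small, sketched here in C++ (the JavaScript version has the same shape): keep z on the scene object itself rather than on the vector type, and stable-sort the draw list each frame so equal-z objects keep their insertion order. That sidesteps inheriting from Vector2D entirely:

        #include <algorithm>
        #include <cstdio>
        #include <string>
        #include <vector>

        struct Drawable {
            std::string name;
            float x = 0, y = 0; // plain 2D position, unchanged
            float z = 0;        // draw-order key, separate from the position vector
        };

        void drawScene(std::vector<Drawable>& objects) {
            // stable_sort keeps insertion order for equal z (handy for UI layers).
            std::stable_sort(objects.begin(), objects.end(),
                             [](const Drawable& a, const Drawable& b) { return a.z < b.z; });
            for (const auto& o : objects)
                std::printf("draw %s at (%.0f, %.0f)\n", o.name.c_str(), o.x, o.y);
        }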


  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working for a few months on a procedural spherical terrain generator with a quadtree LOD system. The system splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches. To generate the heights I am using a 3D ridged multifractal algorithm. For now I can only displace the vertices of the patches directly using the output of the ridged multifractal. I don't understand how to generate height maps such that the vertices of a terrain patch can be mapped to pixels in the height map. The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, translating that position into u,v coordinates, determining the pixel position directly from the u,v coordinates, and determining the grayscale color based on the height. I feel as if this is the right approach, but there are a few other things that may further complicate my problem. First of all, I intend to use "height maps" with a pixel resolution of 192x192, while the vertex "resolution" of each terrain patch is only 16x16 - meaning that for most of the pixels I don't have any vertices to sample for the RMF. The main reason the height map resolution is higher is so that I can use it to generate a normal map (otherwise the height maps serve little purpose, as I can just displace the vertices directly, as I currently am). I am pretty much following this paper very closely. This is, essentially, the part I am having trouble with: "Using the cube-to-sphere mapping and the ridged multifractal algorithm previously described, a normalized height value ([0, 1]) is calculated. Using this height value, the terrain position is calculated and stored in the first three channels of the position map (RGB) - this will be used to calculate the normal map. The fourth channel (A) is used to store the height value itself, to be used in the heightmap." The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and what positions are sampled for the RMF to generate the pixels if the vertices alone cannot be used.
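    The pixel-driven version of that idea, sketched: iterate the 192x192 pixels rather than the 16x16 vertices, map each pixel into the patch's parametric range on the cube face, then run the cube-to-sphere mapping and the noise on that position, so no vertices are sampled at all. cubeToSphere and ridgedMultifractal below are placeholder stubs standing in for the generator's real functions:

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        Vec3 cubeToSphere(float u, float v) {
            // Placeholder: treats (u, v) as the +Z cube face and normalizes.
            float x = u * 2 - 1, y = v * 2 - 1, z = 1;
            float len = std::sqrt(x * x + y * y + z * z);
            return {x / len, y / len, z / len};
        }
        float ridgedMultifractal(const Vec3&) { return 0.5f; } // placeholder noise

        // Fills a heightRes x heightRes height map for the patch covering
        // [u0, u0+size] x [v0, v0+size] of one cube face. Every pixel gets its
        // own sample position on the sphere; vertices are never involved.
        std::vector<float> buildPatchHeightmap(float u0, float v0, float size,
                                               int heightRes = 192) {
            std::vector<float> heights(heightRes * heightRes);
            for (int py = 0; py < heightRes; ++py) {
                for (int px = 0; px < heightRes; ++px) {
                    float u = u0 + size * px / float(heightRes - 1); // pixel -> face coords
                    float v = v0 + size * py / float(heightRes - 1);
                    Vec3 p = cubeToSphere(u, v);                     // pixel -> sphere
                    heights[py * heightRes + px] = ridgedMultifractal(p);
                }
            }
            return heights;
        }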


  • Embed Unity3D and load multiple games from a single app

    - by Rafael Steil
    Is it possible to export an entire Unity3D project/game as an AssetBundle and load it on iOS/Android/Windows in an app that doesn't know anything about the game beforehand? What I have in mind is something like what the web plugin does - it loads a series of .unity3d files over HTTP and renders them inline in the browser window. Is it possible to do something similar for iOS/Android? I have read a lot of docs so far, but still can't be sure: http://floored.com/blog/2013/integrating-unity3d-within-ios-native-application.html http://docs.unity3d.com/Manual/LoadingResourcesatRuntime.html http://docs.unity3d.com/Manual/AssetBundlesIntro.html The code from the post at http://forum.unity3d.com/threads/112703-Override-Unity-Data-folder-path?p=749108&viewfull=1#post749108 works for Android, but what about iOS and other platforms?


  • Algorithm for optimal control on space ship using accelerometer input data

    - by mm24
    Does someone have a good algorithm for controlling a space ship in a vertical shooter game using accelerometer data? I have written a simple algorithm, but it works very badly. I save an initial acceleration value (used to calibrate the movement according to the user's initial position) and subtract it from the current acceleration to get a "calibrated" value. The problem is that basing the movement solely on relative acceleration causes a loss of sensitivity: certain movements end up independent of the initial position. Would anyone be able to share a better solution? I am wondering if I should also use/integrate input from the gyroscope hardware. Here is my code sample for a Cocos2d iOS game:

        - (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
            if (calibrationLayer.visible) {
                [self evaluateCalibration:acceleration];
                initialAccelleration = acceleration;
                return;
            }
            if ([self evaluatePause]) {
                return;
            }
            ShooterScene *shooterScene = (ShooterScene *)[self parent];
            ShipEntity *playerSprite = [shooterScene playerShip];
            float accellerationtSensitivity = 0.5f;
            UIAccelerationValue xAccelleration = acceleration.x - initialAccelleration.x;
            UIAccelerationValue yAccelleration = acceleration.y - initialAccelleration.y;
            if (xAccelleration > 0.05 || xAccelleration < -0.05) {
                [playerSprite setPosition:CGPointMake(playerSprite.position.x + xAccelleration * 80,
                                                      playerSprite.position.y + yAccelleration * 80)];
            } else if (yAccelleration > 0.05 || yAccelleration < -0.05) {
                [playerSprite setPosition:CGPointMake(playerSprite.position.x + xAccelleration * 80,
                                                      playerSprite.position.y + yAccelleration * 80)];
            }
        }
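    One improvement that usually helps regardless of calibration, offered as a sketch: low-pass filter the raw accelerometer reading and treat the filtered tilt as a velocity instead of adding it straight to the position. All constants below are starting points to tune, not known-good values:

        struct LowPass {
            float alpha;        // 0..1; smaller = smoother but laggier
            float value = 0;
            bool primed = false;
            float filter(float raw) {
                if (!primed) { value = raw; primed = true; } // seed on first sample
                value = alpha * raw + (1.0f - alpha) * value;
                return value;
            }
        };

        // Per-frame ship update: filtered tilt (minus the calibration pose) acts
        // as a velocity, so sensitivity no longer depends on the starting pose.
        void updateShip(float& shipX, float rawAx, float calibrationAx,
                        LowPass& lp, float dt) {
            float tilt = lp.filter(rawAx) - calibrationAx;
            const float deadZone = 0.05f;  // ignore hand jitter
            const float speed = 300.0f;    // points per second at full tilt; tune me
            if (tilt > deadZone || tilt < -deadZone)
                shipX += tilt * speed * dt;
        }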


  • How to visually "connect" skybox edges with terrain model

    - by David
    I'm working on a simple airplane game where I use a skybox cube rendered with the depth test disabled. Very close to the bottom side of the skybox is my terrain model. What bothers me is that the terrain is not connected to the skybox bottom. This is not visible while the plane flies low, but as it gains some altitude, the terrain looks smaller because of the perspective. Since the skybox center is always the same as the camera position, the skybox moves with the plane, but the terrain recedes into the distance. OK, I think you understand the problem. My question is how to fix it. It's an airplane game, so limiting the max altitude is not possible. I thought about some way of stretching the terrain to always cover the whole bottom side of the skybox cube, but that doesn't feel right, and I don't even know how I would calculate new terrain dimensions every frame. Here are some screenshots of games where you can clearly see the problem: (oops, I cannot post images yet) darker brown is the skybox bottom here: http://i.stack.imgur.com/iMsAf.png untextured brown is the skybox bottom here: http://i.stack.imgur.com/9oZr7.png
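    One common way to hide exactly this kind of seam, offered as an option rather than the fix: fade distant terrain into the skybox's horizon color with distance fog, so the terrain edge never contrasts against the skybox bottom. The per-fragment math is tiny; the density below is just a starting value, and the fog color should be sampled to match the skybox near the horizon:

        #include <cmath>

        struct Color { float r, g, b; };

        // Exponential-squared fog: 0 = pure fragment color, 1 = pure fog color.
        float fogFactor(float distance, float density) {
            float d = distance * density;
            return 1.0f - std::exp(-d * d); // approaches 1 as distance grows
        }

        Color applyFog(Color frag, Color horizon, float distance,
                       float density = 0.0005f) {
            float f = fogFactor(distance, density);
            return { frag.r + (horizon.r - frag.r) * f,
                     frag.g + (horizon.g - frag.g) * f,
                     frag.b + (horizon.b - frag.b) * f };
        }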


  • How to properly add texture to multi-fixture/shape b2Body

    - by Blazej Wdowikowski
    Hello everyone, this is my first post here; I hope it won't be a false start. First I should say that I completed part 1 of Ray's tutorial "How To Make A Game Like Fruit Ninja With Box2D and Cocos2D". But what happens when I want to make a more complex body with a texture? Simple: just add several b2FixtureDefs to the same body. OK, but what about the texture? If I take the code from that tutorial, it only fills the last fixture; it probably doesn't take every b2Vec2 point. I was right, it doesn't. So, a quick refactor: from this

        -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original {
            // gather all the vertices from our Box2D shape
            b2Fixture *originalFixture = body->GetFixtureList();
            b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape();
            int vertexCount = shape->GetVertexCount();
            NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount];
            for (int i = 0; i < vertexCount; i++) {
                CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO, shape->GetVertex(i).y * PTM_RATIO);
                [points addObject:[NSValue valueWithCGPoint:p]];
            }
            if ((self = [super initWithPoints:points andTexture:texture])) {
                _body = body;
                _body->SetUserData(self);
                _original = original;
                // gets the center of the polygon
                _centroid = self.body->GetLocalCenter();
                // assign an anchor point based on the center
                self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width,
                                       _centroid.y * PTM_RATIO / texture.contentSize.height);
            }
            return self;
        }

    I came up with this:

        -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original {
            int vertexCount = 0;
            // gather the total number of b2Vec2 points across all fixtures
            b2Fixture *currentFixture = body->GetFixtureList();
            while (currentFixture) { // new
                b2PolygonShape *shape = (b2PolygonShape*)currentFixture->GetShape();
                vertexCount += shape->GetVertexCount();
                currentFixture = currentFixture->GetNext();
            }
            NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount];
            // gather all the vertices from every fixture of our Box2D body
            b2Fixture *originalFixture = body->GetFixtureList();
            while (originalFixture) { // new
                NSLog(@"-");
                b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape();
                int currentVertexCount = shape->GetVertexCount();
                for (int i = 0; i < currentVertexCount; i++) {
                    CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO, shape->GetVertex(i).y * PTM_RATIO);
                    [points addObject:[NSValue valueWithCGPoint:p]];
                }
                originalFixture = originalFixture->GetNext();
            }
            if ((self = [super initWithPoints:points andTexture:texture])) {
                _body = body;
                _body->SetUserData(self);
                _original = original;
                // gets the center of the polygon
                _centroid = self.body->GetLocalCenter();
                // assign an anchor point based on the center
                self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width,
                                       _centroid.y * PTM_RATIO / texture.contentSize.height);
            }
            return self;
        }

    It was working for a simple two-fixture body like this:

        b2BodyDef bodyDef;
        bodyDef.type = b2_dynamicBody;
        bodyDef.position = position;
        bodyDef.angle = rotation;
        b2Body *body = world->CreateBody(&bodyDef);

        b2FixtureDef fixtureDef;
        fixtureDef.density = 1.0;
        fixtureDef.friction = 0.5;
        fixtureDef.restitution = 0.2;
        fixtureDef.filter.categoryBits = 0x0001;
        fixtureDef.filter.maskBits = 0x0001;

        b2Vec2 vertices[] = {
            b2Vec2(0.0/PTM_RATIO, 50.0/PTM_RATIO),
            b2Vec2(0.0/PTM_RATIO, 0.0/PTM_RATIO),
            b2Vec2(50.0/PTM_RATIO, 30.1/PTM_RATIO),
            b2Vec2(60.0/PTM_RATIO, 60.0/PTM_RATIO)
        };
        b2PolygonShape shape;
        shape.Set(vertices, 4);
        fixtureDef.shape = &shape;
        body->CreateFixture(&fixtureDef);

        b2Vec2 vertices2[] = {
            b2Vec2(20.0/PTM_RATIO, 50.0/PTM_RATIO),
            b2Vec2(20.0/PTM_RATIO, 0.0/PTM_RATIO),
            b2Vec2(70.0/PTM_RATIO, 30.1/PTM_RATIO),
            b2Vec2(80.0/PTM_RATIO, 60.0/PTM_RATIO)
        };
        shape.Set(vertices2, 4);
        fixtureDef.shape = &shape;
        body->CreateFixture(&fixtureDef);

    But if I try to put the second shape above the first, things start acting weird: the texture goes crazy, and that's without even mentioning more complex shapes. What's more, if the shapes share one common point, the texture will not render for them at all. [For the shapes I use PhysicsEditor, as in tutorial part 1.] BTW, I use PolygonSprite, and the other shapes are created in the createWithWorld... method. Uff...

    Question: so why are the texture coordinates in such a mess? Is it my modified method, or just a wrong approach? Maybe I should remove duplicates from the points array?


  • DX9 Deferred Rendering, GBuffer displays as clear color only

    - by Fire31
    I'm trying to implement Catalin Zima's deferred renderer in a very lightweight C++ DirectX 9 app (it only renders a skydome and a model). At the moment I'm trying to render the G-buffer, but I have a problem: the screen shows only the clear color, no matter how much I move the camera around. However, removing all the render-target operations lets the app render the scene normally, even when the models have the renderGBuffer effect applied. Any ideas what I'm doing wrong?


  • How to cull liquids

    - by Cyral
    I use culling on the tiles in my 2D tile-based platformer, so only the ones needed are drawn on screen. That's easy to do. However, my liquid tiles (water, lava, etc.) require an update method as well as the normal draw; the update does checks against tiles, makes the liquid flow, and so on. So how should I cull liquid updates in my game? Not culling at all is too slow, and culling only what's on screen looks awkward when you move. What do you think would be best for the player? Maybe some way of culling the visible tiles plus a margin of the viewport's width/height, so tiles are updated early enough ahead of the player that it doesn't look awkward when moving? (Not sure how to do this though; something involving the player's max speed and the width of the screen.)
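    The idea in that last parenthesis can be pinned down: update every liquid tile inside the camera rectangle inflated by a margin derived from the max speed and the longest time an off-screen tile may stay stale before it could scroll into view. A sketch, with all names invented:

        #include <cmath>

        struct Rect { float x, y, w, h; };

        // Computes the tile-index bounds that should receive liquid updates this
        // frame. maxSpeed: fastest the camera can move (px/s); staleTime: how long
        // (s) an off-screen tile may go without updates before it can become visible.
        void liquidUpdateBounds(const Rect& camera, float tileSize,
                                float maxSpeed, float staleTime,
                                int& x0, int& y0, int& x1, int& y1) {
            float margin = maxSpeed * staleTime; // distance the view can travel meanwhile
            x0 = static_cast<int>(std::floor((camera.x - margin) / tileSize));
            y0 = static_cast<int>(std::floor((camera.y - margin) / tileSize));
            x1 = static_cast<int>(std::ceil((camera.x + camera.w + margin) / tileSize));
            y1 = static_cast<int>(std::ceil((camera.y + camera.h + margin) / tileSize));
        }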


  • Embed IF text parser in another game?

    - by DragonFax
    Are there any existing interactive fiction text-parsing engines that I can embed in another game or application? I'm looking to use something as a library: I can pass it the available objects and verbs from my own side, it parses the sentences from the user and gives me back some sort of structure/AST describing what the user asked for, and my own code can then act upon that request. I don't need something Siri-level; the simple sentences and actions that current IF games support are fine. But I'm not looking to write a whole text/sentence parser myself. This isn't an IF game, and I can't write it entirely in an interactive-fiction language like Inform 7. Unfortunately, I can't seem to find any examples of anyone using the text-parsing capabilities of these engines without writing the entire game in that engine's language.


  • XNA 4.0 SpriteFont not displaying all Characters

    - by Iain Brown
    I'm looking for a little help. I'm trying to use SpriteFont in my XNA 4.0 game, but the problem is that I'm displaying the string "This is a test" and all that appears on the screen is "This is st": the "a te" is missing. The space for the characters is there, but the letters are not. The code I'm using is:

        spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
        spriteBatch.DrawString(font, "this is a test", new Vector2(692, 372), Color.White);
        spriteBatch.Draw(texture, new Rectangle(0, 0, 100, 100), Color.White);
        spriteBatch.End();

    Any help with this would be great!


  • Representing heightmaps, on disk and when drawing

    - by gardian06
    This is a conglomeration of questions; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure about some of the implementation. I have done file-I/O maze generation before (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think height maps might be a good approach, but I don't know how to represent them effectively. (1) For a height map, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item 1 is the key reference, item 2 is the elevation), or another approach that I have not thought of yet? (2) When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make slopes noticeable at about a 60-degree viewing angle while not really affecting gravity-driven physics (assuming some effect while moving up/downhill)? (3) I am thinking of maybe moving to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing with thin walls between open spaces is better? This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls. EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation that the player has to negotiate. Elevation will affect "combat" vision and movement.
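    On the file-format part only, a sketch of how compact a CSV variant can stay if each cell carries both the tile key and its elevation. The "key:elevation" cell format is invented here for illustration, not a standard:

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        struct Cell { char key; int elevation; };

        // Parses lines like "W:3,W:3,F:0,F:1" into a row-major grid, where the
        // letter is the tile key ('W' wall, 'F' floor, say) and the number is h.
        std::vector<std::vector<Cell>> loadMap(const std::string& path) {
            std::vector<std::vector<Cell>> grid;
            std::ifstream in(path);
            std::string line;
            while (std::getline(in, line)) {
                std::vector<Cell> row;
                std::stringstream ss(line);
                std::string cell;
                while (std::getline(ss, cell, ','))
                    row.push_back({cell[0], std::stoi(cell.substr(2))}); // "W:3" -> {'W', 3}
                grid.push_back(row);
            }
            return grid;
        }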


  • Prevent oversteering catastrophe in racing games

    - by jdm
    When playing GTA III on Android I noticed something that has annoyed me in almost every racing game I've played (except maybe Mario Kart): driving straight ahead is easy, but curves are really hard. When I switch lanes or pass somebody, the car starts swiveling back and forth, and any attempt to correct it only makes it worse. The only thing I can do is hit the brakes. I think this is some kind of oversteering. What makes it so irritating is that it never happens to me in real life (thank god :-)), so 90% of games with vehicles in them feel unreal to me (despite probably having really good physics engines). I've talked to a couple of people about this, and it seems you either 'get' racing games or you don't. With a lot of practice, I did manage to get semi-good at some games (e.g. from the Need for Speed series) by driving very cautiously and braking a lot (and usually getting a cramp in my fingers). What can you do as a game developer to prevent this oversteering resonance catastrophe and make driving feel right? (This is for a casual racing game that doesn't strive for 100% realistic physics.) I also wonder what games like Super Mario Kart do differently so that they don't have so much oversteering. I guess one problem is that if you play with a keyboard or a touchscreen (rather than a wheel and pedals), you only have digital input: gas pressed or not, steering left/right or not, and it's much harder to steer appropriately for a given speed. The other thing is that you probably don't have a good sense of speed, and drive much faster than you (safely) would in reality. Off the top of my head, one solution might be to vary the steering response with speed.
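    Two standard mitigations, offered as starting points rather than the answer: scale the maximum steering angle down as speed rises (the poster's own closing idea), and ramp digital input toward its target instead of applying it instantly. A sketch, with every tuning number a guess to adjust:

        #include <algorithm>

        struct SteeringTuning {
            float maxSteer = 0.6f;        // radians of steering at a standstill
            float highSpeedSteer = 0.12f; // radians of steering at top speed
            float steerRate = 3.0f;       // how fast the wheel moves toward input (1/s)
        };

        // input: -1 (full left) .. +1 (full right); just -1/0/+1 on digital keys.
        float updateSteering(float currentSteer, float input, float speed,
                             float maxSpeed, const SteeringTuning& t, float dt) {
            // Less steering authority at speed: blend between the two limits.
            float k = std::clamp(speed / maxSpeed, 0.0f, 1.0f);
            float limit = t.maxSteer + (t.highSpeedSteer - t.maxSteer) * k;
            // Move gradually toward the requested angle (smooths digital input).
            float target = input * limit;
            float maxDelta = t.steerRate * dt;
            float delta = std::clamp(target - currentSteer, -maxDelta, maxDelta);
            return currentSteer + delta;
        }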


  • Staggered Isometric Map: Calculate map coordinates for point on screen

    - by Chris
    I know there are already a lot of resources about this, but I haven't found one that matches my coordinate system and I'm having massive trouble adjusting any of those solutions to my needs. What I learned is that the best way to do this is to use a transformation matrix. Implementing that is no problem, but I don't know in which way I have to transform the coordinate space. Here's an image that shows my coordinate system: How do I transform a point on screen to this coordinate system?
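    The transformation-matrix route, in general form: record where one map-space step along each axis lands on screen (two basis vectors), then invert that 2x2 matrix to go from screen back to map. The example axes below are for a plain isometric diamond, not the poster's system (the linked image isn't available here), and a staggered layout still needs its per-row offset handled after this step:

        #include <cstdio>

        struct Vec2 { float x, y; };

        // Screen displacement of one step along map-x and map-y: fill these in
        // from your own grid. These example values are NOT your coordinate system.
        const Vec2 ex = { 32.0f, 16.0f };  // +1 map-x: right and down on screen
        const Vec2 ey = {-32.0f, 16.0f };  // +1 map-y: left and down on screen
        const Vec2 origin = { 0.0f, 0.0f };

        // Inverts the basis [ex ey] to map a screen point into map coordinates.
        Vec2 screenToMap(Vec2 s) {
            float det = ex.x * ey.y - ey.x * ex.y;
            float px = s.x - origin.x, py = s.y - origin.y;
            return { ( ey.y * px - ey.x * py) / det,
                     (-ex.y * px + ex.x * py) / det };
        }

        int main() {
            Vec2 m = screenToMap({32, 32});
            std::printf("map coords: (%.2f, %.2f)\n", m.x, m.y); // (1.50, 0.50) here
        }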


  • Flickering when accessing texture by offset

    - by TravisG
    I have a simple compute shader that basically just copies the input from one image to another. Both images are 128x128x128 in size and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier set on the C++ side before the 3D texture is accessed.) This version works fine:

        #version 430
        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;
        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;
        void main()
        {
            ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz);
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord));
        }

    Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset:

        #version 430
        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;
        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;
        void main()
        {
            ivec3 sampleCoord = ivec3(gl_GlobalInvocationID.xyz);
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1, 0, 0)));
        }

    the data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots. What am I doing wrong here?


  • Collision with half semi-circle

    - by heitortsergent
    I am trying to port a game I made with Flash/AS3 to the Windows Phone using C#/XNA 4.0. You can see it here: http://goo.gl/gzFiE In the Flash version I used pixel-perfect collision between meteors (each is a rectangle, usually rotated) that spawn outside the screen and move towards the center, and a shield in the center of the screen (which is half of a semicircle, rotated by the player). When they collided, the meteor bounced back in the opposite direction it came from. My goal now is to make the meteors bounce at different angles depending on where they hit the shield (much like Pong, where hitting different parts of the paddle changes the ball's angle). So, these are the 3 options I thought of:
    - Pixel-perfect collision (Microsoft has a sample: http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel_transformed), but then I wouldn't know how to change the meteor's angle after the collision.
    - Three BoundingCircles to represent the half semicircle shield, but then I would have to somehow move them as I rotate the shield.
    - Farseer Physics: I could make a shape composed of 3 lines and use that as the collision object for the shield.
    Is there any other way besides these? Which would be the best way to do it? (It's aimed at mobile devices, so pixel-perfect is probably not a good choice.) Most of the time there's an easier/better way than what we first think of...
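    Whichever collision test wins, the bounce itself needs no pixels: build the surface normal from the shield's center to the contact point and reflect the meteor's velocity about it, v' = v - 2(v . n)n. That one formula also gives the Pong-style angle variation for free, since the normal changes along the arc. A sketch:

        #include <cmath>
        #include <cstdio>

        struct Vec2 { float x, y; };

        Vec2 sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
        float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
        Vec2 normalize(Vec2 v) {
            float len = std::sqrt(dot(v, v));
            return {v.x / len, v.y / len};
        }

        // Reflects velocity v about the shield normal at the contact point:
        // v' = v - 2 (v . n) n
        Vec2 bounceOffShield(Vec2 v, Vec2 shieldCenter, Vec2 contactPoint) {
            Vec2 n = normalize(sub(contactPoint, shieldCenter));
            float d = 2.0f * dot(v, n);
            return {v.x - d * n.x, v.y - d * n.y};
        }

        int main() {
            Vec2 out = bounceOffShield({0, -100}, {0, 0}, {0, 50});
            std::printf("reflected: (%.1f, %.1f)\n", out.x, out.y); // (0.0, 100.0)
        }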


  • Is using multiple canvas objects a good practice?

    - by user1818924
    We're developing a jump-and-run game with HTML5 and JavaScript, and have to build our own game framework for it. We've run into some difficulties and would like to ask for your advice. We have a "Stage" object, which represents the root of our game and is a global div wrapper. The stage can contain multiple "Scenes", which are also div elements. We would implement a scene for the playing state, one for pause, etc., and switch between them. Each scene can contain multiple "Layers", each representing a canvas. These layers contain "ObjectEntities", which represent images or other shapes like rectangles. Each ObjectEntity has its own temporary canvas, so that one entity can draw an image while another draws a rectangle. We set an activeScene in our Stage, so when the game is played, only the active scene is drawn. Calling activeScene.draw() calls all sublayers to draw, which in turn draw their entities (calling drawImage(entity.canvas)). But is this good practice, having multiple canvases to draw? Every game loop, every layer context is cleared and redrawn. For example, we have a completely static background layer... wouldn't it be more sensible to draw it once and not clear and redraw it every frame? Or should we use one global canvas, for example in the Stage, and just draw with that? But we thought this would be too expensive...


  • Turn-based JRPG battle system architecture resources

    - by BenoitRen
    For the past few months I've been busy programming a 2D JRPG (Japanese-style RPG) in C++ using the SDL library. The exploration mode is more or less done; now I'm tackling the battle mode. I have been unable to find any resources about how a classic turn-based JRPG battle system is structured; all I find are discussions about damage formulas. I've tried googling, searching gamedev.net's message board, and crawling through C++-related questions here on Stack Exchange. I've also tried reading the source code of existing open-source RPGs, but without a guide of some sort it's like trying to find a needle in a haystack. I'm not looking for a set of rules like D&D or anything similar; I'm talking purely about code and object structure design. A battle system asks the player for input using menus, then the battle turn is executed as the heroes and the enemies carry out their actions. Can anyone point me in the right direction? Thanks in advance.
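    One common shape for this, offered as a sketch rather than the canonical architecture: a round collects one Action per combatant (heroes from the menus, enemies from AI), sorts the queue by speed, then executes it. Everything below is invented for illustration:

        #include <algorithm>
        #include <cstdio>
        #include <string>
        #include <vector>

        struct Combatant {
            std::string name;
            int hp, attack, speed;
            bool isHero;
        };

        struct Action {
            int actor, target; // indices into the combatant list
        };

        // One round: every combatant has queued an action; fastest act first.
        void runRound(std::vector<Combatant>& all, std::vector<Action> queued) {
            std::stable_sort(queued.begin(), queued.end(),
                             [&](const Action& a, const Action& b) {
                                 return all[a.actor].speed > all[b.actor].speed;
                             });
            for (const Action& act : queued) {
                Combatant& src = all[act.actor];
                Combatant& dst = all[act.target];
                if (src.hp <= 0 || dst.hp <= 0) continue; // dead actors/targets skip
                dst.hp -= src.attack;                     // damage formula goes here
                std::printf("%s hits %s (hp now %d)\n",
                            src.name.c_str(), dst.name.c_str(), dst.hp);
            }
        }

        int main() {
            std::vector<Combatant> c = {{"Hero", 30, 5, 7, true}, {"Slime", 12, 3, 4, false}};
            runRound(c, {{0, 1}, {1, 0}}); // hero's menu choice + enemy AI choice
        }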


  • Music Rhythm Game Difficulty Question

    - by David Dimalanta
    I have a curious question about the music rhythm genre that came up while writing code for my game. Is it better to generate a random pattern for every song played, or to have a specific pattern that depends on the song and the difficulty? I have observed that in the console game Guitar Hero 3, the difficulty is based on the number of strings used and the possible combos (e.g. two-string combos). Compared to that, in Tap Tap Revenge for Android and iPhone, the difficulty is based on the BPM (beats per minute), meaning the number of targets that spawn and must be hit.
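    If the answer lands on authored patterns, the usual data shape is a per-song, per-difficulty beatmap: a time-sorted list of (time, lane) notes, typically laid out on the song's BPM grid. A minimal sketch (the lane pattern is a placeholder):

        #include <vector>

        struct Note { float timeSec; int lane; };

        // Builds a simple beatmap for one difficulty: one note per beat, denser
        // (eighth notes) on hard. Lane choice here is just a cycling placeholder.
        std::vector<Note> buildBeatmap(float bpm, float lengthSec, bool hard) {
            std::vector<Note> notes;
            float step = 60.0f / bpm / (hard ? 2.0f : 1.0f);
            int i = 0;
            for (float t = 0; t < lengthSec; t += step, ++i)
                notes.push_back({t, i % 4}); // 4 lanes
            return notes;
        }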

