Search Results

Search found 32114 results on 1285 pages for 'general development'.

  • How do I separate model positions from view positions in MVC?

    - by tieTYT
    Using MVC in games (as opposed to web apps) always confuses me when it comes to the view. How am I supposed to keep the model agnostic of how the view is presenting things? I always end up giving the Model a position that holds x and y but invariably, these values end up being in units of pixels and that feels wrong. I can see the advantage* of avoiding that, but how am I supposed to? This idea was suggested: Don't think of it in units of pixels; think of them in arbitrary distance units that just happen to map to pixels at a 1:1 ratio. Oh, the resolution is half of what it was? We are now taking the x/y coordinates at 50% value for screen display, and your spell's casting range is still 300 units long, which now is 150 pixels. But those numbers conveniently work out. What do I do if the numbers divide in such a way that I get decimal places? Floating points are unsafe. I think allowing decimal places would eventually cause really weird bugs in my game. *It'd let me write the model once and write different views depending on the device.
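
    Below is a minimal sketch (every name in it is invented for illustration) of the idea suggested above: the model stores positions in abstract world units, and only the view knows about pixels, converting at draw time with a single scale factor.

        // Model side: positions live in world units, never pixels.
        struct WorldPosition { float x, y; };

        struct ModelEntity {
            WorldPosition position;      // e.g. a spell range of 300.0f units
        };

        // View side: the only place where "pixels per unit" exists.
        class View {
        public:
            explicit View(float pixelsPerUnit) : pixelsPerUnit_(pixelsPerUnit) {}

            void draw(const ModelEntity& e) const {
                // Round only here, at presentation time, so fractional world
                // coordinates never leak back into the model.
                const int screenX = static_cast<int>(e.position.x * pixelsPerUnit_ + 0.5f);
                const int screenY = static_cast<int>(e.position.y * pixelsPerUnit_ + 0.5f);
                blit(screenX, screenY);  // stand-in for the real drawing call
            }

        private:
            void blit(int, int) const {} // stub
            float pixelsPerUnit_;        // 1.0f at native resolution, 0.5f at half
        };

    A phone view and a desktop view can then be two View instances (or subclasses) with different scale factors, while the model code is written once.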

  • Problem creating levels using inherited classes/polymorphism

    - by Adam
    I'm trying to write my level classes by having a base class that each level class inherits from...The base class uses pure virtual functions. My base class is only going to be used as a vector that'll have the inherited level classes pushed onto it...This is what my code looks like at the moment, I've tried various things and get the same result (segmentation fault). //level.h class Level { protected: Mix_Music *music; SDL_Surface *background; SDL_Surface *background2; vector<Enemy> enemy; bool loaded; int time; public: Level(); virtual ~Level(); int bgX, bgY; int bg2X, bg2Y; int width, height; virtual void load(); virtual void unload(); virtual void update(); virtual void draw(); }; //level.cpp Level::Level() { bgX = 0; bgY = 0; bg2X = 0; bg2Y = 0; width = 2048; height = 480; loaded = false; time = 0; } Level::~Level() { } //virtual functions are empty... I'm not sure exactly what I'm supposed to include in the inherited class structure, but this is what I have at the moment... //level1.h class Level1: public Level { public: Level1(); ~Level1(); void load(); void unload(); void update(); void draw(); }; //level1.cpp Level1::Level1() { } Level1::~Level1() { enemy.clear(); Mix_FreeMusic(music); SDL_FreeSurface(background); SDL_FreeSurface(background2); music = NULL; background = NULL; background2 = NULL; Mix_CloseAudio(); } void Level1::load() { music = Mix_LoadMUS("music/song1.xm"); background = loadImage("image/background.png"); background2 = loadImage("image/background2.png"); Mix_OpenAudio(48000, MIX_DEFAULT_FORMAT, 2, 4096); Mix_PlayMusic(music, -1); } void Level1::unload() { } //functions have level-specific code in them... Right now for testing purposes, I just have the main loop call Level1 level1; and use the functions, but when I run the game I get a segmentation fault. This is the first time I've tried writing inherited classes, so I know I'm doing something wrong, but I can't seem to figure out what exactly.
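
    The posted code is the subject of the question, so the snippet below is only a minimal, SDL-free sketch of how a collection of levels is usually held polymorphically: storing Level objects by value in a std::vector<Level> would slice off the derived parts, so the container holds (smart) pointers instead. Two things worth checking in the original code, offered only as possibilities, are that the base constructor never sets music/background/background2 to NULL, so the Level1 destructor may free garbage pointers if load() was never called, and that audio should usually be opened before music is loaded.

        #include <memory>
        #include <vector>

        class Level {
        public:
            virtual ~Level() = default;
            virtual void load() = 0;
            virtual void update() = 0;
            virtual void draw() = 0;
        };

        class Level1 : public Level {
        public:
            void load() override {}
            void update() override {}
            void draw() override {}
        };

        int main() {
            // Pointers (not values) keep the virtual dispatch intact.
            std::vector<std::unique_ptr<Level>> levels;
            levels.push_back(std::make_unique<Level1>());
            levels[0]->load();    // calls Level1::load
            levels[0]->update();
            levels[0]->draw();
        }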

  • Working out of a vertex array for destructible objects

    - by bobobobo
    I have diamond-shaped polygonal bullets. There are lots of them on the screen. I did not want to create a vertex array for each, so I packed them into a single vertex array and they're all drawn at once. | bullet1.xyz | bullet1.rgb | bullet2.xyz | bullet2.rgb This is great for performance. Each bullet keeps pointers into that buffer: struct Bullet { vector<Vector3f*> verts; // pointers into the vertex buffer }; This works fine, the bullets can move and do collision detection, all while having their data in one place. Except when a bullet "dies": then you have to clear a slot and pack all the bullets towards the beginning of the array. Is this a good approach to handling lots of low-poly objects? How else would you do it?
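
    A common way to keep the array densely packed without re-packing everything is a swap-with-last removal. The sketch below assumes each bullet occupies a fixed-size slot of vertices in one shared buffer (the slot size and vertex layout are invented for illustration); killing a bullet copies the last live slot over the freed one, so a death costs O(1) instead of shifting every later bullet.

        #include <cstring>
        #include <vector>

        struct Vertex { float x, y, z, r, g, b; };      // xyz + rgb, interleaved
        const int kVertsPerBullet = 4;                  // diamond: 4 vertices (assumed)

        struct BulletPool {
            std::vector<Vertex> vertices;               // liveCount * kVertsPerBullet entries
            int liveCount = 0;

            void kill(int slot) {
                const int last = liveCount - 1;
                if (slot != last) {
                    // Move the last bullet's vertex data into the vacated slot.
                    std::memcpy(&vertices[slot * kVertsPerBullet],
                                &vertices[last * kVertsPerBullet],
                                kVertsPerBullet * sizeof(Vertex));
                }
                --liveCount;                            // the live range stays contiguous
            }
        };

    If Bullet objects hold raw pointers into the buffer, the moved bullet's pointers must be updated after the swap; storing a slot index instead of pointers avoids that bookkeeping.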

  • Permanently sync a Wiimote with a computer

    - by Adam Geisweit
    I have tried to look up many ways to sync my Wiimotes with my computer so that I can program games with them, but every time it only syncs them temporarily, or if it says it can permanently sync them, it doesn't actually do it. It gets tiresome when I have to keep reconnecting them every time I want to save battery life. How would I be able to sync my Wiimote with my computer so that if I turn it off, I can just hit any button and it will automatically sync up again?

  • Correct way to use Farseer Physics in XNA

    - by user1640602
    I am using Farseer Physics for my 2D sidescroller game and I'm not sure how to proceed with it. I currently have a Sprite class (handles nothing but graphics), a GameObject class (contains specific object info like hit points), a World object which contains the list of Bodies, and a Level object which contains all of these objects. Originally I was trying to keep track of the Sprites, GameObjects, and Bodies separately because I felt that would provide loose coupling but it quickly became a headache. So my new idea was to add a Sprite member to the GameObject class but I'm still not sure how to maintain the Bodies because they have to communicate with GameObject. Specifically, my issue is this: The position of the Body is used to draw the Sprite inside of the Level. In order to do that I would have to maintain a link between GameObjects and Bodies. Is this correct or is there a better way to architect my game? If any of this is unclear please ask and I'll try to clarify. Thank you in advance for any help.

  • Clicking on clues and other objects in a 2D Cluedo-like game

    - by Anearion
    I'm a Java/Android programmer, but I don't have any experience in game programming. I'm already reading proper books, like "Pro Android Games", but my concerns are more about the ideas behind game programming than the techniques themselves. I'm working on a 2D game, something like Cluedo, to give you an idea of the genre. I would like to know how I should handle the "scenes", for example, a room with a desk, TV, windows and a lamp. I need to make some items tappable and others not. Is it common to use one image (invisible to the user) with every different item in a different color, and then call the getColor() method on that image? Or use one image as the background and separate images for all the items? If the latter, how can I set the positioning, and should I use ImageView or ImageButton? I'm sorry if these are really low-quality questions, but as an "outsider" (I'm 23 and still finishing university) it's pretty hard to learn alone.
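
    The single-color-mask approach can be sketched in a platform-neutral way like this (all names are invented): keep a second bitmap, never drawn to screen, in which every tappable item was painted with its own unique solid color, and look up the color under the tap. On Android the lookup itself would typically be Bitmap.getPixel(x, y) inside the touch handler, but that wiring is left out here.

        #include <cstdint>
        #include <map>

        struct Mask {
            int width = 0, height = 0;
            const uint32_t* pixels = nullptr;          // same dimensions as the background

            uint32_t colorAt(int x, int y) const { return pixels[y * width + x]; }
        };

        enum class Item { None, Desk, Tv, Window, Lamp };

        // Returns which item (if any) sits under a tap at (tapX, tapY).
        Item hitTest(const Mask& mask, const std::map<uint32_t, Item>& colorToItem,
                     int tapX, int tapY) {
            if (tapX < 0 || tapY < 0 || tapX >= mask.width || tapY >= mask.height)
                return Item::None;
            auto it = colorToItem.find(mask.colorAt(tapX, tapY));
            return it != colorToItem.end() ? it->second : Item::None;
        }

    This keeps the visible scene as a single background image and needs no per-item views, which is why it scales well when rooms contain many small clickable props.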

  • Turn-Based RPG Battle Instance Layout For Larger Groups

    - by SoulBeaver
    What a title, eh? I'm currently designing a videogame; a turn-based RPG like Final Fantasy (because everybody knows Final Fantasy). It's a 2D sprite game. These are my ideas for combat:
      - The player has a group of 15 members (main character included).
      - During battle, five of the group are designated as active, and appear in the battle.
      - These five may be switched out at leisure, or when one of the five dies.
      - At any time, the Waiting members can cast buffs, be healed by the active members, or perform special attacks.
      - Battles should contain 10+ monsters at least. I'm aiming for 20, but I'm not sure if that's possible yet.
      - Battles should feel larger than normal due to the interaction of Waiting members, active members and the increased amount of monsters per battle.
      - The player has two rows in which to put the Active members: front and back.
      - Depending on the implementation, I might allow comboing of player attacks and skills.
    These are just design ideas, so beware! I have not been able to test this out yet - I have no idea yet if any of these ideas bunched together will make for a compelling game. What sounds good on paper doesn't necessarily have to be good in practice! What I'm asking now is how to create the layout for this. My starting point is the battles in Final Fantasy VI, with up to 5-6 monsters on the left and the characters on the right - monsters on both sides if it's a pincer attack. However, this view would not be feasible with my goal of 20 monsters and 5 characters. All the monsters on the left would appear cluttered unless I scale them far, far back. If I create a pincer-like map, then there would be no real pincer attack possible. If I space the monsters out, I force the player to scroll the screen - a game mechanic I've come across and not enjoyed, imho. My question is: does anybody have any layouts or guides for designing battle maps in turn-based RPGs, especially with a larger number of enemies taken into consideration? How should it look? I am not asking for specific combat mechanics, just the layout for the moment.

  • Locomotion-system with irregular IK

    - by htaunay
    I'm having some trouble with Locomotion's (Unity3D asset) IK feet placement. I wouldn't call it "very bad", but it definitely isn't as smooth as the Locomotion System Examples. The strangest behavior (which is probably linked to the problem) is the rendered foot markers that "guess" where the character's next step will be. In the demo, they are smooth and stable. However, in my project, they keep flickering, as if Locomotion changed its "guess" every frame, and sometimes the automatically defined step is too close to the previous step, or sometimes too distant, creating a very irregular pattern. The configuration is (apparently) identical to the human example in the demo, so I'm guessing the problem is my model and/or animation. Problem is, I can't figure out what it is =S Has anyone experienced the same problem? I uploaded a video of the bug to help interpret the issue (excuse the HORRIBLE quality, I was in a hurry).

  • Game maps with "counties" that are split along lines that aren't necessarily straight

    - by pm_2
    I want to create a game environment that supports a 2D map. This is a really basic map, but it must be split along lines that are not necessarily straight. So imagine a country with county boundaries. I then want to be able to detect drag/drop events within these counties. What I'm really looking for here is a pointer to where to start on this (how it has been done before - any existing libraries out there), as I'm sure that what I'm trying to do is not new - although I can't find a beginner's guide for this anywhere.
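
    The usual building block for "which county was this dropped into?" is a point-in-polygon test over each county's boundary vertices. Below is a sketch of the classic even-odd ray-casting version: cast a horizontal ray from the point and count how many boundary edges it crosses; an odd count means the point is inside. (The county data structure around it is up to you; this is only the core test.)

        #include <vector>

        struct Point { double x, y; };

        bool pointInPolygon(const Point& p, const std::vector<Point>& poly) {
            bool inside = false;
            for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
                const Point& a = poly[i];
                const Point& b = poly[j];
                // Does edge a-b straddle the horizontal line through p,
                // and is the crossing to the right of p?
                const bool crosses =
                    ((a.y > p.y) != (b.y > p.y)) &&
                    (p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x);
                if (crosses) inside = !inside;
            }
            return inside;
        }

    Resolving a drag/drop event is then a loop over the counties that returns the first polygon containing the cursor; geometry libraries such as Boost.Geometry or Clipper package this kind of test if you would rather not hand-roll it.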

  • Why do I have an error when adding states in Slick?

    - by SystemNetworks
    When I was going to create another state I had an error. This is my code: public static final int play2 = 3; and public Game(String gamename){ this.addState(new mission(play2)); } and public void initStatesList(GameContainer gc) throws SlickException{ this.getState(play2).init(gc, this); } I have an error in the addState. above the above code. I don't know where is the problem. But if you want the whole code it is here: package javagame; import org.newdawn.slick.*; import org.newdawn.slick.state.*; public class Game extends StateBasedGame{ public static final String gamename = "NET FRONT"; public static final int menu = 0; public static final int play = 1; public static final int train = 2; public static final int play2 = 3; public Game(String gamename){ super(gamename); this.addState(new Menu(menu)); this.addState(new Play(play)); this.addState(new train(train)); this.addState(new mission(play2)); } public void initStatesList(GameContainer gc) throws SlickException{ this.getState(menu).init(gc, this); this.getState(play).init(gc, this); this.getState(train).init(gc, this); this.enterState(menu); this.getState(play2).init(gc, this); } public static void main(String[] args) { try{ AppGameContainer app =new AppGameContainer(new Game(gamename)); app.setDisplayMode(1500, 1000, false); app.start(); }catch(SlickException e){ e.printStackTrace(); } } } //SYSTEM NETWORKS(C) 2012 NET FRONT

  • CCSpriteHole in cocos2d 2.0?

    - by rakkarage
    i was using this cocos2d class CCSpriteHole in cocos2d 1.0 fine... http://jpsarda.tumblr.com/post/15779708304/new-cocos2d-iphone-extensions-a-progress-bar-and-a i am trying to convert it to cocos2d 2.0... i got it to compile by changing glVertexPointer to glVertexAttribPointer like in the 2.0 version of CCSpriteScale9 here http://jpsarda.tumblr.com/post/9162433577/scale9grid-for-cocos2d and changing contentSizeInPixels_ to contentSize_... -(id) init { if( (self=[super init]) ) { opacityModifyRGB_ = YES; opacity_ = 255; color_ = colorUnmodified_ = ccWHITE; capSize=capSizeInPixels=CGSizeZero; //Not used blendFunc_.src = CC_BLEND_SRC; blendFunc_.dst = CC_BLEND_DST; // update texture (calls updateBlendFunc) [self setTexture:nil]; // default transform anchor anchorPoint_ = ccp(0.5f, 0.5f); vertexDataCount=24; vertexData = (ccV2F_C4F_T2F*) malloc(vertexDataCount * sizeof(ccV2F_C4F_T2F)); [self setTextureRectInPixels:CGRectZero untrimmedSize:CGSizeZero]; } return self; } -(id) initWithTexture:(CCTexture2D*)texture rect:(CGRect)rect { NSAssert(texture!=nil, @"Invalid texture for sprite"); // IMPORTANT: [self init] and not [super init]; if( (self = [self init]) ) { [self setTexture:texture]; [self setTextureRect:rect]; } return self; } -(id) initWithTexture:(CCTexture2D*)texture { NSAssert(texture!=nil, @"Invalid texture for sprite"); CGRect rect = CGRectZero; rect.size = texture.contentSize; return [self initWithTexture:texture rect:rect]; } -(id) initWithFile:(NSString*)filename { NSAssert(filename!=nil, @"Invalid filename for sprite"); CCTexture2D *texture = [[CCTextureCache sharedTextureCache] addImage: filename]; if( texture ) return [self initWithTexture:texture]; return nil; } +(id)spriteWithFile:(NSString*)f { return [[self alloc] initWithFile:f]; } - (void) dealloc { if (vertexData) free(vertexData); } -(void) updateColor { ccColor4F color4; color4.r=(float)color_.r/255.0f; color4.g=(float)color_.g/255.0f; color4.b=(float)color_.b/255.0f; color4.a=(float)opacity_/255.0f; for (int i=0; i<vertexDataCount; i++) { vertexData[i].colors=color4; } } -(void)updateTextureCoords:(CGRect)rect { CCTexture2D *tex = texture_; if(!tex) return; float atlasWidth = (float)tex.pixelsWide; float atlasHeight = (float)tex.pixelsHigh; float left,right,top,bottom; left = rect.origin.x/atlasWidth; right = left + rect.size.width/atlasWidth; top = rect.origin.y/atlasHeight; bottom = top + rect.size.height/atlasHeight; // // |/|/|/| // CGSize capTexCoordsSize=CGSizeMake(capSizeInPixels.width/atlasWidth, capSizeInPixels.height/atlasHeight); // From left to right //Top band // Left vertexData[0].texCoords=(ccTex2F){left,top}; vertexData[1].texCoords=(ccTex2F){left,top+capTexCoordsSize.height}; vertexData[2].texCoords=(ccTex2F){left+capTexCoordsSize.width,top}; vertexData[3].texCoords=(ccTex2F){left+capTexCoordsSize.width,top+capTexCoordsSize.height}; // Center vertexData[4].texCoords=(ccTex2F){right-capTexCoordsSize.width,top}; vertexData[5].texCoords=(ccTex2F){right-capTexCoordsSize.width,top+capTexCoordsSize.height}; // Right vertexData[6].texCoords=(ccTex2F){right,top}; vertexData[7].texCoords=(ccTex2F){right,top+capTexCoordsSize.height}; //Center band // Left vertexData[8].texCoords=(ccTex2F){left,bottom-capTexCoordsSize.height}; vertexData[9].texCoords=(ccTex2F){left,top+capTexCoordsSize.height}; vertexData[10].texCoords=(ccTex2F){left+capTexCoordsSize.width,bottom-capTexCoordsSize.height}; vertexData[11].texCoords=(ccTex2F){left+capTexCoordsSize.width,top+capTexCoordsSize.height}; // Center 
vertexData[12].texCoords=(ccTex2F){right-capTexCoordsSize.width,bottom-capTexCoordsSize.height}; vertexData[13].texCoords=(ccTex2F){right-capTexCoordsSize.width,top+capTexCoordsSize.height}; // Right vertexData[14].texCoords=(ccTex2F){right,bottom-capTexCoordsSize.height}; vertexData[15].texCoords=(ccTex2F){right,top+capTexCoordsSize.height}; //Bottom band //Left vertexData[16].texCoords=(ccTex2F){left,bottom}; vertexData[17].texCoords=(ccTex2F){left,bottom-capTexCoordsSize.height}; vertexData[18].texCoords=(ccTex2F){left+capTexCoordsSize.width,bottom}; vertexData[19].texCoords=(ccTex2F){left+capTexCoordsSize.width,bottom-capTexCoordsSize.height}; // Center vertexData[20].texCoords=(ccTex2F){right-capTexCoordsSize.width,bottom}; vertexData[21].texCoords=(ccTex2F){right-capTexCoordsSize.width,bottom-capTexCoordsSize.height}; // Right vertexData[22].texCoords=(ccTex2F){right,bottom}; vertexData[23].texCoords=(ccTex2F){right,bottom-capTexCoordsSize.height}; } -(void) updateVertices { float left=0; //-spriteSizeInPixels.width*0.5f; float right=left+contentSize_.width; float bottom=0; //-spriteSizeInPixels.height*0.5f; float top=bottom+contentSize_.height; float holeLeft=holeRect.origin.x*CC_CONTENT_SCALE_FACTOR(); float holeRight=holeLeft+holeRect.size.width*CC_CONTENT_SCALE_FACTOR(); float holeBottom=holeRect.origin.y*CC_CONTENT_SCALE_FACTOR(); float holeTop=holeBottom+holeRect.size.height*CC_CONTENT_SCALE_FACTOR(); // // |/|/|/| // // From left to right //Top band // Left vertexData[0].vertices=(ccVertex2F){left,top}; vertexData[1].vertices=(ccVertex2F){left,holeTop}; vertexData[2].vertices=(ccVertex2F){holeLeft,top}; vertexData[3].vertices=(ccVertex2F){holeLeft,holeTop}; // Center vertexData[4].vertices=(ccVertex2F){holeRight,top}; vertexData[5].vertices=(ccVertex2F){holeRight,holeTop}; // Right vertexData[6].vertices=(ccVertex2F){right,top}; vertexData[7].vertices=(ccVertex2F){right,holeTop}; //Center band // Left vertexData[8].vertices=(ccVertex2F){left,holeBottom}; vertexData[9].vertices=(ccVertex2F){left,holeTop}; vertexData[10].vertices=(ccVertex2F){holeLeft,holeBottom}; vertexData[11].vertices=(ccVertex2F){holeLeft,holeTop}; // Center vertexData[12].vertices=(ccVertex2F){holeRight,holeBottom}; vertexData[13].vertices=(ccVertex2F){holeRight,holeTop}; // Right vertexData[14].vertices=(ccVertex2F){right,holeBottom}; vertexData[15].vertices=(ccVertex2F){right,holeTop}; //Bottom band //Left vertexData[16].vertices=(ccVertex2F){left,bottom}; vertexData[17].vertices=(ccVertex2F){left,holeBottom}; vertexData[18].vertices=(ccVertex2F){holeLeft,bottom}; vertexData[19].vertices=(ccVertex2F){holeLeft,holeBottom}; // Center vertexData[20].vertices=(ccVertex2F){holeRight,bottom}; vertexData[21].vertices=(ccVertex2F){holeRight,holeBottom}; // Right vertexData[22].vertices=(ccVertex2F){right,bottom}; vertexData[23].vertices=(ccVertex2F){right,holeBottom}; } -(void) setHole:(CGRect)r inRect:(CGRect)totalSurface { holeRect=r; self.contentSize=totalSurface.size; holeRect.origin=ccpSub(holeRect.origin,totalSurface.origin); CGPoint holeCenter=ccp(holeRect.origin.x+holeRect.size.width*0.5f,holeRect.origin.y+holeRect.size.height*0.5f); self.anchorPoint=ccp(holeCenter.x/contentSize_.width,holeCenter.y/contentSize_.height); //[self updateTextureCoords:rectInPixels_]; [self updateVertices]; [self updateColor]; } -(void) draw { BOOL newBlend = NO; if( blendFunc_.src != CC_BLEND_SRC || blendFunc_.dst != CC_BLEND_DST ) { newBlend = YES; glBlendFunc( blendFunc_.src, blendFunc_.dst ); } glBindTexture(GL_TEXTURE_2D, 
[texture_ name]); glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[0].vertices); glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[0].texCoords); glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[0].colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[8].vertices); glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[8].texCoords); glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[8].colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[16].vertices); glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[16].texCoords); glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, sizeof(ccV2F_C4F_T2F), &vertexData[16].colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, 8); if( newBlend ) glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST); } -(void)setTextureRectInPixels:(CGRect)rect untrimmedSize:(CGSize)untrimmedSize { rectInPixels_ = rect; rect_ = CC_RECT_PIXELS_TO_POINTS( rect ); //[self setContentSizeInPixels:untrimmedSize]; [self updateTextureCoords:rectInPixels_]; } -(void)setTextureRect:(CGRect)rect { CGRect rectInPixels = CC_RECT_POINTS_TO_PIXELS( rect ); [self setTextureRectInPixels:rectInPixels untrimmedSize:rectInPixels.size]; } // // RGBA protocol // #pragma mark CCSpriteHole - RGBA protocol -(GLubyte) opacity { return opacity_; } -(void) setOpacity:(GLubyte) anOpacity { opacity_ = anOpacity; // special opacity for premultiplied textures if( opacityModifyRGB_ ) [self setColor: (opacityModifyRGB_ ? colorUnmodified_ : color_ )]; [self updateColor]; } - (ccColor3B) color { if(opacityModifyRGB_){ return colorUnmodified_; } return color_; } -(void) setColor:(ccColor3B)color3 { color_ = colorUnmodified_ = color3; if( opacityModifyRGB_ ){ color_.r = color3.r * opacity_/255; color_.g = color3.g * opacity_/255; color_.b = color3.b * opacity_/255; } [self updateColor]; } -(void) setOpacityModifyRGB:(BOOL)modify { ccColor3B oldColor = self.color; opacityModifyRGB_ = modify; self.color = oldColor; } -(BOOL) doesOpacityModifyRGB { return opacityModifyRGB_; } #pragma mark CCSpriteHole - CocosNodeTexture protocol -(void) updateBlendFunc { if( !texture_ || ! [texture_ hasPremultipliedAlpha] ) { blendFunc_.src = GL_SRC_ALPHA; blendFunc_.dst = GL_ONE_MINUS_SRC_ALPHA; [self setOpacityModifyRGB:NO]; } else { blendFunc_.src = CC_BLEND_SRC; blendFunc_.dst = CC_BLEND_DST; [self setOpacityModifyRGB:YES]; } } -(void) setTexture:(CCTexture2D*)texture { // accept texture==nil as argument NSAssert( !texture || [texture isKindOfClass:[CCTexture2D class]], @"setTexture expects a CCTexture2D. Invalid argument"); texture_ = texture; [self updateBlendFunc]; } -(CCTexture2D*) texture { return texture_; } @end but now positioning and scaling seem to not work? and it starts in the wrong position... but changing the opacity still works. so i was wondering if anyone can see why my 2.0 version is not working? or if maybe there is a better way to do a sprite hole with cocos2d/opengl 2.0? shaders? thanks

  • Coordinate and positioning problem on iOS with cocos2d-x

    - by Vexille
    I'm using cocos2d-x alongside with Marmalade and running some tests and tutorials before starting an actual project with them. So far things are working reasonably well on the windows simulator, Android and even on Blackberry's Playbook, but on iOS devices (iPhone and iPad) the positioning seems to be off. To make things clearer, I put together a scene that just draws an image in the middle of the screen. It worked as expected on everything else, but this is the result I got on an iPhone: To get the coordinates for the center of the screen I'm using the VisibleRect class from the TestCpp sample. It just uses sharedOpenGLView to get the visible size and visible origin, and calculate the center from that. CCSprite* test = CCSprite::create("Ball.png", CCRectMake(0, 0, 80, 80) ); test->setPosition( ccp(VisibleRect::center().x, VisibleRect::center().y) ); this->addChild(test); Also I have a noBorder policy set on AppDelegate: CCEGLView::sharedOpenGLView()->setDesignResolutionSize(designSize.width, designSize.height, kResolutionNoBorder); One funny thing is that I tried to deploy the TestCpp sample project to some iOS devices and it worked reasonably well on the iPhone, but on the iPad the application was only being drawn on a small portion of the screen - just like what happened on the iPhone when I tried using the ShowAll policy.

  • How should I model an economy-based game in code?

    - by Matthew G.
    I'd like to create an economy game based on an ancient civilization. I'm not sure how to design it. If I were working on a smaller game, like a copy of "Space Invaders," I'd have no problem structuring it like this: a main control class, a graphics class, a player class, and an enemy class. I don't understand how I'd do this for larger projects like my economy game. Do I create a country class that contains a bunch of towns? Do the towns contain a lot of building classes, which in turn contain classes of people? Do I make a pathfinding class that the player can access to get around?
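
    One common answer is exactly the composition the question hints at: a Country owns Towns, a Town owns Buildings and People, and cross-cutting services such as pathfinding live beside that hierarchy rather than inside it. The sketch below is purely illustrative; every class and member name is invented.

        #include <string>
        #include <vector>

        struct Person   { std::string occupation; int gold = 0; };
        struct Building { std::string type; std::vector<Person> workers; };

        struct Town {
            std::string name;
            std::vector<Building> buildings;
            void simulateDay() { /* produce goods, collect taxes, ... */ }
        };

        struct Country {
            std::vector<Town> towns;
            void simulateDay() {
                for (Town& t : towns) t.simulateDay();   // delegate downward
            }
        };

        // Services used by many parts of the game (player, AI, traders) are
        // kept separate so they don't get tangled into the data hierarchy.
        class Pathfinder {
        public:
            // e.g. find a trade route between two towns over a road graph
        };

    The main loop then just ticks the Country once per game day and hands the renderer whatever read-only views of this data it needs.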

  • Frame Buffer Objects vs calling TexCoord2f?

    - by sensae
    I'm learning the basics of OpenGL with lwjgl currently, and following a guide I've got textured quads that can move around a scene. I've been reading about Frame Buffer Objects, and I'm not really clear on their purpose and their benefit. My understanding is that I'll create a FBO with the texture I'd like, load the FBO, draw a quad, then unload the FBO. What would the technique I'm currently doing for texture management be called, and how does it differ from using FBOs? What are the benefits to using FBOs? How does it fit into the grand rendering scheme of things?
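
    For what the question describes - drawing textured quads that move around - no FBO is needed at all: binding a texture and supplying texture coordinates is the normal path. A framebuffer object is an off-screen render target; you render INTO a texture and later sample that texture, which is what post-processing, mirrors and shadow maps are built on. A hedged sketch in raw OpenGL follows (LWJGL exposes the same entry points; the loader include is only an example).

        #include <GL/glew.h>   // any GL function loader works; GLEW is just an example

        // Creates an FBO whose color attachment is the texture *colorTexOut.
        // Afterwards, that texture can be bound and drawn like any other.
        GLuint createRenderTarget(int width, int height, GLuint* colorTexOut) {
            GLuint colorTex = 0, fbo = 0;

            glGenTextures(1, &colorTex);
            glBindTexture(GL_TEXTURE_2D, colorTex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);       // empty color target
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, colorTex, 0);
            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
                // handle an incomplete framebuffer here
            }
            glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer

            *colorTexOut = colorTex;
            return fbo;
        }

    Per frame: bind the FBO, draw the scene (it lands in the texture), bind framebuffer 0 again, then draw a full-screen quad textured with that texture - that is where an FBO fits into the larger rendering scheme.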

  • Has a multi-player graphic adventure* ever been made?

    - by Petruza
    By graphic adventure, I mean point & click LucasArts-type games. Those games are mostly linear in structure and usually don't offer as many variants as other game types like action, RPG, or strategy, which makes a multi-player feature difficult to implement in this genre. I'd like to know if there have been any attempts at doing such a thing, and whether it would be viable, as players going offline or leaving a game in the middle would significantly affect the other players' game.

  • How are trajectories calculated and transmitted to other players in multi-player?

    - by giulio
    I play a lot of COD4 and can see tracers for gunfire, missiles, care packages falling from helicopters, etc. There is a lot of activity. I am curious to know the algorithm (at a high level) that manages all this action when you have 20 people on a map shooting each other to death. This question touches on the subject but doesn't ask for a more in-depth answer as to how developers go about calculating and transmitting movement and collision detection for projectiles, be they missiles/bullets or any other object flying through the air in real time.

  • Scan-Line Z-Buffering Dilemma

    - by Belgin
    I have a set of vertices in 3D space, and for each I retain the following information: its 3D coordinates (x, y, z), and a list of pointers to the other vertices to which it is connected by edges. Right now, I'm doing perspective projection with the projecting plane being XY and the eye placed somewhere at (0, 0, d), with d < 0. For Z-buffering, I need to find the depth of the point of a polygon (they're all planar) which corresponds to a certain pixel on the screen, so I can hide the surfaces that are not visible. My questions are the following: How do I determine which polygon a pixel belongs to, so I can use the formula of the plane containing the polygon to find the Z-coordinate? Are my data structures correct? Do I need to store something else entirely in order for this to work? I'm just projecting the vertices onto the projection plane and joining them with lines based on the pointer lists.
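
    In the classic scan-line formulation the question is usually turned around: you don't ask which polygon a pixel belongs to; you scan-convert each polygon in turn, and for every pixel it covers you compute a depth and let the Z-buffer keep the nearest value. That also answers the data-structure part: polygons need to be stored as explicit vertex loops, not recovered from an adjacency wireframe. The depth-from-plane step the question mentions looks roughly like the sketch below (textbook formulation).

        #include <cmath>

        struct Vec3 { double x, y, z; };

        static Vec3 cross(const Vec3& a, const Vec3& b) {
            return { a.y * b.z - a.z * b.y,
                     a.z * b.x - a.x * b.z,
                     a.x * b.y - a.y * b.x };
        }

        // Plane a*x + b*y + c*z + d = 0 through three vertices of a planar polygon.
        struct Plane {
            double a, b, c, d;

            static Plane from(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
                const Vec3 n = cross({p1.x - p0.x, p1.y - p0.y, p1.z - p0.z},
                                     {p2.x - p0.x, p2.y - p0.y, p2.z - p0.z});
                return { n.x, n.y, n.z, -(n.x * p0.x + n.y * p0.y + n.z * p0.z) };
            }

            // Depth of the polygon's plane at coordinates (x, y).
            double zAt(double x, double y) const {
                return (std::fabs(c) < 1e-12) ? HUGE_VAL            // plane seen edge-on
                                              : -(a * x + b * y + d) / c;
            }
        };

    Note that under a true perspective projection depth is not linear in screen space, so production rasterizers interpolate 1/z instead; the plane-equation form above matches the scan-line algorithm as it is usually taught.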

  • Get collision details from Rectangle.Intersects()

    - by Daniel Ribeiro
    I have a Breakout game in which, at some point, I detect the collision between the ball and the paddle with something like this: // Ball class rectangle.Intersects(paddle.Rectangle); Is there any way I can get the exact coordinates of the collision, or any details about it, with the current XNA API? I thought of doing some basic calculations, such as comparing the exact coordinates of each object on the moment of the collision. It would look something like this: // Ball class if((rectangle.X - paddle.Rectangle.X) < (paddle.Rectangle.Width / 2)) // Collision happened on the left side else // Collision happened on the right side But I'm not sure this is the correct way to do it. Do you guys have any tips on maybe an engine I might have to use to achieve that? Or even good coding practices using this method?
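
    XNA's Rectangle.Intersects only returns a bool, so a common approach is to compute the four overlap depths yourself and treat the smallest one as the collision side; the comparison in the question is one special case of that. The sketch below is plain C++ for illustration, but Rectangle's X, Y, Width and Height map onto it directly.

        #include <algorithm>
        #include <string>

        struct Rect { float x, y, w, h; };   // top-left corner plus size

        // Which side of 'paddle' did 'ball' hit? Call only after an
        // intersection test has already succeeded.
        std::string collisionSide(const Rect& ball, const Rect& paddle) {
            const float overlapLeft   = (ball.x + ball.w) - paddle.x;     // entering from the left
            const float overlapRight  = (paddle.x + paddle.w) - ball.x;   // entering from the right
            const float overlapTop    = (ball.y + ball.h) - paddle.y;     // entering from above
            const float overlapBottom = (paddle.y + paddle.h) - ball.y;   // entering from below

            const float minOverlap = std::min({overlapLeft, overlapRight,
                                               overlapTop, overlapBottom});
            if (minOverlap == overlapTop)    return "top";
            if (minOverlap == overlapBottom) return "bottom";
            if (minOverlap == overlapLeft)   return "left";
            return "right";
        }

    For a Breakout paddle the "top" case is the one that matters most, and the horizontal offset of the ball's center from the paddle's center is what games typically feed into the rebound angle.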

  • What knowledge would I need to make a good simulation game?

    - by Skeith
    I have an idea for a game like Theme Park but don't know how simulation games are made. I am not some noob on his first game, so I'd appreciate constructive answers instead of "it's hard, don't do it". What I want to know is how simulation game mechanics are put together. I figure it would be heavier on the AI than normal games, and since I don't know much about AI, I would like to know some programming techniques I should look into for this style of game. Specific techniques please, not just a book on AI. What sort of architecture would be used? I guess it would have some sort of probability engine with pre-designed events that are triggered based on the AI state. Would it use an FSM or be purely event-driven? Any information on how a sim game functions would be cool.

  • Implement Fast Inverse Square Root in Javascript?

    - by BBz
    The Fast Inverse Square Root from Quake III seems to use a floating-point trick. As I understand, floating-point representation can have some different implementations. So is it possible to implement the Fast Inverse Square Root in Javascript? Would it return the same result? float Q_rsqrt(float number) { long i; float x2, y; const float threehalfs = 1.5F; x2 = number * 0.5F; y = number; i = * ( long * ) &y; i = 0x5f3759df - ( i >> 1 ); y = * ( float * ) &i; y = y * ( threehalfs - ( x2 * y * y ) ); return y; }

  • Getting a texture from a renderbuffer in OpenGL?

    - by Rushyo
    I've got a renderbuffer (DepthStencil) in an FBO and I need to get a texture from it. I can't have both a DepthComponent texture and a DepthStencil renderbuffer in the FBO, it seems, so I need some way to convert the renderbuffer to a DepthComponent texture after I'm done with it for use later down the pipeline. I've tried plenty of techniques to grab the depth component from the renderbuffer for weeks but I always come out with junk. All I want at the end is the same texture I'd get from an FBO if I wasn't using a renderbuffer. Can anyone post some comprehensive instructions or code that covers this seemingly simple operation? EDIT: Linky to an extract version of the code http://dl.dropbox.com/u/9279501/fbo.cs Screeny of the Depth of Field effect + FBO - without depth(!) http://i.stack.imgur.com/Hj9Oe.jpg Screeny without Depth of Field effect + FBO - depth working fine http://i.stack.imgur.com/boOm1.jpg
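
    A renderbuffer can't be bound as a texture afterwards, so rather than converting it, the usual route is to make the depth-stencil attachment a texture in the first place; it fills the same role while rendering and can then be sampled in the depth-of-field pass. The sketch below is raw OpenGL written as C++ (the linked code is OpenTK/C#, but the calls are the same ones); GL_DEPTH24_STENCIL8 keeps the stencil bits the FBO needs.

        #include <GL/glew.h>   // any GL loader; GLEW is named only as an example

        // Attach a depth-stencil TEXTURE (instead of a renderbuffer) to 'fbo'.
        GLuint attachDepthStencilTexture(GLuint fbo, int width, int height) {
            GLuint depthTex = 0;
            glGenTextures(1, &depthTex);
            glBindTexture(GL_TEXTURE_2D, depthTex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
                         GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                                   GL_TEXTURE_2D, depthTex, 0);
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            return depthTex;   // bind this in the depth-of-field shader pass
        }

    When sampling, the depth value comes back in the texture's red channel, with any depth-comparison mode left off; if the effect needs linear eye-space depth, it has to be reconstructed from the projection's near/far planes in the shader.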

  • Normal vector of a face loaded from an FBX model during collision?

    - by Corey Ogburn
    I'm loading a simple 6 sided cube from a UV-mapped FBX model and I'm using a BoundingBox to test for collisions. Once I determine there's a collision, I want to use the normal vector of the collided surface to correct the movement of whatever collided with the cube. I suppose this is a two-part question: 1) How can I determine which face of the cube was collided with in a collision? 2) How can I get the normal vector of that surface?
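
    A face normal can be computed directly from three vertices of the face with a cross product, as in the sketch below (plain C++ rather than XNA types to keep it self-contained; XNA's Vector3.Cross and Vector3.Normalize do the same job). For a cube tested against an axis-aligned BoundingBox, the six candidate normals are just the positive and negative coordinate axes, so a common shortcut is to take the contact point relative to the box center, divide each component by the box's half-extent, and pick the axis with the largest absolute value.

        #include <cmath>

        struct Vec3 {
            float x, y, z;
            Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        };

        static Vec3 cross(const Vec3& a, const Vec3& b) {
            return { a.y * b.z - a.z * b.y,
                     a.z * b.x - a.x * b.z,
                     a.x * b.y - a.y * b.x };
        }

        static Vec3 normalize(const Vec3& v) {
            const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return { v.x / len, v.y / len, v.z / len };
        }

        // Normal of triangle (a, b, c); the winding order decides its direction.
        Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c) {
            return normalize(cross(b - a, c - a));
        }

    If the FBX vertices already carry normals, averaging the three vertex normals of the hit triangle is an alternative that respects whatever the modelling tool exported.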

  • Particle trajectory smoothing: where to do the simulation?

    - by nkint
    I have a particle system in which I have particles that are moving to a target and the new targets are received via network. The list of new target are some noisy coordinates of a moving target stored in the server that I want to smooth in the client. For doing the smoothing and the particle I wrote a simple particle engine with standard euler integration model. So, my pseudo code is something like that: # pseudo code class Particle: def update(): # do euler motion model integration: # if the distance to the target is more than a limit # add a new force to the accelleration # seeking the target, # and add the accelleration to velocity # and velocity to the position positionHistory.push_back(position); if history.length > historySize : history.pop_front() class ParticleEngine: particleById = dict() # an associative array # where the keys are the id # and particle istances are sotred as values # this method is called each time a new tcp packet is received and parsed def setNetTarget(int id, Vec2D new_target): particleById[id].setNewTarget(new_target) # this method is called each new frame def draw(): for p in particleById.values: p.update() beginVertex(LINE_STRIP) for v in p.positionHistory: vertex(v.x, v.y) endVertex() The new target that are arriving are noisy but setting some accelleration/velocity parameters let the particle to have a smoothed trajectories. But if a particle trajectory is a circle after a while the particle position converge to the center (a normal behaviour of euler integration model). So I decided to change the simulation and use some other interpolation (spline?) or smooth method (kalman filter?) between the targets. Something like: switch( INTERPOLATION_MODEL ): case EULER_MOTION: ... case HERMITE_INTERPOLATION: ... case SPLINE_INTERPOLATION: ... case KALMAN_FILTER_SMOOTHING: ... Now my question: where to write the motion simulation / trajectory interpolation? In the Particle? So I will have some Particle subclass like ParticleEuler, ParticleSpline, ParticleKalman, etc..? Or in the particle engine?

  • Good practices in screen state management?

    - by DevilWithin
    I wonder what the best ways are to organize different screens in a game. I am thinking of it like this: inherit from a base State class and override its update and render methods to handle the current screen. Then, on certain events, a StateManager is able to activate another screen State, and the game screen changes since only the current State is rendered. On the activation of a new screen, effects like fading could be added, and the same goes for its deactivation. This way a flow of screens could be made: by saying "when A ends, B starts", allowing for complex animations etc. Thoughts?
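
    That structure is a common and workable one; a bare-bones sketch of it might look like the following (names invented), with onEnter/onExit hooks as the natural place for fade effects and the manager as the only object the main loop talks to.

        #include <memory>
        #include <utility>

        class State {
        public:
            virtual ~State() = default;
            virtual void onEnter() {}                 // hook for fade-in
            virtual void onExit() {}                  // hook for fade-out
            virtual void update(float dt) = 0;
            virtual void render() = 0;
        };

        class StateManager {
        public:
            void change(std::unique_ptr<State> next) {
                if (current_) current_->onExit();
                current_ = std::move(next);
                if (current_) current_->onEnter();
            }
            void update(float dt) { if (current_) current_->update(dt); }
            void render()         { if (current_) current_->render(); }

        private:
            std::unique_ptr<State> current_;          // only the active screen is run
        };

    A frequent refinement is to keep a stack of states instead of a single pointer, so a pause menu or dialog can be pushed on top of the gameplay screen and popped off without destroying it.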

  • Finding the objects diagonal to a given object in 3D space

    - by samfisher
    Using Unity3D, I have an array holding 8 GameObjects arranged in a grid, with one object (which is already known) in the center, like this, where K is the already known object:
      1 2 3
      4 K 5
      6 7 8
    All objects are equidistant from their adjacent objects (even the diagonal ones), which means (distance between 4 & K) == (distance between K & 3) == (distance between 2 & K). I want to remove 1, 3, 6, and 8 (the diagonal objects) from the array. How can I check that at runtime? My problem is that the order of the objects {1-8} is not known, so I need to check each object's position against K to see whether it is a diagonal object or not. So what check should I put on the GameObjects (K and the others) to verify whether an object is in a diagonal position? Regards, Sam
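
    Since every neighbour sits at the same grid spacing, an object is diagonal to K exactly when its position differs from K's on both grid axes; adjacent objects differ on only one. The sketch below is a platform-neutral C++ version of that check (Unity scripts would be C#, and the grid is assumed to lie in the x/z plane - swap in y if yours is vertical).

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // Diagonal to K: offset from K on BOTH grid axes (within a tolerance).
        bool isDiagonalTo(const Vec3& obj, const Vec3& k, float epsilon = 0.01f) {
            return std::fabs(obj.x - k.x) > epsilon &&
                   std::fabs(obj.z - k.z) > epsilon;
        }

        // Keep only the non-diagonal neighbours of k.
        std::vector<Vec3> removeDiagonals(const std::vector<Vec3>& objects, const Vec3& k) {
            std::vector<Vec3> kept;
            for (const Vec3& o : objects)
                if (!isDiagonalTo(o, k)) kept.push_back(o);
            return kept;
        }

    In Unity terms the positions would come from transform.position, and the epsilon absorbs small floating-point placement errors.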
