Search Results

Search found 25518 results on 1021 pages for 'iterative development'.

  • Can I change the order of these OpenGL / Win32 calls?

    - by Adam Naylor
    I've been adapting the NeHe OpenGL/Win32 code to be more object oriented and I don't like the way some of the calls are structured. The example has the following pseudo-structure:

      - Register window class
      - Change display settings with a DEVMODE (*)
      - Adjust window rect
      - Create window
      - Get DC
      - Find closest matching pixel format (*)
      - Set the pixel format to closest match (*)
      - Create rendering context (*)
      - Make that context current (*)
      - Show the window
      - Set it to foreground
      - Set it to having focus
      - Resize the GL scene (*)
      - Init GL (*)

    The points marked with (*) are what I want to move into a rendering class (the rest are what I see as pure Win32 calls), but I'm not sure if I can call them after the Win32 calls. Essentially what I'm aiming for is to encapsulate the Win32 calls into a Platform::Initiate() type method and the rest into a sort of Renderer::Initiate() method. So my question essentially boils down to: "Would OpenGL allow these methods to be called in this order?"

      - Register window class
      - Adjust window rect
      - Create window
      - Get DC
      - Show the window
      - Set it to foreground
      - Set it to having focus
      - Change display settings with a DEVMODE
      - Find closest matching pixel format
      - Set the pixel format to closest match
      - Create rendering context
      - Make that context current
      - Resize the GL scene
      - Init GL

    (Obviously passing through the appropriate window handles and device contexts.) Thanks in advance.
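
    For what it's worth, the proposed split can be sketched in code. This is a minimal, hedged sketch rather than the NeHe code itself: the Platform/Renderer function names mirror the asker's hypothetical structure, and the DEVMODE fullscreen switch is reduced to a comment. Whether a driver objects to setting the pixel format after the window is shown is exactly the question being asked; the sketch only illustrates the proposed ordering.

        #include <windows.h>

        static LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
        {
            return DefWindowProcA(h, m, w, l);
        }

        // Pure Win32: window class, rect, window, show/foreground/focus.
        HWND PlatformInitiate(HINSTANCE inst, int width, int height)
        {
            WNDCLASSA wc = {};
            wc.style         = CS_OWNDC;
            wc.lpfnWndProc   = WndProc;
            wc.hInstance     = inst;
            wc.lpszClassName = "GLWindow";
            RegisterClassA(&wc);                              // register window class

            RECT r = { 0, 0, width, height };
            AdjustWindowRect(&r, WS_OVERLAPPEDWINDOW, FALSE); // adjust window rect
            HWND wnd = CreateWindowA("GLWindow", "NeHe-style demo",
                                     WS_OVERLAPPEDWINDOW, 0, 0,
                                     r.right - r.left, r.bottom - r.top,
                                     NULL, NULL, inst, NULL); // create window

            ShowWindow(wnd, SW_SHOW);                         // show the window
            SetForegroundWindow(wnd);                         // set it to foreground
            SetFocus(wnd);                                    // set it to having focus
            return wnd;
        }

        // GL-related setup, run after the window exists; the DC comes from GetDC(wnd).
        HGLRC RendererInitiate(HDC dc)
        {
            // ChangeDisplaySettings(&devMode, CDS_FULLSCREEN) would go here
            // if a fullscreen DEVMODE switch is wanted.
            PIXELFORMATDESCRIPTOR pfd = {};
            pfd.nSize      = sizeof(pfd);
            pfd.nVersion   = 1;
            pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
            pfd.iPixelType = PFD_TYPE_RGBA;
            pfd.cColorBits = 32;
            pfd.cDepthBits = 24;

            int pf = ChoosePixelFormat(dc, &pfd);             // find closest pixel format
            SetPixelFormat(dc, pf, &pfd);                     // set it
            HGLRC rc = wglCreateContext(dc);                  // create rendering context
            wglMakeCurrent(dc, rc);                           // make it current
            return rc;                                        // then resize scene / init GL
        }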

  • Box2dWeb positioning relative to HTML5 Canvas

    - by Joe
    I'm new to the HTML5 canvas and Box2DWeb and I'm trying to make an Asteroids game. So far I think I'm doing okay, but one thing I'm struggling to comprehend is how positioning works in relation to the canvas. I understand that Box2DWeb is only made to deal with physical simulation, but I don't know how to deal with positioning on the canvas. The canvas is 100% of the viewport and thus can vary in size. I want to fill the screen with some asteroids, but if I hardcode certain values, such as bodyDef.position.x = Math.random() * 50;, an asteroid may appear off-canvas for someone with a smaller screen. Can anybody help me understand how to deal with relative positioning on the canvas?
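
    The usual way to stay resolution-independent is to pick a pixels-per-metre scale constant and derive spawn positions from the actual canvas size instead of hardcoding them. A tiny sketch of the arithmetic, shown in C++ for concreteness (Box2DWeb itself is JavaScript, and SCALE = 30 is just an assumed convention):

        #include <cstdlib>

        // Assumed convention: 30 screen pixels per Box2D metre.
        const float SCALE = 30.0f;

        // Spawn somewhere along the visible width, whatever the canvas size is,
        // instead of hardcoding "Math.random() * 50".
        float randomSpawnX(float canvasWidthPx)
        {
            float worldWidth = canvasWidthPx / SCALE;            // canvas width in metres
            return (std::rand() / (float)RAND_MAX) * worldWidth; // -> bodyDef.position.x
        }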

  • Is it possible to map mouse coordinates to isometric tiles with this coordinate system?

    - by plukich
    I'm trying to implement mouse interaction in a 2D isometric game, but I'm not sure if it's possible given the coordinate system used for tile maps in the game. I've read some helpful things like this. However, this game's coordinate system is "jagged" (for lack of a better word), and looks like this: Is it even possible to map mouse coordinates to this successfully, since the y-axis can't be drawn on this tile-map as a straight line? I've thought about doing odd-y-value translations and even-y-value translations with two different matrices, but that only makes sense going from tile to screen.
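
    For reference, one hedged sketch of how picking is often done on such a staggered ("jagged") layout: instead of inverting a single matrix, estimate a candidate cell and then run a point-in-diamond test on the few tiles around it. All names here are invented; tileW x tileH is assumed to be each diamond's bounding box, with odd rows shifted right by half a tile:

        #include <cmath>

        struct TileCoord { int col; int row; bool valid; };

        // Screen-space centre of a staggered tile (odd rows shifted right).
        static void tileCentre(int col, int row, float tileW, float tileH,
                               float& cx, float& cy)
        {
            cx = col * tileW + (row % 2) * tileW * 0.5f + tileW * 0.5f;
            cy = row * tileH * 0.5f + tileH * 0.5f;
        }

        // A point lies inside a diamond when |dx|/(w/2) + |dy|/(h/2) <= 1,
        // so just test the few candidate tiles around a rough guess.
        TileCoord pickTile(float mx, float my, float tileW, float tileH)
        {
            int rowGuess = (int)(my / (tileH * 0.5f));
            int colGuess = (int)(mx / tileW);
            for (int row = rowGuess - 2; row <= rowGuess + 2; ++row) {
                for (int col = colGuess - 2; col <= colGuess + 2; ++col) {
                    if (row < 0 || col < 0) continue;
                    float cx, cy;
                    tileCentre(col, row, tileW, tileH, cx, cy);
                    if (std::fabs(mx - cx) / (tileW * 0.5f) +
                        std::fabs(my - cy) / (tileH * 0.5f) <= 1.0f)
                        return TileCoord{ col, row, true };
                }
            }
            return TileCoord{ -1, -1, false };
        }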

  • How to detect GLSL warnings?

    - by msell
    After compiling a shader with glCompileShader, I can call glGetShaderiv with GL_COMPILE_STATUS to check if the shader compiled successfully. I can also call glGetShaderInfoLog to get information about possible errors, warnings or other info. The format of the log returned by this function is unspecified. In a tool where users can write their own shaders, I would like to print all errors and warnings from the compilation, but nothing if no warnings or errors were found. The problem is that GL_COMPILE_STATUS returns false only if the compilation failed, and true otherwise. If no problems were found, some drivers return an empty info log from glGetShaderInfoLog, but other drivers can return something else, such as "No errors.", which I do not want to print to the user. How is this problem generally solved?
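
    A common heuristic, sketched below under stated assumptions: always print the log when GL_COMPILE_STATUS is false, and on success print it only when it is non-empty and not on a small per-driver list of known "all clear" strings. The strings shown are assumptions to extend as drivers are encountered; the log text being driver-specific is exactly the problem described above.

        #include <string>
        #include <cstdio>
        #include <cctype>
        // GL entry points come from your usual loader (e.g. GLEW) or GL headers.

        void reportShaderLog(GLuint shader)
        {
            GLint status = GL_FALSE, logLen = 0;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLen);

            std::string log(logLen > 0 ? logLen : 0, '\0');
            if (logLen > 0)
                glGetShaderInfoLog(shader, logLen, NULL, &log[0]);
            while (!log.empty() &&
                   (log[log.size() - 1] == '\0' ||
                    std::isspace((unsigned char)log[log.size() - 1])))
                log.erase(log.size() - 1);               // trim NULs / whitespace

            // Assumed per-driver "all clear" strings; extend as needed.
            bool benign = log.empty()
                       || log == "No errors."
                       || log.find("successfully compiled") != std::string::npos;

            if (status == GL_FALSE || !benign)
                std::printf("%s\n", log.c_str());        // show real errors/warnings only
        }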

  • What knowledge would I need to make a good simulation game?

    - by Skeith
    I have an idea for a game like Theme Park, but I don't know how simulation games are made. I am not some noob on his first game, so I would appreciate constructive answers instead of "it's hard, don't do it". What I want is to know how simulation game mechanics are put together. I figure it would be heavier on the AI than normal games, and since I don't know much about AI, I would like to know some programming techniques I should look into for this style of game. Specific techniques please, not just a book on AI. What sort of architecture would be used? I guess it would have some sort of probability engine with pre-designed events that are triggered based on the AI state. Would it use an FSM or be purely event driven? Any information on how a sim game functions would be cool.
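
    To make the FSM idea concrete, here is a toy sketch of the kind of need-driven state machine a park guest might run. Every name, constant and transition is invented for illustration; a real game would drive the transitions from world queries and pathfinding rather than hardcoded values.

        #include <cstdlib>

        // A park guest as a tiny state machine driven by "needs".
        enum class GuestState { Wander, SeekRide, Queue, OnRide, SeekFood };

        struct Guest
        {
            GuestState state = GuestState::Wander;
            float hunger = 0.0f;       // grows over time
            float patience = 1.0f;     // drained while queueing

            void update(float dt)
            {
                hunger += 0.01f * dt;
                switch (state) {
                case GuestState::Wander:
                    if (hunger > 1.0f)              state = GuestState::SeekFood;
                    else if (std::rand() % 100 < 2) state = GuestState::SeekRide; // "probability engine"
                    break;
                case GuestState::SeekRide:
                    state = GuestState::Queue;      // pathfinding elided
                    break;
                case GuestState::Queue:
                    patience -= 0.05f * dt;
                    state = (patience < 0.0f) ? GuestState::Wander   // gave up
                                              : GuestState::OnRide;
                    break;
                case GuestState::OnRide:
                    patience = 1.0f;
                    state = GuestState::Wander;
                    break;
                case GuestState::SeekFood:
                    hunger = 0.0f;
                    state = GuestState::Wander;
                    break;
                }
            }
        };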

  • What is a good method for coloring textures based on a palette in XNA?

    - by Bob
    I've been trying to work on a game with the look of an 8-bit game using XNA, specifically using the NES as a guide. The NES has a very specific palette, and each sprite can use up to 4 colors from that palette. How could I emulate this? The way I currently accomplish it is with a texture whose defined values act as indices into an array of colors I pass to the GPU. I imagine there must be a better way than this, but maybe this is the best way? I don't want to simply make sure I draw every sprite with the right colors, because I want to be able to dynamically alter the palette. I'd also prefer not to alter the texture directly using the CPU.

  • Draw contour around object in OpenGL

    - by Maciekp
    I need to draw a contour around 2D objects in 3D space. I tried drawing lines around the object (plus points to fill the gaps), but due to the line width, part of it (~50%) was covering the object. I tried to use the stencil buffer to eliminate this problem, but I got something like this (the contour is green): http://goo.gl/OI5uc (sorry, I can't post images due to my reputation). You can see (where the arrow points) that some parts of the line are behind the object and some are above. This changes when I move the camera, but there is always some part that covers it. Here is the code I use for drawing the objects:

        glColorMask(1,1,1,1);
        std::list<CObjectOnScene*>::iterator objIter = ptr->objects.begin(),
                                             objEnd  = ptr->objects.end();
        int countStencilBit = 1;
        while(objIter != objEnd)
        {
            glColorMask(1,1,1,1);
            glStencilFunc(GL_ALWAYS, countStencilBit, countStencilBit);
            glStencilOp(GL_REPLACE, GL_KEEP, GL_REPLACE);
            (*objIter)->DrawYourVertices();

            glStencilFunc(GL_NOTEQUAL, countStencilBit, countStencilBit);
            glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
            (*objIter)->DrawYourBorder();

            ++objIter;
            ++countStencilBit;
        }

    I've tried different stencil buffer settings, but I always got something like that. So my questions are: 1. Am I setting up the stencil buffer wrong? 2. Are there any other simple ways to create a contour on such objects? Thanks in advance.
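
    Regarding question 2, one stencil-free alternative sometimes used is the "scaled silhouette" trick: draw a slightly enlarged flat-colour copy of the object first, then the object itself on top, so only the protruding rim of the copy remains visible. A hedged sketch reusing the asker's CObjectOnScene type; it assumes the object can be redrawn with plain glColor shading and that (cx, cy, cz) is its centre:

        // Two passes, no stencil: the enlarged copy shows only where the
        // object itself does not cover it, which reads as a contour.
        void DrawWithContour(CObjectOnScene* obj, float cx, float cy, float cz)
        {
            glDepthFunc(GL_LEQUAL);            // let the coplanar second pass win

            glColor3f(0.0f, 1.0f, 0.0f);       // contour colour
            glPushMatrix();
            glTranslatef(cx, cy, cz);
            glScalef(1.05f, 1.05f, 1.05f);     // ~5% larger copy = contour width
            glTranslatef(-cx, -cy, -cz);
            obj->DrawYourVertices();
            glPopMatrix();

            glColor3f(1.0f, 1.0f, 1.0f);       // then the object itself on top
            obj->DrawYourVertices();
        }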

  • Level Representation in a 2D Game

    - by meszar.imola
    I would like to create a 2D game where a character moves around a stage/level. My stage would be static, constructed from little cubes, similar to the well-known Mario games: some elements represent parts of the path the character can step on, and where an element is missing, the character should fall. My problem is how to represent this programmatically. My first thought was to represent the stage with a vector of boolean elements, each saying whether that element of the stage is present or missing. But this means that whenever my character's x or y position changes, I have to check whether there is a stage element underneath (and if not, simulate the character falling). I don't think that's the best practice; it isn't a beautiful solution. Can you give me some advice on how to represent the stage?
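
    For reference, the boolean-grid idea is workable and is usually wrapped in a small class so the bounds checks and the "is there ground below me?" query live in one place. A minimal sketch with invented names:

        #include <vector>

        class Stage
        {
        public:
            Stage(int w, int h) : width(w), height(h), solid(w * h, false) {}

            void setSolid(int x, int y, bool s) { solid[y * width + x] = s; }

            bool isSolid(int x, int y) const
            {
                if (x < 0 || y < 0 || x >= width || y >= height) return false;
                return solid[y * width + x];
            }

            // The character starts falling when the cell under its feet is empty.
            bool hasGroundBelow(int tileX, int tileY) const
            {
                return isSolid(tileX, tileY + 1);
            }

        private:
            int width, height;
            std::vector<bool> solid;   // row-major: true = block present
        };

    Collision then reduces to converting the character's pixel position to tile coordinates and asking hasGroundBelow().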

  • Creating a DrawableGameComponent

    - by Christian Frantz
    If I'm going to draw cubes effectively, I need to get rid of the numerous draw calls I have, and what has been suggested is that I create a "mesh" of my cubes. I already store them in a single vertex buffer, but the issue lies in my draw method, where I still loop through every cube in order to draw it. I thought this was necessary, as each cube has a set position, but it lowers the frame rate incredibly. What's the easiest way to go about this? I have a class CubeChunk that inherits Microsoft.Stuff.DrawableGameComponent, but I don't know what comes next. I suppose I could just use the chunk of cubes created in my cube class, but that would keep me going in circles, drawing each cube individually. The goal here is to create a draw method that draws my chunk as a whole, not individual cubes as I've been doing.

  • Cocos2d: Adding a CCSequence to a CCArray

    - by Axort
    I have a problem with an action performed by a sprite. I have one CCSequence in a CCArray, and a scheduled method (called every 5 seconds) that makes the sprite run the action. The action is performed correctly only the first time (the first 5 seconds); after that, the action does whatever it wants. Here is the code.

    In the .h:

        @interface PowerUpLayer : CCLayer {
            PowerUp *powerUp;
            CCArray *trajectories;
        }
        @property (nonatomic, retain) CCArray *trajectories;

    In the .mm:

        @implementation PowerUpLayer
        @synthesize trajectories;

        -(id)init {
            if((self = [super init])) {
                [self createTrajectories];
                self.isTouchEnabled = YES;
                [self schedule:@selector(spawn:) interval:5];
            }
            return self;
        }

        -(void)createTrajectories {
            self.trajectories = [CCArray arrayWithCapacity:1];

            // Wave trajectory
            ccBezierConfig firstWave, secondWave;
            firstWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width + 30,
                                                   [[CCDirector sharedDirector] winSize].height / 2); //powerUp.sprite.position.x, powerUp.sprite.position.y);
            firstWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width - ([[CCDirector sharedDirector] winSize].width / 4), 0);
            firstWave.endPosition = CGPointMake([[CCDirector sharedDirector] winSize].width / 2,
                                                [[CCDirector sharedDirector] winSize].height / 2);

            secondWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width / 2,
                                                    [[CCDirector sharedDirector] winSize].height / 2);
            secondWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width / 4,
                                                    [[CCDirector sharedDirector] winSize].height);
            secondWave.endPosition = CGPointMake(-30, [[CCDirector sharedDirector] winSize].height / 2);

            id bezierWave1 = [CCBezierTo actionWithDuration:1 bezier:firstWave];
            id bezierWave2 = [CCBezierTo actionWithDuration:1 bezier:secondWave];
            id waveTrajectory = [CCSequence actions:bezierWave1, bezierWave2,
                                 [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)], nil];
            [self.trajectories addObject:waveTrajectory];

            //[powerUp.sprite runAction:bezierForward];
            // [CCMoveBy actionWithDuration:3 position:CGPointMake(-[[CCDirector sharedDirector] winSize].width - powerUp.sprite.contentSize.width, 0)]
            //[powerUp.sprite runAction:[CCSequence actions:bezierWave1, bezierWave2, [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)], nil]];
        }

        -(void)setInvisible:(id)sender {
            if(powerUp != nil) {
                [self removeChild:sender cleanup:YES];
                powerUp = nil;
            }
        }

    This is the scheduled method:

        -(void)spawn:(ccTime)dt {
            if(powerUp == nil) {
                powerUp = [[PowerUp alloc] initWithType:0];
                powerUp.sprite.position = CGPointMake([[CCDirector sharedDirector] winSize].width + powerUp.sprite.contentSize.width,
                                                      [[CCDirector sharedDirector] winSize].height / 2);
                [self addChild:powerUp.sprite z:-1];
                [powerUp.sprite runAction:((CCSequence *)[self.trajectories objectAtIndex:0])];
            }
        }

    I don't know what is happening; I never modify the content of the CCSequence after the first time. Thanks!

  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about the states and positions of objects in a 3D world. The player controls his character by clicking on the screen to set a destination for the character, much like in the Diablo series. I've read most of the information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything but one thing, which seems to be missing in every article I read. Let's say we have an interpolation delay of 100ms, a server tick rate of 50ms, and latency of 200ms. How do I know when 100ms has passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, thereby assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little quickly, for instance at t=150? I would then be starting the client at t=150, and at t=250 it will think 100ms has passed since it connected to the server, when in fact only 50ms has passed. Hopefully the above paragraph is understandable. The summarized question would be: how do I know at what tick to start simulating the client?

    EDIT: This is how I ended up doing it: the client keeps a clock (approximately) in sync with the server. The client then simulates the world at simulationTime = syncedTime - avg(RTT)/2 - interpolationTime. The round-trip time can fluctuate, so I average it out over time. By keeping only the most recent values when calculating the average, I hope to adapt to more permanent changes in latency. It's still too early to draw any conclusions. I'm currently simulating bad network connections, but it's looking good so far. Does anyone see any possible problems?
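
    The formula from the EDIT is straightforward to keep honest with a small rolling window of RTT samples. A minimal sketch, assuming times in seconds and the clock-sync scheme described above; the window size of 32 is an assumed tuning value:

        #include <deque>
        #include <numeric>

        class ClockSync
        {
        public:
            void addRttSample(double rtt)
            {
                samples.push_back(rtt);
                if (samples.size() > kWindow)
                    samples.pop_front();             // keep recent samples only
            }

            double avgRtt() const
            {
                if (samples.empty()) return 0.0;
                return std::accumulate(samples.begin(), samples.end(), 0.0)
                       / samples.size();
            }

            // simulationTime = syncedTime - avg(RTT)/2 - interpolationTime
            double simulationTime(double syncedServerTime, double interpolationDelay) const
            {
                return syncedServerTime - avgRtt() / 2.0 - interpolationDelay;
            }

        private:
            static const unsigned kWindow = 32;      // assumed window size
            std::deque<double> samples;
        };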

  • How can I render a semi-transparent model with OpenGL correctly?

    - by phobitor
    I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders, and I wrote a simple diffuse shader for the model without any issues, but I don't know how to add transparency to it. I tried to set my fragment shader's output (gl_FragColor) to a non-opaque alpha value, but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong, so please watch this short video I recorded: http://www.youtube.com/watch?v=s0JqA0rZabE I thought this was a depth testing issue, so I tried playing around with enabling/disabling depth testing and back face culling. Enabling back face culling changes the output slightly, but the problem in the video is still there. Enabling/disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order-independent transparency implementations.

    edit:

    Vertex Shader:

        // color varying for fragment shader
        varying mediump vec3 LightIntensity;
        varying highp vec3 VertexInModelSpace;

        void main() {
            // vec4 LightPosition = vec4(0.0, 0.0, 0.0, 1.0);
            vec3 LightColor = vec3(1.0, 1.0, 1.0);
            vec3 DiffuseColor = vec3(1.0, 0.25, 0.0);

            // find the vector from the given vertex to the light source
            vec4 vertexInWorldSpace = gl_ModelViewMatrix * vec4(gl_Vertex);
            vec3 normalInWorldSpace = normalize(gl_NormalMatrix * gl_Normal);
            vec3 lightDirn = normalize(vec3(LightPosition - vertexInWorldSpace));

            // save vertexInWorldSpace
            VertexInModelSpace = vec3(gl_Vertex);

            // calculate light intensity
            LightIntensity = LightColor * DiffuseColor * max(dot(lightDirn, normalInWorldSpace), 0.0);

            // calculate projected vertex position
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    Fragment Shader:

        // varyings to define color
        varying vec3 LightIntensity;
        varying vec3 VertexInModelSpace;

        void main() {
            gl_FragColor = vec4(LightIntensity, 0.5);
        }
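
    For reference, the culling experiment mentioned above is often extended into a two-pass scheme for a single convex-ish transparent model: disable depth writes, draw the back faces, then draw the front faces. A hedged sketch in C++ against the GL ES 2 API, where drawModel() stands in for the asker's actual draw call:

        void drawModel();   // assumed: issues the mesh's draw calls

        void drawModelTransparent()
        {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDepthMask(GL_FALSE);          // transparent geometry: no depth writes

            glEnable(GL_CULL_FACE);
            glCullFace(GL_FRONT);           // pass 1: faces pointing away
            drawModel();
            glCullFace(GL_BACK);            // pass 2: faces toward the camera
            drawModel();

            glDepthMask(GL_TRUE);           // restore state
            glDisable(GL_CULL_FACE);
        }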

  • Transform 3d viewport vector to 2d vector

    - by learning_sam
    I am playing around with 3D transformations and came across an issue. I have a 3D vector already within the viewport and need to transform it to a 2D vector. (Let's say my screen is 10x10.) Does that work just like a regular transformation, or is something different here? I.e.: I have the vector a = (2, 1, 0) within the viewport and want the 2D vector. Does it work like this, and if yes, how do I handle the "0" in the third component?
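
    Assuming the vector is already in normalized device coordinates (i.e. the perspective divide by w has already happened), the 2D position falls out of the standard viewport mapping, and the third component simply drops away; it only mattered for the divide and for depth testing. A small sketch:

        struct Vec2 { float x, y; };
        struct Vec3 { float x, y, z; };

        // NDC (-1..1 on x and y) to pixel coordinates, y growing downward.
        Vec2 ndcToScreen(Vec3 v, float screenW, float screenH)
        {
            Vec2 s;
            s.x = (v.x + 1.0f) * 0.5f * screenW;
            s.y = (1.0f - (v.y + 1.0f) * 0.5f) * screenH;
            // v.z is simply dropped: it only mattered for the perspective
            // divide (already done) and for depth testing.
            return s;
        }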

  • Smoothing rotation

    - by Lewis
    I've spent the last three days trying to work out how to rotate a sprite smoothly depending on the velocity.x value of the sprite. I'm using this:

        float Proportion = 9.5;
        float maxDiff = 200;
        float rotation = fmaxf(fminf(playerVelocity.x * Proportion, maxDiff), -maxDiff);
        player.rotation = rotation;

    The behaviour is what I required, but if the velocity changes rapidly, the sprite appears to jump to face left or right. To describe the intended behaviour in a little more detail:

      - zero velocity: the sprite faces forward;
      - negative velocity: the sprite faces left, depending on the value;
      - positive velocity: the sprite faces right (the higher the velocity, the more it faces right), and likewise for the negative case.

    I've read about using interpolation rather than an absolute angle to rotate it, but I don't know how to implement that. I have a physics engine available. There is one other way to get around this: to use += on the rotation angle. The thing is that I would then have to change the equation to produce positive and negative values, and then make sure the sprite faces 0 once it reaches 0 velocity again. If I add that in now, it keeps the previous angle even after the velocity has dropped / is dropping. Any ideas/code snippets would be greatly appreciated.
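
    For what it's worth, "interpolation" here usually just means chasing the target angle over time instead of assigning it outright. A sketch built on the existing Proportion/maxDiff code; the smoothing speed of 8.0 is an assumed tuning value:

        #include <cmath>
        #include <algorithm>

        float rotation = 0.0f;   // persists between frames

        void updateRotation(float velocityX, float dt)
        {
            const float Proportion = 9.5f;
            const float maxDiff = 200.0f;
            const float speed = 8.0f;    // assumed tuning value: higher = snappier

            // Same target angle as the original code...
            float target = std::max(std::min(velocityX * Proportion, maxDiff), -maxDiff);

            // ...but chase it instead of assigning it. The exponential form
            // makes the smoothing frame-rate independent.
            float k = 1.0f - std::exp(-speed * dt);
            rotation += (target - rotation) * k;
            // player.rotation = rotation;   // hand to the sprite as before
        }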

  • What are good ways to find collaborators for a coding weekend?

    - by tarrasch
    Not sure if this belongs here; feel free to push it somewhere else if needed. When I was at university, we would sometimes come together in a room full of beer and fast food and crank out software in a weekend. Unfortunately the group has kind of split up, and it's just not possible any more. My question is: where can I find like-minded people on the Internet who would like to do something like this? I have an idea of what I want to do next, but of course other people have ideas too.

  • Rotate object Up/Down/Left/Right in any orientation

    - by George Duckett
    I'm rendering a model at the origin, with a fixed camera looking at it, positioned on the z axis. I want to be able to rotate the model up/down and left/right. Currently I have two variables, HorizontalRotation and VerticalRotation. When calculating the world matrix, I rotate about the Y axis by HorizontalRotation and about the X axis by VerticalRotation. The ...Rotation variables are controlled by pressing the up/down/left/right arrow keys. The problem I'm having is that the rotations happen relative to the object. Let's say it's a model of the Earth. Pressing Up a bit would let me look at the north pole. Currently, when I press right, the Earth spins in front of the camera on its own axis; I'm still looking at the north pole. How can I get it so that no matter what rotations are currently applied, I can always rotate my model relative to the camera/world axes?
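
    The usual cure is to stop storing two accumulated angles and instead accumulate an orientation matrix, applying each frame's small rotation about the fixed camera/world axes. A hedged sketch using column-vector (OpenGL-style) conventions, where pre-multiplying applies the increment in world space; with row-vector maths, as in XNA, the multiplication order flips:

        #include <cmath>

        // Just enough 3x3 matrix to show the idea (row-major, column vectors).
        struct Mat3
        {
            float m[9];
            static Mat3 identity() { Mat3 r = {{1,0,0, 0,1,0, 0,0,1}}; return r; }
            static Mat3 rotX(float a)
            {
                float c = std::cos(a), s = std::sin(a);
                Mat3 r = {{1,0,0, 0,c,-s, 0,s,c}}; return r;
            }
            static Mat3 rotY(float a)
            {
                float c = std::cos(a), s = std::sin(a);
                Mat3 r = {{c,0,s, 0,1,0, -s,0,c}}; return r;
            }
            Mat3 operator*(const Mat3& b) const
            {
                Mat3 r = {{0,0,0, 0,0,0, 0,0,0}};
                for (int i = 0; i < 3; ++i)
                    for (int j = 0; j < 3; ++j)
                        for (int k = 0; k < 3; ++k)
                            r.m[i*3+j] += m[i*3+k] * b.m[k*3+j];
                return r;
            }
        };

        Mat3 orientation = Mat3::identity();

        // Per key press: dYaw from left/right, dPitch from up/down.
        void rotateRelativeToWorld(float dYaw, float dPitch)
        {
            // Pre-multiplying applies the new rotation about the FIXED world
            // axes, no matter what the model has already accumulated.
            orientation = Mat3::rotY(dYaw) * Mat3::rotX(dPitch) * orientation;
        }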

  • Which isometric angles can be mirrored (and otherwise transformed) for optimization?

    - by Tom
    I am working on a basic isometric game and am struggling to find the correct mirrors. (A mirror can be any form of transform.) I have managed to get SE out of SW by scaling the sprite on the X axis by -1. The same applies to the NE angle. Something is bugging me: I should be able to also mirror N to S, but I cannot manage to pull this one off. Am I just too sleepy and trying to do the impossible, or is a basic -1 scale on the Y axis not enough? What is the commonly used mirror table for optimizing 8-angle (N, NE, E, SE, S, SW, W, NW) isometric sprites?
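
    For reference, a commonly used table keeps five drawn angles and mirrors only east-west. N and S genuinely have no mirror partner: a -1 scale on the Y axis turns "facing away" art upside down rather than into "facing toward" art. A sketch of such a table (all names invented):

        enum Facing { N = 0, NE, E, SE, S, SW, W, NW };

        struct SpriteRef
        {
            int artIndex;   // which of the 5 drawn sprites: 0=N 1=NE 2=E 3=SE 4=S
            bool flipX;     // mirror with a -1 scale on the X axis
        };

        // Eight facings served by five pieces of art; the W side reuses the E side.
        static const SpriteRef kMirrorTable[8] = {
            { 0, false },   // N
            { 1, false },   // NE
            { 2, false },   // E
            { 3, false },   // SE
            { 4, false },   // S
            { 3, true  },   // SW = SE flipped
            { 2, true  },   // W  = E  flipped
            { 1, true  },   // NW = NE flipped
        };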

  • Dynamic content reloading

    - by Kikaimaru
    Is there a relatively simple way to dynamically reload content files (i.e. effect files)? I know I can do the following:

      1. Detect a change to the file.
      2. Run the content pipeline to rebuild that specific file.
      3. Unload ALL content that was loaded.
      4. Load all content.

    ...and use double references to reference content files. The problem is with step 3 (and step 2 isn't that nice either). But I need to unload everything, because if I have a model Hero.x which references the Model.fx effect, and I change the Model.fx file, I need to reload the Hero.x file, which will then call LoadExternalReference on Model.fx. So I guess the question is: did someone manage to make this work without rewriting the whole ContentManager (and every ContentReader) and tracking calls to LoadExternalReference?

  • Zooming to point of interest

    - by user1010005
    I have the following variables:

      - point of interest: the position (x, y), in pixels, of the place to focus on;
      - screen width/height: the dimensions of the window;
      - zoom level: the zoom level of the camera.

    And this is the code I have so far:

        void Zoom(int pointOfInterestX, int pointOfInterestY, int screenWidth, int screenHeight, int zoomLevel)
        {
            glTranslatef((pointOfInterestX/2 - screenWidth/2), (pointOfInterestY/2 - screenHeight/2), 0);
            glScalef(zoomLevel, zoomLevel, zoomLevel);
        }

    I want to zoom in/out but keep the point of interest in the middle of the screen. So far all of my attempts have failed, and I would like to ask for some help.
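
    For comparison, the textbook modelview sequence for zooming about a point is translate-scale-translate: centre the point of interest at the origin, scale, then place it in the middle of the screen. (Fixed-function GL applies the calls to vertices in bottom-up order.) A minimal sketch:

        void ZoomAt(float poiX, float poiY, int screenWidth, int screenHeight, float zoomLevel)
        {
            // Read bottom-up as applied to vertices: move the POI to the origin,
            // scale about it, then park it at the centre of the screen.
            glTranslatef(screenWidth / 2.0f, screenHeight / 2.0f, 0.0f);
            glScalef(zoomLevel, zoomLevel, 1.0f);
            glTranslatef(-poiX, -poiY, 0.0f);
        }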

  • Finding inspiration / help for making up (weapon) names

    - by Rookie
    I'm really bad with words, especially with English words. Currently I'm struggling to come up with good weapon names for my game; a name also needs to reflect the weapon's functionality (weak/strong/fast/ballistic, etc.). For example, the best weapon in a (futuristic) game cannot just be called "Laser"; that's too boring, right? Are there any tools, websites or anything else that could help me find good names? (Or anything similar.) I was thinking of using scientific names, but they are really hard to write and get very long, and I also lack knowledge about science; I only know I could use subatomic particle names for the weapons, for example. How do I get started with becoming good at making up names? (This could apply in general to any naming problem.)

  • Problems when rendering code on Nvidia GPU

    - by 2am
    I am following the OpenGL GLSL cookbook 4.0. I have rendered a tessellated quad, as you see in the screenshot below, and I am moving the Y coordinate of every vertex using a time-based sine function, as given in the code in the book. This program, as you see in the text in the image, runs perfectly on my processor's built-in Intel HD graphics, but I have Nvidia GT 555M graphics in my laptop (which, by the way, has switchable graphics), and when I run the program on the graphics card, the OpenGL shader compilation fails. It fails on the following instruction:

        pos.y = sin.waveAmp * sin(u);

    giving the error:

        Error C1105: Cannot call a non-function

    I know this error is coming from the sin(u) call which you see in the instruction, but I am not able to understand why. When I removed sin(u) from the code, the program ran fine on the Nvidia card. It runs fine with sin(u) on the Intel HD 3000 graphics. Also, if you notice, the program is almost unusable with Intel HD 3000 graphics; I am getting only 9 FPS, which is not enough. It's too much load for the Intel HD 3000. So, is the sin(x) function not defined in the GLSL implementation provided by Nvidia's drivers, or is it something else?

  • Trying to create a sphere in UDK on which I can stand

    - by Dave
    Trying to build a globe in UDK, but when I create a sphere, my player falls straight through it. How do I make a sphere that I can walk on? Every other shape (cube, cone, etc.) works just fine.

    Edit: Specifically, I want to build a CSG/brush sphere, not a mesh sphere. It appears to work just fine if I set the "sphere extrapolation" to 1 or 2, but if I bump it up to 3 or higher, I fall right through. I literally created two spheres next to each other, one set at "2" and one at "3": I can walk from the top of the "2" sphere and jump onto the "3" sphere, but I fall right through it.

  • Big background images for maps in a 2D game

    - by WhiteCat
    Update: this question is general, not specific to Sprite Kit or a single language/platform. I'm toying with Sprite Kit with an idea to make a 2D side-scroller. The backgrounds for the maps are going to be hand-drawn and certainly bigger than a Retina display, so the maps could span more than one screen on both axes. I imagine loading such a huge image could mean trouble, and I don't plan to use tiling. I'm not sure how Sprite Kit splits images bigger than the maximum texture size, if it does. I could split the images myself and use more sprites for each part of the background. What is the usual way to handle this?

  • Learning C++ but wanting to develop iOS Apps

    - by DiscreteGenius
    I'm a computer engineering student taking my second programming class. I'm learning C++ using "C++ Primer Plus", 5th edition, by Prata. I want to develop for iOS, and I understand the main language for Xcode is Objective-C. Am I hurting myself by learning C++ before any other language (notably before my desired language, Objective-C)? There's got to be a reason the university requires C++ as a foundation language. Please offer any helpful guidance on how I should go about this. Thanks//

  • NPOT texture and video memory usage

    - by Eonil
    I read in this Q&A that an NPOT (non-power-of-two) texture will take as much memory as the next POT-sized texture. That would mean it gives no benefit over a properly managed POT texture (maybe it's even worse, because NPOT should be slower!). Is this true? Does an NPOT texture take (and waste) the same memory as a POT texture? I am considering NPOT textures for post-processing, so if they don't give a memory-space benefit, using them is pointless for me. Maybe the answer is different for each platform. I am targeting mobile devices, such as iPhones or Android devices. Does an NPOT texture take the same amount of memory on mobile GPUs?
