Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.


  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning. I'm working through creating the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture. I keep getting either a completely black or a completely white texture. Here are my questions:

    1. I assume the position that's automatically sent to the vertex shader from Ogre is in object space?
    2. The GPU interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation, or do I need to move that calculation to the fragment shader?
    3. Is there a way to debug shaders? I have no errors, but I'm not sure I'm getting my parameters passed into the shaders correctly.

    Here's my shader code:

        void DepthVertexShader( float4 position : POSITION,
                                uniform float4x4 worldViewProjMatrix,
                                uniform float3 eyePosition,
                                out float4 outPosition : POSITION,
                                out float Depth )
        {
            // position is in object space
            // outPosition is in camera space
            outPosition = mul( worldViewProjMatrix, position );

            // calculate distance from camera to vertex
            Depth = length( eyePosition - position );
        }

        void DepthFragmentShader( float Depth : TEXCOORD0,
                                  uniform float fNear,
                                  uniform float fFar,
                                  out float4 outColor : COLOR )
        {
            // clamp output using clip planes
            float fColor = 1.0 - smoothstep( fNear, fFar, Depth );
            outColor = float4( fColor, fColor, fColor, 1.0 );
        }

    fNear is the near clip plane for the scene; fFar is the far clip plane for the scene.

    Read the article

  • 2D Topdown Shooter - Player Movement Relative to Mouse

    - by Jarmo
    I'm trying to make a top-down 2D space game for my school project. I'm almost done, but I just want to add a few little things to make the game more fun to play.

        if (keystate.IsKeyDown(Keys.W))
        {
            vPlayerPos += Vector2.Normalize(new Vector2(Mouse.GetState().X - vPlayerPos.X, Mouse.GetState().Y - vPlayerPos.Y)) * 3;
            rPlayer.X = (int)vPlayerPos.X;
            rPlayer.Y = (int)vPlayerPos.Y;
        }
        if (keystate.IsKeyDown(Keys.S))
        {
            vPlayerPos += Vector2.Normalize(new Vector2(Mouse.GetState().X - vPlayerPos.X, Mouse.GetState().Y - vPlayerPos.Y)) * -3;
            rPlayer.X = (int)vPlayerPos.X;
            rPlayer.Y = (int)vPlayerPos.Y;
        }

    This is what I use to move towards and away from my mouse crosshair. I tried to make a somewhat similar function to make it strafe with "A" and "D", but for some reason I just couldn't get it done. Any thoughts?
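
    A strafe direction is just the aim direction rotated by 90 degrees, i.e. (x, y) becomes (-y, x). Below is a rough sketch of that math in Java rather than the post's XNA/C#, so every name in it is a stand-in, not part of the original code:

        final class StrafeMath {
            // Returns {dx, dy} to add to the player position.
            // sign = +1 strafes to one side of the aim direction, -1 to the other.
            static double[] strafeOffset(double playerX, double playerY,
                                         double mouseX, double mouseY,
                                         double speed, int sign) {
                double dirX = mouseX - playerX;
                double dirY = mouseY - playerY;
                double len = Math.sqrt(dirX * dirX + dirY * dirY);
                if (len == 0) return new double[] { 0, 0 };   // mouse is exactly on the player
                dirX /= len;                                  // normalized aim direction
                dirY /= len;
                // Rotating (x, y) by 90 degrees gives (-y, x): the strafe axis.
                return new double[] { -dirY * speed * sign, dirX * speed * sign };
            }
        }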

    Read the article

  • Problem Implementing Texture on Libgdx Mesh of Randomized Terrain

    - by BrotherJack
    I'm having problems understanding how to apply a texture to a non-rectangular object. The following code creates textures such as this: From the debug renderer, I think I've got the physical shape of the "earth" correct. However, I don't know how to apply a texture to it. I have a 50x50 pixel image (loaded in the environment constructor as "dirt.png") that I want to apply to the hills. I have a vague idea that this seems to involve the Mesh class and possibly a ShapeRenderer, but the little I'm finding online is just confusing me. Below is code from the class that makes and regulates the terrain, followed by the code in a separate file that is supposed to render it (but crashes on the mesh.render() call). Any pointers would be appreciated.

        public class Environment extends Actor {
            Pixmap sky;
            public Texture groundTexture;
            Texture skyTexture;
            double tankypos;          //TODO delete, temp
            public Tank etank;        //TODO delete, temp
            int destructionRes;       // how wide is a static pixel
            private final float viewWidth;
            private final float viewHeight;
            private ChainShape terrain;
            public Texture dirtTexture;
            private World world;
            public Mesh terrainMesh;
            private static final String LOG = Environment.class.getSimpleName();

            // Constructor
            public Environment(Tank tank, FileHandle sfileHandle, float w, float h, int destructionRes) {
                world = new World(new Vector2(0, -10), true);
                this.destructionRes = destructionRes;
                sky = new Pixmap(sfileHandle);
                viewWidth = w;
                viewHeight = h;
                skyTexture = new Texture(sky);
                terrain = new ChainShape();
                genTerrain((int)w, (int)h, 6);
                Texture tankSprite = new Texture(Gdx.files.internal("TankSpriteBase.png"));
                Texture turretSprite = new Texture(Gdx.files.internal("TankSpriteTurret.png"));
                tank = new Tank(0, true, tankSprite, turretSprite);
                Rectangle tankrect = new Rectangle(300, (int)tankypos, 44, 45);
                tank.setRect(tankrect);
                BodyDef terrainDef = new BodyDef();
                terrainDef.type = BodyType.StaticBody;
                terrainDef.position.set(0, 0);
                Body terrainBody = world.createBody(terrainDef);
                FixtureDef fixtureDef = new FixtureDef();
                fixtureDef.shape = terrain;
                terrainBody.createFixture(fixtureDef);
                BodyDef tankDef = new BodyDef();
                Rectangle rect = tank.getRect();
                tankDef.type = BodyType.DynamicBody;
                tankDef.position.set(0, 0);
                tankDef.position.x = rect.x;
                tankDef.position.y = rect.y;
                Body tankBody = world.createBody(tankDef);
                FixtureDef tankFixture = new FixtureDef();
                PolygonShape shape = new PolygonShape();
                shape.setAsBox(rect.width*WORLD_TO_BOX, rect.height*WORLD_TO_BOX);
                fixtureDef.shape = shape;
                dirtTexture = new Texture(Gdx.files.internal("dirt.png"));
                etank = tank;
            }

            private void genTerrain(int w, int h, int hillnessFactor) {
                int width = w;
                int height = h;
                Random rand = new Random();
                //min and max bracket the freq's of the sin/cos series
                //The higher the max the hillier the environment
                int min = 1;
                //allocating horizon for screen width
                Vector2[] horizon = new Vector2[width+2];
                horizon[0] = new Vector2(0, 0);
                double[] skyline = new double[width];   //TODO skyline necessary as an array?
                //ratio of amplitude of screen height to landscape variation
                double r = (int) 2.0/5.0;
                //number of terms to be used in sine/cosine series
                int n = 4;
                int[] f = new int[n*2];
                //calculating omegas for sine series
                for(int i = 0; i < n*2; i++){
                    f[i] = rand.nextInt(hillnessFactor - min + 1) + min;
                }
                //amp is the amplitude of the series
                int amp = (int) (r*height);
                double lastPoint = 0.0;
                for(int i = 0; i < width; i++){
                    skyline[i] = 0;
                    for(int j = 0; j < n; j++){
                        skyline[i] += ( Math.sin( (f[j]*Math.PI*i/height) ) + Math.cos(f[j+n]*Math.PI*i/height) );
                    }
                    skyline[i] *= amp/(n*2);
                    skyline[i] += (height/2);
                    skyline[i] = (int)skyline[i];   //TODO Possible un-necessary float to int to float conversions
                    tankypos = skyline[i];
                    horizon[i+1] = new Vector2((float)i, (float)skyline[i]);
                    if(i == width) lastPoint = skyline[i];
                }
                horizon[width+1] = new Vector2(800, (float)lastPoint);
                terrain.createChain(horizon);
                terrain.createLoop(horizon);
                //I have no idea if the following does anything useful :(
                terrainMesh = new Mesh(true, (width+2)*2, (width+2)*2, new VertexAttribute(Usage.Position, (width+2)*2, "a_position"));
                float[] vertices = new float[(width+2)*2];
                short[] indices = new short[(width+2)*2];
                for(int i = 0; i < (width+2); i += 2){
                    vertices[i] = horizon[i].x;
                    vertices[i+1] = horizon[i].y;
                    indices[i] = (short)i;
                    indices[i+1] = (short)(i+1);
                }
                terrainMesh.setVertices(vertices);
                terrainMesh.setIndices(indices);
            }

    Here is the code that is (supposed to) render the terrain:

        @Override
        public void render(float delta) {
            Gdx.gl.glClearColor(1, 1, 1, 1);
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            // tell the camera to update its matrices.
            camera.update();
            // tell the SpriteBatch to render in the
            // coordinate system specified by the camera.
            backgroundStage.draw();
            backgroundStage.act(delta);
            uistage.draw();
            uistage.act(delta);
            batch.begin();
            debugRenderer.render(this.ground.getWorld(), camera.combined);
            batch.end();
            //Gdx.graphics.getGL10().glEnable(GL10.GL_TEXTURE_2D);
            ground.dirtTexture.bind();
            ground.terrainMesh.render(GL10.GL_TRIANGLE_FAN);   //I'm particularly lost on this
            ground.step();
        }
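
    Texturing a Mesh generally comes down to giving every vertex a texture coordinate alongside its position, so the rasterizer knows which part of "dirt.png" to sample. Below is a minimal libgdx-flavoured sketch of such a vertex layout for a terrain strip; it is an illustration only, and the attribute names, the two-vertices-per-column layout and the helper class are assumptions rather than the poster's code:

        import com.badlogic.gdx.graphics.Mesh;
        import com.badlogic.gdx.graphics.VertexAttribute;
        import com.badlogic.gdx.graphics.VertexAttributes.Usage;

        public final class TexturedStripSketch {
            // Builds a two-vertex-per-column triangle strip: one vertex on the skyline,
            // one at the bottom of the world, each carrying (x, y, u, v).
            public static Mesh build(float[] skyline, float height) {
                int columns = skyline.length;
                float[] verts = new float[columns * 2 * 4];   // 2 vertices per column, 4 floats each
                int i = 0;
                for (int x = 0; x < columns; x++) {
                    float u = (columns > 1) ? (float) x / (columns - 1) : 0f;
                    // top vertex (on the terrain surface)
                    verts[i++] = x; verts[i++] = skyline[x]; verts[i++] = u; verts[i++] = 0f;
                    // bottom vertex (bottom of the world)
                    verts[i++] = x; verts[i++] = 0f;         verts[i++] = u; verts[i++] = 1f;
                }
                Mesh mesh = new Mesh(true, columns * 2, 0,
                        new VertexAttribute(Usage.Position, 2, "a_position"),
                        new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"));
                mesh.setVertices(verts);
                return mesh;   // render as a triangle strip with the dirt texture bound
            }
        }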

    Read the article

  • Is using a dedicated thread just for sending gpu commands a good idea?

    - by tigrou
    The most basic game loop is like this:

        while(1)
        {
            update();
            draw();
            swapbuffers();
        }

    This is very simple but has a problem: some drawing commands can block, and the CPU will wait when it could be doing other things (like processing the next update() call). Another possible solution I have in mind would be to use two threads: one for updating and preparing commands to be sent to the GPU, and one for sending these commands to the GPU:

        //first thread
        while(1)
        {
            update();
            render();   // use gamestate to generate all needed triangles and commands for the gpu;
                        // put them in a buffer, no command is sent to the gpu yet
                        // (two buffers will be used, see below)
            pulse();    // signal the other thread that data is ready
        }

        //second thread
        while(1)
        {
            wait();             // wait for data from the first thread
            send_data_togpu();  // send prepared commands from the buffer to the graphics card
            swapbuffers();
        }

    Also, two buffers would be used, so one buffer could be filled with GPU commands while the other is being processed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages of such a solution, especially compared to a simpler one (e.g. single-threaded with triple buffering enabled)?
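
    As a point of comparison, the hand-off between the two loops can be sketched with a bounded queue standing in for pulse()/wait(). This is a Java sketch of the pattern only; the Runnable "commands" and both loop methods are placeholders, not a real graphics API:

        import java.util.List;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        final class RenderHandoff {
            // Capacity 1 = at most one prepared frame waiting; the update thread blocks
            // instead of racing ahead if the render thread falls behind.
            private final BlockingQueue<List<Runnable>> prepared = new ArrayBlockingQueue<>(1);

            // Update/game thread: simulate, record commands, hand them off.
            void updateThreadLoop() throws InterruptedException {
                while (true) {
                    List<Runnable> frame = List.of(
                            () -> System.out.println("bind material"),   // placeholders for real GPU calls
                            () -> System.out.println("draw batch"));
                    prepared.put(frame);   // blocks while the previous frame is still queued
                }
            }

            // Render thread: the only thread that talks to the graphics context.
            void renderThreadLoop() throws InterruptedException {
                while (true) {
                    for (Runnable cmd : prepared.take()) cmd.run();   // "send_data_togpu()"
                    // swapbuffers() would go here
                }
            }
        }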

    Read the article

  • Nifty default controls prevent the rest of my game from rendering

    - by zergylord
    I've been trying to add a basic HUD to my 2D LWJGL game using Nifty GUI, and while I've been successful in rendering panels and static text on top of the game, using the built-in Nifty controls (e.g. an editable text field) causes the rest of my game to not render. The strange part is that I don't even have to render the GUI control; merely declaring it appears to cause this problem. I'm truly lost here, so even the vaguest glimmer of hope would be appreciated :-) Some code showing the basic layout of the problem.

    Display setup:

        // load default styles
        nifty.loadStyleFile("nifty-default-styles.xml");
        // load standard controls
        nifty.loadControlFile("nifty-default-controls.xml");
        screen = new ScreenBuilder("start") {{
            layer(new LayerBuilder("baseLayer") {{
                childLayoutHorizontal();
                // next line causes the problem
                control(new TextFieldBuilder("input", "asdf") {{
                    width("200px");
                }});
            }});
        }}.build(nifty);
        nifty.gotoScreen("start");

    Rendering:

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        GLU.gluOrtho2D(0f, WINDOW_DIMENSIONS[0], WINDOW_DIMENSIONS[1], 0f);
        // I can remove the 2 nifty lines, and the game still won't render
        nifty.render(true);
        nifty.update();
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        GLU.gluOrtho2D(0f, (float)VIEWPORT_DIMENSIONS[0], 0f, (float)VIEWPORT_DIMENSIONS[1]);
        glTranslatef(translation[0], translation[1], 0);
        for (Bubble bubble : bubbles) {
            bubble.draw();
        }
        for (Wall wall : walls) {
            wall.draw();
        }
        for (Missile missile : missiles) {
            missile.draw();
        }
        for (Mob mob : mobs) {
            mob.draw();
        }
        agent.draw();
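
    GUI libraries that draw through the same GL context often leave state behind (texture bindings, blend mode, matrices), which is a common reason the rest of a fixed-function scene stops appearing. Below is a hedged LWJGL-style sketch of wrapping the GUI pass so its state changes are undone afterwards; this is one possible mitigation under that assumption, not a confirmed fix for Nifty:

        import static org.lwjgl.opengl.GL11.*;

        final class GuiRenderGuard {
            // Runs a GUI pass with the fixed-function state pushed and popped around it.
            static void renderGui(Runnable guiPass) {
                glPushAttrib(GL_ALL_ATTRIB_BITS);
                glMatrixMode(GL_PROJECTION);
                glPushMatrix();
                glMatrixMode(GL_MODELVIEW);
                glPushMatrix();

                guiPass.run();   // e.g. () -> { nifty.update(); nifty.render(false); }

                glMatrixMode(GL_MODELVIEW);
                glPopMatrix();
                glMatrixMode(GL_PROJECTION);
                glPopMatrix();
                glPopAttrib();
            }
        }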

    Read the article

  • How to access PhysicalMaterial from an Actor class?

    - by EmAdpres
    I use Projectile for my weapon system, and UDKProjectile has two main functions for handling hits of projectiles (= the bullets of my weapon):

        simulated function ProcessTouch(Actor Other, Vector HitLocation, Vector HitNormal)           // For Actors
        simulated event HitWall(vector HitNormal, actor Wall, PrimitiveComponent WallComp)           // Everything except Actors (I guess)

    In the first function, I just get the actor which I hit, and my question is how I can get that actor's physical material from the first parameter (Other), in order to react to it properly (for example, play a proper collision sound). A tricky (but hateful) way which I know works is to make a Trace from a little behind that actor to that actor, and use the HitInfo parameter, which includes the physical material. But there should be a more standard way!

    Read the article

  • Client side prediction/simulation Question

    - by Legendre
    I found a related question but it doesn't have what I needed. Client A sends input to move at T0. Server receives input at T1. All clients receive the change at T2. Question: With client-side prediction, client A would start moving at T0, client-side. All other clients receive the change at T2, so to them, client A only started moving at T2. If I understand correctly, client B will always see client A's past position and not his current position? How do I sync both client B and client A?
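
    With client-side prediction, that reading is right: client A is shown its own predicted present, while everyone else is shown A slightly in the past, usually by interpolating between the last two server snapshots; the server remains authoritative and corrects A if the prediction was wrong. A small Java sketch of the remote-side interpolation (the snapshot fields and the 100 ms view delay are illustrative assumptions):

        final class RemoteEntityView {
            static final double VIEW_DELAY = 0.1;   // render remote players ~100 ms in the past

            // Where to draw a remote entity 'now', given its two most recent server snapshots.
            static double[] interpolate(double[] olderPos, double olderTime,
                                        double[] newerPos, double newerTime,
                                        double now) {
                double renderTime = now - VIEW_DELAY;
                if (renderTime >= newerTime) return newerPos;   // no newer data yet: hold the last known position
                double t = (renderTime - olderTime) / (newerTime - olderTime);
                t = Math.max(0, Math.min(1, t));
                return new double[] {
                        olderPos[0] + (newerPos[0] - olderPos[0]) * t,   // lerp x
                        olderPos[1] + (newerPos[1] - olderPos[1]) * t    // lerp y
                };
            }
        }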

    Read the article

  • Scan-Line Z-Buffering Dilemma

    - by Belgin
    I have a set of vertices in 3D space, and for each one I retain the following information: its 3D coordinates (x, y, z), and a list of pointers to some of the other vertices with which it's connected by edges. Right now, I'm doing perspective projection with the projecting plane being XY and the eye placed somewhere at (0, 0, d), with d < 0. By doing Z-buffering, I need to find the depth of the point of a polygon (they're all planar) which corresponds to a certain pixel on the screen, so I can hide the surfaces that are not visible. My questions are the following:

    1. How do I determine which polygon a pixel belongs to, so I can use the equation of the plane containing that polygon to find its Z coordinate?
    2. Are my data structures correct? Do I need to store something else entirely in order for this to work? I'm just projecting the vertices onto the projection plane and joining them with lines based on the pointer lists.
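
    The first question is usually turned around: rather than asking which polygon a pixel belongs to, each polygon is rasterized scan line by scan line and a depth buffer decides, per pixel, which polygon wins. Below is a rough Java sketch of that per-pixel test, assuming the polygon's plane ax + by + cz + d = 0 is already known in screen space; whether "nearer" means a smaller or larger z depends on the chosen eye position, so the comparison direction here is an assumption:

        final class ZBufferSketch {
            // depth[y][x] starts at +infinity; owner[y][x] records which polygon currently wins the pixel.
            static void testPixel(double[][] depth, int[][] owner,
                                  int x, int y, int polygonId,
                                  double a, double b, double c, double d) {
                if (c == 0) return;                     // plane is edge-on; nothing sensible to write
                double z = -(a * x + b * y + d) / c;    // solve a*x + b*y + c*z + d = 0 for z
                if (z < depth[y][x]) {                  // nearer than what is already stored
                    depth[y][x] = z;
                    owner[y][x] = polygonId;
                }
            }
        }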

    Read the article

  • Efficient path-finding in free space

    - by DeadMG
    I've got a game situated in space, and I'd like to issue movement orders, which requires pathfinding. Now, it's my understanding that A* and such mostly apply to trees, and not empty space which does not have pathfinding nodes. I have some obstacles, which are currently expressed as fixed AABBs- that is, there is no unbounded "terrain" obstacle. In addition, I expect most obstacles to be reasonably approximable as cubes or spheres. So I've been thinking of applying a much simpler pathfinding algorithm- that is, simply cast a ray from the current position to the target position, and then I can get a list of obstacles using spatial partitioning relatively quickly. What I'm not so sure about is how to determine the part where the ordered unit manoeuvres around the obstacles. What I've been thinking so far is that I will simply use potential fields- that is, all units will feel a strong repulsive force away from each other and a moderate force towards the desired point. This also has the advantage that to issue group orders, I can simply order a mid-level force towards another entity. But this obviously won't achieve the optimal solution. Will potential fields achieve a reasonable approximation given my parameters, or do I need another solution?
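
    For reference, a basic potential-field step is just a vector sum: one attractive term toward the goal plus one repulsive term per nearby obstacle or unit, with the known caveat that units can get stuck in local minima between obstacles, which is exactly where the approximation stops being reasonable. A minimal Java sketch (the gains and the inverse-square fall-off are tuning choices, not prescribed values):

        final class PotentialField {
            // Steering force on a unit at 'pos' heading for 'goal', repelled by each obstacle centre.
            static double[] steer(double[] pos, double[] goal, double[][] obstacles,
                                  double attractGain, double repelGain) {
                double fx = attractGain * (goal[0] - pos[0]);
                double fy = attractGain * (goal[1] - pos[1]);
                for (double[] ob : obstacles) {
                    double dx = pos[0] - ob[0];
                    double dy = pos[1] - ob[1];
                    double distSq = dx * dx + dy * dy + 1e-6;   // avoid division by zero
                    double dist = Math.sqrt(distSq);
                    double push = repelGain / distSq;           // falls off with distance squared
                    fx += push * dx / dist;                     // push directly away from the obstacle
                    fy += push * dy / dist;
                }
                return new double[] { fx, fy };
            }
        }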

    Read the article

  • MarteEngine Tile Collision

    - by opiop65
    I need to add collision to my tile map using MarteEngine. MarteEngine is built on top of Slick2D. Here's my tile generation code:

        public void render(GameContainer gc, StateBasedGame game, Graphics g) throws SlickException {
            for (int x = 0; x < 16; x++) {
                for (int y = 0; y < 16; y++) {
                    map[x][y] = AIR;
                    air.draw(x * GameWorld.tilesize, y * GameWorld.tilesize);
                }
            }
            for (int x = 0; x < 16; x++) {
                for (int y = 7; y < 8; y++) {
                    map[x][y] = GRASS;
                    grass.draw(x * tilesize, y * tilesize);
                }
            }
            for (int x = 0; x < 16; x++) {
                for (int y = 8; y < 10; y++) {
                    map[x][y] = DIRT;
                    dirt.draw(x * tilesize, y * tilesize);
                }
            }
            for (int x = 0; x < 16; x++) {
                for (int y = 10; y < 16; y++) {
                    map[x][y] = STONE;
                    stone.draw(x * tilesize, y * tilesize);
                }
            }
            super.render(gc, game, g);
        }

    And one of my tile classes (they're all the same; only the image names differ):

        package MarteEngine;

        import org.newdawn.slick.Image;
        import org.newdawn.slick.SlickException;
        import it.randomtower.engine.entity.Entity;

        public class Grass extends Entity {
            public static Image grass = null;

            public Grass(float x, float y) throws SlickException {
                super(x, y);
                grass = new Image("res/grass.png");
                setHitBox(0, 0, 50, 50);
                addType(SOLID);
            }
        }

    I tried to do it like this:

        for (int x = 0; x < 16; x++) {
            for (int y = 7; y < 8; y++) {
                map[x][y] = GRASS;
                Grass.grass.draw(x * tilesize, y * tilesize);
            }
        }

    But it gave me a NullPointerException. No idea why; everything looks initialized, right? I would be very grateful for some help!
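
    One thing worth noticing about that NullPointerException: the static grass image is only assigned inside the Grass constructor, so Grass.grass stays null until at least one Grass entity has actually been constructed; drawing the static field without ever creating an entity will fail. Below is a hedged sketch of an alternative arrangement, where each solid tile is its own object that lazily loads the shared image; the class is illustrative and not part of MarteEngine's API:

        public class GrassTile {
            private static org.newdawn.slick.Image image;   // shared by all grass tiles

            private final float x, y;

            public GrassTile(float x, float y) throws org.newdawn.slick.SlickException {
                this.x = x;
                this.y = y;
                if (image == null) {
                    image = new org.newdawn.slick.Image("res/grass.png");   // load once, lazily
                }
            }

            public void render() {
                image.draw(x, y);   // draw this tile at its own position
            }
        }

    The tile loop would then construct one such object per solid cell (and register it with the world for collision) instead of drawing a static image that was never initialized.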

    Read the article

  • problems programmatically creating UIView on iPad App

    - by user3871
    I have been struggling with this problem for a few days. My iPad app is designed to be a portrait game. To satisfy Apple's expectation, I also support landscape mode. When it goes into landscape mode, the game goes into a letterbox format with black borders on the sides. My problem is that I am creating the UIWindow and UIView programmatically. For some unknown reason, the touch controls are "locked in" to think I'm always in landscape mode, and even though visually everything looks correct in portrait mode, the top and bottom of the screen do not respond to touch. To summarize how I am setting this up, let me provide the skeletal framework of what I'm doing.

    In main.cpp:

        int retVal = UIApplicationMain(argc, argv, nil, @"derbyPoker_ipadAppDelegate");

    In the delegate, I am doing this:

        - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
            CGRect screenBounds = [[UIScreen mainScreen] bounds];
            CGFloat scale = [[UIScreen mainScreen] scale];
            m_device_width = screenBounds.size.width;
            m_device_height = screenBounds.size.height;
            m_device_scale = scale;
            // Everything is built assuming 640x960
            window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
            viewController = [glView new];
            [self doStateChange:[blitz class]];
            return YES;
        }

    The last bit of code sets up the UIView:

        - (void)doStateChange:(Class)state {
            viewController.view = [[state alloc] initWithFrame:CGRectMake(0, 0, m_device_width, m_device_height) andManager:self];
            viewController.view.contentMode = UIViewContentModeScaleAspectFit;
            viewController.view.autoresizesSubviews = YES;
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
        }

    The problem seems to be related to the line

        viewController.view.contentMode = UIViewContentModeScaleAspectFit;

    If I remove that line, touch works correctly in portrait mode. The downside is that when I'm in landscape mode, the game stretches incorrectly, so that's not an option. The frustrating thing is that when I originally had this set up with a NIB file, it worked fine. I have read through the docs about UIWindow, UIViewController and UIView and have tried about everything, to no avail. Any help would be greatly appreciated.

    Read the article

  • Uniform not being applied to proper mesh

    - by HaMMeReD
    Ok, I got some code where you select blocks on a grid. The selection works: I can modify the blocks to be raised when selected, and the correct one shows. I also set a color which I use in the shader. However, I am trying to change the color before rendering the geometry, and the last rendered geometry (in the sequence) is the one that is rendered light. To debug the logic, I decided to move the selected block up as well as make it white, in which case one block moves up and a different block becomes white. I checked all my logic, and it knows the correct one is selected; it is shown in the correct place and rendered correctly. When there is only one entity it works properly. There is a video of the bug in action; note how the highlighted and elevated blocks are not the same block. The code for the color and my renderer (for the items being drawn) is here:

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }
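
    One detail that stands out in that loop, offered as an observation rather than a confirmed diagnosis: u_mvpMatrix is uploaded right after begin(), before mEntityMatrix is rebuilt for the current entity, and it is never re-uploaded afterwards, so each unit would be drawn with the previous iteration's transform while v_color already belongs to the current unit, which would match the elevation and the highlight landing on different blocks. A sketch of the same soldier branch with the upload moved to after the matrix is final:

        renderer.testShader.begin();
        mEntityMatrix.set(renderer.mCamera.combined);
        if (mSelectedEntities.contains(e)) {
            mEntityMatrix.translate(pos.x, 1f, pos.y);
            renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
        } else {
            mEntityMatrix.translate(pos.x, 0f, pos.y);
            renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
        }
        mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
        renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);   // upload once the matrix is complete
        renderer.texture_soldier.bind(0);
        renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
        renderer.testShader.end();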

    Read the article

  • Best practices on separating Update and Draw on game loop

    - by Galvanize
    I've been working on my first HTML5 prototype and I found a good model that uses the regular Update and Draw loop we see in game dev. My question is: where does one end and the other begin? The question popped up when I wanted to rotate and draw an image, and I kept wondering whether the work of changing the transformation matrix (which I presumed would be a bit expensive, since it works on the whole pixel array of an image) and calculating the right position to draw it would count as drawing work, or maybe not, since after that I may need to check for collision or something similar. Thinking about it, it seems like a silly question, but I would like some advice from more experienced developers. Where does update end and draw start? Thanks in advance.
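
    A rule of thumb many engines follow: update() consumes input and elapsed time and mutates game state (positions, angles, health), while draw() only reads that state and issues rendering commands; setting a transformation matrix belongs to draw() and is cheap, since it does not rewrite the image's pixels, it only tells the renderer how to place them. A small Java sketch of the split, with CanvasLike standing in for whatever rendering API is actually used:

        final class SpinningSprite {
            private double angle;                            // game state, changed only in update()
            private final double angularSpeed = Math.PI;     // radians per second

            // update(): advance simulation state; no rendering work here.
            void update(double dtSeconds) {
                angle += angularSpeed * dtSeconds;
            }

            // draw(): read state and issue rendering commands; never mutate game state.
            void draw(CanvasLike canvas) {
                canvas.drawRotatedImage("ship.png", angle);
            }

            // Placeholder for the real rendering API (e.g. an HTML5 canvas wrapper).
            interface CanvasLike {
                void drawRotatedImage(String image, double radians);
            }
        }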

    Read the article

  • When dealing with a static game board, what are some methods to make it more interesting?

    - by Ólafur Waage
    Let's say you have a game board that you look at. It does not move, but there is some action going on, for example chess, checkers, or solitaire. The game I'm working on is not one of these, but they're a good reference. What are some methods you can apply to the game or its design that increase the game's appeal to the user? Of course you can make it prettier, but what are some other methods you can use? For example: visual cues, game design changes, user interface arrangement, etc.

    Read the article

  • Is Unity's Random seeded automatically?

    - by Lohoris
    I seem to recall Unity's Random is automatically seeded; checking the documentation it doesn't say it outright, but a certain interpretation of their words might seem to imply it. The seed is normally set from some arbitrary value like the system clock before the random number functions are used. This prevents the same run of values from occurring each time a game is played and thus avoids predictable gameplay. However, it is sometimes useful to produce the same run of pseudo-random values on demand by setting the seed yourself. (emphasis added)

    Read the article

  • Technologies stack to create soccer game vizualization on web page [on hold]

    - by Lambrusco
    I want to create a soccer game visualization. What technologies would be best for building one for a web page? As input I have two teams with players. I have a theory about their movements, the movement of the ball on the field, and so on; I just want to visualize those movements. What would be the best technology stack? I mean programming languages (C++, Ruby, Java, PHP) and visualization approaches (Flash, HTML5, JS).

    Read the article

  • How do history generation algorithms work?

    - by Bane
    I had heard of the game Dwarf Fortress, but only now has one of the people I follow on YouTube made a commentary on it... I was more than surprised when I noticed how Dwarf Fortress actually generates a history for the world! Now, how do these algorithms work? What do they usually take as input, other than the length of the simulation? How specific can they be? And more importantly, can they be made in JavaScript, or is JavaScript too slow? (I guess this depends on the depth of the simulation, but take Dwarf Fortress as an example.)
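
    In broad strokes, such generators take a seed, an initial world state (factions, sites, geography) and a number of simulated years, then run a coarse step per year in which random events both depend on and mutate that state, logging the events as the "history". A toy Java sketch of the shape of that loop; it is only an illustration, not how Dwarf Fortress actually does it:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        final class HistorySketch {
            // Generates a list of events by simulating one coarse step per year.
            static List<String> generate(long seed, int years, String[] factions) {
                Random rng = new Random(seed);          // same seed -> same history, which also helps testing
                List<String> events = new ArrayList<>();
                for (int year = 0; year < years; year++) {
                    String a = factions[rng.nextInt(factions.length)];
                    String b = factions[rng.nextInt(factions.length)];
                    if (a.equals(b)) continue;
                    // A real generator would also mutate persistent state here
                    // (population, borders, grudges) that later years read back.
                    String verb = rng.nextBoolean() ? "declares war on" : "signs a treaty with";
                    events.add("Year " + year + ": " + a + " " + verb + " " + b);
                }
                return events;
            }
        }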

    Read the article

  • Animations / OpenGL (ES 2) in game menu

    - by user16547
    (I am specifically asking for Android) If you look at Angry Birds (and in fact many other games), you can already see a lot of animations & effects going in the main menu and in other places even before starting to play. I assume they are done with OpenGL, more precisely a FrameLayout is used and inside it a GLSurfaceView is somewhere at the bottom of the hierarchy; above the GLSurfaceView you have regular Android buttons and texts. Is this how it's done*? Also would you reuse the same GLSurfaceView when running the actual game or should another one be created? *I am aware an alternative approach would be to make absolutely everything in OpenGL. Of these two I prefer the FrameLayout one, but I don't know whether other developers agree.

    Read the article

  • Objects disappear when zoomed out in Unity

    - by Starkers
    Ignore the palm trees here. I have some oak-like trees when I'm zoomed in: They disappear when I zoom out: Is this normal? Is this something to do with draw distance? How can I change this so my trees don't disappear? The reason I ask is because my installation had a weird terrain glitch. If this isn't normal I'm going to reinstall right away because I'm always thinking 'is that a feature? Or a glitch'?

    Read the article

  • rotating menu with Actors in libgdx

    - by joecks
    I am intending to build a circular menu, with the menu items equally distributed around the circle. When clicking on a menu item, the circle should rotate so that the selected item faces the top. I am using libgdx and I am not very familiar with the Actors concept, so I intuitively tried to implement an Actor that draws a texture and then transforms it using Actions, with no success:

        class CircleActor extends Actor {
            @Override
            public void draw(SpriteBatch batch, float parentAlpha) {
                batch.draw(texture1, 100, 100);
            }

            @Override
            public Actor hit(float x, float y) {
                return this;
            }
        }

    and the rotate action:

        CircleActor circleActor = new CircleActor();
        circleActor.action(Forever.$(RotateBy.$(0.1f, 0.1f)));
        // stage.addActor();
        stage.addActor(circleActor);

    The texture is rectangular, but it does not work.

    1. What is wrong?
    2. Is it a good approach to solve the task?

    Thanks!
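
    Whichever scene2d version is in use, the layout part is plain trigonometry: item i sits at angle i * 2π/n around the menu centre, and selecting an item means tweening a single menu angle toward the value that brings that slot to the top (note also that a plain Actor's rotation only becomes visible if draw() actually applies it when drawing). A small Java sketch of just that math, independent of any libgdx API:

        final class CircularMenuLayout {
            // Centre of item 'index' when the whole menu is rotated by 'menuAngle' radians.
            static double[] itemPosition(double centreX, double centreY, double radius,
                                         int index, int itemCount, double menuAngle) {
                double slot = 2 * Math.PI / itemCount;
                double angle = menuAngle + index * slot + Math.PI / 2;   // +PI/2 so item 0 starts at the top
                return new double[] {
                        centreX + Math.cos(angle) * radius,
                        centreY + Math.sin(angle) * radius
                };
            }

            // Menu angle that brings 'selected' to the top; tween the current angle toward this value.
            static double targetAngle(int selected, int itemCount) {
                return -selected * (2 * Math.PI / itemCount);
            }
        }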

    Read the article

  • Deleting a game object causing an access violation

    - by Balls
    I tried doing this, but it causes an access violation:

        void GameObjectFactory::Update()
        {
            for( std::list<GameObject*>::iterator it = gameObjectList.begin() .....
                (*it)->Update();
        }

        void Bomb::Update()
        {
            if( time == 2.0f )
            {
                gameObjectFactory->Remove( this );
            }
        }

        void GameObjectFactory::Remove( ... )
        {
            gameObjectList.remove( ... );
        }

    My thought would be to mark the object as dead, then let the factory handle the deletion on the next frame. Is that the best and fastest way? What do you think?
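
    The crash is consistent with removing an element from the list while Update() is still iterating over it, which invalidates the iterator that is about to be incremented; deferring the removal, as suggested, is the usual fix. A Java sketch of the mark-and-sweep pattern (the original is C++, where the sweep would be the erase-remove idiom or list::remove_if after the loop):

        import java.util.ArrayList;
        import java.util.List;

        final class GameObjectFactory {
            interface GameObject {
                void update();
                boolean isDead();   // the object marks itself instead of asking for immediate removal
            }

            private final List<GameObject> objects = new ArrayList<>();

            void add(GameObject o) {
                objects.add(o);
            }

            void update() {
                for (GameObject o : objects) {
                    o.update();                         // may mark itself dead, but never mutates the list
                }
                objects.removeIf(GameObject::isDead);   // single sweep once iteration is finished
            }
        }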

    Read the article

  • Grid Based Lighting in XNA/Monogame

    - by sm81095
    I know that questions like this have been asked many times, but I have not found one exactly like this yet. I have implemented a top-down, grid-based world in MonoGame and am starting on the lighting system soon. The way I want to do lighting is to have a lighting grid that is 4 times wider and higher, basically splitting each world tile into a 4x4 set of "subtiles". I would like to use a flow-like system to spread light across the tiles, reducing the light by a small amount at each step. This is the kind of effect I was going for: http://i.imgur.com/rv8LCxZ.png The black grid lines are the light grid, the red lines are the actual tile grid, and the light drop-off is very exaggerated. I plan to render the world by drawing the unlit grid to a separate RenderTarget2D, then rendering the lighting grid to another target and overlaying the two. Basically, my questions are:

    1. What would be the algorithm for a flow-style lighting system like this?
    2. Would there be a more efficient way of rendering this?
    3. How would I handle darkening the light with colors: by reducing the RGB values in each grid cell, or by reducing the alpha in each cell, assuming that I render the light map over the grid using blending?
    4. Even assuming the former are possible, what BlendState would I use for that?
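
    For the first question, the usual answer is a flood fill (breadth-first spread) from every light source over the subtile grid: each step carries a slightly smaller light value, stops when it reaches zero, and solid tiles can subtract extra to cast soft shadows. Below is a Java sketch of that propagation only; the post is MonoGame/C#, so this shows the algorithm rather than the rendering or BlendState side:

        import java.util.ArrayDeque;
        import java.util.Deque;

        final class FloodLight {
            // Spreads light from one source; light[x][y] keeps the brightest value (0..1) reaching each cell.
            static void spread(float[][] light, int srcX, int srcY, float falloffPerStep) {
                int w = light.length, h = light[0].length;
                Deque<int[]> queue = new ArrayDeque<>();
                light[srcX][srcY] = 1f;
                queue.add(new int[] { srcX, srcY });
                int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
                while (!queue.isEmpty()) {
                    int[] c = queue.poll();
                    float next = light[c[0]][c[1]] - falloffPerStep;
                    if (next <= 0) continue;
                    for (int[] d : dirs) {
                        int nx = c[0] + d[0], ny = c[1] + d[1];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        if (light[nx][ny] >= next) continue;   // already at least this bright
                        light[nx][ny] = next;                  // opaque world tiles could subtract extra here
                        queue.add(new int[] { nx, ny });
                    }
                }
            }
        }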

    Read the article

  • My game seems to be incompatible with recording software. What could be causing this?

    - by Lewis Wakeford
    I've just finished a little Game-Dev project for university and I need to record a video to accompany my submission (just in case they can't get my source to work). Basically my game doesn't work at all when FRAPS or Bandicam attempts to attach to it, I get a black screen and a stream of GL INVALID OPERATION messages from my error reporting code. Dxtory can't seem to hook into it correctly at all, it doesn't display it's FPS counter or anything. My game logic appears to be running correctly from the debug traces, it just seems like all the gl library calls break. I don't know a huge amount about how these programs operate so I don't really know what I could be doing to cause this. I've heard they read from the OpenGL frame buffers so maybe I'm doing something wrong there? I'm letting GLFW and GLEW do all the low level initialization, but I have successfully recorded projects with the same setup and recording software. Essentially, has anyone ever run into something like this before or do you know anything about how these programs work that could give a clue as to the cause of the issue?

    Read the article

  • How are larger games organized?

    - by Matthew G.
    I'm using Java, but the language I'm using here is probably irrelevant. I'd like to create an economy based on an ancient civilization, and I'm not sure how to design it. If I were working on a smaller game, like a copy of "Space Invaders", I'd have no problem structuring it like this:

        Game
         - Main Control Class
           -- Graphics Class
           -- Player Class
           -- Enemy Class

    I'd pass the graphics class to both the player and enemy classes so they could call graphics functions. I don't understand how I'd do this for larger projects. Do I create a country class that contains a bunch of towns? Do the towns contain a lot of building classes, most of which contain classes for people? Do I make a pathfinding class that the player can access to get around? How exactly do I structure this and pass all these references around? Thanks.
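
    One common pattern at larger scale is to keep the world itself as a plain composition hierarchy (country owns towns, towns own buildings and people) and to keep shared services such as rendering, pathfinding or economy rules outside that hierarchy, handed in through a single game context rather than threaded reference by reference. A hedged Java sketch of that shape; all the class names are invented for illustration:

        import java.util.ArrayList;
        import java.util.List;

        final class WorldSketch {
            static class Country { final List<Town> towns = new ArrayList<>(); }
            static class Town {
                final List<Building> buildings = new ArrayList<>();
                final List<Person> people = new ArrayList<>();
            }
            static class Building { final List<Person> occupants = new ArrayList<>(); }
            static class Person { double money; }            // economy state lives on the entities themselves

            interface Renderer { void draw(Object what); }
            interface Pathfinder { List<int[]> findPath(int[] from, int[] to); }

            // Shared services are grouped once and passed to whoever needs them,
            // instead of every constructor taking each service separately.
            static class GameContext {
                Renderer renderer;
                Pathfinder pathfinder;
            }
        }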

    Read the article

  • Why do consoles have so little memory compared to classic computers?

    - by jokoon
    I remember the PlayStation having 2MB of RAM and 1MB of graphics memory. The PlayStation 3 now has only 256MB of RAM and 256MB of graphics memory, and I'm sure that the day the console was released, even a laptop's "standard" capacity was at least 1GB. So why do they put so little memory in their machines, when developers would benefit a lot from having more? Is the memory that much faster than desktop memory and thus more expensive? Or is it not worth that much to developers? What are the Sony/Xbox/Nintendo engineers thinking, since they all seem to follow the same reasoning?

    Read the article
