Search Results

Search found 25518 results on 1021 pages for 'iterative development'.

Page 488 of 1021

  • What are some good resources for creating a game engine in XNA?

    - by Glasser
    I'm currently a student game programmer working on an indie project. We have a team of eleven people (five programmers, four artists, and two audio designers) aboard, all working hard to help design this game. We've been meeting for months now, and so far we have a pretty fleshed-out Game Design Document as well as a good deal of audio/visual concept art. Our programmers are itching to make progress on our own end. Everyone on our programming team is well versed in C++ and very familiar with C#. We have enough experience and skill that we're confident we will be successful with our game, and we're looking to build our own game engine in XNA, as it seems like it would be worth our time and effort in the end. The game itself will be a 2D beat 'em up to be released over Xbox Live and on the PC. Its play style will be similar to that of Castle Crashers or Scott Pilgrim vs. The World. We want to design the game engine to let us better integrate our assets into the game and to simplify the creation of design elements/mechanics. Currently our programmers have books such as "XNA 4.0" and "Game Coding Complete, Third Edition," but we'd still like more information on both XNA and (especially) building a game engine from scratch. What other good books, websites, or resources could we use to further map out and program our game engine?

    Read the article

  • What causes the iOS OpenGLES driver to allocate extra memory?

    - by Martin Linklater
    I'm trying to optimize the memory usage of our iOS game and I'm puzzled about when/why the iOS GLES driver allocates extra memory at runtime... When I run our game through Instruments with the OpenGL ES Driver instrument, the gartUsedBytes value can fluctuate quite wildly. We preload all our textures and build the buffer objects up front, so it's not the game engine requesting extra memory from GL. Currently we are manually requesting around 50MB of GL memory, yet the gartUsedBytes value sits at around 90MB most of the time, peaking at 125MB from time to time. It seems to be linked to what you are rendering that frame - our PVS only submits VBOs for visible meshes. Can anyone shed some light on what the driver is doing in the background? Like I said earlier, all our game engine allocations are done on level load, so in theory there shouldn't be any fluctuation in GL memory usage while the level is running. Thanks.

    Read the article

  • Very slow direct3D texture sampling

    - by __dominic
    Hi, so I'm writing a small game using Direct3D 9 and I'm using multitexturing for the terrain. All I'm doing is sampling 3 textures and a blend map and getting the overall color from the three textures based on the color channels of the blend map. Anyway, I am getting a massive frame rate drop when I sample more than 1 texture; I'm going from 120+ fps to just under 50. This is the HLSL code responsible for the slowdown:

      float3 ground = tex2D(GroundTex, multiTex).rgb;
      float3 stone  = tex2D(StoneTex, multiTex).rgb;
      float3 grass  = tex2D(GrassTex, multiTex).rgb;
      float3 blend  = tex2D(BlendMapTex, blendMap).rgb;

    Am I doing it wrong? If anyone has any info or tips about texture sampling or anything, that would be nice. Thanks.
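    For reference, the blending step being described (not shown in the snippet above) is presumably the standard texture-splatting weighted sum, something along the lines of

      \[ \text{final} = \text{blend}_r \cdot \text{ground} + \text{blend}_g \cdot \text{stone} + \text{blend}_b \cdot \text{grass} \]

    which suggests the extra cost comes from the additional tex2D fetches rather than from the arithmetic itself.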

    Read the article

  • RPG Item processing

    - by f00b4r
    I started working on an item system for my (first) game, and I'm having a problem conceptualizing how it should work, since items can produce a bunch of potentially non-standard actions (reviving a character vs. increasing some stat) or have use restrictions (revive can only be used if a character is dead). For obvious reasons, I don't want to create a new Item class for every item type. What is the best way to handle this? Should I make a handful of item types (field modifiers, status modifiers, ...)? Is it normal to script item usage? Could (should?) this be combined with the above-mentioned solution (have a couple of different sub-item types, and make special-case item usage scripted)? Thanks.
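    One common data-driven approach (sketched below in C#; the type and member names are invented for illustration, not taken from any particular engine) is to keep a single Item class and attach a list of effect objects plus a usability check, so a new item is new data rather than a new class:

      using System;
      using System.Collections.Generic;

      // Hypothetical minimal sketch: one Item class, behaviour supplied as data.
      public interface IItemEffect
      {
          bool CanApply(Character target);   // e.g. "only if the character is dead"
          void Apply(Character target);      // e.g. revive, heal, raise a stat
      }

      public class Character
      {
          public bool IsDead;
          public int Hp;
      }

      public class ReviveEffect : IItemEffect
      {
          public bool CanApply(Character target) => target.IsDead;
          public void Apply(Character target) { target.IsDead = false; target.Hp = 1; }
      }

      public class Item
      {
          public string Name;
          public List<IItemEffect> Effects = new List<IItemEffect>();

          // An item is usable only if every effect's restriction is satisfied.
          public bool CanUse(Character target) => Effects.TrueForAll(e => e.CanApply(target));

          public void Use(Character target)
          {
              if (!CanUse(target)) return;
              foreach (var e in Effects) e.Apply(target);
          }
      }

    Scripted items fit the same shape: an effect that delegates to a script is just one more IItemEffect implementation, so scripting and hard-coded effect types can coexist.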

    Read the article

  • Will we see a trend of stereoscopic 3D games coming up in the near future?

    - by Vish
    I've noticed that movies are diving into the world of the 3-dimensional camera. For me it provoked a thought: maybe it's the same feeling people got when they saw a colour movie for the first time - like the transition from black and white to colour, it is a whole new experience. For the first time we are experiencing the Z (depth) factor, and I really do mean "experiencing". So my question is (or maybe it's not so much a question): is there a possibility of a genre of 3D-camera games coming up?

    Read the article

  • this.BoundingBox.Intersects(Wall[0].BoundingBox) not working properly

    - by Pieter
    I seem to be having this problem a lot. I'm still learning XNA / C# and, well, trying to make a classic paddle and ball game. The problem I run into (and after debugging I have no answer for) is that every time I run my game and press either of the movement keys, the paddle won't move. Debugging shows that it never gets to the movement part, but I can't understand why not? Here's my code:

      // This is the if statement for checking Left movement.
      if (keyboardState.IsKeyDown(Keys.Left) || keyboardState.IsKeyDown(Keys.A))
      {
          if (!CheckCollision(walls[0]))
          {
              Location.X -= Velocity;
          }
      }

      // This is the CheckCollision(Wall wall) boolean
      public bool CheckCollision(Wall wall)
      {
          if (this.BoundingBox.Intersects(wall.BoundingBox))
          {
              return true;
          }
          return false;
      }

    As far as I can tell there should be absolutely no problem with this. I initialize the bounding box in the constructor whenever a new instance of Walls and Paddle is created:

      this.BoundingBox = new Rectangle(0, 0, Sprite.Width, Sprite.Height);

    Any idea as to why this isn't working? I have previously succeeded using the whole Location.X < Wall.Location.X + Wall.Texture.Width approach... But to me that seems like too much coding if a simple boolean check could be done.
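    A likely suspect (a guess from the snippet above, not a confirmed diagnosis): both bounding boxes are built at (0, 0) and never moved afterwards, so the paddle and the wall permanently intersect and the movement branch is always skipped. A minimal sketch of keeping the box in sync with the sprite's position each frame, reusing the field names from the question:

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      // Hypothetical sketch, reusing the names from the question.
      public class Paddle
      {
          public Vector2 Location;
          public Texture2D Sprite;
          public Rectangle BoundingBox;

          // Call once per frame, before any CheckCollision call:
          // the box follows the sprite instead of staying at (0, 0).
          public void UpdateBounds()
          {
              BoundingBox = new Rectangle(
                  (int)Location.X, (int)Location.Y,
                  Sprite.Width, Sprite.Height);
          }
      }

    It may also help to test the collision against the proposed new position (Location.X - Velocity) rather than the current one, so the paddle can still move away from a wall it is already touching.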

    Read the article

  • Multiple objects listening for the same key press

    - by xiaohouzi79
    I want to learn the best way to implement this: I have a hero and an enemy on the screen. Say the hero presses "k" to get out a knife; I want the enemy to react in a certain way. Now, if in my game loop I have a listener for the key press event and I identify that "k" was pressed, the quick and easy way would be to do:

      // If K pressed
      // hero.getOutKnife()
      // enemy.getAngry()

    But what is commonly done in more complex games, where say I have 10 types of character on screen and they all need to react in a unique way when the letter "k" is pressed? I can think of a bunch of hacky ways to do this, but would love to know how it should be done properly. I am using C++, but I'm not looking for a code implementation, just some ideas on how it should be done the right way.
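    The usual answer is some form of observer/event pattern: the input code publishes a "knife drawn" event and every interested character subscribes to it, so the input loop never needs to know who reacts. A minimal sketch of the idea (in C# rather than the asker's C++, purely for illustration; all names are invented):

      using System;
      using System.Collections.Generic;

      // Hypothetical event bus: input publishes, characters subscribe.
      public class EventBus
      {
          private readonly Dictionary<string, List<Action>> handlers =
              new Dictionary<string, List<Action>>();

          public void Subscribe(string eventName, Action handler)
          {
              if (!handlers.TryGetValue(eventName, out var list))
                  handlers[eventName] = list = new List<Action>();
              list.Add(handler);
          }

          public void Publish(string eventName)
          {
              if (handlers.TryGetValue(eventName, out var list))
                  foreach (var h in list) h();
          }
      }

      public static class Demo
      {
          public static void Main()
          {
              var bus = new EventBus();
              bus.Subscribe("KnifeDrawn", () => Console.WriteLine("Hero draws knife"));
              bus.Subscribe("KnifeDrawn", () => Console.WriteLine("Enemy gets angry"));

              // In the game loop: if the 'k' key was pressed this frame...
              bus.Publish("KnifeDrawn");
          }
      }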

    Read the article

  • Repairing back-facing triangles without user input

    - by LTR
    My 3D application works with user-imported 3D models. Frequently, those models have a few triangles facing in the wrong direction. (For example, there is a 3D roof and a few triangles of that roof are facing inside the building.) I want to repair those automatically. We can make several assumptions about these 3D models: they are completely closed without holes, and the camera is always on the outside. My idea: shoot 500 rays from every triangle outwards in all directions. From the back side of the triangle, all rays will hit another part of the model. From the front side, at least one ray will hit nothing. Is there a better algorithm? Are there any papers about something like this?
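    A compact sketch of the ray-voting test being described, in C# with System.Numerics; the mesh intersection query is passed in as a delegate because it depends entirely on the caller's spatial data structure (everything here is illustrative, not from the original post):

      using System;
      using System.Numerics;

      public static class WindingRepair
      {
          // Returns true if the triangle looks flipped: no ray cast from its
          // "front" side escapes the closed mesh, while a correctly oriented
          // face should see open space from at least one direction.
          public static bool LooksFlipped(
              Vector3 centroid, Vector3 normal,
              Func<Vector3, Vector3, bool> rayHitsMesh,  // (origin, direction) -> hit?
              int rayCount = 500)
          {
              var rng = new Random(12345);
              var origin = centroid + normal * 1e-4f;    // nudge off the surface
              for (int i = 0; i < rayCount; i++)
              {
                  // Random direction in the hemisphere around the face normal.
                  var d = Vector3.Normalize(new Vector3(
                      (float)(rng.NextDouble() * 2 - 1),
                      (float)(rng.NextDouble() * 2 - 1),
                      (float)(rng.NextDouble() * 2 - 1)));
                  if (Vector3.Dot(d, normal) < 0) d = -d;

                  if (!rayHitsMesh(origin, d))
                      return false;  // a ray escaped: the front side sees open space
              }
              return true;           // every ray hit the model: normal points inward
          }
      }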

    Read the article

  • How do I draw a scrolling background?

    - by droidmachine
    How can I draw a background tile in my 2D side-scrolling game? Is that loop logical for OpenGL ES? My tile is 2400x480. Also, I want to use parallax scrolling in my game.

      batcher.beginBatch(Assets.background);
      for (int i = 0; i < 100; i++)
          batcher.drawSprite(0 + 2400 * i, 240, 2400, 480, Assets.backgroundRegion);
      batcher.endBatch();

    UPDATE: And that's my onDrawFrame. I'm sending deltaTime for fps control.

      public void onDrawFrame(GL10 gl) {
          GLGameState state = null;
          synchronized (stateChanged) {
              state = this.state;
          }
          if (state == GLGameState.Running) {
              float deltaTime = (System.nanoTime() - startTime) / 1000000000.0f;
              startTime = System.nanoTime();
              screen.update(deltaTime);
              screen.present(deltaTime);
          }
          if (state == GLGameState.Paused) {
              screen.pause();
              synchronized (stateChanged) {
                  this.state = GLGameState.Idle;
                  stateChanged.notifyAll();
              }
          }
          if (state == GLGameState.Finished) {
              screen.pause();
              screen.dispose();
              synchronized (stateChanged) {
                  this.state = GLGameState.Idle;
                  stateChanged.notifyAll();
              }
          }
      }
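    For the parallax part, the usual trick is to offset each background layer by the camera position scaled by a per-layer factor, and to draw only the one or two copies of the tile that actually overlap the screen rather than all 100. A small C# sketch of that math, with invented names, just to show the idea:

      using System;

      // Hypothetical sketch: compute which copies of a wide background tile are
      // visible for a given camera position and parallax factor.
      public static class Parallax
      {
          public const float TileWidth = 2400f;

          // factor < 1 makes the layer scroll slower than the foreground.
          public static void DrawLayer(float cameraX, float factor,
                                       Action<float> drawTileAtX, float screenWidth)
          {
              float scrolled = cameraX * factor;
              // Indices of the left-most and right-most tile copies on screen.
              int first = (int)Math.Floor(scrolled / TileWidth);
              int last  = (int)Math.Floor((scrolled + screenWidth) / TileWidth);
              for (int i = first; i <= last; i++)
                  drawTileAtX(i * TileWidth - scrolled);   // x position on screen
          }
      }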

    Read the article

  • Could someone explain why my world reconstructed from depth position is incorrect?

    - by yuumei
    I am attempting to reconstruct the world position in the fragment shader from a depth texture. I pass in the 8 frustum points in world space, interpolate them across fragments, and then interpolate from near to far by the depth:

      highp float depth = (2.0 * CameraPlanes.x) /
          (CameraPlanes.y + CameraPlanes.x - texture( depthTexture, textureCoord ).x * (CameraPlanes.y - CameraPlanes.x));
      // Reconstruct the world position from the linear depth
      highp vec3 world = mix( nearWorldPos, farWorldPos, depth );

    CameraPlanes.x is the near plane, CameraPlanes.y is the far plane. Assuming that my frustum positions are correct and my depth looks correct, why is my world position wrong? (My depth texture is of format GL_DEPTH_COMPONENT32F, if that matters.) Thanks! :D

    Update: Screenshot of world position: http://imgur.com/sSlHd - so you can see it looks nearly correct. However, as the camera moves, the colours (positions) change, which they shouldn't. I can get this to work if I do the following. Write this into the depth attachment in the previous pass:

      gl_FragDepth = gl_FragCoord.z / gl_FragCoord.w / CameraPlanes.y;

    and then read the depth texture like so:

      depth = texture( depthTexture, textureCoord ).x

    However this will kill the hardware z-buffer optimizations.
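    For reference (standard OpenGL depth math, not taken from the post): with a depth-buffer value d in [0, 1], near plane n and far plane f, the eye-space depth is

      \[ z_{eye} = \frac{2\,n\,f}{f + n - (2d - 1)(f - n)} \]

    and the interpolation factor between the near-plane point and the far-plane point along the same view ray is (z_eye - n) / (f - n), not z_eye / f, so the two normalizations are easy to mix up when feeding mix().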

    Read the article

  • Interaction using Kinect in XNA

    - by Sweta Dwivedi
    So I have written a program to play a sound file whenever my RightHand joint touches the 3D model. It goes like this: even though the code works somehow, it is not very accurate - for example, it will play the sound when my hand is slightly under my 3D object, not exactly on my 3D object. How do I make it more accurate? Here is the code (handX and handY are the values coming from the skeleton data, RightHand.Joint.X etc.); also, this calculation doesn't work with animated sprites, which I need it to do.

      foreach (_3DModel s in Solar)
      {
          float x = (float)Math.Floor(((handX * 0.5f) + 0.5f) * (resolution.X));
          float y = (float)Math.Floor(((handY * -0.5f) + 0.5f) * (resolution.Y));
          float z = (float)Math.Floor((handZ) / 4 * 20000);
          if (Math.Sqrt(Math.Pow(x - s.modelPosition.X, 2) + Math.Pow(y - s.modelPosition.Y, 2)) < 15)
          {
              //Exit();
              PlaySound("hyperspace_activate");
              Console.WriteLine("1" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " " + "handY:" + y + "modelPos.Y:" + s.modelPosition.Y);
          }
          else
          {
              Console.WriteLine("2" + "handx:" + x + "," + " " + "modelPos.X:" + s.modelPosition.X + "," + " " + "handY:" + y + "modelPos.Y:" + s.modelPosition.Y);
          }
      }
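    One thing worth checking (an observation about the snippet, not a verified fix): x and y are screen-space hand coordinates, while s.modelPosition looks like a world-space position, so the comparison mixes two different spaces. A sketch of projecting the model into screen space first, using XNA's Viewport.Project (the helper itself is hypothetical):

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      public static class HandHitTest
      {
          // Hypothetical helper: true when the screen-space hand position is
          // within 'radius' pixels of the model's projected screen position.
          public static bool HandOverModel(GraphicsDevice device,
                                           Vector3 modelWorldPosition,
                                           Matrix world, Matrix view, Matrix projection,
                                           float handScreenX, float handScreenY,
                                           float radius = 15f)
          {
              Vector3 screenPos = device.Viewport.Project(
                  modelWorldPosition, projection, view, world);
              float dx = handScreenX - screenPos.X;
              float dy = handScreenY - screenPos.Y;
              return dx * dx + dy * dy < radius * radius;
          }
      }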

    Read the article

  • How to manage Areas/Levels in an RPG?

    - by Hexlan
    I'm working on an RPG and I'm trying to figure out how to manage the different levels/areas in the game. Currently I create a new state (source file) for every area, defining its unique aspects. My concern is that as the game grows the number of class files will become unmanageable with all the towns, houses, shops, dungeons, etc. that I need to keep track of. I would also prefer to separate my levels from the source code because non-programmer members of the team will be creating levels, and I would like the engine to be as free from game specific code as possible. I'm thinking of creating a class that provides all the functions that will be the same between all the levels/areas with a unique member variable that can be used to look up level specifics from data. This way I only need to define level/area once in the code, but can create multiple instances each with its own unique aspects provided by data. Is this a good way to go about solving the issue? Is there a better way to handle a growing number of levels?
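    A minimal sketch of the single-class, data-driven approach described above (C#; the file format and member names are invented for illustration, not taken from the post):

      using System.Collections.Generic;

      // Hypothetical sketch: one Area class for every level, with the unique
      // parts (tiles, exits, NPCs, scripts) loaded from a data file by key.
      public class Area
      {
          public string Key;                       // e.g. "town_square", "dungeon_01"
          public int[,] Tiles;
          public Dictionary<string, string> Exits; // door name -> target area key
          public List<string> NpcIds;

          public static Area Load(string key, IAreaDataSource data)
          {
              // All areas share this code path; only the data differs.
              return data.Read(key);
          }
      }

      public interface IAreaDataSource
      {
          Area Read(string key);   // could be JSON, XML, a level editor's output, etc.
      }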

    Read the article

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadow-mapping shaders. The shadow map rendering shader is fine and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position.

      struct Pixel
      {
          float4 position : SV_Position;
          float4 light_pos : TEXCOORD2;
          float3 normal : NORMAL;
          float2 texcoord : TEXCOORD;
      };

    The reason that I used the semantic 'TEXCOORD2' on the light's pixel position is because I believe that the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolations. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than, the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code:

      struct Vertex
      {
          float3 position : POSITION;
          float3 normal : NORMAL;
          float2 texcoord : TEXCOORD;
      };

      struct Pixel
      {
          float4 position : SV_Position;
          float4 light_pos : TEXCOORD2;
          float3 normal : NORMAL;
          float2 texcoord : TEXCOORD;
      };

      cbuffer Camera : register (b0)
      {
          matrix world;
          matrix view;
          matrix projection;
      };

      cbuffer Light : register (b1)
      {
          matrix light_world;
          matrix light_view;
          matrix light_projection;
      };

      Pixel RenderVertexShader(Vertex input)
      {
          Pixel output;
          output.position = mul(float4(input.position, 1.0f), world);
          output.position = mul(output.position, view);
          output.position = mul(output.position, projection);
          output.light_pos = mul(float4(input.position, 1.0f), world);
          output.light_pos = mul(output.light_pos, light_view);
          output.light_pos = mul(output.light_pos, light_projection);
          output.texcoord = input.texcoord;
          output.normal = input.normal;
          return output;
      }

    I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader and had the same problem. The problem is evident because both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the validity of the matrices, the cbuffers, and the multiplications. I'm very stumped and have been trying to solve this for quite some time. Misc info: the light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f. Any ideas on what is happening? (This is a repost of my question on Stack Overflow.)
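    For context (standard Direct3D convention, not taken from the post): the light-space depth that gets compared against the shadow map is the perspective-divided value

      \[ d_{light} = \frac{z_{light}}{w_{light}} \in [0, 1] \]

    so light_pos.z being close to, but less than, light_pos.w is what a fragment near the light's far plane is expected to look like; with a 0.1 / 1000 near/far pair, most of the visible range lands above 0.99 anyway because of the hyperbolic depth distribution.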

    Read the article

  • iPhone 3d Model format: .h file, .obj, or some other?

    - by T Reddy
    I'm beginning to write an iPhone game using OpenGL-ES and I've come across a problem with deciding what format my 3D models should be in. I've read (link escapes me at the moment) that some developers prefer the models compiled in Objective-C .h files. Still, others prefer having .obj as these are more portable (i.e., for deployment on non-iPhone platforms). Various 3D game engines seem to support many(?) formats, but I'm not going to use any of these engines as I would like to actually learn OpenGL-ES. Am I putting myself at a disadvantage here by not using a packaged engine? Thanks!

    Read the article

  • Optimal way to learn DirectX?

    - by BluePhase
    I am finding it very difficult to learn DirectX 11. The MSDN website is just full of unorganized information that doesn't seem to help at all. I am particularly looking for something that explains many, if not all, aspects of developing with DirectX 11. I have been searching for weeks and still come up empty. I have found some books, but they don't really explain the fundamentals at all. Thanks in advance.

    Read the article

  • Connecting 2 Vertices in 3DS Max?

    - by Reanimation
    How do you connect two vertices in 3DS Max 2013? I have two vertices which I wish to connect with a line to create an edge (actually several). I have tried everything I can think of and done several Google searches, but they only turn up the method for older versions, which says to use the "connect" button... But I can't find the connect button in my version (see below). This is what my menu looks like: These are the vertices I'm trying to connect: Basically, I've edited an STL file and deleted some edges and vertices. Now I want to fill the gaps and triangulate what's left. Thanks.

    Read the article

  • How to blend the sprite into background?

    - by optimisez
    I am trying to blend the character into the game, but I still cannot remove the blue color in the sprite sheet, and I have discovered that the white area of the sprite is semi-transparent. Before that, the color D3DCOLOR_XRGB(255, 255, 255) is set in D3DXCreateTextureFromFileEx. You will see the fireball through the sprite. After I change the color to D3DCOLOR_XRGB(0, 255, 255), the result will be: Now, I am trying to remove the blue color of the sprite sheet, and my expected result is something like this: Until now, I still cannot figure out how to do that. Any ideas?

      void initPlayer()
      {
          // Create texture.
          hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT, NULL,
              D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
              D3DCOLOR_XRGB(0, 255, 255), NULL, NULL, &player);
      }

      void renderPlayer()
      {
          sprite->Draw(player, &playerRect, NULL,
              &D3DXVECTOR3(playerDest.X, playerDest.Y, 0), D3DCOLOR_XRGB(255, 255, 255));
      }

      void initFireball()
      {
          hr = D3DXCreateTextureFromFileEx(d3dDevice, "fireball.png", 512, 512, D3DX_DEFAULT, NULL,
              D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
              D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &fireball);
      }

      void renderFireball()
      {
          sprite->Draw(fireball, &fireballRect, NULL,
              &D3DXVECTOR3(fireballDest.X, fireballDest.Y, 0), D3DCOLOR_XRGB(255, 255, 255));
      }

    Read the article

  • Box2D platformer movement. Are joints a good idea?

    - by Romeo
    So I smashed my brains trying to make my character move. Since I want to add explosions and bullets later in the game, it wasn't a good idea to mess with the velocity, and the forces/impulses didn't work as I expected, so something stuck in my mind: is it a good idea to put a wheel (circle) at his bottom, invisible to the player, that will do the movement by rotation? I will attach this to my main body with a revolute joint, but I don't really know how to make the main body and the wheel body not collide with each other, since funny things can happen. What is your opinion?

    Read the article

  • How do the Rectangle bounds (x, y, width, height) in libgdx work?

    - by JG22
    I can't work out how to use the Rectangle bounds in libgdx. I am currently using the SuperJumper example, which has 2 or 3 examples, such as:

      pauseBounds = new Rectangle(320 - 64, 480 - 64, 64, 64);

    This is the pause button in the top right corner.

      resumeBounds = new Rectangle(160 - 96, 240, 192, 36);

    This is a rectangular resume button in the middle of the page, in the menu that comes up when the pause button is pressed. Basically my question is aimed at the 320 - 64 and 160 - 96, because I don't know why this is used. I need to create a rectangle that covers the left side of the screen and the same on the right, because I want to create on-screen buttons. I have already created the code for these buttons and I have managed to get them to work, but I can't move the rectangles to where I want. Thank you if you can help.
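    Reading the arithmetic against SuperJumper's 320x480 target resolution: libgdx rectangles are positioned by their bottom-left corner with y pointing up, so 320 - 64 pushes a 64-wide button flush against the right edge and 480 - 64 pushes it against the top. Half-screen touch zones follow the same pattern; a small sketch of the idea (in C# for consistency with the other examples here, names invented):

      // Hypothetical sketch: two touch zones, each covering half of a 320x480
      // screen whose origin is the bottom-left corner (the libgdx convention).
      public struct Rect
      {
          public float X, Y, Width, Height;            // X, Y = bottom-left corner
          public Rect(float x, float y, float w, float h) { X = x; Y = y; Width = w; Height = h; }
      }

      public static class TouchZones
      {
          public const float ScreenWidth = 320, ScreenHeight = 480;

          public static readonly Rect LeftHalf  = new Rect(0, 0, ScreenWidth / 2, ScreenHeight);
          public static readonly Rect RightHalf = new Rect(ScreenWidth / 2, 0, ScreenWidth / 2, ScreenHeight);
      }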

    Read the article

  • How to pass one float as four unsigned chars to a shader via glVertexAttribPointer?

    - by Kog
    For each vertex I use two floats as the position and four unsigned bytes as the color. I want to store all of them in one table, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests came down to one point:

      GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 };
      glEnableVertexAttribArray(0);
      glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices);

      // VER1 - draws red triangle
      // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff };
      // glEnableVertexAttribArray(1);
      // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors);

      // VER2 - draws greenish triangle (not "pure" green)
      // float f = 255 << 24 | 255; //Hex:0xff0000ff
      // float colors2[] = { f, f, f };
      // glEnableVertexAttribArray(1);
      // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors2);

      // VER3 - draws red triangle
      int i = 255 << 24 | 255; //Hex:0xff0000ff
      int colors3[] = { i, i, i };
      glEnableVertexAttribArray(1);
      glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3);

      glDrawArrays(GL_TRIANGLES, 0, 3);

    The above code is used to draw one simple red triangle. My question is: why do versions 1 and 3 work correctly, while version 2 draws a greenish triangle? The hex values are ones I read by inspecting the variable during debugging. They are equal for versions 2 and 3 - so what causes the difference?
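    The likely distinction at play (a general observation about value conversion, not a verified diagnosis of this exact driver): float f = 255 << 24 | 255; converts the integer value 0xFF0000FF to the nearest float, which has a completely different bit pattern, whereas the int version stores those exact bytes, and glVertexAttribPointer only ever sees raw bytes. The same pitfall, sketched in C# for consistency with the other examples here:

      using System;

      public static class BitsVsValue
      {
          public static void Main()
          {
              uint packed = 0xFF0000FFu;            // R=0xFF, G=0, B=0, A=0xFF

              // Value conversion: the number 4278190335 rounded to a float.
              float converted = packed;             // bit pattern is NOT 0xFF0000FF

              // Bit reinterpretation: same 4 bytes, now labelled "float".
              float reinterpreted = BitConverter.ToSingle(BitConverter.GetBytes(packed), 0);

              Console.WriteLine($"converted bits:     0x{BitConverter.ToUInt32(BitConverter.GetBytes(converted), 0):X8}");
              Console.WriteLine($"reinterpreted bits: 0x{BitConverter.ToUInt32(BitConverter.GetBytes(reinterpreted), 0):X8}");
              // Only the reinterpreted value still carries the original color bytes.
          }
      }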

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code down from issuing each individual draw call to using at least a VertexBuffer, and preferably an IndexBuffer, but now that I'm attempting to test my code, I'm getting the error:

      The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing.

    Which makes absolutely no sense to me, as my VertexDeclaration is:

      public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
          new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
          new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
          new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
      );

    Which clearly contains the information. I am attempting to draw with the following lines:

      VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration,
          c.VertexList.Count, BufferUsage.WriteOnly);
      IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
      vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
      ib.SetData<int>(c.IndexList.ToArray());
      GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);

    Where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). Not much else is relevant to this problem. Note that the large commented sections are from an old version of the drawing code.
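    One thing the draw snippet above never does is bind the buffers to the device before drawing; in XNA 4.0 the vertex shader only sees the declaration of whatever buffer is currently set. A sketch of the usual binding sequence, reusing the names from the snippet (this is a guess at the omission, not a confirmed fix for this repository):

      // Bind the buffers so DrawIndexedPrimitives knows where the vertex data
      // (and its declaration) actually live.
      GraphicsDevice.SetVertexBuffer(vb);
      GraphicsDevice.Indices = ib;

      GraphicsDevice.DrawIndexedPrimitives(
          PrimitiveType.TriangleList,
          0,                      // baseVertex
          0,                      // minVertexIndex
          vb.VertexCount,         // numVertices
          0,                      // startIndex
          c.IndexList.Count / 3); // primitiveCount

    (An EffectPass also needs to be applied, e.g. pass.Apply(), before the draw call, if that is not already happening elsewhere in Draw().)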

    Read the article

  • Efficient way to render tile-based map in Java

    - by Lucius
    Some time ago I posted here because I was having some memory issues with a game I'm working on. That has been pretty much solved thanks to some suggestions here, so I decided to come back with another problem I'm having. Basically, I feel that too much of the CPU is being used when rendering the map. I have a Core i5-2500 processor and when running the game, the CPU usage is about 35% - and I can't accept that that's just how it has to be. This is how I'm going about rendering the map:
      - I have the X and Y coordinates of the player, so I'm not drawing the whole map, just the visible portion of it;
      - the number of visible tiles on screen varies according to the resolution chosen by the player (the CPU usage is 35% when playing at a resolution of 1440x900);
      - if a tile is "empty", I just skip drawing it (this didn't visibly lower the CPU usage, but reduced the drawing time by about 20ms);
      - the map is composed of 5 layers;
      - the tiles are 32x32 pixels.
    And just to be on the safe side, I'll post the code for drawing the game here, although it's as messy and unreadable as it can be T_T (I'll try to make it a little readable):

      private void drawGame(Graphics2D g2d) {
          // Width and Height of the visible portion of the map (not of the screen)
          int visionWidht = visibleCols * TILE_SIZE;
          int visionHeight = visibleRows * TILE_SIZE;
          // Since the map can be smaller than the screen, I center it just to be sure
          int xAdjust = (getWidth() - visionWidht) / 2;
          int yAdjust = (getHeight() - visionHeight) / 2;
          // This "deducedX" thing is to move the map a few pixels horizontally,
          // since the player moves by pixels and not full tiles
          int playerDrawX = listOfCharacters.get(0).getX();
          int deducedX = 0;
          if (listOfCharacters.get(0).currentCol() - visibleCols / 2 >= 0) {
              playerDrawX = visibleCols / 2 * TILE_SIZE;
              map_draw_col = listOfCharacters.get(0).currentCol() - visibleCols / 2;
              deducedX = listOfCharacters.get(0).getXCol();
          }
          // "deducedY" is the same deal as "deducedX", but vertically
          int playerDrawY = listOfCharacters.get(0).getY();
          int deducedY = 0;
          if (listOfCharacters.get(0).currentRow() - visibleRows / 2 >= 0) {
              playerDrawY = visibleRows / 2 * TILE_SIZE;
              map_draw_row = listOfCharacters.get(0).currentRow() - visibleRows / 2;
              deducedY = listOfCharacters.get(0).getYRow();
          }
          int max_cols = visibleCols + map_draw_col;
          if (max_cols >= map.getCols()) {
              max_cols = map.getCols() - 1;
              deducedX = 0;
              map_draw_col = max_cols - visibleCols + 1;
              playerDrawX = listOfCharacters.get(0).getX() - map_draw_col * TILE_SIZE;
          }
          int max_rows = visibleRows + map_draw_row;
          if (max_rows >= map.getRows()) {
              max_rows = map.getRows() - 1;
              deducedY = 0;
              map_draw_row = max_rows - visibleRows + 1;
              playerDrawY = listOfCharacters.get(0).getY() - map_draw_row * TILE_SIZE;
          }
          // map_draw_row and map_draw_col represent the coordinate of the upper left tile on the screen
          // iterate through all the tiles on screen and draw them - this is what consumes most of the CPU
          for (int col = map_draw_col; col <= max_cols; col++) {
              for (int row = map_draw_row; row <= max_rows; row++) {
                  Tile[] tiles = map.getTiles(col, row);
                  for (int layer = 0; layer < tiles.length; layer++) {
                      Tile currentTile = tiles[layer];
                      boolean shouldDraw = true;
                      // I only draw the tile if it exists and is not empty (id=-1)
                      if (currentTile != null && currentTile.getId() >= 0) {
                          // The layers above 1 can be drawn behind or in front of the player
                          // according to where it's standing
                          if (layer > 1 && currentTile.getId() >= 0) {
                              if (playerBehind(col, row, layer, listOfCharacters.get(0))) {
                                  behinds.get(0).add(new int[]{col, row});
                                  // the tiles that are in front of the player won't be drawn right now
                                  shouldDraw = false;
                              }
                          }
                          if (shouldDraw) {
                              g2d.drawImage(
                                  tiles[layer].getImage(),
                                  (col - map_draw_col) * TILE_SIZE - deducedX + xAdjust,
                                  (row - map_draw_row) * TILE_SIZE - deducedY + yAdjust,
                                  null);
                          }
                      }
                  }
              }
          }

    There's some more code in this method, but nothing relevant to this question. Basically, the biggest problem is that I iterate over around 5000 tiles (at this specific resolution) 60 times each second. I thought about rendering the visible portion of the map once and storing it into a BufferedImage, and when the player moved, moving the whole image the same amount in the opposite direction and then drawing only the tiles that appeared on the screen - but if I do it like that, I won't be able to have animated tiles (at least I think). That being said, any suggestions?
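    One common compromise for the BufferedImage idea mentioned at the end: cache only the static layers into an off-screen image keyed by the visible region, and redraw just the animated tiles on top of it every frame. A sketch of that general technique (written in C# for consistency with the other examples in this listing, not code from the post; the drawing callbacks are placeholders):

      using System;
      using System.Drawing;

      // Hypothetical sketch: static tile layers are rendered once into a cached
      // bitmap; only animated tiles are drawn per frame on top of it.
      public class MapRenderer
      {
          private Bitmap staticCache;
          private Rectangle cachedRegion = Rectangle.Empty;   // in tile coordinates

          public void Draw(Graphics target, Rectangle visibleTiles,
                           Action<Graphics, Rectangle> drawStaticLayers,
                           Action<Graphics, Rectangle> drawAnimatedTiles)
          {
              if (cachedRegion != visibleTiles)
              {
                  // Rebuild the cache only when the set of visible tiles changes.
                  staticCache?.Dispose();
                  staticCache = new Bitmap(visibleTiles.Width * 32, visibleTiles.Height * 32);
                  using (var g = Graphics.FromImage(staticCache))
                      drawStaticLayers(g, visibleTiles);
                  cachedRegion = visibleTiles;
              }

              target.DrawImageUnscaled(staticCache, 0, 0);
              drawAnimatedTiles(target, visibleTiles);   // animated tiles stay dynamic
          }
      }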

    Read the article

  • Accessing managers from game entities/components

    - by Boreal
    I'm designing an entity-component engine in C# right now, and all components need to have access to the global event manager, which sends off inter-entity events (every entity also has a local event manager). I'd like to be able to simply call functions like this:

      GlobalEventManager.Publish("Foo", new EventData());
      GlobalEventManager.Subscribe("Bar", OnBarEvent);

    without having to do this:

      class HealthComponent
      {
          private EventManager globalEventManager;

          public HealthComponent(EventManager gEM)
          {
              globalEventManager = gEM;
          }
      }

      // later on...
      EventManager globalEventManager = new EventManager();
      Entity playerEntity = new Entity();
      playerEntity.AddComponent(new HealthComponent(globalEventManager));

    How can I accomplish this? EDIT: I solved it by creating a singleton called GlobalEventManager. It derives from the local EventManager class and I use it like this:

      GlobalEventManager.Instance.Publish("Foo", new EventData());
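    A minimal sketch of the singleton described in the edit (assuming nothing about the poster's actual EventManager beyond a Publish method; Lazy<T> is just one thread-safe way to build it):

      using System;

      public class EventManager
      {
          public virtual void Publish(string eventName, object data)
          {
              // dispatch to subscribers...
          }
      }

      // Derives from the local EventManager, as described in the edit.
      public sealed class GlobalEventManager : EventManager
      {
          private static readonly Lazy<GlobalEventManager> instance =
              new Lazy<GlobalEventManager>(() => new GlobalEventManager());

          public static GlobalEventManager Instance => instance.Value;

          private GlobalEventManager() { }
      }

      // Usage, matching the post:
      //   GlobalEventManager.Instance.Publish("Foo", new EventData());

    The trade-off is the usual one with singletons: components become harder to test in isolation, which is why some engines still prefer passing the manager in explicitly.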

    Read the article

  • How to display image in second layer in Cocos2d

    - by PeterK
    I am very new to Cocos2d and am testing displaying an image over the "Hello World" text on a second layer, and I need help to get it to work. I guess it is some basic stuff here and I appreciate any tips etc. with this. I know that it works if I put the display code (myLayer1) in the init, or if I do the call [self goHere] from the init in myLayer1, but I want to call goHere directly. I have the following code:

    HelloWorld.m:

      #import "HelloWorldLayer.h"
      #import "myLayer1.h"

      // HelloWorldLayer implementation
      @implementation HelloWorldLayer

      +(CCScene *) scene
      {
          // 'scene' is an autorelease object.
          CCScene *scene = [CCScene node];
          // 'layer' is an autorelease object.
          HelloWorldLayer *layer = [HelloWorldLayer node];
          myLayer1 *layer1 = [myLayer1 node];
          // add layer as a child to scene
          [scene addChild: layer];
          [scene addChild: layer1];
          // return the scene
          return scene;
      }

      // on "init" you need to initialize your instance
      -(id) init
      {
          // always call "super" init
          // Apple recommends to re-assign "self" with the "super" return value
          if( (self=[super init])) {
              // create and initialize a Label
              CCLabelTTF *label = [CCLabelTTF labelWithString:@"Hello World" fontName:@"Marker Felt" fontSize:64];
              // ask director for the window size
              CGSize size = [[CCDirector sharedDirector] winSize];
              // position the label on the center of the screen
              label.position = ccp( size.width /2 , size.height/2 );
              // add the label as a child to this Layer
              [self addChild: label];
              myLayer1 *a1 = [myLayer1 new];
              [a1 goHere];
              [myLayer1 release];
          }
          return self;
      }

    myLayer1.m:

      #import "myLayer1.h"

      @implementation myLayer1

      -(void)goHere
      {
          NSLog(@">>>>goHere<<<<");
          CGSize size = [[CCDirector sharedDirector] winSize];
          CCSprite *vv = [CCSprite spriteWithFile:@"hand.png"];
          vv.position = ccp( size.width /2 , size.height/2 );
          [self addChild:vv z:3];
      }

      -(id) init
      {
          // always call "super" init
          // Apple recommends to re-assign "self" with the "super" return value
          if( (self=[super init])) {
          }
          return self;
      }

      @end

    Read the article

  • Which physics phenomena can be simulated properly with Box2D or Bullet Physics? [on hold]

    - by user3585425
    Knowing that Box2D or Bullet Physics can't simulate Newton's cradle (because of multiple bodies being in contact at the same time, if I understand correctly), is there a set of physics phenomena involving two or more objects that can still be simulated properly? For example, I'm thinking about lightweight objects launched towards heavyweight objects. If the object is destroyed on contact, it would not make a difference if the energy is not transmitted correctly on impact.

    Read the article
