Search Results

Search found 35343 results on 1414 pages for 'development tools'.


  • How to add reflection definition to read JSON files in web game

    - by user3728735
    I have a game which I deployed for desktop and Android. I can read JSON data and create my levels, but when it comes to reading JSON files from the web app, I get an error that logs "cannot read the json file". I researched a lot and found out that I should add my JSON config class to the reflection configuration, so I added this line to gameName.gwt.xml, which is in the core folder: <extend-configuration-property name="gdx.reflect.include" value="com.las.get.level.LevelConfig"/> But it did not work. I have no idea where I should place this line, or what else I should change, to make my web app able to read JSON files.
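    A minimal sketch of where such a line usually sits, assuming gameName.gwt.xml is the GWT module file the HTML build actually uses (if the web project has its own module file, the same applies there): the <extend-configuration-property> element needs to be a direct child of <module>, alongside the existing entries.

        <module>
            <!-- existing inherits / source entries stay as they are -->
            <extend-configuration-property name="gdx.reflect.include"
                value="com.las.get.level.LevelConfig" />
        </module>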

    Read the article

  • Drawing flaming letters in 3d on OpenGL ES 2.0

    - by Chiquis
    I am a bit confused about how to achieve this. What I want is to "draw with flames". I have achieved this with textures successfully, but now my concern is about doing it with particles to achieve the flaming effect. Am I supposed to have a path along which I add many particle emitters that will "be emitting flames"? I understand the concept for 2D, but for 3D, are the particles (which are quads) always supposed to be facing the user? Edit: Something else I'm worried about is the performance hit from having that many particle emitters, because there can be many letters and drawings at the same time, and each of these elements will have many particle emitters.
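    On the "facing the user" part, a minimal sketch of the usual billboarding approach, assuming nothing about any particular engine (the Vec3 type and the way the camera vectors are obtained are placeholders): each particle quad is rebuilt every frame from the camera's right and up directions in world space (for a typical view matrix, the first two rows of its 3x3 rotation part), so it always faces the viewer.

        #include <array>

        struct Vec3 { float x, y, z; };  // placeholder math type, not from any particular library

        Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

        // Build the four corners of a camera-facing quad around the particle's centre.
        std::array<Vec3, 4> billboardCorners(Vec3 centre, Vec3 camRight, Vec3 camUp, float halfSize)
        {
            Vec3 r = scale(camRight, halfSize);
            Vec3 u = scale(camUp, halfSize);
            return {
                add(add(centre, scale(r, -1.0f)), scale(u, -1.0f)),  // bottom-left
                add(add(centre, r),               scale(u, -1.0f)),  // bottom-right
                add(add(centre, r),               u),                // top-right
                add(add(centre, scale(r, -1.0f)), u),                // top-left
            };
        }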

    Read the article

  • Corona SDK events dispatched with dispatchEvent() are handled directly upon call. Why so?

    - by Amoxus
    I noticed to my surprise that an event created with dispatchEvent(event) gets handled directly when called, and not together with other events at a specific phase of the frame loop. Two main reasons for having an event system are: so that you can call code B from code A but still prioritize code A; and to make sure there are no freaky loopedy loops where code A calls code B calls code A... I wonder what Ansca's rationale is for having events handled directly this way. And does Corona handle loopedy loops and other such pitfalls gracefully? The following code demonstrates dispatchEvent():

        T = {}
        Z = display.newRect(100,100,100,100)

        function T.doSomething()
            print("T.doSomething: begun")
            local event = { name="myEventType", target=T }
            Z:dispatchEvent( event )
            print("T.doSomething: ended")
        end

        function Z.sayHello(event)
            print("Z.sayHello: begun and ended")
        end

        Z:addEventListener("myEventType", Z.sayHello)

        print("Main: begun")
        T.doSomething()
        print("Main: ended")

    However, Ansca claims the contrary at http://developer.coronalabs.com/reference/index/objectdispatchevent Can anyone clear this up a little? (Using Corona simulator V 2012.840)

    Read the article

  • Isometric tile selection

    - by Dylan Lundy
    I'm not all that good with maths. I'm trying to make a function to convert mouse coordinates into a particular tile in an isometric view. All of the algorithms I have seen so far work with the X & Y axes going diagonally; my game is currently set up like this, and I would like to keep it that way. Is there an algorithm so that if the mouse were at the red dot, it would return the coordinates of the tile it is sitting on? (6,2)
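    For reference, a minimal sketch of the conventional screen-to-tile conversion for a 2:1 diamond layout (tiles twice as wide as they are tall, map origin at the top corner of tile (0,0)); since the layout described above apparently orients the axes differently, the terms would need swapping or offsetting, but the inversion idea is the same.

        #include <cmath>

        struct TileCoord { int x, y; };

        // screenX/screenY are mouse coordinates relative to the map origin;
        // tileWidth/tileHeight describe the diamond, e.g. 64x32.
        TileCoord pickTile(float screenX, float screenY, float tileWidth, float tileHeight)
        {
            float fx = screenX / (tileWidth  * 0.5f);
            float fy = screenY / (tileHeight * 0.5f);
            TileCoord t;
            t.x = static_cast<int>(std::floor((fy + fx) * 0.5f));   // down-right axis
            t.y = static_cast<int>(std::floor((fy - fx) * 0.5f));   // down-left axis
            return t;
        }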

    Read the article

  • OpenGL render to texture causing edge artifacts

    - by mysticalOso
    This is my first post here so any help would be massively appreciated :) I'm using C++ with SDL and OpenGL 3.3. When rendering directly to the screen I get the following result, and when I render to a texture this happens. Anti-aliasing is turned off for both. I'm guessing this has something to do with depth buffer accuracy, but I've tried a lot of different methods to improve the result with no success :( I'm currently using the following code to set up my FBO:

        GLuint frameBufferID;
        glGenFramebuffers(1, &frameBufferID);
        glBindFramebuffer(GL_FRAMEBUFFER, frameBufferID);

        glGenTextures(1, &coloursTextureID);
        glBindTexture(GL_TEXTURE_2D, coloursTextureID);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        //Depth buffer setup
        GLuint depthrenderbuffer;
        glGenRenderbuffers(1, &depthrenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, SCREEN_WIDTH, SCREEN_HEIGHT);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer);

        glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, coloursTextureID, 0);

        GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
        glDrawBuffers(1, DrawBuffers);

        // if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) return false;

    Thank you so much for any help :)

    Read the article

  • OpenGL Lighting

    - by gopgop
    I have a simple day and night cycle: during the day I disable OpenGL lighting, and at night I enable it. When I enable it, everything appears darker. My question is: how would I make it so that at a specific spot there is a light that lights up only its surrounding area, for example: http://media.giantbomb.com/uploads/0/276/1414275-light_large.png Where the light is in that picture is where I want to position my light. My application is in 2D.
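    A minimal sketch of one way this is often done with the fixed-function pipeline, assuming the geometry has normals and is tessellated finely enough for per-vertex lighting to show a local falloff (many 2D games instead simply blend a radial light texture over the scene). The lamp position and colours below are placeholder values.

        // Positional light with attenuation: w = 1.0f makes GL_POSITION a point in
        // space rather than a direction, and the attenuation terms limit its reach.
        GLfloat lampX = 400.0f, lampY = 300.0f;                  // placeholder lamp position
        GLfloat lightPos[]    = { lampX, lampY, 0.0f, 1.0f };
        GLfloat lightColour[] = { 1.0f, 0.9f, 0.6f, 1.0f };      // warm lamp colour
        GLfloat dimAmbient[]  = { 0.05f, 0.05f, 0.1f, 1.0f };    // dark "night" base level

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightModelfv(GL_LIGHT_MODEL_AMBIENT, dimAmbient);

        glLightfv(GL_LIGHT0, GL_POSITION, lightPos);             // set after the view transform is applied
        glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColour);
        glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 0.2f);
        glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION,   0.01f);
        glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.0005f);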

    Read the article

  • How to get distance from point to line with distinction between side of line?

    - by tesselode
    I'm making a 2D racing game. I'm taking the nice standard approach of having a set of points defining the center of the track and determining whether the car is off the track by checking its distance from the nearest point. The nicest way I've found of doing this is the formula: d = |Am + Bn + C| / sqrt(A^2 + B^2) Unfortunately, to have proper collision resolution, I need to know which side of the line the car is hitting, but I can't do that with this formula because it only returns positive numbers. So my question is: is there a formula that will give me positive or negative numbers based on which side of the line the point is on? Can I just get rid of the absolute value in the formula, or do I need to do something else?
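    For what it's worth, a minimal sketch of the signed version, assuming the line is given in implicit form Ax + By + C = 0 and the point is (m, n): dropping the absolute value keeps the sign, and points on opposite sides of the line get opposite signs (which side counts as positive depends on the direction of the normal (A, B)).

        #include <cmath>

        // Signed distance from point (m, n) to the line Ax + By + C = 0.
        float signedDistance(float A, float B, float C, float m, float n)
        {
            return (A * m + B * n + C) / std::sqrt(A * A + B * B);
        }

        // Usage sketch: signedDistance(...) > 0 on one side of the line, < 0 on the other.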

    Read the article

  • Marching squares: Finding multiple contours within one source field?

    - by TravisG
    Principally, this is a follow-up question to a problem from a few weeks ago, even though this is about the algorithm in general, without application to my actual problem. The algorithm basically searches through all lines in the picture, starting from the top left, until it finds a pixel that is a border. In pseudo-C++:

        int start = 0;
        for(int i = 0; i < amount_of_pixels; ++i)
        {
            if(pixels[i] == border)
            {
                start = i;
                break;
            }
        }

    When it finds one, it starts the marching squares algorithm and finds the contour of whatever object the pixel belongs to. Let's say I have something like this: Where everything except the color white is a border. And I have found the contour points of the first blob: For the general algorithm it's over. It found a contour and has done its job. How can I move on to the other two blobs to find their contours as well?
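    One common way to handle this, sketched below under the assumption that the existing marching-squares routine can report which border pixels it walked over (traceContour and Contour are hypothetical names): keep scanning after each traced contour and skip border pixels that already belong to one.

        #include <vector>

        struct Contour { std::vector<int> pixels; };   // indices of the border pixels the trace passed through

        // Assumed to be the existing marching-squares routine, started at one border pixel.
        Contour traceContour(int start);

        std::vector<Contour> findAllContours(const std::vector<int>& pixels, int border)
        {
            std::vector<Contour> contours;
            std::vector<bool> claimed(pixels.size(), false);   // border pixels already assigned to a contour

            for (std::size_t i = 0; i < pixels.size(); ++i)
            {
                if (pixels[i] == border && !claimed[i])
                {
                    Contour c = traceContour(static_cast<int>(i));
                    for (int p : c.pixels)
                        claimed[p] = true;                     // skip this blob on later iterations
                    contours.push_back(c);
                }
            }
            return contours;
        }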

    Read the article

  • How do you cope mentally with one very long piece of work?

    - by Asher Einhorn
    This is my first games industry job, and my task is to take out one major game component and put in a newer one. So far it's been 5 weeks, and I'm still just staring at errors. I think it could be months before it's at the point where it can compile. It's really getting me down. I'm just changing things over, not really writing anything myself, and it's just endless. I fix a thousand errors and nine thousand take their place. I'm sure this must be a common thing, so I was just wondering: how do you cope with this? It doesn't seem like I can break it down into little chunks at all.

    Read the article

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX / OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats. My questions:

    - What is the "usual" or "normal" way to store bitmaps in memory? (in C++ engines/projects?)
    - How do I store bitmaps for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?)
    - How do I store bitmaps for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?)

    Some possible methods:

    - Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one continuous memory chunk (could be 1-10 MB).
    - Use a form of "sparse" data storage so each line of the bitmap is allocated separately, reusing more memory and requiring smaller contiguous memory segments.
    - Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used. Delete the unpacked data if it's not used for 10 secs.
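    As an illustration of the first option above, a minimal sketch (not tied to any particular engine) of a packed 32-bpp bitmap kept in one contiguous allocation with direct per-pixel access:

        #include <cstdint>
        #include <vector>

        // Packed 32-bpp bitmap stored row-major in a single contiguous buffer.
        struct Bitmap32
        {
            int width;
            int height;
            std::vector<std::uint32_t> pixels;   // width * height packed pixels

            Bitmap32(int w, int h)
                : width(w), height(h), pixels(static_cast<std::size_t>(w) * h) {}

            std::uint32_t& at(int x, int y)
            {
                return pixels[static_cast<std::size_t>(y) * width + x];
            }
            const std::uint32_t& at(int x, int y) const
            {
                return pixels[static_cast<std::size_t>(y) * width + x];
            }
        };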

    Read the article

  • Normalizing the direction to check if able to move

    - by spartan2417
    I have a room with 4 walls along the x and z axes respectively. My player, who is in first person (and therefore the camera), should have collision detection with these walls. I'm relatively new to this, so please bear with me. I believe the way to do this is to calculate the direction and distance to the wall from the camera and then normalise the direction. However, I can only get this far before I don't know what to do. I think you should work out the angle and the direction you're facing? Here, _dx and _dz are the small buffer in front of the camera:

        float CalcDirection(float Cam_x, float Cam_z, float Wall_x, float Wall_z)
        {
            //Calculate direction and distance to obstacle.
            float ob_dirx = Cam_x + _dx - Wall_x;
            float ob_dirz = Cam_z + _dz - Wall_z;
            float ob_dist = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz);

            //Normalise directions
            float ob_norm = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz);
            ob_dirx = (ob_dirx)/ob_norm;
            ob_dirz = (ob_dirz)/ob_norm;

    Can anyone explain in layman's terms how I work out the angle?
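    A minimal sketch of one way to get an angle out of that direction, assuming the camera's yaw (its facing angle in the XZ plane, in radians) is available as cam_yaw, which is not part of the original code:

        #include <cmath>

        // Angle of the (already normalised) direction vector in the XZ plane,
        // expressed relative to the camera's facing direction.
        float angleToObstacle(float ob_dirx, float ob_dirz, float cam_yaw)
        {
            float obstacleAngle = std::atan2(ob_dirz, ob_dirx); // angle of the direction vector
            float diff = obstacleAngle - cam_yaw;               // how far off the facing direction it is

            // wrap into [-pi, pi] so "slightly left" and "slightly right" stay small
            const float pi = 3.14159265f;
            while (diff >  pi) diff -= 2.0f * pi;
            while (diff < -pi) diff += 2.0f * pi;
            return diff;
        }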

    Read the article

  • LWJGL GL_QUADS texture artifact

    - by Dajgoro Labinac
    I managed to get LWJGL working in Java, and I loaded a test image (a TV test card), but I keep getting weird artifacts outside the image. Image link: http://tinypic.com/r/vhv9g/6 Code:

        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2i(10, 10);
        glTexCoord2f(1, 0); glVertex2i(500, 10);
        glTexCoord2f(1, 1); glVertex2i(500, 500);
        glTexCoord2f(0, 1); glVertex2i(10, 500);
        glEnd();

    What could be the cause?
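    One common cause of artifacts just outside a textured quad (not necessarily the cause here) is the default GL_REPEAT wrap mode sampling the opposite edge of the texture, especially if the loader padded the image to a power-of-two size. A hedged sketch of clamping the wrap mode when the texture is bound, written as plain C-style GL (in LWJGL the same constants live in GL11/GL12):

        // Clamp sampling at the texture edges instead of wrapping around.
        glBindTexture(GL_TEXTURE_2D, textureID);   // textureID: whatever handle the loader returned
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);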

    Read the article

  • Using allocators for different systems

    - by chadb
    I am going over the memory architecture for my game, and even though I know my memory budgets may not be final, I am at the point where I can start using them in a general sense. I know that I will need several allocators (for systems such as audio, rendering, etc.) but I am unsure of how they will be accessed. I do not use singletons, so I can't just have something such as AudioManager::GetInstance().get_allocator(). Instead, I need to find a different method of access, but I am not sure how. How can I store and call the allocators needed by several different systems across the engine in an efficient manner?
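    A minimal sketch of one singleton-free arrangement, assuming an Allocator interface of your own (all names here are illustrative): each system is handed the allocator it should use when it is constructed, and a small struct owned by the engine groups them.

        class Allocator;   // assumed: your engine's allocator interface

        // Owned by the engine; filled in once the budgets are decided.
        struct EngineAllocators
        {
            Allocator* audio    = nullptr;
            Allocator* renderer = nullptr;
            Allocator* gameplay = nullptr;
        };

        class AudioSystem
        {
        public:
            explicit AudioSystem(Allocator& alloc) : m_alloc(&alloc) {}
            // every allocation this system makes goes through m_alloc
        private:
            Allocator* m_alloc;
        };

        // Wiring at startup (sketch):
        //   EngineAllocators allocators = createAllocators(budgets);
        //   AudioSystem audio(*allocators.audio);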

    Read the article

  • View space lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use 3 G-buffers for storing position (R32F), normal (G16R16F) and albedo (ARGB8). I use the sphere map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value. I want two things:

    - To avoid the per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-buffer without needing a matrix multiplication?
    - To store the normal in view space. All lighting in my engine is currently in world space, and I want to do the lighting in view space to speed up my lighting pass.

    In short, I want an optimized lighting pass for my deferred engine.

    Read the article

  • How can I change this isometric engine to make it so that you could distinguish between blocks that are on different planes?

    - by l5p4ngl312
    I have been working on an isometric Minecraft-esque game engine for a strategy game I plan on making. As you can see, it really needs some sort of shading. It is difficult to distinguish between separate elevations when the camera is facing away from the slope, because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block. I am using LWJGL. Are there any other approaches to take? Here's a link to a screenshot from the engine: http://i44.tinypic.com/qxqlix.jpg

    Read the article

  • Smooth drag scrolling of an Isometric map - Html5

    - by user881920
    I have implemented a basic isometric tile engine that can be explored by dragging the map with the mouse. Please see the fiddle below and drag away: http://jsfiddle.net/kHydg/14/ The code broken down is (CoffeeScript):

    The draw function:

        draw = ->
            requestAnimFrame draw
            canvas.width = canvas.width
            for row in [0..width]
                for col in [0..height]
                    xpos = (row - col) * tileHeight + width
                    xpos += (canvas.width / 2) - (tileWidth / 2) + scrollPosition.x
                    ypos = (row + col) * (tileHeight / 2) + height + scrollPosition.y
                    context.drawImage(defaultTile, Math.round(xpos), Math.round(ypos), tileWidth, tileHeight)

    Mouse drag-scrolling control:

        scrollPosition =
            x: 0
            y: 0

        dragHelper =
            active: false
            x: 0
            y: 0

        window.addEventListener 'mousedown', (e) =>
            handleMouseDown(e)
        , false

        window.addEventListener 'mousemove', (e) =>
            handleDrag(e)
        , false

        window.addEventListener 'mouseup', (e) =>
            handleMouseUp(e)
        , false

        handleDrag = (e) =>
            e.preventDefault()
            if dragHelper.active
                x = e.clientX
                y = e.clientY
                scrollPosition.x -= Math.round((dragHelper.x - x) / 28)
                scrollPosition.y -= Math.round((dragHelper.y - y) / 28)

        handleMouseUp = (e) =>
            e.preventDefault()
            dragHelper.active = false

        handleMouseDown = (e) =>
            e.preventDefault()
            x = e.clientX
            y = e.clientY
            dragHelper.active = true
            dragHelper.x = x
            dragHelper.y = y

    The problem: as you can see from the fiddle, the dragging action is OK but not perfect. How would I change the code to make the dragging smoother? What I would like is for the point of the map you click on to stay under the mouse pointer whilst you drag, the same as they have done here: http://craftyjs.com/demos/isometric/

    Read the article

  • Speed up content loading

    - by user1806687
    I am using the WinForms sample downloaded from the Microsoft website. The problem is that the model loading time is quite long, using:

        contentBuilder.Add(ModelPath, ModelName, null, "ModelProcessor");
        contentManager.Load<Model>(ModelName);

    Even a simple model, such as a cube with no textures, takes 4+ seconds to load. Now, I am no expert on this, but is there any way to decrease the loading time? EDIT: I've gone through the code and found out that calling contentBuilder.Build(), which comes right after the contentBuilder.Add() call, takes up most of the time.

    Read the article

  • DirectX 9.0c and C++ GUI

    - by SullY
    Well, I'm trying to code a GUI for my engine, but I've got some problems. I know how to make a UI overlay, but buttons are still black magic for me. Everything I tried was too complicated once it got big. For example, I tried checking whether the mouse position matches a pixel that is showing the button, but with bigger areas that gets too complicated. Now I'm searching for a tutorial on how to implement your own GUI; I'm really confused about it. I hope you have or know some good tutorials. By the way, I took a look at the DXUTSample, but it's too big to get an overview of.
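    For what it's worth, a minimal sketch of the rectangle-based hit testing most GUI code uses instead of per-pixel checks: each button stores its screen rectangle and clicks are tested against that (the names here are illustrative, not from DXUT):

        // Axis-aligned screen rectangle for a widget.
        struct Rect { int x, y, width, height; };

        bool contains(const Rect& r, int px, int py)
        {
            return px >= r.x && px < r.x + r.width &&
                   py >= r.y && py < r.y + r.height;
        }

        // Usage sketch:
        //   if (contains(button.bounds, mouseX, mouseY) && mousePressed) button.onClick();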

    Read the article

  • Abstract skill/talent system implementation

    - by kiliki
    I've been making small 2D games for about 3 years now (XNA and more recently LWJGL/Slick2D). My latest idea would involve some form of "talent tree" system in a real time game. I've been wracking my brain but can't think of a structure to hold a talent. Something like "Your melee attack is an instant kill if behind the target" I'd like to come up with an abstract object rather than putting random conditionals into other methods. I've solved some relatively complex problems before but I don't even know where to begin with this one. Any help would be appreciated - Java, pseudocode or general concepts are all great.
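    One possible shape for such a structure, sketched in C++ for concreteness (the question welcomes Java or pseudocode, and the idea carries over): a talent is data plus a condition predicate and an effect, so combat code iterates over talents instead of accumulating ad-hoc conditionals.

        #include <functional>
        #include <string>
        #include <vector>

        // Everything the talent needs to inspect or modify about one attack.
        struct AttackContext
        {
            bool attackerBehindTarget = false;
            float damage = 0.0f;
            bool instantKill = false;
        };

        struct Talent
        {
            std::string name;
            std::function<bool(const AttackContext&)> condition;  // when does it apply?
            std::function<void(AttackContext&)> effect;           // what does it change?
        };

        void applyTalents(const std::vector<Talent>& talents, AttackContext& ctx)
        {
            for (const auto& t : talents)
                if (t.condition(ctx))
                    t.effect(ctx);
        }

        // Example: "melee attack is an instant kill if behind the target"
        //   Talent backstab{"Backstab",
        //       [](const AttackContext& c) { return c.attackerBehindTarget; },
        //       [](AttackContext& c)       { c.instantKill = true; }};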

    Read the article

  • I get GL_INVALID_VALUE after calling glTexSubImage2D

    - by user892644
    I am trying to figure out why my texture allocation does not work. Here is the code:

        glTexStorage2D(GL_TEXTURE_2D, 2, GL_RGBA8, 2048, 2048);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2048, 2048, GL_RGB, GL_UNSIGNED_SHORT_5_6_5_REV, &BitMap[0]);

    glTexSubImage2D generates GL_INVALID_VALUE, even though the maximum texture size allowed on my card is 16384x16384. The source image is 16-bit (red 5, green 6, blue 5).

    Read the article

  • OpenGL doesn't draw (3.3+) [on hold]

    - by Dhiego Magalhães
    Brief: I've been following this tutorial about OpenGL for 2 days and I still can't get a triangle drawn, so I'm asking for help here. The tutorial covers OpenGL 3.3-style programming, using vertex arrays, buffers, etc. The libraries are GLFW3 and GLEW, and I set them up myself. The screen stays black all the time. Full code: link here (it's just like a Hello World OpenGL program). Further details: I get no errors at all. I downloaded a tool to test my video card, and it supports OpenGL 4.1+. Standard OpenGL drawing code (from earlier versions), such as this one, works normally. I'm using Microsoft Visual Studio 10.0. I presume all the OpenGL setup was done right: I added Additional Dependencies to the linker as glew32.lib, opengl32.lib, glfw3.lib. The glew.dll was placed in SysWOW64, because I'm running 64-bit Windows and GLEW is 32-bit. Notes: I've been working hard to find out what this is, but I can't. I would appreciate it if anyone could test this code for me, so I can know whether I implemented something wrong and that it's not my code.

    Read the article

  • What are some ways to separate game logic from animations and the draw loop?

    - by TMV
    I have only previously made Flash games, using MovieClips and such to separate my animations from my game logic. Now I am trying my hand at making a game for Android, but the game programming theory around separating these things still confuses me. I come from a background of developing non-game web applications, so I am versed in MVC-like patterns and am stuck in that mindset as I approach game programming. I want to abstract my game by having, for example, a game board class that contains the data for a grid of tiles, with instances of a tile class that each contain properties. I can give my draw loop access to this and have it draw the game board based on the properties of each tile, but I don't understand where exactly animation should go. As far as I can tell, animation sits somewhere between the abstracted game logic (model) and the draw loop (view). With my MVC mindset, it's frustrating trying to decide where animation is actually supposed to go: it has quite a bit of data associated with it, like a model, but it seemingly needs to be closely coupled with the draw loop in order to have things like frame-independent animation. How can I break out of this mindset and start thinking about patterns that make more sense for games?
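    A minimal sketch (in C++ for concreteness; the structure ports to Java on Android) of one common arrangement, assuming nothing about any particular framework: the model only flips logical state, and a view-side animation object owns all frame and timing data, so the draw loop reads from it without feeding anything back into the game logic.

        // Pure game state: no frames, no timers, nothing the renderer owns.
        struct TileModel { bool exploding = false; };

        // View-side companion: watches the model and advances its own animation clock.
        class TileAnimation
        {
        public:
            void update(const TileModel& tile, float dt)
            {
                if (tile.exploding) m_time += dt;        // advance only while the logical state says so
                m_frame = static_cast<int>(m_time * framesPerSecond) % frameCount;
            }
            int currentFrame() const { return m_frame; } // the draw loop reads this, nothing else
        private:
            static constexpr float framesPerSecond = 12.0f;
            static constexpr int frameCount = 8;
            float m_time = 0.0f;
            int m_frame = 0;
        };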

    Read the article

  • Any way to edit Warcraft MDX or MDL Animated models?

    - by Aralox
    I have been searching for a while for a way to get an animated MDL or MDX model into any 3D animation package (such as Blender), but so far have not had any success. I found a few methods of getting textured static MDX or MDL models into Blender/Milkshape/Hexagon, but no one seems to have written an importer that deals with the MDL/MDX model's keyframe animation. On that note, if anyone knows of a way of importing a keyframe-animated 3DS model into Blender, a lot of people and I would appreciate it if you could let us know. Thanks for any help! :) PS: For anyone curious about getting static MDL or MDX into Blender, see here: http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Import-Export/WarCraft_MDL

    Read the article

  • Point line collision reaction

    - by user4523
    I am trying to program point vs. line segment collision detection and reaction. I am doing this for fun and to learn. The point moves (it has a velocity, and can be controlled by the user), whilst the lines are straight and stationary. The lines are not axis-aligned. Everything is in 2D. It is quite straightforward to work out whether a collision has occurred. For each frame, the point moves from A to B. AB is a line, and if it crosses the line segment, a collision has occurred (or will occur) and I am able to work out the point of intersection (poi). The problem I am having is with the reaction. Ideally I would like the point to be prevented from moving across the line. In one frame, I can move the point back to the poi (or only allow it to move as far as the poi), and alter the velocity. The problem I am having with this approach (I think) is that next frame the user may try to cross the line again. Although the point is on the poi, it may not be exactly on the line. Since the line is not axis-aligned, I think there is always some subtle rounding issue (a float representation of a point on a line might be rounded to a point that is slightly on one side or the other). Because of this, next frame the path might not intersect the line (because it can start on the other side and move away from it) and the point is effectively allowed to cross the line. Firstly, does this analysis sound correct? Having accepted (maybe) that I cannot always exactly position the point on the line, I tried to move the point away from the line slightly (either along the normal to the line, or along the path vector). I then get a problem at edges. Attempting to fix one collision by moving the point away from the line (even slightly) can cause it to cross another line (one shape I am dealing with is a star, with sharp corners). This can mean that the solution to one collision inadvertently creates another collision, which is ignored. Again, does this sound correct? Anyway, whatever I try, I am having difficulty with edges, and the point is occasionally able to penetrate the polygons and cross lines, which is undesirable. Whilst I can find a lot of information about collision detection on the web (and on this site), I can find precious little information on collision reaction. Does anyone know of any good point-line collision reaction tutorials? Or is my approach too flawed or overcomplicated?

    Read the article

  • Efficient Way to Draw Grids in XNA

    - by sm81095
    So I am working on a game right now, using MonoGame as my framework, and it has come time to render my world. My world is made up of a grid (think Terraria, but top-down instead of from the side), and it has multiple layers of grids in a single world. Knowing how inefficient it is to call SpriteBatch.Draw() a lot of times, I tried to implement a system where a tile is only drawn if it isn't hidden by the layers above it. The problem is, I'm getting worse performance by checking whether it's hidden than when I just let everything draw even if it's not visible. So my question is: how do I efficiently check whether a tile is hidden, to cut down on the Draw() calls? Here is my draw code for a single layer, drawing the floors and then the tiles (which act like walls):

        public void Draw(GameTime gameTime)
        {
            int drawAmt = 0;
            int width = Tile.TILE_DIM;
            int startX = (int)_parent.XOffset;
            int startY = (int)_parent.YOffset;

            //Gets the starting tiles and the dimensions to draw tiles, so only onscreen tiles
            //are drawn, allowing for the drawing of large worlds
            int tileDrawWidth = ((CIGame.Instance.Graphics.PreferredBackBufferWidth / width) + 4);
            int tileDrawHeight = ((CIGame.Instance.Graphics.PreferredBackBufferHeight / width) + 4);
            int tileStartX = (int)MathHelper.Clamp((-startX / width) - 2, 0, this.Width);
            int tileStartY = (int)MathHelper.Clamp((-startY / width) - 2, 0, this.Height);

            #region Draw Floors and Tiles
            CIGame.Instance.GraphicsDevice.SetRenderTarget(_worldTarget);
            CIGame.Instance.GraphicsDevice.Clear(Color.Black);
            CIGame.Instance.SpriteBatch.Begin();

            //Draw floors
            for (int x = tileStartX; x < (int)MathHelper.Clamp(tileStartX + tileDrawWidth, 0, this.Width); x++)
            {
                for (int y = tileStartY; y < (int)MathHelper.Clamp(tileStartY + tileDrawHeight, 0, this.Height); y++)
                {
                    //Check if this tile is hidden by the layers above it
                    bool visible = true;
                    for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++)
                    {
                        if (this.LayerNumber != (_parent.Layers - 1) &&
                            (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f || _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f))
                        {
                            visible = false;
                            break;
                        }
                    }

                    //Only draw if visible under the tile above it
                    if (visible && this.GetTileAt(x, y).Opacity < 1.0f)
                    {
                        Texture2D tex = WorldTextureManager.GetFloorTexture((Floor)_floors[x, y]);
                        Rectangle source = WorldTextureManager.GetSourceForIndex(((Floor)_floors[x, y]).GetTextureIndexFromSurroundings(x, y, this), tex);
                        Rectangle draw = new Rectangle(startX + x * width, startY + y * width, width, width);
                        CIGame.Instance.SpriteBatch.Draw(tex, draw, source, Color.White * ((Floor)_floors[x, y]).Opacity);
                        drawAmt++;
                    }
                }
            }

            //Draw tiles
            for (int x = tileStartX; x < (int)MathHelper.Clamp(tileStartX + tileDrawWidth, 0, this.Width); x++)
            {
                for (int y = tileStartY; y < (int)MathHelper.Clamp(tileStartY + tileDrawHeight, 0, this.Height); y++)
                {
                    //Check if this tile is hidden by the layers above it
                    bool visible = true;
                    for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++)
                    {
                        if (this.LayerNumber != (_parent.Layers - 1) &&
                            (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f || _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f))
                        {
                            visible = false;
                            break;
                        }
                    }

                    if (visible)
                    {
                        Texture2D tex = WorldTextureManager.GetTileTexture((Tile)_tiles[x, y]);
                        Rectangle source = WorldTextureManager.GetSourceForIndex(((Tile)_tiles[x, y]).GetTextureIndexFromSurroundings(x, y, this), tex);
                        Rectangle draw = new Rectangle(startX + x * width, startY + y * width, width, width);
                        CIGame.Instance.SpriteBatch.Draw(tex, draw, source, Color.White * ((Tile)_tiles[x, y]).Opacity);
                        drawAmt++;
                    }
                }
            }

            CIGame.Instance.SpriteBatch.End();
            Console.WriteLine(drawAmt);
            CIGame.Instance.GraphicsDevice.SetRenderTarget(null); //TODO: Change to new rendertarget instead of null
            #endregion
        }

    So I was wondering whether this is an efficient approach that I'm just going about wrongly, or whether there is a different, more efficient way to check if the tiles are hidden. EDIT: As an example of how much it affects performance: using a world with three layers, allowing everything to draw no matter what gives me 60 FPS, but checking visibility against all of the layers above gives me only 20 FPS, while checking only the layer immediately above gives a fluctuating FPS between 30 and 40.

    Read the article
