Search Results

Search found 25496 results on 1020 pages for 'development fabric'.

Page 483/1020 | < Previous Page | 479 480 481 482 483 484 485 486 487 488 489 490  | Next Page >

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of material like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to go about reducing draw calls. What I can't seem to understand is how it actually saves draw calls, given that the logic seems to be something like this:

        Without chunks:
            foreach voxel in myvoxels
                DrawIfVisible()

        With chunks:
            foreach chunk in mychunks
                DrawIfVisible()
            which then does:
                foreach voxel in myvoxels
                    DrawIfVisible()

    So surely you have saved nothing?! You still make a draw call for each visible voxel, do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk can determine whether a large number of voxels are visible or not, saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me: the fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DirectX, not OpenGL) for my engine, so don't take the choice of example in the link above as my choice of technology. But this question is such that I doubt it would matter.
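
    A minimal sketch of the part the tutorials usually leave implicit (illustrative Java with made-up Mesh/MeshBuilder types, not XNA API): a chunk bakes its voxels into a single vertex/index buffer once, so at render time a visible chunk costs one draw call no matter how many voxels it contains. The per-voxel loop only runs when the chunk is rebuilt, not every frame.

        // Illustrative only: these minimal interfaces stand in for whatever the
        // engine (XNA, OpenGL, ...) provides. The point is the shape of the loop.
        interface Mesh { void draw(); }                       // one GPU buffer, one draw call
        interface MeshBuilder { void begin(); void addCube(int x, int y, int z); Mesh end(); }

        class Chunk {
            private final int[][][] voxels;                   // 0 = empty, otherwise a block id
            private Mesh mesh;                                // baked geometry for the whole chunk
            private boolean dirty = true;                     // set when a voxel changes

            Chunk(int[][][] voxels) { this.voxels = voxels; }

            void rebuildIfDirty(MeshBuilder builder) {
                if (!dirty) return;                           // per-voxel work only on change
                builder.begin();
                for (int x = 0; x < voxels.length; x++)
                    for (int y = 0; y < voxels[x].length; y++)
                        for (int z = 0; z < voxels[x][y].length; z++)
                            if (voxels[x][y][z] != 0)         // a real builder would also skip hidden faces
                                builder.addCube(x, y, z);
                mesh = builder.end();                         // uploaded to the GPU once
                dirty = false;
            }

            void render(boolean chunkIsVisible) {
                if (chunkIsVisible && mesh != null)
                    mesh.draw();                              // ONE draw call per visible chunk
            }
        }

    So with 16x16x16 chunks, a world of a million voxels needs at most a few hundred draw calls per frame rather than one per visible voxel, at the cost of rebuilding a chunk's mesh whenever its contents change.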

    Read the article

  • Multiple enemy array in LibGDX

    - by johnny-b
    I am trying to make a multiple-enemy array where every 30 seconds a new bullet appears from a random point. If a bullet is clicked it should disappear and a pop, like an explosion, should appear in its place. And if the bullet hits the ball, then the ball pops, so the bullet should change to a different sprite or texture; the same goes for the ball pop. But all that happens is that the bullet pops if touched, and nothing else happens. And if I modify it, the bullet keeps flashing because the update runs far too often. I have added COMMENTS in the code to explain the issues. Below is the code; if more code is needed I will provide it. Thank you.

        public class GameRenderer {
            private GameWorld myWorld;
            private OrthographicCamera cam;
            private ShapeRenderer shapeRenderer;
            private SpriteBatch batcher;

            // Game Objects
            private Ball ball;
            private ScrollHandler scroller;
            private Background background;
            private Bullet bullet1;
            private BulletPop bPop;
            private Array<Bullet> bullets;

            // This is for the delay of the bullet coming one by one every 30 seconds.
            /** The time of the last shot fired, we set it to the current time in nano
                when the object is first created */
            double lastShot = TimeUtils.nanoTime();
            /** Convert 30 seconds into nano seconds, so 30,000 milli = 30 seconds */
            double shotFreq = TimeUtils.millisToNanos(30000);

            // Game Assets
            private TextureRegion bg, bPop;
            private Animation bulletAnimation, ballAnimation;
            private Animation ballPopAnimation;

            public GameRenderer(GameWorld world) {
                myWorld = world;
                cam = new OrthographicCamera();
                cam.setToOrtho(true, 480, 320);
                batcher = new SpriteBatch();
                // Attach batcher to camera
                batcher.setProjectionMatrix(cam.combined);
                shapeRenderer = new ShapeRenderer();
                shapeRenderer.setProjectionMatrix(cam.combined);

                // This is supposed to produce 10 bullets at random places on the background.
                bullets = new Array<Bullet>();
                Bullet bullet = null;
                float bulletX = 00.0f;
                float bulletY = 00.0f;
                for (int i = 0; i < 10; i++) {
                    bulletX = MathUtils.random(-10, 10);
                    bulletY = MathUtils.random(-10, 10);
                    bullet = new Bullet(bulletX, bulletY);
                    AssetLoader.bullet1.flip(true, false);
                    AssetLoader.bullet2.flip(true, false);
                    bullets.add(bullet);
                }

                // Call helper methods to initialize instance variables
                initGameObjects();
                initAssets();
            }

            private void initGameObjects() {
                ball = GameWorld.getBall();
                bullet1 = myWorld.getBullet1();
                bPop = myWorld.getBulletPop();
                scroller = myWorld.getScroller();
            }

            private void initAssets() {
                bg = AssetLoader.bg;
                ballAnimation = AssetLoader.ballAnimation;
                bullet1Animation = AssetLoader.bullet1Animation;
                ballPopAnimation = AssetLoader.ballPopAnimation;
            }

            // This is to take the bullet away when clicked or touched.
            public void onClick() {
                for (int i = 0; i < bullets.size; i++) {
                    if (bullets.get(i).getBounds().contains(0, 0))
                        bullets.removeIndex(i);
                }
            }

            private void drawBackground() {
                batcher.draw(bg1, background.getX(), background.getY(),
                        background.getWidth(), backgroundMove.getHeight());
            }

            public void render(float runTime) {
                Gdx.gl.glClearColor(0, 0, 0, 1);
                Gdx.gl.glClear(GL30.GL_COLOR_BUFFER_BIT);

                batcher.begin();
                // Disable transparency
                // This is good for performance when drawing images that do not require
                // transparency.
                batcher.disableBlending();
                drawBackground();
                batcher.enableBlending();

                // when the bullet hits the ball, it should be disposed or taken away and
                // a ball pop sprite/texture should be put in its place
                if (bullet1.collides(ball)) {
                    // draws the bPop texture but the bullet does not go, just keeps going
                    // around, and the bPop texture goes.
                    batcher.draw(AssetLoader.bPop, 195, 273);
                }

                batcher.draw(AssetLoader.ballAnimation.getKeyFrame(runTime), ball.getX(),
                        ball.getY(), ball.getWidth(), ball.getHeight());

                // this is where i am trying to make the bullets come one by one, and if
                // removed via the onClick() then bPop animation should play but does not???
                if (TimeUtils.nanoTime() - lastShot > shotFreq) {
                    // Create your stuff
                    for (int i = 0; i < bullets.size; i++) {
                        bullets.get(i);
                        batcher.draw(AssetLoader.bullet1Animation.getKeyFrame(runTime),
                                bullet1.getX(), bullet1.getY(), bullet1.getOriginX(),
                                bullet1.getOriginY(), bullet1.getWidth(), bullet1.getHeight(),
                                1.0f, 1.0f, bullet1.getRotation());
                        if (bullets.removeValue(bullet1, false)) {
                            batcher.draw(AssetLoader.ballPopAnimation.getKeyFrame(runTime),
                                    bPop1.getX(), bPop1.getY(), bPop1.getWidth(),
                                    bPop1.getHeight());
                        }
                    }
                    /* Very important to set the last shot to now, or it will mess up and
                       go full auto */
                    lastShot = TimeUtils.nanoTime();
                }

                // End SpriteBatch
                batcher.end();
            }
        }
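
    For what it's worth, the flashing and the missing pop both come from the same part of render(): every bullet in the loop is drawn with bullet1's position, and the whole drawing block sits inside the 30-second timer check, so bullets are only drawn on the frame the timer fires. One possible reshuffle, as a sketch only (isPopped() is a hypothetical flag that onClick() or the collision code would set instead of removing the bullet outright, so the pop animation still has an object to draw):

        // Spawn at most one new bullet every 30 seconds; draw ALL live bullets every frame.
        if (TimeUtils.nanoTime() - lastShot > shotFreq) {
            bullets.add(new Bullet(MathUtils.random(0, 480), MathUtils.random(0, 320)));
            lastShot = TimeUtils.nanoTime();
        }

        for (int i = 0; i < bullets.size; i++) {
            Bullet b = bullets.get(i);          // use this bullet, not the shared bullet1
            if (b.isPopped()) {                 // hypothetical flag set on click / collision
                batcher.draw(AssetLoader.ballPopAnimation.getKeyFrame(runTime),
                        b.getX(), b.getY(), b.getWidth(), b.getHeight());
            } else {
                batcher.draw(AssetLoader.bullet1Animation.getKeyFrame(runTime),
                        b.getX(), b.getY(), b.getOriginX(), b.getOriginY(),
                        b.getWidth(), b.getHeight(), 1.0f, 1.0f, b.getRotation());
            }
        }

    Drawing then happens every frame either way; only the spawn is gated by the timer. onClick() would mark the touched bullet as popped (and remove it a little later) rather than removing it immediately, so there is still something left to draw the explosion for.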

    Read the article

  • OpenGL Drawing textured model (OBJ) black texture

    - by andrepcg
    I'm using OpenGL, GLEW, GLFW and GLUT to create a simple game. I've been following some tutorials and I now have a good model importer with textures (from ogldev.atspace.co.uk), but I'm having an issue with the model textures. I have a skybox with a beautiful texture, as you can see in the picture. That weird texture behind the helicopter (model) is the heli model's texture, which I've applied on purpose to that wall to demonstrate that the texture itself works, just not on the helicopter. I'll include the files I'm working on so you can check them out. Mesh.cpp - http://pastebin.com/pxDuKyQa Texture.cpp - http://pastebin.com/AByWjwL6 Render function + skybox - http://pastebin.com/Vivc9qnT I'm just calling mesh->Render(); before the drawSkyBox function, in the render loop. Why is the heli black when I can perfectly apply its texture to another quad? I've debugged the code, and the mesh->Render() call correctly fetches the texture number and passes it to the texture->Bind() function.

    Read the article

  • Optimal way to learn DirectX?

    - by BluePhase
    I am finding it very difficult to learn DirectX 11. The MSDN website is just full of unorganized information that doesn't seem to help at all. I am particularly looking for something that explains many, if not all, aspects of developing with DirectX 11. I have been searching for weeks and still come up empty. I have found some books, but they don't really explain the fundamentals of the API at all. Thanks in advance.

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

        for each slice
            for each texel in depth texture
                process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least up to the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

        for each texel in depth texture
            figure out what slice it should end up in
            process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.

    Read the article

  • Using glReadBuffer/glReadPixels returns black image instead of the actual image only on Intel cards

    - by cloudraven
    I have this piece of code:

        glReadBuffer( GL_FRONT );
        glReadPixels( 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer );

    which works perfectly on all the Nvidia and AMD GPUs I have tried, but fails on almost every integrated Intel GPU I have tried. It actually works on a very old 945GME, but fails on all the others. Instead of getting a screenshot I am actually getting a black screen. If it helps, I am working with the Doom 3 engine, and that code is derived from its built-in screen capture code. By the way, even with the original game I cannot take screen captures on those Intel devices anyway. My guess is that they are not implementing the standard correctly or something. Is there a workaround for this?
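
    One workaround worth trying, since front-buffer reads are unreliable on many drivers once a compositor is involved: read from the back buffer just before the buffer swap instead of GL_FRONT after it. Sketched here with LWJGL's Java bindings for consistency with the other snippets on this page (the engine in question is C++, but the GL calls are the same):

        import java.nio.ByteBuffer;
        import org.lwjgl.BufferUtils;
        import static org.lwjgl.opengl.GL11.*;

        public final class Screenshot {
            /** Call at the end of the frame, just before swapping buffers. */
            public static ByteBuffer grab(int width, int height) {
                ByteBuffer buffer = BufferUtils.createByteBuffer(width * height * 3);
                glReadBuffer(GL_BACK);               // back buffer still holds the finished frame
                glPixelStorei(GL_PACK_ALIGNMENT, 1); // avoid row-padding surprises with RGB data
                glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
                return buffer;
            }
        }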

    Read the article

  • Avoid overriding all the methods in the child class

    - by Heckel
    The context: I am making a game in C++ using SFML. I have a class that controls what is displayed on the screen (the manager in the image below). It has a list of all the things to draw, like images, text, etc. To be able to store them in one list I created a Drawable class from which all the other drawable classes inherit. The image below represents how I would organize each class. Drawable has a virtual method Draw that will be called by the manager. Image and Text override this method. My problem is that I would like the Image::draw method to work for Circle, Polygon, etc., since sf::CircleShape and sf::ConvexShape inherit from sf::Shape. I thought of two ways to do that. My first idea is for Image to have a pointer to sf::Shape, and the subclasses would make it point to their sf::CircleShape or sf::ConvexShape members (like in the image below). In the Polygon constructor I would write something like ptr_shape = &polygon_shape; This doesn't look very elegant, because I have two variables that are, in fact, just one. My second idea is to store the sf::CircleShape or sf::ConvexShape directly in ptr_shape, like ptr_shape = new sf::ConvexShape(...); and, to use a function that exists only in ConvexShape, cast it like so: ((sf::ConvexShape*)ptr_shape)->convex_method(); But that doesn't look very elegant either, and I am not even sure I am allowed to do that. My question: I added details about the whole thing because I thought that maybe my whole architecture was wrong. I would like to know how I could design my program safely without overriding all the Image methods. I apologize if this question has already been asked; I have no idea what to google.
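
    One common way out of the "two variables that are really one" worry, sketched in Java for consistency with the other snippets here (all types are illustrative stand-ins, not SFML classes): the child constructs the concrete shape, hands it to the parent as the general type, and keeps its own typed reference to the same object, so the parent's draw works unchanged for every shape and no cast is ever needed.

        interface Shape { void draw(); }                       // stands in for sf::Shape

        class CircleShape implements Shape {                   // stands in for sf::CircleShape
            private float radius;
            CircleShape(float radius) { this.radius = radius; }
            void setRadius(float r) { radius = r; }
            @Override public void draw() { System.out.println("circle, r = " + radius); }
        }

        abstract class Image {
            protected final Shape shape;                       // the single general-purpose reference
            protected Image(Shape shape) { this.shape = shape; }
            public void draw() { shape.draw(); }               // works for every subclass, no overrides
        }

        class Circle extends Image {
            private final CircleShape circle;                  // same object, concrete view
            Circle(float radius) { this(new CircleShape(radius)); }
            private Circle(CircleShape c) { super(c); this.circle = c; }
            void setRadius(float r) { circle.setRadius(r); }   // circle-only API, no cast
        }

    The two fields refer to one object, which is the usual resolution of the first idea in the post; in C++ the equivalent is a pointer or reference to sf::Shape in the base plus the concrete member in the child, both set in the child's constructor so they name the same shape.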

    Read the article

  • Different bounding volumes for culling and collision detection

    - by Serthy
    Should an object in a 3D engine use different bounding volumes for collision detection (broad phase) and for culling? Basically, class renderBounds and class physBounds versus a single class boundingVolume? Each of these classes could then either contain the same type of volume (AABBs, k-DOPs, spheres, etc.) or one specially fitted to the particular object. (Note: without considering the use of an external physics engine.)

    Read the article

  • Is there a library that handles hexagon tiled 2D maps?

    - by Pete Mancini
    It would represent a map that is semi-square and of arbitrary size. It would have a simple system for representing map coordinates, such as 0101 (first column, first hex). I'd want the map to be able to tell me the distance between two points, and what other hexes lie between those two points, as a list or array. I don't care as much about the language, but C# or Python would be ideal. Does one exist?
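
    If nothing off the shelf fits, the distance part at least is small enough to write directly. A sketch in Java (the post asks for C# or Python; it also assumes axial (q, r) coordinates, so offset-style coordinates like 0101 would be converted first):

        public final class Hex {
            /** Distance in hexes between (q1, r1) and (q2, r2) in axial coordinates. */
            public static int distance(int q1, int r1, int q2, int r2) {
                int dq = q1 - q2;
                int dr = r1 - r2;
                // Equivalent to cube-coordinate distance (|dx| + |dy| + |dz|) / 2.
                return (Math.abs(dq) + Math.abs(dr) + Math.abs(dq + dr)) / 2;
            }

            public static void main(String[] args) {
                System.out.println(distance(0, 0, 2, -1)); // prints 2
            }
        }

    The "which hexes lie between two points" query is usually done by linearly interpolating between the endpoints in cube coordinates and rounding each step to the nearest hex.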

    Read the article

  • How can I generate signed distance fields in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime and then used from there. For reasons I will explain at the end of this question (for people interested), I need to create the distance fields in real time. There are some papers out there on different methods which are supposed to be viable in real-time environments, such as Chamfer distance transforms and Voronoi-diagram-approximation-based transforms (as suggested in this presentation by the PixelJunk Shooter dev), but I (and presumably a lot of other people) have a very hard time actually putting them to use, since they're usually long, heavy on math and not very algorithmic in their explanation. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial as opposed to a link to just another paper or slide deck, this question will receive a bounty once it's eligible for one :-). Here's why I need to do it in real time:
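
    For reference, the Chamfer transform mentioned above is much shorter in code than the papers make it look. A CPU sketch with the classic 3-4 weights (an illustration of the algorithm only, not the jump-flooding style GPU approach that real-time implementations tend to prefer):

        /** Unsigned 3-4 chamfer distance transform: d[y][x] is roughly 3x the distance
         *  in pixels to the nearest seed pixel. Two raster passes, O(width * height). */
        public final class ChamferDistance {
            public static int[][] transform(boolean[][] seed) {
                int h = seed.length, w = seed[0].length;
                final int INF = Integer.MAX_VALUE / 2;
                int[][] d = new int[h][w];
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                        d[y][x] = seed[y][x] ? 0 : INF;

                // Forward pass: propagate from the already-visited (top/left) neighbours.
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++) {
                        if (x > 0)              d[y][x] = Math.min(d[y][x], d[y][x - 1] + 3);
                        if (y > 0)              d[y][x] = Math.min(d[y][x], d[y - 1][x] + 3);
                        if (x > 0 && y > 0)     d[y][x] = Math.min(d[y][x], d[y - 1][x - 1] + 4);
                        if (x < w - 1 && y > 0) d[y][x] = Math.min(d[y][x], d[y - 1][x + 1] + 4);
                    }
                // Backward pass: propagate from the bottom/right neighbours.
                for (int y = h - 1; y >= 0; y--)
                    for (int x = w - 1; x >= 0; x--) {
                        if (x < w - 1)              d[y][x] = Math.min(d[y][x], d[y][x + 1] + 3);
                        if (y < h - 1)              d[y][x] = Math.min(d[y][x], d[y + 1][x] + 3);
                        if (x < w - 1 && y < h - 1) d[y][x] = Math.min(d[y][x], d[y + 1][x + 1] + 4);
                        if (x > 0 && y < h - 1)     d[y][x] = Math.min(d[y][x], d[y + 1][x - 1] + 4);
                    }
                return d;
            }
        }

    A signed field is then the transform of the background minus the transform of the shape (divided by 3 to get pixel units); a GPU implementation performs the same relaxation but with a jump-flooding sample pattern so it needs O(log n) passes instead of two full sweeps.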

    Read the article

  • Finding vectors with two points

    - by Christian Careaga
    We're trying to get the direction of a projectile, but we can't find out how. For example: [1,1] will go SE, [1,-1] will go NE, [-1,-1] will go NW and [-1,1] will go SW. We need an equation of some sort that will take the player position and the mouse position and find which direction the projectile needs to go. Here is where we are plugging in the vectors:

        def update(self):
            self.rect.x += self.vector[0]
            self.rect.y += self.vector[1]

    Then we are blitting the projectile at the rect's coordinates.
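
    The usual recipe: subtract the player position from the mouse position, normalise, and scale by the projectile speed. The post's code is Python, so treat this Java version purely as a statement of the arithmetic:

        public final class Aim {
            /** Velocity vector pointing from (playerX, playerY) toward (mouseX, mouseY). */
            public static float[] velocity(float playerX, float playerY,
                                           float mouseX, float mouseY, float speed) {
                float dx = mouseX - playerX;
                float dy = mouseY - playerY;
                float len = (float) Math.sqrt(dx * dx + dy * dy);
                if (len == 0) return new float[] {0, 0};     // mouse exactly on the player
                return new float[] {dx / len * speed, dy / len * speed};
            }

            public static void main(String[] args) {
                float[] v = velocity(0, 0, 30, 40, 5);
                System.out.println(v[0] + ", " + v[1]);      // 3.0, 4.0
            }
        }

    The result goes straight into self.vector; without the normalisation step the projectile would fly faster the further the mouse is from the player.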

    Read the article

  • JavaScript A* pathfinding

    - by Veyha
    I am trying to learn A* pathfinding. I am using this library - https://github.com/qiao/PathFinding.js But there is one thing I don't understand how to do. To find a path from player.x/player.y (player.x and player.y are both 0) to 10/10 I use this code:

        var path = finder.findPath(player.x, player.y, 10, 10, grid);

    This gives an array of where I need to move, but how do I apply this array to my player.x and player.y? The path structure looks like this:

        path = [[0, 0], [1, 0], [1, 1], ..., [10, 10]]
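
    One common way to consume that array (sketched in Java, since the pathfinding library itself is JavaScript): keep an index of the next waypoint, move a little toward it each update, and advance the index when it is reached.

        /** Walks a position along a path of [x, y] cells returned by findPath(). */
        public final class PathFollower {
            private final int[][] path;    // e.g. {{0,0},{1,0},{1,1},...,{10,10}}
            private int next = 1;          // path[0] is where the player already stands
            double x, y;

            public PathFollower(int[][] path) {
                this.path = path;
                this.x = path[0][0];
                this.y = path[0][1];
            }

            /** Advance by moveSpeed * dt cells toward the next waypoint. */
            public void update(double dt, double moveSpeed) {
                if (next >= path.length) return;                 // arrived at [10, 10]
                double dx = path[next][0] - x, dy = path[next][1] - y;
                double dist = Math.sqrt(dx * dx + dy * dy);
                double step = moveSpeed * dt;
                if (step >= dist) {                              // reached this waypoint
                    x = path[next][0];
                    y = path[next][1];
                    next++;
                } else {
                    x += dx / dist * step;
                    y += dy / dist * step;
                }
            }
        }

    For tile-by-tile movement you can skip the interpolation entirely and simply set player.x/player.y to path[next] once per move tick.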

    Read the article

  • My GLSL shader isn't compiling even though it should. What should I investigate?

    - by reapz
    I'm porting an iOS game to Android. One of the shaders I'm using wouldn't compile until I reduced the number of uniform variables. Here are the uniform definitions:

        uniform highp   mat4 ViewProjMatrix;
        uniform mediump vec3 LightDirWorld;
        uniform mediump int  BoneCount;
        uniform highp   mat4 BoneMatrixArray[8];
        uniform highp   mat3 BoneMatrixArrayIT[8];
        uniform mediump int  LightCount;
        uniform mediump vec3 LightPos[4];                // This used to be 12, but now 4, next lines also
        uniform lowp    vec3 LightColour[4];
        uniform mediump vec3 LightInnerOuterFalloff[4];

    My issue is that the GLSL shader wouldn't compile until I reduced the count of the above arrays from 12 to 4. My understanding is that if those 3 lines were arrays of 12 then I would be using 56 vertex uniform vectors. I query the system at startup (GL_MAX_VERTEX_UNIFORM_VECTORS) and it says that 128 are available. Why wouldn't it compile with 56? I'm having issues on the Kindle Fire.
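
    A back-of-the-envelope count suggests 56 is an undercount. Assuming the common mapping of a mat4 to 4 uniform vectors, a mat3 to 3, and each vec3/int to 1 (before any driver padding), the declarations with arrays of 12 come to:

        4 + 1 + 1 + (8 \cdot 4) + (8 \cdot 3) + 1 + (3 \cdot 12) = 99

    (ViewProjMatrix + LightDirWorld + BoneCount + BoneMatrixArray + BoneMatrixArrayIT + LightCount + the three light arrays.) That is still under the reported 128, but some mobile drivers pad each mat3 row (and sometimes ints) to a full vec4 and may keep a few vectors for internal use, so the effective budget can run out well before the queried limit; packing the mat3 array into vec4s or cutting the bone count is the usual way to get such a shader compiling again.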

    Read the article

  • On Screen Coin Animation

    - by Siddharth
    I am working on a side-scrolling skater game. I want to perform a coin animation such that, as the player collects a coin, it moves upward and attaches to the currency sprite. My main character and the coins are in the game scene, and the currency sprite is in the HUD layer. This situation creates a problem for me. I cannot apply a modifier to the coin directly, because it is a side-scrolling game, so depending on the main character's speed the coin is reached at different positions; that much I have checked. So I have to generate another coin in the HUD layer, at the same position the game-layer coin has, and move it upward. But I have not been able to get its y position correct, though I can get the x position correctly. The main character often moves downward, so I often get negative values. I also tried the following code:

        float[] position = GameHUD.this
                .convertSceneCoordinatesToLocalCoordinates(GameManager
                        .getInstance().getCoinX(), GameManager.getInstance()
                        .getCoinY());

    But I am getting the same coordinates that I provide; there is no difference. Please can someone give me some guidance on this, because I am close to completing my game. EDIT: The game layer and HUD layer are totally different. The actual coin the player has to collect is in the game layer, and at the same position I want to generate another coin in the HUD layer to perform the animation. Generating the coin in the HUD layer is required, because only through the HUD layer can it reach its target.

    Read the article

  • Circle vs Edge collision detection / resolution

    - by topheman
    I made a JavaScript class, Ball.js, that handles physics interactions between balls as well as painting. In v1.0, ball vs. ball collision detection and resolution is handled well. In the next version (v2), I'm trying to add edge-collision handling, and I'm having some problems; maybe you will be able to help me. All of the v2 branch source code is in the github repository: https://github.com/topheman/Ball.js/tree/v2 The v2 demos (where you can see the bug I will be talking about): http://labs.topheman.com/Ball-v2/#help As you will see in the demo, I have two major problems that I'm having a really hard time solving in Ball.js:

    - method resolveEdgeCollision: the bounce angle is inconsistent
    - method checkEdgeCollision: if the ball's velocity (the distance it covers each frame) is higher than its diameter, it will eventually pass through an edge without triggering any collision

    Any ideas?
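
    For the detection half, the standard test is "closest point on the segment to the circle centre". Sketched in Java (Ball.js itself is JavaScript, so read this as the geometry only):

        public final class CircleEdge {
            /** True if a circle at (cx, cy) with radius r touches the segment (x1,y1)-(x2,y2). */
            public static boolean hits(double cx, double cy, double r,
                                       double x1, double y1, double x2, double y2) {
                double ex = x2 - x1, ey = y2 - y1;
                double lenSq = ex * ex + ey * ey;
                // Project the centre onto the edge and clamp to stay on the segment.
                double t = lenSq == 0 ? 0 : ((cx - x1) * ex + (cy - y1) * ey) / lenSq;
                t = Math.max(0, Math.min(1, t));
                double px = x1 + t * ex, py = y1 + t * ey;   // closest point on the edge
                double dx = cx - px, dy = cy - py;
                return dx * dx + dy * dy <= r * r;
            }
        }

    For resolution, the collision normal is the (normalised) vector from that closest point to the ball's centre, and reflecting the velocity as v' = v - 2(v.n)n keeps the bounce angle consistent. For the tunnelling problem, either split each frame's motion into sub-steps no longer than the radius and run the same test per sub-step, or do a swept test of the ball's movement segment against the edge.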

    Read the article

  • Sampling Heightmap Edges for Normal map

    - by pl12
    I use a Sobel filter to generate normal maps from procedural height maps. The heightmaps are 258x258 pixels. I scale my texture coordinates like so:

        texCoord = (texCoord * (256/258)) + (1/258)

    Yet even with this I am left with the following problem: as you can see, the edges of the normal map still prove to be problematic. Setting the texture wrap mode to "clamp" was also no help. EDIT: The Sobel filter works by sampling the 8 pixels surrounding a given pixel so that a derivative can be calculated in order to find the "normal" of the given pixel. The texture coordinates are instanced once per quad (for the quadtree that makes up the world) and are created as follows (it is quite possible that the problem results from the way I scale and offset the texCoords as seen above). Java:

        for(int i = 0; i < vertices.length; i++){
            Vector2f coord = new Vector2f((vertices[i].x)/(worldSize), (vertices[i].z)/(worldSize));
            texCoords[i] = coord;
        }

    The quad used for input here rests on the X0Z plane. 'worldSize' is the diameter of the planet. No negative texCoords are seen, as the quad used for input to this method is not centered around the origin. Is there something I am missing here? Thanks.

    Read the article

  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation:

        a = [2,6]
        b = [1,4]
        c = [0,8]

        a . b . c = (2*6) + (1*4) + (0*8)
                  = 12 + 4 + 0
                  = 16

    What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we multiply a unit vector by to get a vector that has a scaled-up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows:

        sqrt( ax * ax + ay * ay ) + sqrt( bx * bx + by * by ) + sqrt( cx * cx + cy * cy )
        sqrt( 2 * 2 + 6 * 6 ) + sqrt( 1 * 1 + 4 * 4 ) + sqrt( 0 * 0 + 8 * 8 )
        sqrt( 4 + 36 ) + sqrt( 1 + 16 ) + sqrt( 0 + 64 )
        sqrt( 40 ) + sqrt( 17 ) + sqrt( 64 )
        6.3 + 4.1 + 8
        10.4 + 8
        18.4

    So I don't really get this diagram. Attempting with sensible numbers:

        a = [1,0]
        b = [4,3]

        a . b = (1*0) + (4*3)
              = 0 + 12
              = 12

    So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right: the 'a.b' vector = [4,0]

        sqrt( x*x + y*y )
        sqrt( 4*4 + 0*0 )
        sqrt( 16 + 0 )
        4

    So what is 12 describing?
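
    For reference, the usual definition multiplies corresponding components of the two vectors (the working above multiplies each vector's own components instead), so with the "sensible numbers":

        \mathbf{a} \cdot \mathbf{b} = a_x b_x + a_y b_y = 1 \cdot 4 + 0 \cdot 3 = 4

        \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}|\cos\theta = 1 \cdot 5 \cdot 0.8 = 4

    So the scalar is not the magnitude of any vector: it is |b| cos(theta), the length of b's projection onto a, scaled by |a|, which is why it appears whenever you need "how much of one vector points along another", as in lighting and the diagram in the post.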

    Read the article

  • Why does distance field text rendering have a clear outline?

    - by jinhwan
    http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf The overall process for distance field rendering is clear to me, but how it works is not. It looks like the distance field pixels created around the original pixels may affect the 2D texture sampling interpolation process, but I can't understand that interpolation process. I've read that distance field rendering is done with nearest-neighbour interpolation. If that is true, shouldn't distance field rendering produce a non-interpolated result? I would expect it to look like retro-style pixel art. Where am I misunderstanding this process? So far, I see no difference from alpha testing; both of them throw away all pixels which are not inside. How do the extra distance field pixels affect rendering under nearest-neighbour interpolation?

    Read the article

  • generating maps

    - by gardian06
    This is a conglomeration of questions; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure about some of the implementation. I have done work on file-I/O maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think height maps might be a good approach, but I don't know how to represent them effectively. For a height map, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item 1 is the key reference, item 2 is the elevation), or another approach that I have not thought of yet? When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make a slope noticeable at about a 60-degree angle while not really affecting gravity-driven physics (assuming some effect while moving up/down hill)? I am thinking of maybe going to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing with thin walls between open spaces is better. This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls. EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation the player has to negotiate. Elevation will have effects on "combat" vision and movement.
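
    On the delta-h part: the perceived steepness depends only on the ratio of the height step to the horizontal cell spacing w, so for a slope of about 60 degrees between adjacent cells:

        \Delta h = w \tan(60^\circ) \approx 1.73\,w

    i.e. roughly 1.7 times the horizontal grid spacing per cell. A gentler 30-degree ramp would only need about 0.58 w, which is the kind of step that reads as walkable without dominating the physics.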

    Read the article

  • How to display an image in a second layer in Cocos2d

    - by PeterK
    I am very new to Cocos2d and am testing displaying an image over the "Hello World" text on a second layer, and I need help getting this to work. I guess it is fairly basic stuff, and I'd appreciate any tips. I know that it works if I put the display code in myLayer1's init, or if I call [self goHere] from the init in myLayer1, but I want to call goHere directly. I have the following code:

        HelloWorld.m:

        #import "HelloWorldLayer.h"
        #import "myLayer1.h"

        // HelloWorldLayer implementation
        @implementation HelloWorldLayer

        +(CCScene *) scene
        {
            // 'scene' is an autorelease object.
            CCScene *scene = [CCScene node];

            // 'layer' is an autorelease object.
            HelloWorldLayer *layer = [HelloWorldLayer node];
            myLayer1 *layer1 = [myLayer1 node];

            // add layer as a child to scene
            [scene addChild: layer];
            [scene addChild: layer1];

            // return the scene
            return scene;
        }

        // on "init" you need to initialize your instance
        -(id) init
        {
            // always call "super" init
            // Apple recommends to re-assign "self" with the "super" return value
            if( (self=[super init])) {
                // create and initialize a Label
                CCLabelTTF *label = [CCLabelTTF labelWithString:@"Hello World"
                                                       fontName:@"Marker Felt"
                                                       fontSize:64];

                // ask director for the window size
                CGSize size = [[CCDirector sharedDirector] winSize];

                // position the label on the center of the screen
                label.position = ccp( size.width /2 , size.height/2 );

                // add the label as a child to this Layer
                [self addChild: label];

                myLayer1 *a1 = [myLayer1 new];
                [a1 goHere];
                [myLayer1 release];
            }
            return self;
        }

        myLayer1.m:

        #import "myLayer1.h"

        @implementation myLayer1

        -(void)goHere
        {
            NSLog(@">>>>goHere<<<<");

            CGSize size = [[CCDirector sharedDirector] winSize];
            CCSprite *vv = [CCSprite spriteWithFile:@"hand.png"];
            vv.position = ccp( size.width /2 , size.height/2 );
            [self addChild:vv z:3];
        }

        -(id) init
        {
            // always call "super" init
            // Apple recommends to re-assign "self" with the "super" return value
            if( (self=[super init])) {
            }
            return self;
        }

        @end

    Read the article

  • How do I find the angle required to point to another object?

    - by Ginamin
    I am making an air combat game, where you can fly a ship in 3D space. There is an opponent that flies around as well. When the opponent is not on screen, I want to display an arrow pointing in the direction the user should turn, as shown above. So, I took the camera location and the opponent location and did this:

        double newDirection = atan2(activeCamera.location.y - ship_wrap.location.y,
                                    activeCamera.location.x - ship_wrap.location.x);

    After which, I get the position on the circumference of a circle which surrounds my crosshairs, like so:

        trackingArrow.position = point((60*sin(angle)+240), 60*cos(angle)+160);

    It all works fine, except it's the wrong angle! I assume my calculation of the new direction is incorrect. Can anyone help?
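
    One convention issue worth checking (a sketch of the plain 2D version only, not a drop-in fix for this engine; for a true 3D camera the opponent's position would first be transformed into view space and its x, y used from there): atan2 takes (delta-y, delta-x) of the vector from the observer toward the target, and the point on the indicator circle should then pair cos with x and sin with y using that same angle.

        /** Screen position of the off-screen indicator, given the target's offset
         *  (dx, dy) from the observer in view/screen space. Crosshair at (240, 160). */
        static double[] arrowPosition(double dx, double dy) {
            double angle = Math.atan2(dy, dx);           // atan2(y, x), target minus observer
            double arrowX = 240 + 60 * Math.cos(angle);  // cos goes with x ...
            double arrowY = 160 + 60 * Math.sin(angle);  // ... sin goes with y
            return new double[] { arrowX, arrowY };
        }

    The snippet in the post mixes the two conventions (atan2 measures the angle from the x axis, but sin is used for x and cos for y, which measures from the y axis) and also subtracts camera minus ship rather than target minus observer; either of those alone is enough to send the arrow the wrong way.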

    Read the article

  • Import 3ds into JMonkeyEngine 3

    - by Yanick Rochon
    I have asked this question on SO, but I think it will be more suitable here. Basically, we are trying to import an animated character body (with skeleton) from 3D Studio Max to JMonkeyEngine 3, but while we succeeded at importing some animations, we cannot seem to export the skeleton to .skeleton.xml using OgreXML format. Since OgreXML seems to be the favored way to import models into JME, we dropped .obj files and such. Any help appreciated.

    Read the article

  • JavaScript isometric draw optimization

    - by hustlerinc
    I'm having trouble with drawing isometric tiles. At the moment I have an array with the tiles I want to draw, and it all works fine until I increase the size of the array. Since I draw ALL the tiles on the map, it really affects the game performance (obviously) :D. My problem is that I'm no genius when it comes to JavaScript, and I haven't managed to draw only what is in the viewport. It should be fairly simple for an expert, though, because it's all fixed sizes etc. The canvas is 960x480 pixels, each tile 64x32. This gives 16 tiles on the first row, 15 on the next, etc., for a total of 16 rows. Tile 0,0 is in the top-right corner, and drawing runs X top to bottom and Y right to left; going through the tiles on the first row from left to right is +X, -Y. Here is the relevant part of my drawMap() function:

        function drawMap(){
            var tileW = 64; // Tile Width
            var tileH = 32; // Tile Height
            var mapX = 960-32;
            var mapY = -16;
            for(i=0;i<map.length;i++){
                for(j=0;j<map[i].length;j++){
                    var drawTile = map[i][j];
                    var drawObj = objectMap[i][j];
                    var xpos = (i-j)*tileH + mapX;
                    var ypos = (i+j)*tileH/2 + mapY; // Place the tiles isometric.
                    ctx.drawImage(tileImg[drawTile],xpos,ypos);
                    if(drawObj){
                        ctx.drawImage(objectImg[drawObj-1],xpos,ypos-(objectImg[drawObj-1]));
                    }
                }
            }
        }

    Could anyone please help me translate this so it draws just the relevant tiles? It would be deeply appreciated.
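
    The cheapest fix is to keep the two loops but skip any tile whose screen position falls outside the canvas. Sketched in Java (the original is JavaScript; the constants mirror drawMap() above):

        /** True if tile (i, j) would land on the 960x480 canvas, with one tile of margin. */
        static boolean onScreen(int i, int j, int mapX, int mapY) {
            int tileW = 64, tileH = 32;
            int xpos = (i - j) * tileH + mapX;        // same maths as drawMap()
            int ypos = (i + j) * tileH / 2 + mapY;
            return xpos > -tileW && xpos < 960
                && ypos > -tileH && ypos < 480;
        }

    That removes the drawImage calls for off-screen tiles but still iterates the whole array. To skip the iteration too, invert the two formulas: with a = (x - mapX) / 32 and b = (y - mapY) / 16, the tile under screen point (x, y) is i = (a + b) / 2, j = (b - a) / 2, so converting the four canvas corners gives the i/j ranges the loops actually need to cover.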

    Read the article

  • What are the pros and cons of a non-fixed-interval update loop?

    - by akonsu
    I am studying various approaches to implementing a game loop and I have found this article. In the article the author implements a loop which, if the processing falls behind in time, skips frame renderings and just updates the game in a loop (the last variant called "Constant Game Speed independent of Variable FPS"). I do not understand why it is acceptable to call update_game() in a loop without making sure the update function is called at a particular interval. I do not see any value in doing this. I would think that in my game I want to be sure the game is updated periodically with a known period. So maybe it is worthwhile to have two threads, one would call update periodically, and the other one would redraw the game, also periodically? Would this be a good and practical approach? Of course I would need to synchronise the threads.
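
    For context, the single-threaded pattern that family of articles describes is usually written as a fixed-timestep accumulator: update(DT) always advances the simulation by exactly DT, and it is called as many times per frame as the elapsed real time demands, so on average it still runs at the fixed rate even though the calls are not evenly spaced. A sketch (method names are illustrative):

        public abstract class GameLoop {
            static final double DT = 1.0 / 25.0;         // 25 simulation steps per second
            static final int MAX_STEPS_PER_FRAME = 5;    // bail-out so a slow machine can't spiral

            public void run() {
                double accumulator = 0;
                long previous = System.nanoTime();
                while (running()) {
                    long now = System.nanoTime();
                    accumulator += (now - previous) / 1e9;   // real seconds since last iteration
                    previous = now;

                    int steps = 0;
                    while (accumulator >= DT && steps < MAX_STEPS_PER_FRAME) {
                        update(DT);                          // always a fixed DT, regardless of FPS
                        accumulator -= DT;
                        steps++;
                    }
                    render();                                // as often as there is time left over
                }
            }

            protected abstract boolean running();
            protected abstract void update(double dt);
            protected abstract void render();
        }

    This is why the approach is considered acceptable without threads: the simulation still advances in fixed DT steps, only their wall-clock spacing jitters by less than one frame, whereas two threads give a similar result at the price of locking every piece of state shared between update and render.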

    Read the article

  • Repairing back-facing triangles without user input

    - by LTR
    My 3D application works with user-imported 3D models. Frequently, those models have a few triangles facing in the wrong direction (for example, a 3D roof where a few triangles face into the building instead of outwards). I want to repair those automatically. We can make several assumptions about these models: they are completely closed, without holes, and the camera is always on the outside. My idea: shoot 500 rays from every triangle outwards in all directions. From the back side of the triangle, all rays will hit another part of the model; from the front side, at least one ray will hit nothing. Is there a better algorithm? Are there any papers about something like this?

    Read the article
