Search Results

Search found 39156 results on 1567 pages for 'device driver development'.

  • How do I build a 2D physics engine?

    - by Vish
     The most advanced games I've made are an 8-ball pool game built with the Box2DFlashAS3 physics engine and a platform game with levels. Whenever I've made platform games, I've wished I knew how to write my own engine so that I could re-use it. When I see games that have slopes, curved slopes, convincing gravity and realistic physics, I wish I knew how to code that engine myself. Please suggest techniques and articles covering whatever background knowledge is necessary.

  • Box2d too much for Circle/Circle collision detection?

    - by Joey Green
     I'm using cocos2d to program a game and Box2D for collision detection. Everything in my game is a circle, and for some reason collisions are sometimes not detected when they should be. I'm thinking of rolling my own collision detection, since I don't think it would be too hard. My questions are: Would this approach work for collision detection between circles?
     a. Get the radius of circle A and circle B.
     b. Get the distance between the centers of circle A and circle B.
     c. If the distance is less than or equal to the sum of the two radii, we have a hit.
     And should Box2D be used for such simple collision detection at all? There is no physics in this game.
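
     A minimal sketch of the check described in steps a-c, written here in Java with a hypothetical Circle type standing in for whatever sprite or body class the game already has; comparing squared distances avoids the square root:

     class Circle {
         double x, y, r;
         Circle(double x, double y, double r) { this.x = x; this.y = y; this.r = r; }

         // Steps a-c from the question: compare the center distance with the radius sum.
         // Squared values are compared so no square root is needed.
         boolean collidesWith(Circle other) {
             double dx = x - other.x;
             double dy = y - other.y;
             double radiusSum = r + other.r;
             return dx * dx + dy * dy <= radiusSum * radiusSum;
         }
     }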

  • Monogame - Shader parameters missing

    - by Layoric
     I am currently working on a simple game that I am building for Windows 8 using MonoGame (develop3d). I am using some shader code from a tutorial (made by Charles Humphrey) and having an issue populating a 'texture' parameter. I'm not well versed in writing shaders, so this might have a more obvious cause. I have debugged through MonoGame's content processor to see how this shader is being parsed; all the non-'texture' parameters are there and look to be loading correctly. Shader code below:

     #include "PPVertexShader.fxh"

     float2 lightScreenPosition;
     float4x4 matVP;
     float2 halfPixel;
     float SunSize;
     texture flare;

     sampler2D Scene: register(s0)
     {
         AddressU = Clamp;
         AddressV = Clamp;
     };

     sampler Flare = sampler_state
     {
         Texture = (flare);
         AddressU = CLAMP;
         AddressV = CLAMP;
     };

     float4 LightSourceMaskPS(float2 texCoord : TEXCOORD0) : COLOR0
     {
         texCoord -= halfPixel;

         // Get the scene
         float4 col = 0;

         // Find the suns position in the world and map it to the screen space.
         float2 coord;
         float size = SunSize / 1;
         float2 center = lightScreenPosition;

         coord = .5 - (texCoord - center) / size * .5;
         col += (pow(tex2D(Flare, coord), 2) * 1) * 2;

         return col * tex2D(Scene, texCoord);
     }

     technique LightSourceMask
     {
         pass p0
         {
             VertexShader = compile vs_4_0 VertexShaderFunction();
             PixelShader = compile ps_4_0 LightSourceMaskPS();
         }
     }

     I've removed the default values, as they are currently not supported in MonoGame, and I also changed the ps and vs targets to 4_0 instead of 2_0. Could this be causing the issue? As I debug through the 'DXConstantBufferData' constructor (from within the MonoGameContentProcessing project) I find that the 'flare' parameter does not exist, while all the others seem to be created fine. Any help would be appreciated.

  • How can I create a fast, real-time, fixed length glowing ray?

    - by igf
     Something similar to the Disintegrate skill in Diablo 3. It should not light other objects in the scene; it's just glowing and animated, like in this video: http://www.youtube.com/watch?v=D_c4x6aQAG8. Should I use a pack of pre-computed glow-source textures, one for each frame of the ray animation, as in this article http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html, and feed them into a bloom shader? Are there any other efficient ways to achieve this effect? I'm using OpenGL ES 2.0.
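
     One common lightweight way to get this kind of effect (a sketch, not necessarily the best fit here) is to draw the ray as a strip of textured quads with additive blending, so the glow only brightens the framebuffer and never lights other objects. With Android's GLES 2.0 bindings in Java, the blend state could be set up roughly like this; drawing the actual quads is left out:

     import android.opengl.GLES20;

     public final class GlowPass {
         // Enable additive blending so overlapping glow quads simply add up,
         // which reads as emission and never darkens the scene behind the ray.
         public static void beginAdditiveGlow() {
             GLES20.glEnable(GLES20.GL_BLEND);
             GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE);
             GLES20.glDepthMask(false); // don't write depth for the transparent glow
         }

         // Restore the usual alpha-blended state afterwards.
         public static void endAdditiveGlow() {
             GLES20.glDepthMask(true);
             GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
         }
     }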

  • Selling Android apps from Latvia? or should I just put banners?

    - by Roger Travis
     I am in Latvia (which is not a supported country for selling apps on the Android Market), so I am thinking about the best way to monetize my app. So far I've come up with these options:
     1. Somehow pretend that I am from a supported country, get a bank account there, etc.
     2. Use PayPal for in-app purchases. The player gets, say, the first 10 levels for free, but is then asked to pay $0.99 for the rest of the game. Downsides: players might not feel comfortable entering their PayPal details into an app, and the Android Market might not really like that either.
     3. Make the app free and get money from advertising. Let's do some calculation here: say I get 1M free downloads and each user sees 10 banners during their playtime; that is 10M impressions, and 10,000,000 / 1,000 * $0.30 gives roughly $3,000 (if we use AdMob at $0.30 per 1,000 impressions). On the other hand, if we use PayPal in-app purchases, we need roughly a 0.3% or better conversion rate to beat this... hmm.
     What do you think about all this? Thanks! Edit: from what I just read all over the net, it looks like advertisers will change their eCPM price a lot without you understanding why, while with in-app PayPal purchases you can at least somehow monitor the cash flow.

  • Experience formula with javascript

    - by StealingMana
     I'm having trouble working out a formula, using this experience curve, to get the total experience after each level. I bet it's easy and I'm just overthinking it.

     var maxlvl = 10;
     var increment = 28;
     var baseexp = 100;

     function calc() {
         for (var i = 0; i < (maxlvl * increment); i += increment) {
             var expperlvl = baseexp + i;
             document.writeln(expperlvl);
         }
     }

     I figured it out:

     var maxlvl = 6;
     var base = 200;
     var increment = 56;

     function total() {
         var totalxp = (base * (maxlvl - 1)) + (increment * (maxlvl - 2) * (maxlvl - 1) / 2);
         document.write(totalxp);
     }
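
     For reference, the formula in total() is the closed form of an arithmetic series: level n (for n >= 2) costs base + increment * (n - 2), and the total through maxlvl just sums those terms. A small Java version of the same calculation, with names mirroring the snippet above:

     final class ExperienceCurve {
         // Total XP needed to reach maxlvl, assuming level k (k >= 2) costs
         // base + increment * (k - 2), i.e. the per-level cost grows linearly.
         static long totalXp(int maxlvl, long base, long increment) {
             // Closed form of the arithmetic series, same as the JS total() above.
             return base * (maxlvl - 1) + increment * (long) (maxlvl - 2) * (maxlvl - 1) / 2;
         }
         // Example: maxlvl = 6, base = 200, increment = 56 -> 200*5 + 56*4*5/2 = 1560.
     }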

  • How To Approach 360 Degree Snake

    - by Austin Brunkhorst
     I've recently gotten into XNA and must say I love it. As a sort of hello-world game I decided to create the classic game Snake. The 90-degree version was very simple and easy to implement, but as I try to make a version that allows 360-degree rotation using the left and right arrows, I've run into a problem. What I'm doing now stems from the 90-degree version: iterating through each snake body part, beginning at the tail and ending right before the head. This works great when moving every 100 milliseconds, but it makes for choppy gameplay, since the game effectively progresses at only 6 fps rather than its potential 60. I would like to move the snake every game loop, but because the snake moves at the rate of its head's size, it goes way too fast. The head would need to move in much smaller increments, such as (2, 2) in its direction, rather than what I have now (32, 32). Because I've been working on this game off and on for a couple of weeks while managing school, I think I've been thinking too hard about how to accomplish this. It's probably a simple solution; I'm just not catching it. Here's some pseudo code for what I've tried, based on what makes sense to me. I can't really think of another way to do it.

     for (int i = SnakeLength - 1; i > 0; i--) {
         current = SnakePart[i];
         next = SnakePart[i - 1];
         current.x = next.x - (current.width * cos(next.angle));
         current.y = next.y - (current.height * sin(next.angle));
         current.angle = next.angle;
     }
     SnakeHead.x += cos(SnakeAngle) * SnakeSpeed;
     SnakeHead.y += sin(SnakeAngle) * SnakeSpeed;

     This produces something like this: Code in Action. As you can see, each part always stays directly behind the head and doesn't make a trail effect. A perfect example of what I'm going for can be found here: Data Worm. Not the viewport rotation, but the trailing effect of the triangles. Thanks for any help!
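
     One way to get the trailing effect (a sketch, not the only solution) is to record the head's position every frame in a history buffer and place each body segment at a fixed spacing back along that recorded path, so segments follow where the head actually went instead of copying its current angle. A minimal Java illustration with invented names:

     import java.util.ArrayDeque;
     import java.util.Deque;

     class SnakeTrail {
         static class Point { float x, y; Point(float x, float y) { this.x = x; this.y = y; } }

         // Every recorded head position; newest first.
         private final Deque<Point> history = new ArrayDeque<>();
         private final int spacing;      // how many recorded frames apart each segment sits
         private final Point[] segments; // body segments, index 0 is just behind the head

         SnakeTrail(int segmentCount, int spacing) {
             this.spacing = spacing;
             this.segments = new Point[segmentCount];
             for (int i = 0; i < segmentCount; i++) segments[i] = new Point(0, 0);
         }

         // Call once per frame with the head's new position.
         void update(float headX, float headY) {
             history.addFirst(new Point(headX, headY));
             // Keep only as much history as the body needs.
             int needed = segments.length * spacing + 1;
             while (history.size() > needed) history.removeLast();

             // Place each segment a fixed number of recorded frames back along the path.
             // Segments keep their previous position until enough history exists.
             int segment = 0;
             int index = 0;
             for (Point p : history) {
                 if (index > 0 && index % spacing == 0) {
                     if (segment >= segments.length) break;
                     segments[segment].x = p.x;
                     segments[segment].y = p.y;
                     segment++;
                 }
                 index++;
             }
         }

         Point[] segments() { return segments; }
     }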

  • What is the most efficient way to blur in a shader?

    - by concernedcitizen
    I'm currently working on screen space reflections. I have perfectly reflective mirror-like surfaces working, and I now need to use a blur to make the reflection on surfaces with a low specular gloss value look more diffuse. I'm having difficulty deciding how to apply the blur, though. My first idea was to just sample a lower mip level of the screen rendertarget. However, the rendertarget uses SurfaceFormat.HalfVector4 (for HDR effects), which means XNA won't allow linear filtering. Point filtering looks horrible and really doesn't give the visual cue that I want. I've thought about using some kind of Box/Gaussian blur, but this would not be ideal. I've already thrashed the texture cache in the raymarching phase before the blur even occurs (a worst case reflection could be 32 samples per pixel), and the blur kernel to make the reflections look sufficiently diffuse would be fairly large. Does anyone have any suggestions? I know it's doable, as Photon Workshop achieved the effect in Unity.
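
     If a Box/Gaussian blur does turn out to be necessary, doing it as two separable 1D passes (horizontal then vertical) keeps the sample count at 2N instead of N squared for an N-wide kernel. A small Java helper for computing normalized Gaussian weights, under the assumption that the kernel radius is chosen per surface glossiness, might look like this:

     final class BlurKernel {
         // Normalized 1D Gaussian weights for a separable blur of the given radius.
         // weights[0] is the center tap; the shader mirrors the others to both sides.
         static float[] gaussianWeights(int radius, float sigma) {
             float[] weights = new float[radius + 1];
             float sum = 0f;
             for (int i = 0; i <= radius; i++) {
                 weights[i] = (float) Math.exp(-(i * i) / (2.0 * sigma * sigma));
                 sum += (i == 0) ? weights[i] : 2f * weights[i]; // off-center taps count twice
             }
             for (int i = 0; i <= radius; i++) weights[i] /= sum;
             return weights;
         }
     }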

  • Do I need path finding to make AI avoid obstacles?

    - by yannicuLar
     How do you know when a path-finding algorithm is really needed? There are contexts where you just want to improve AI navigation enough to avoid an object, like a spaceship that shouldn't crash into a planet, or a car that already knows where to steer but needs small corrections to avoid a bump in the road. As I've seen on similar posts, the obvious solution is to implement some path-finding algorithm, most likely A*, and let your AI-controlled object navigate along the resulting path. Now, I have the necessary skills to implement a path-finding algorithm, and I'm not being lazy here, but I'm still a bit skeptical about whether this is really needed. I have the impression that path finding is appropriate for navigating through a maze, or for picking a path when there are many alternatives. But in obstacle avoidance, when you do know the path and only need to make slight corrections, is path finding really necessary? Even when the obstacles are sparse or small? I mean, in real life, when you're driving and notice a bump on the road, you just pick between steering a bit to the left (keeping the bump on your right side) or the other way around. You don't consider stopping or going backwards. Path finding would be appropriate when you need to pick a route through a city, right? So, are there any other methods to help AI navigation besides path finding? And if there are, how do you know when path finding is the appropriate algorithm? Thanks for any thoughts.
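
     For the "small corrections" case, steering behaviors are a common alternative to full path finding: each nearby obstacle adds a repulsive adjustment to the desired velocity while the agent keeps heading toward its goal. A rough Java sketch, with all names and tuning constants invented for illustration:

     final class Steering {
         // Nudge the desired velocity away from nearby obstacles; each obstacle is {x, y}.
         static float[] avoid(float px, float py, float vx, float vy,
                              float[][] obstacles, float avoidRadius, float strength) {
             float ax = vx, ay = vy; // start from the current desired velocity
             for (float[] o : obstacles) {
                 float dx = px - o[0];
                 float dy = py - o[1];
                 float dist = (float) Math.sqrt(dx * dx + dy * dy);
                 if (dist > 0 && dist < avoidRadius) {
                     // Closer obstacles push harder, always directly away from the obstacle.
                     float push = strength * (avoidRadius - dist) / avoidRadius;
                     ax += (dx / dist) * push;
                     ay += (dy / dist) * push;
                 }
             }
             return new float[] { ax, ay };
         }
     }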

  • Playing NSF music in FMOD.net

    - by Tesserex
     So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting that. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for:
     1. MyNes plays from a ROM, not NSF, so I have to rip out the APU and get it to play NSF files.
     2. The MyNes APU uses SlimDX, so I have to convert that to FMOD.NET.
     I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard to find resources online. So here are a few questions:
     1. From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, an NSF just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this, that would be great.
     2. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it?
     3. Joining #1 and #2: does the header data from the NSF substitute for some of the ROM data in the emulator code?
     4. Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically, MyNes says PCM8. Any tips on loading and playing the stream in FMOD are appreciated.
     As an aside, I don't really understand the loading/playing sections of the spec I linked at all; they seem to apply only to 6502 systems and emulators, not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.

  • multi-dimension array problem in RGSS (RPG Maker XP)

    - by AzDesign
     This is my first day writing a script in RMXP. I read tutorials, Ruby references, etc., and I found myself stuck on a weird problem. Here is the scenario: I made a custom script to display layered images. I create the class, create an instance variable to hold the array, create a simple method to add an element into it, done. The draw method (I've skipped the rest of the code down to this part):

     def draw
       image = []
       index = 0
       for i in 0..@components.size
         if image.size > 0
           index = image.size
         end
         image[index] = Sprite.new
         image[index].bitmap = RPG::Cache.picture(@components[i][0] + '.png')
         image[index].x = @x + @components[i][1]
         image[index].y = @y + @components[i][2]
         image[index].z = @z + @components[i][3]
         @test =+ 1
       end
     end

     Then I create an event that runs this script:

     $layerz = Layerz.new
     $layerz.configuration[0] = ['root', 0, 0, 1]
     $layerz.configuration[1] = ['bark', 0, 10, 2]
     $layerz.configuration[2] = ['branch', 0, 30, 3]
     $layerz.configuration[3] = ['leaves', 0, 60, 4]
     $layerz.draw

     Run, trigger the event, and the result: ERROR! Undefined method `[]' for nil:NilClass, pointing at this line in the draw method:

     image[index].bitmap = RPG::Cache.picture(@components[i][0] + '.png')

     THEN, I changed the method like this just for testing:

     def draw
       image = []
       index = 0
       for i in 0..@components.size
         if image.size > 0
           index = image.size
         end
         image[index] = Sprite.new
         image[index].bitmap = RPG::Cache.picture(@components[0][0] + '.png')
         image[index].x = @x + @components[0][1]
         image[index].y = @y + @components[0][2]
         image[index].z = @z + @components[0][3]
         @test =+ 1
       end
     end

     I changed @components[i][0] to @components[0][0] and IT WORKS, but it only draws the root, as it no longer iterates to the next array index. I'm stuck here; see:
     - in a single-level array, @components[0] and @components[i] cause no problem
     - in the multi-dimensional array, @components[0][0] causes no problem, BUT
     - in the multi-dimensional array, @components[i][0] produces the error mentioned above.
     Any suggestion to fix the error? Or did I write something wrong?

  • PhysicsMouseJoint problem in andengine + Box2d

    - by Nikhil Lamba
     What can I remove from this code (i.e. from PhysicsMouseJointExample) to get rid of the drag-and-drop behaviour of the sprite? I need all the functionality except that: the user should be able to move the sprite with the force and velocity of a fling, but should not be able to move the ball by dragging it around with a finger on the screen so that the sprite follows the finger. Please help me. I am using the method below for the mouse joint:

     public MouseJoint createMouseJoint(final IShape pFace, final float pTouchAreaLocalX, final float pTouchAreaLocalY) {
         final Body body = (Body) pFace.getUserData();
         final MouseJointDef mouseJointDef = new MouseJointDef();

         final Vector2 localPoint = Vector2Pool.obtain(
                 (pTouchAreaLocalX - pFace.getWidth() * 0.5f) / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT,
                 (pTouchAreaLocalY - pFace.getHeight() * 0.5f) / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT);
         this.groundBody.setTransform(localPoint, 0);

         mouseJointDef.bodyA = this.groundBody;
         mouseJointDef.bodyB = body;
         mouseJointDef.dampingRatio = 0.95f;
         mouseJointDef.frequencyHz = 30;
         mouseJointDef.maxForce = (200.0f * body.getMass());
         mouseJointDef.collideConnected = true;
         mouseJointDef.target.set(body.getWorldPoint(localPoint));
         Vector2Pool.recycle(localPoint);

         return (MouseJoint) mPhysicsWorld.createJoint(mouseJointDef);
     }
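
     If the goal is only to throw the ball with a fling rather than drag it, one possible direction (an assumption about your setup, not something taken from the AndEngine example) is to skip the MouseJoint entirely and give the body a velocity when the fling gesture ends. A small Java sketch using the Box2D Body API:

     final class FlingLauncher {
         // Called when the fling gesture on the ball's sprite ends.
         // The velocity values are assumed to already be converted from
         // pixels/second to Box2D meters/second.
         static void launch(final com.badlogic.gdx.physics.box2d.Body ballBody,
                            final float velocityMetersX, final float velocityMetersY) {
             // Setting the linear velocity throws the body with the fling's speed and
             // direction; the physics world then handles damping and collisions.
             ballBody.setLinearVelocity(velocityMetersX, velocityMetersY);
             // applyLinearImpulse(...) at the body's center is an alternative that
             // scales the push by the body's mass.
         }
     }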

  • Per Pixel Collision Detection

    - by CJ Cohorst
     Just a quick question. I have this collision detection code:

     public bool PerPixelCollision(Player player, Game1 dog)
     {
         Matrix atob = player.Transform * Matrix.Invert(dog.Transform);
         Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, atob);
         Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, atob);
         Vector2 iBPos = Vector2.Transform(Vector2.Zero, atob);

         for (int deltax = 0; deltax < player.playerTexture.Width; deltax++)
         {
             Vector2 bpos = iBPos;
             for (int deltay = 0; deltay < player.playerTexture.Height; deltay++)
             {
                 int bx = (int)bpos.X;
                 int by = (int)bpos.Y;
                 if (bx >= 0 && bx < dog.dogTexture.Width && by >= 0 && by < dog.dogTexture.Height)
                 {
                     if (player.TextureData[deltax + deltay * player.playerTexture.Width].A > 150 &&
                         dog.TextureData[bx + by * dog.Texture.Width].A > 150)
                     {
                         return true;
                     }
                 }
                 bpos += stepY;
             }
             iBPos += stepX;
         }
         return false;
     }

     What I want to know is where to put the code that responds when a collision happens. For example, I want to put in player.playerPosition.X -= 200 just as a test, but I don't know where to put it. I tried putting it under the return true and above it; under it, the compiler said unreachable code, and above it nothing happened. I also tried putting it next to bpos += stepY; but that didn't work either. Where do I put the code? Any help is appreciated. Thanks in advance!
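
     A hedged illustration of where such response code usually lives (names are stand-ins, not your actual classes): keep PerPixelCollision as a pure test and react to its return value in the caller, i.e. the game's update step, rather than inside the detection loop:

     final class CollisionResponseSketch {
         float playerX = 0f;

         // Stand-in for PerPixelCollision(player, dog).
         boolean perPixelCollision() { return true; }

         // Called once per frame by the game loop.
         void update() {
             if (perPixelCollision()) {
                 playerX -= 200; // respond here, at the call site, not inside the test
             }
         }
     }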

  • Hide collision layer in libgdx with TiledMap?

    - by Daniel Jonsson
     I'm making a 2D game with libgdx, and I'm using its TileMapRenderer to render my map which I have made in the map editor Tiled. In Tiled I have a dedicated collision layer. However, I can't figure out how I'm supposed to hide it and its tiles in the game. This is how a map is loaded:

     TiledMap map = TiledLoader.createMap(Gdx.files.internal("maps/map.tmx"));
     TileAtlas atlas = new TileAtlas(map, Gdx.files.internal("maps"));
     tileMapRenderer = new TileMapRenderer(map, atlas, 32, 32);

     Currently the collision tiles are rendered on top of everything else, as I see them in the map editor.

  • As a game developer, which data structure should I use to develop the game? [duplicate]

    - by Rizwanabbasi
     This question already has an answer here: When should vector/list be used? (5 answers) We are developing a game about a bank robbery. The game depicts a bank robbery that lots of people witness. Our game will load lists of suspected offenders, and the players (the witnesses) will have to identify the offenders of this robbery. The game should load the list of offenders as quickly as possible so the player can pick out the right one. An admin can add or remove offenders in the lists, and two or more lists of offenders can also be merged into one (to show to the player). As game developers, which data structure should we use to build this? Justify your selection with solid arguments. Remember that the most critical requirement is that the list should load super fast.
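
     As a hedged illustration of the trade-off being asked about (not a verdict on the "right" structure): an insertion-ordered hash set gives constant-time add/remove and duplicate-free merging, and can be copied into a plain list for fast index-based display. A small Java sketch with a hypothetical Offender type:

     import java.util.ArrayList;
     import java.util.LinkedHashSet;
     import java.util.List;
     import java.util.Set;

     record Offender(int id, String name) {}

     class LineUp {
         // Preserves insertion order for display and rejects duplicates in O(1).
         private final Set<Offender> offenders = new LinkedHashSet<>();

         void add(Offender o)    { offenders.add(o); }
         void remove(Offender o) { offenders.remove(o); }

         // Merging two line-ups is just adding every entry of the other set.
         void mergeWith(LineUp other) { offenders.addAll(other.offenders); }

         // Snapshot as a list for fast, index-based rendering in the UI.
         List<Offender> forDisplay() { return new ArrayList<>(offenders); }
     }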

  • Can I use a genetic algorithm for balancing character builds?

    - by Renan Malke Stigliani
     I'm starting to build an online PvP game (duel-like, one-on-one), with leveling, skill points, special attacks and all the common stuff. Since I have never done anything like this, I'm still thinking about the math behind the levels/skills/specials balance. So I thought a good way of testing the best builds/combos would be to implement a genetic algorithm. It'd work like this:
     1. Generate a big group of random characters.
     2. Make them fight, and level them up according to their victories (more XP) / losses (less XP).
     3. Mate the winners, crossing their builds, to try and make even better characters.
     4. Add some more random characters, emulating new players.
     5. Repeat the process for some time, or until I find some characters who can beat everyone.
     I could then play with the math and try to find better balances to make sure that the top x% of characters would be a mix of various build types. So, is this a good idea, or is there some other, easier method of doing the balancing?
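
     A bare-bones Java sketch of the loop described above; the build encoding, the duel simulation, and the mutation rate are all invented placeholders that would need to be replaced by the real game rules:

     import java.util.ArrayList;
     import java.util.Comparator;
     import java.util.List;
     import java.util.Random;

     final class BuildBalancer {
         static final Random RNG = new Random();

         // A build is encoded as skill-point allocations across `skills` slots.
         static int[] randomBuild(int skills, int points) {
             int[] b = new int[skills];
             for (int i = 0; i < points; i++) b[RNG.nextInt(skills)]++;
             return b;
         }

         // Fitness = duels won against the rest of the current pool.
         static int fitness(int[] build, List<int[]> pool) {
             int wins = 0;
             for (int[] other : pool) if (simulateDuel(build, other)) wins++;
             return wins;
         }

         // Placeholder duel; the real one would run the actual combat rules.
         static boolean simulateDuel(int[] a, int[] b) {
             int sa = 0, sb = 0;
             for (int x : a) sa += x * x;
             for (int x : b) sb += x * x;
             return sa >= sb;
         }

         // Uniform crossover with an occasional one-point mutation.
         static int[] crossover(int[] a, int[] b) {
             int[] child = new int[a.length];
             for (int i = 0; i < a.length; i++) child[i] = RNG.nextBoolean() ? a[i] : b[i];
             if (RNG.nextInt(10) == 0) child[RNG.nextInt(child.length)]++;
             return child;
         }

         static List<int[]> evolve(List<int[]> population, int generations) {
             for (int g = 0; g < generations; g++) {
                 final List<int[]> pool = new ArrayList<>(population);
                 // Rank by fitness (descending), keep the top half, refill by breeding.
                 population.sort(Comparator.comparingInt((int[] b) -> -fitness(b, pool)));
                 List<int[]> next = new ArrayList<>(population.subList(0, population.size() / 2));
                 while (next.size() < population.size()) {
                     next.add(crossover(next.get(RNG.nextInt(next.size())),
                                        next.get(RNG.nextInt(next.size()))));
                 }
                 population = next;
             }
             return population;
         }
     }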

  • Separating physics and game logic from UI code

    - by futlib
     I'm working on a simple block-based puzzle game. The gameplay consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal, and I'm wondering if you can give me any pointers on how to do it better. I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games:
     - The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess).
     - The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces).
     The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid and draws it by converting logical sizes into pixel sizes (that is, multiplying by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:
     1. Player starts to drag a piece.
     2. UI asks game logic for the legal movement area of that piece and lets the player drag it within that area.
     3. Player lets go of the piece.
     4. UI snaps the piece to the grid (so that it is at a valid logical position).
     5. UI tells game logic the new logical position (via mutator methods, which I'd rather avoid).
     I'm not quite happy with that: I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary, and snapping it to the grid. I don't like the fact that the UI tells the game logic about the new state; I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI. I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:
     1. Is such a physics layer common, or is it more typical to have the game logic layer do this?
     2. Would the snapping-to-grid and piece-dragging code belong to the UI or the physics layer?
     3. Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer?
     4. I've seen event-based collision detection in a game's code base once: the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision is detected. What is more common, this approach or asking for the legal movement area first?
     [1] Layer is probably not the right word for what I mean, but subsystem sounds overblown and class is misguiding, because each layer can consist of several classes.
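
     A hedged sketch of the "UI asks first, then commits a logical move" direction discussed above: the logic layer exposes a query for a piece's legal movement range and a command that accepts moves in whole grid cells, so the UI never pokes new state in through mutators. All names are invented for illustration:

     // Logical units only; the UI multiplies by its pixel scale when drawing.
     record Range(int min, int max) {}

     interface PuzzleLogic {
         // How far, in grid cells, may this piece slide along its axis right now?
         Range legalRange(int pieceId);

         // Commit a move expressed in whole grid cells; returns false if illegal.
         boolean movePiece(int pieceId, int deltaCells);
     }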

  • How to draw image in memory manually in pyglet?

    - by Mossen
     In pyglet, I want to create an image buffer in memory, then set the bytes manually, then draw it. I tried making a 3x3 red square like this in my draw() function:

     imageData = pyglet.image.ImageData(3, 3, 'RGB', [
         1, 0, 0,  1, 0, 0,  1, 0, 0,
         1, 0, 0,  1, 0, 0,  1, 0, 0,
         1, 0, 0,  1, 0, 0,  1, 0, 0,
     ])
     imageData.blit(10, 10)

     ...but at runtime, Python complains: ctypes.ArgumentError: argument 9: <type 'exceptions.TypeError'>: wrong type

     Is this the right approach? Am I missing a step? How can I fix this?

  • Game Engine that allows for objects being placed in-game

    - by user185812
     I am looking for a game engine with multiplayer support that allows players to place objects in the terrain (e.g. in TF2 one can place teleporters, or in Minecraft one can place blocks). I don't need the placeable objects to be interactive like in TF2; I just need an engine that won't make me code this from scratch. I have decent knowledge of Python, PHP, HTML, C++ and C# (and a little Lua scripting, although I have only been at it for a few months), so I should be able to handle most engines. So far I have looked at UDK and CryEngine, and wasn't thrilled with either.

  • planar shadow matrix and plane b value

    - by DevExcite
     I implemented planar shadows with the function D3DXMatrixShadow. As you know, we need a plane and a light vector to calculate a shadow matrix. The problem is that when I set the plane as D3DXPLANE p(0, -1, 0, 0.1f), the shadows from a directional light are rendered correctly, but the shadows from a point light are not rendered. However, if I use D3DXPLANE p(0, 1, 0, 0.1f), the situation is reversed: shadows from the directional light are not drawn, while the shadows from the point light are fine. I cannot understand why this happens. Is it normal, or am I missing something? Please explain to me why this happens. Thanks in advance.

  • OpenGL/GLSL: Render to cube map?

    - by BobDole
     I'm trying to figure out how to render my scene to a cube map. I've been stuck on this for a bit and figured I would ask you guys for some help. I'm new to OpenGL and this is the first time I'm using an FBO. I currently have a working example of using a cubemap bmp file, and the samplerCube sampler type in the fragment shader is attached to GL_TEXTURE1. I'm not changing the shader code at all; I'm just no longer calling the function that loaded the cubemap bmp file, and I'm trying to use the code below to render to a cubemap instead. You can see below that I'm also attaching the texture again to GL_TEXTURE1. This is so that when I set the uniform glUniform1i(getUniLoc(myProg, "Cubemap"), 1); it can be accessed in my fragment shader via uniform samplerCube Cubemap. I'm calling the function below like so: cubeMapTexture = renderToCubeMap(150, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE);

     Now, I realize that in the draw loop below I'm not changing the view direction to look down the +x, -x, +y, -y, +z, -z axes. I really just wanted to see something working first before implementing that. I figured I should at least see something on my object the way the code is now, but I'm not seeing anything, just straight black. I've made my background white and the object is still black. I've removed lighting and coloring to just sample the cubemap texture, and it's still black. I'm thinking the problem might be the format types when setting my texture, which is GL_RGB8, GL_RGBA, but I've also tried GL_RGBA, GL_RGBA and GL_RGB, GL_RGB. I thought this would be standard, since we are rendering to a texture attached to a framebuffer, but I've seen different examples that use different enum values. I've also tried binding the cube map texture in every draw call where I want to use the cube map: glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapTexture); Also, I'm not creating a depth buffer for the FBO, which I saw in most examples, because I only want the color buffer for my cube map. I actually added one to see if that was the problem and still got the same results; I could have fudged that up when I tried. Any help that can point me in the right direction would be appreciated.
     GLuint renderToCubeMap(int size, GLenum InternalFormat, GLenum Format, GLenum Type)
     {
         // color cube map
         GLuint textureObject;
         int face;
         GLenum status;

         //glEnable(GL_TEXTURE_2D);
         glActiveTexture(GL_TEXTURE1);
         glGenTextures(1, &textureObject);
         glBindTexture(GL_TEXTURE_CUBE_MAP, textureObject);
         glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
         glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
         glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
         glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
         glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

         for (face = 0; face < 6; face++) {
             glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, InternalFormat, size, size, 0, Format, Type, NULL);
         }

         // framebuffer object
         glGenFramebuffers(1, &fbo);
         glBindFramebuffer(GL_FRAMEBUFFER, fbo);
         glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X, textureObject, 0);
         status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
         printf("%d\n", status);
         printf("%d\n", GL_FRAMEBUFFER_COMPLETE);

         glViewport(0, 0, size, size);

         for (face = 1; face < 6; face++) {
             drawSpheres();
             glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, textureObject, 0);
         }

         //Bind 0, which means render to back buffer, as a result, fb is unbound
         glBindFramebuffer(GL_FRAMEBUFFER, 0);

         return textureObject;
     }

  • Algorithm for spreading labels in a visually appealing and intuitive way

    - by mac
     Short version
     Is there a design pattern for distributing vehicle labels in a non-overlapping fashion, placing them as close as possible to the vehicle they refer to? If not, are any of the methods I suggest below viable? How would you implement this yourself?

     Extended version
     In the game I'm writing I have a bird's-eye view of my airborne vehicles. I also have, next to each of the vehicles, a small label with key data about the vehicle. This is an actual screenshot. Now, since the vehicles could be flying at different altitudes, their icons could overlap. However, I would like to never have their labels overlapping (or a label from vehicle A overlap the icon of vehicle B). Currently, I can detect collisions between sprites and I simply push away the offending label in a direction opposite to the otherwise-overlapped sprite. This works in most situations, but when the airspace gets crowded the label can get pushed very far away from its vehicle, even when a "smarter" alternative placement existed. For example I get:

     B - label
     A -----------label
     C - label

     where it would be better (label closer to the vehicle) to get:

     B - label
     label - A
     C - label

     EDIT: It also has to be considered that, besides the overlapping-vehicles case, there might be other configurations in which vehicle labels could overlap (the ASCII-art examples show, for example, three very close vehicles in which the label of A would overlap the icons of B and C). I have two ideas on how to improve the present situation, but before spending time implementing them, I thought I'd turn to the community for advice (after all it seems like a "common enough problem" that a design pattern for it could exist). For what it's worth, here are the two ideas I was thinking of:

     Slot-isation of label space
     In this scenario I would divide the whole screen into "slots" for the labels. Then, each vehicle would always have its label placed in the closest empty one (empty = no other sprites at that location).

     Spiralling search
     From the location of the vehicle on the screen, I would try to place the label at increasing angles and then at increasing radii, until a non-overlapping location is found. Something along the lines of:

     try 0°, 10px
     try 10°, 10px
     try 20°, 10px
     ...
     try 350°, 10px
     try 0°, 20px
     try 10°, 20px
     ...
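
     A rough Java sketch of the spiralling search idea (the second option above): step outward through rings of increasing radius, sweep the angle on each ring, and take the first candidate rectangle that overlaps nothing already placed. The Rect type and overlaps() test are simple stand-ins for whatever bounds check the engine provides:

     import java.util.List;

     class LabelPlacer {
         record Rect(float x, float y, float w, float h) {
             boolean overlaps(Rect o) {
                 return x < o.x + o.w && x + w > o.x && y < o.y + o.h && y + h > o.y;
             }
         }

         // Try rings of increasing radius; on each ring, sweep the angle in 10-degree steps.
         static Rect placeLabel(float vehicleX, float vehicleY, float labelW, float labelH,
                                List<Rect> occupied) {
             for (float radius = 10f; radius <= 200f; radius += 10f) {
                 for (int deg = 0; deg < 360; deg += 10) {
                     double rad = Math.toRadians(deg);
                     float cx = vehicleX + (float) (Math.cos(rad) * radius);
                     float cy = vehicleY + (float) (Math.sin(rad) * radius);
                     Rect candidate = new Rect(cx - labelW / 2, cy - labelH / 2, labelW, labelH);
                     boolean free = true;
                     for (Rect r : occupied) {
                         if (candidate.overlaps(r)) { free = false; break; }
                     }
                     if (free) return candidate; // closest non-overlapping slot found
                 }
             }
             return null; // no free spot within the search range
         }
     }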

  • Component initialization in component-based game architectures

    - by liortal
     I'm developing a 2D game (in XNA) and I've gone slightly towards a component-based approach, where I have a main game object (container) that holds different components. When implementing the needed functionality as components, I'm now faced with an issue: who should initialize components? Are components usually passed into an entity already initialized, or does some other entity initialize them? In my current design, I have an issue where a component, when created, requires knowledge of its attached entity, but these two events may not happen at the same time (component construction, attaching to a game entity). I am looking for a standard approach, or examples of implementations that work, that overcome this issue or present a clear way to resolve it.
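
     One commonly used pattern for exactly this ordering problem (shown here as a sketch with invented names, not a prescription) is two-phase initialization: the constructor only takes self-contained data, and an onAttached() callback runs once the owning entity is known, so anything that needs the entity waits until attachment:

     import java.util.ArrayList;
     import java.util.List;

     abstract class Component {
         protected Entity owner;

         // Called by the entity once the component is attached; safe to touch `owner` here.
         final void attach(Entity entity) {
             this.owner = entity;
             onAttached();
         }

         protected abstract void onAttached();
     }

     class Entity {
         private final List<Component> components = new ArrayList<>();

         void addComponent(Component c) {
             components.add(c);
             c.attach(this); // attachment, not construction, completes initialization
         }
     }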

  • deWiTTERS game loop in libgdx (Android)

    - by jaysingh
     I am a beginner and I want a complete example of a fixed-timestep game loop in libgdx for Android: how to limit the update rate to 50 or 60, and also how to manage interpolation between game states, with simple example code, e.g. the deWiTTERS Game Loop:

     @Override
     public void render() {
         float deltaTime = Gdx.graphics.getDeltaTime();
         Update(deltaTime);
         Render(deltaTime);
     }

     libgdx comments: There is a Gdx.graphics.setVsync() method (generic = backend-independent), but it is not present in 0.9.1, only in the nightlies. "Relying on vsync for fixed time steps is a REALLY bad idea. It will break on almost all hardware out there. See LwjglApplicationConfiguration, there's a flag in there that lets you toggle GPU/software vsyncing. Play around with it." (Mario) Note that none of these limit the framerate to a specific value; if you REALLY need to limit the framerate for some reason, you'll have to handle it yourself by returning from render calls if xxx ms haven't passed since the last render call.
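
     A hedged sketch of an accumulator-based fixed timestep (the idea behind the deWiTTERS and "fix your timestep" loops) inside a libgdx render() callback; STEP, updateGame() and renderGame() are placeholder names for your own logic, and the interpolation factor alpha is what you would use to blend the previous and current game states when drawing:

     import com.badlogic.gdx.ApplicationAdapter;
     import com.badlogic.gdx.Gdx;

     public class FixedStepGame extends ApplicationAdapter {
         private static final float STEP = 1f / 50f; // 50 logic updates per second
         private float accumulator = 0f;

         @Override
         public void render() {
             // Clamp huge deltas (e.g. after the app was paused) so we don't spiral.
             accumulator += Math.min(Gdx.graphics.getDeltaTime(), 0.25f);

             while (accumulator >= STEP) {
                 updateGame(STEP);    // advance game state in fixed increments
                 accumulator -= STEP;
             }

             // 0..1 fraction of a step; use it to blend previous/current positions.
             float alpha = accumulator / STEP;
             renderGame(alpha);
         }

         private void updateGame(float dt) { /* hypothetical game logic update */ }
         private void renderGame(float alpha) { /* draw the interpolated state */ }
     }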

  • How do I set a touch listener in a child scene in AndEngine?

    - by Siddharth
     In my game, I want to implement a touch listener for the objects in my child scene. Basically I have tried all the approaches I usually use for my normal scenes, but none of those methods work here. Could somebody provide some guidance on setting a touch-area listener in a child scene? Here is my code:

     menuScene.setTouchAreaBindingEnabled(true);
     menuScene.registerTouchArea(resumeButtonSprite);
     menuScene.registerTouchArea(retryButtonSprite);
     menuScene.registerTouchArea(exitButtonSprite);
     menuScene.setOnAreaTouchListener(new IOnAreaTouchListener() {
         @Override
         public boolean onAreaTouched(TouchEvent pSceneTouchEvent, ITouchArea pTouchArea,
                 float pTouchAreaLocalX, float pTouchAreaLocalY) {
             System.out.println("Touch");
             return true;
         }
     });

     In this code, menuScene is the child scene. After some research I also found that my engine is stopped while the child scene is active, so the touch event is not detected. I want to implement a pause menu in my game, so any workable solution for a pause menu implementation would help.
