Search Results

Search found 31839 results on 1274 pages for 'plugin development'.

Page 481/1274 | < Previous Page | 477 478 479 480 481 482 483 484 485 486 487 488  | Next Page >

  • Physics like Asteroids

    - by user2933016
    I'm trying to make a ship that has the same physics as the ship in Asteroids. This is what I have so far (all in Java). Ship.class:

        public class Ship {
            public static final float sMaxHealth = 0.1F;
            public static final float sMaxMoveVelocity = 5.0F;
            public static final float sMaxAngleVelocity = 20.0F;
            public static final float sRadius = 1.0F;
            public static final float sMoveDeceleration = 10.0F;
            public static final float sMoveAcceleration = 2.0F;
            public static final float sAngleDeceleration = 15.0F;
            public static final float sAngleAcceleration = 20.0F;

            private float mHealth;
            private float mXVelocity;
            private float mYVelocity;
            private float mAngleVelocity;
            private float mX;
            private float mY;
            private float mAngle;
        }

    (I've left the getters and setters out for now.) Controller code:

        // Player input
        if (Gdx.input.isKeyPressed(Keys.UP)) {
            mPlayer.setXVelocity(mPlayer.getXVelocity()
                    + (float) Math.cos(mPlayer.getAngle()) * Ship.sMoveAcceleration);
            mPlayer.setYVelocity(mPlayer.getYVelocity()
                    + (float) Math.sin(mPlayer.getAngle()) * Ship.sMoveAcceleration);
        }
        if (Gdx.input.isKeyPressed(Keys.LEFT)) {
            mPlayer.setAngleVelocity(mPlayer.getAngleVelocity() + Ship.sAngleAcceleration * pDeltaTime);
        }
        if (Gdx.input.isKeyPressed(Keys.RIGHT)) {
            mPlayer.setAngleVelocity(mPlayer.getAngleVelocity() - Ship.sAngleAcceleration * pDeltaTime);
        }

        // X velocity
        if (mPlayer.getXVelocity() < 0) {
            if (-mPlayer.getXVelocity() > Ship.sMaxMoveVelocity) {
                mPlayer.setXVelocity(-Ship.sMaxMoveVelocity);
            }
            mPlayer.setXVelocity(mPlayer.getXVelocity() + Ship.sMoveDeceleration * pDeltaTime);
            if (mPlayer.getXVelocity() > 0) {
                mPlayer.setXVelocity(0);
            }
        } else if (mPlayer.getXVelocity() > 0) {
            if (mPlayer.getXVelocity() > Ship.sMaxMoveVelocity) {
                mPlayer.setXVelocity(Ship.sMaxMoveVelocity);
            }
            mPlayer.setXVelocity(mPlayer.getXVelocity() - Ship.sMoveDeceleration * pDeltaTime);
            if (mPlayer.getXVelocity() < 0) {
                mPlayer.setXVelocity(0);
            }
        }

        // Y velocity
        if (mPlayer.getYVelocity() < 0) {
            if (-mPlayer.getYVelocity() > Ship.sMaxMoveVelocity) {
                mPlayer.setYVelocity(-Ship.sMaxMoveVelocity);
            }
            mPlayer.setYVelocity(mPlayer.getYVelocity() + Ship.sMoveDeceleration * pDeltaTime);
            if (mPlayer.getYVelocity() > 0) {
                mPlayer.setYVelocity(0);
            }
        } else if (mPlayer.getYVelocity() > 0) {
            if (mPlayer.getYVelocity() > Ship.sMaxMoveVelocity) {
                mPlayer.setYVelocity(Ship.sMaxMoveVelocity);
            }
            mPlayer.setYVelocity(mPlayer.getYVelocity() - Ship.sMoveDeceleration * pDeltaTime);
            if (mPlayer.getYVelocity() < 0) {
                mPlayer.setYVelocity(0);
            }
        }

        // Angle velocity
        if (mPlayer.getAngleVelocity() < 0) {
            if (-mPlayer.getAngleVelocity() > Ship.sMaxAngleVelocity) {
                mPlayer.setAngleVelocity(-Ship.sMaxAngleVelocity);
            }
            mPlayer.setAngleVelocity(mPlayer.getAngleVelocity() + Ship.sAngleDeceleration * pDeltaTime);
            if (mPlayer.getAngleVelocity() > 0) {
                mPlayer.setAngleVelocity(0);
            }
        } else if (mPlayer.getAngleVelocity() > 0) {
            if (mPlayer.getAngleVelocity() > Ship.sMaxAngleVelocity) {
                mPlayer.setAngleVelocity(Ship.sMaxAngleVelocity);
            }
            mPlayer.setAngleVelocity(mPlayer.getAngleVelocity() - Ship.sAngleDeceleration * pDeltaTime);
            if (mPlayer.getAngleVelocity() < 0) {
                mPlayer.setAngleVelocity(0);
            }
        }

        mPlayer.setX(mPlayer.getX() + mPlayer.getXVelocity() * pDeltaTime);
        mPlayer.setY(mPlayer.getY() + mPlayer.getYVelocity() * pDeltaTime);
        mPlayer.setAngle(mPlayer.getAngle() + mPlayer.getAngleVelocity() * pDeltaTime);

    Why does the ship not behave like the one in Asteroids? What am I doing wrong?
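
    A hedged note rather than a verified fix: in the input block, the thrust added to the velocity is never scaled by pDeltaTime (unlike the turning and the deceleration), so thrust strength varies with frame rate; and clamping each axis separately lets diagonal movement reach roughly 1.4 times the intended maximum speed. A minimal Java sketch of frame-rate-independent thrust with a vector-length cap (the method shape is illustrative, and the angle is assumed to be in radians, which Math.cos/sin require):

        // Illustrative update step using the Ship fields from the question.
        void updateShip(Ship s, boolean thrusting, float dt) {
            if (thrusting) {
                s.setXVelocity(s.getXVelocity() + (float) Math.cos(s.getAngle()) * Ship.sMoveAcceleration * dt);
                s.setYVelocity(s.getYVelocity() + (float) Math.sin(s.getAngle()) * Ship.sMoveAcceleration * dt);
            }
            // Cap the speed of the velocity vector, not each axis on its own.
            float speed = (float) Math.sqrt(s.getXVelocity() * s.getXVelocity()
                    + s.getYVelocity() * s.getYVelocity());
            if (speed > Ship.sMaxMoveVelocity) {
                float scale = Ship.sMaxMoveVelocity / speed;
                s.setXVelocity(s.getXVelocity() * scale);
                s.setYVelocity(s.getYVelocity() * scale);
            }
            s.setX(s.getX() + s.getXVelocity() * dt);
            s.setY(s.getY() + s.getYVelocity() * dt);
        }

    In classic Asteroids the ship also keeps drifting when thrust is released, so applying sMoveDeceleration only as a gentle drag (or not at all) gets closer to that feel than driving each axis back to zero.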

    Read the article

  • How do I get a rotated sprite to move left or right?

    - by rphello101
    Using Java/Slick 2D, I'm using the mouse to rotate a sprite on the screen and the directional keys (in this case, WASD) to move the sprite. Forwards and backwards is easy: just position += cos(ang) * speed or position -= cos(ang) * speed. But how do I get the sprite to move left or right? I'm thinking it has something to do with adding 90 degrees to the angle or something. Any ideas? Rotation code:

        int mX = Mouse.getX();
        int mY = HEIGHT - Mouse.getY();
        int pX = sprite.x + sprite.image.getWidth() / 2;
        int pY = sprite.y + sprite.image.getHeight() / 2;
        double mAng;

        if (mX != pX) {
            mAng = Math.toDegrees(Math.atan2(mY - pY, mX - pX));
            if (mAng == 0 && mX <= pX)
                mAng = 180;
        } else {
            if (mY > pY) mAng = 90;
            else         mAng = 270;
        }

        sprite.angle = mAng;
        sprite.image.setRotation((float) mAng);

    And the movement code (delta is the change in time):

        Input input = gc.getInput();
        Vector2f direction = new Vector2f();
        Vector2f velocity = new Vector2f();

        direction.x = (float) Math.cos(Math.toRadians(sprite.angle));
        direction.y = (float) Math.sin(Math.toRadians(sprite.angle));
        if (direction.length() > 0)
            direction = direction.normalise(); // On a separate note, what does this line of code do?

        velocity.x = (float) (direction.x * sprite.moveSpeed);
        velocity.y = (float) (direction.y * sprite.moveSpeed);

        if (input.isKeyDown(sprite.up)) {
            sprite.x += velocity.x * delta;
            sprite.y += velocity.y * delta;
        }
        if (input.isKeyDown(sprite.down)) {
            sprite.x -= velocity.x * delta;
            sprite.y -= velocity.y * delta;
        }
        if (input.isKeyDown(sprite.left)) {
            // ???
        }
        if (input.isKeyDown(sprite.right)) {
            // ???
        }
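
    For what it's worth, the two missing branches do not need extra trigonometry: in 2D, rotating a vector by 90 degrees is just swapping its components and flipping one sign. Which sign means "left" depends on whether your y axis points up or down, so swap the branches if they feel reversed. (On the side note in the code: normalise() scales a vector to length 1; a vector built from cos and sin of the same angle already has length 1, so that line is effectively a no-op here.) A sketch using the question's own variables:

        if (input.isKeyDown(sprite.left)) {
            sprite.x += velocity.y * delta;  // perpendicular of (x, y) is (y, -x)...
            sprite.y -= velocity.x * delta;
        }
        if (input.isKeyDown(sprite.right)) {
            sprite.x -= velocity.y * delta;  // ...or (-y, x) for the other side
            sprite.y += velocity.x * delta;
        }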

    Read the article

  • converting a mouse click to a ray

    - by Will
    I have a perspective projection. When the user clicks on the screen, I want to compute the ray between the near and far planes that projects from the mouse point, so I can do some ray intersection code with my world. I am using my own matrix and vector and ray classes and they all work as expected. However, when I try and convert the ray to world coordinates my far always ends up as 0,0,0 and so my ray goes from the mouse click to the centre of the object space, rather than through it. (The x and y coordinates of near and far are identical; they differ only in the z coordinates, where they are negatives of each other.)

        GLint vp[4];
        glGetIntegerv(GL_VIEWPORT, vp);

        matrix_t mv, p;
        glGetFloatv(GL_MODELVIEW_MATRIX, mv.f);
        glGetFloatv(GL_PROJECTION_MATRIX, p.f);
        const matrix_t inv = (mv * p).inverse();

        const float unit_x = (2.0f * ((float)(x - vp[0]) / (vp[2] - vp[0]))) - 1.0f,
                    unit_y = 1.0f - (2.0f * ((float)(y - vp[1]) / (vp[3] - vp[1])));

        const vec_t near(vec_t(unit_x, unit_y, -1) * inv);
        const vec_t far(vec_t(unit_x, unit_y, 1) * inv);
        ray = ray_t(near, far - near);

    What have I got wrong? (How do you unproject the mouse-point?)
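
    A common cause of exactly this symptom, offered as a hedged guess: multiplying by the inverse of a perspective matrix produces a homogeneous 4D point, and the result is only meaningful after dividing by its w component; without that divide, the far point collapses toward the origin. It is also worth double-checking that (mv * p) matches your library's multiplication convention, since fixed-function GL composes projection * modelview. A self-contained Java sketch of the idea (column-major 4x4 array, as glGetFloatv returns):

        // m is the inverse of (projection * modelview); (x, y, z) is the
        // normalized-device-coordinate point, z = -1 for near, +1 for far.
        static float[] unproject(float[] m, float x, float y, float z) {
            float[] p = new float[4];
            for (int row = 0; row < 4; row++) {
                p[row] = m[row] * x + m[4 + row] * y + m[8 + row] * z + m[12 + row];
            }
            float invW = 1.0f / p[3];  // the perspective divide that is easy to miss
            return new float[] { p[0] * invW, p[1] * invW, p[2] * invW };
        }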

    Read the article

  • How are these bullets done?

    - by Mike
    I really want to know how the bullets in Radiangames Inferno are done. The bullets seem like they are just billboard particles, but I am curious about how their tails are implemented. They can curve, so this means they are not just a billboard. Also, they appear continuous, which implies that the tails are not made of a bunch of smaller particles (I think). Can anyone shed some light on this for me?
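
    No inside knowledge of Inferno here, but a common way to get smooth, curving tails without chains of particles is a "ribbon" trail: keep a short ring buffer of the bullet's recent positions and stitch them into a triangle strip whose width tapers to zero at the tail. A Java-flavored sketch of the bookkeeping (the drawing half depends on your engine):

        static final int TRAIL_LENGTH = 16;
        float[] trailX = new float[TRAIL_LENGTH];
        float[] trailY = new float[TRAIL_LENGTH];
        int head = 0;

        // Call at a fixed interval so samples are evenly spaced along the path.
        void recordPosition(float x, float y) {
            trailX[head] = x;
            trailY[head] = y;
            head = (head + 1) % TRAIL_LENGTH;
        }

        // To draw: walk from newest to oldest, emitting two vertices per sample,
        // offset perpendicular to the local direction of travel, with the width
        // shrinking toward the tail. The strip follows curves and reads as
        // continuous because it is one connected piece of geometry.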

    Read the article

  • Rain effect looks like snowfall effect?

    - by Nikhil Lamba
    I am making a game, and in that game I want a rain effect. I am still a little way off; right now I am doing the following:

        particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1));
        particleSystem.addParticleInitializer(new AlphaInitializer(0));
        particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        particleSystem.addParticleInitializer(new VelocityInitializer(2, 2, 20, 10));
        particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 30.0f));

        particleSystem.addParticleModifier(new ScaleModifier(1.0f, 2.0f, 0, 150));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1, 1f, 1, 1, 1, 3));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1f, 1, 1, 1, 1, 6));
        particleSystem.addParticleModifier(new AlphaModifier(0, 1, 0, 3));
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 1, 125));
        particleSystem.addParticleModifier(new ExpireModifier(50, 50));

        scene.attachChild(particleSystem);

    But it looks like a snowfall effect. What changes should I make to turn it into a rain effect? Please correct me. EDIT: here is a link to a snapshot: http://i.imgur.com/bRIMP.png
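
    A hedged suggestion: the settings above read as snow because the particles fall slowly, rotate, and grow. Rain tends to read as rain when drops fall fast and near-vertically, stay small, and do not rotate. Reusing only the classes that already appear in the question (the argument values are illustrative, and the sign of the y velocity depends on your coordinate system):

        // Fast, mostly vertical fall; no rotation, no scale growth.
        particleSystem.addParticleInitializer(new VelocityInitializer(-5, 5, 150, 220));
        // Short lifetime so drops do not linger like snowflakes.
        particleSystem.addParticleModifier(new ExpireModifier(2, 3));
        // Drop the RotationInitializer and the ScaleModifier entirely.

    Stretching the drop texture into a thin streak also helps sell the speed.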

    Read the article

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into a forward, side, and up vector suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and associated model-view matrix) when trying to perform directional lighting in my vertex shader. The problem is that the light direction, (0, 1, 0) for example, is relative to where the 'camera is looking' and not the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudo code for my rendering loop:

        pMatrix = new Matrix();
        pMatrix = makePerspective(...);

        mvMatrix = new Matrix();
        camera.apply(mvMatrix); // Calls gluLookAt

        // Move the object into position.
        mvMatrix.translatev(position);
        mvMatrix.rotatef(rotation.x, 1, 0, 0);
        mvMatrix.rotatef(rotation.y, 0, 1, 0);
        mvMatrix.rotatef(rotation.z, 0, 0, 1);

        var nMatrix = new Matrix();
        nMatrix.set(mvMatrix.get().getInverse().getTranspose());

        // Set vertex shader uniforms.
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false,
            new Float32Array(pMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false,
            new Float32Array(mvMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false,
            new Float32Array(nMatrix.getFlattened()));

        // ...
        gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    And the corresponding vertex shader:

        // Attributes
        attribute vec3 aVertexPosition;
        attribute vec4 aVertexColor;
        attribute vec3 aVertexNormal;

        // Uniforms
        uniform mat4 uMVMatrix;
        uniform mat4 uNMatrix;
        uniform mat4 uPMatrix;

        // Varyings
        varying vec4 vColor;

        // Constants
        const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons.
        const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

        float ComputeLighting() {
            vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
            transformedNormal = uNMatrix * transformedNormal;
            float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
            return max(base, 0.0);
        }

        void main(void) {
            gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
            float lightWeight = ComputeLighting();
            vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
        }

    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL if it's missing would be appreciated.
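
    A hedged reading of the symptom: uNMatrix is built from the full model-view matrix, so the transformed normals end up in eye space, while LIGHT_DIRECTION is a constant that never leaves world space; the dot product therefore changes as the camera turns. One standard fix is to rotate the world-space light direction into eye space on the CPU each frame, using only the camera/view part of the transform (what gluLookAt produced, before the per-object translate/rotate calls), and upload it as a uniform in place of the constant. A sketch of the host-side math in Java (column-major 4x4; the uniform name is an assumption):

        // Transform a direction (w = 0) by the rotation part of the view matrix.
        static float[] lightDirToEyeSpace(float[] view, float lx, float ly, float lz) {
            return new float[] {
                view[0] * lx + view[4] * ly + view[8]  * lz,
                view[1] * lx + view[5] * ly + view[9]  * lz,
                view[2] * lx + view[6] * ly + view[10] * lz,
            };
        }

    The result would go into a uLightDirection uniform that ComputeLighting() reads instead of LIGHT_DIRECTION.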

    Read the article

  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake3 is handled using a keybinding relay, whereby each keypress is matched against a 'binding' which is then passed to the CLI along with a timestamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing your input system around this approach. One thing I don't particularly like is the appending of the timestamp to the bound command; this seems like a bit of a hack to bend the CLI into handling the game's input. Also, I feel that detecting the keypress only to add the command to a stream of text that gets parsed at a later date is a slightly latent way of responding to input (or is this unfounded?). The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe for journaling purposes? Thanks for any insights!
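
    As a concrete picture of the pattern under discussion, a minimal Java sketch of a binding relay: keys map to command strings, and key events only append timestamped commands to a queue that the game parses later. (Entirely illustrative; Quake3's real implementation is C.)

        import java.util.*;

        Map<Integer, String> bindings = new HashMap<>();
        Deque<String> commandQueue = new ArrayDeque<>();

        void onKeyEvent(int keyCode, boolean pressed, long timeMillis) {
            String command = bindings.get(keyCode);
            if (command == null) return;
            // '+' commands get a matching '-' command on release.
            if (command.startsWith("+") && !pressed) {
                command = "-" + command.substring(1);
            }
            commandQueue.add(command + " " + timeMillis);
        }

    The timestamp is arguably what answers the latency worry: commands are consumed a frame late, but how long +attack was actually held can still be computed with sub-frame precision from the recorded press and release times.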

    Read the article

  • Inconsistent movement / line-of-sight around obstacles on a hexagonal grid

    - by Darq
    In a roguelike game I've been working on, one of my core design goals has been to allow the player to "Play the game, not the grid." In essence, I want the player's positioning to be tactical because of elements in the game world, not simply because some grid tiles are more advantageous than others in relation to enemies. I am fine with world geometry not being realistic, but it needs to be consistent. In this process I have run into most of the common problems (square tiles? diagonal movement, LOS, corner cases, etc.) and have moved to a hexagonal tile grid. For the most part this has been great, and I've not had too many inconsistencies. Recently, however, I have been stumped by the following: points A and B are both distance 4 from the player (red lines). Line-of-sight to both is blocked by walls (black tiles). However, due to the hexagonal grid, A can be reached in 4 moves, whereas B requires 5 moves (blue lines). On a hex grid, "shortest path" seems divorced from "direct path"; there may be multiple shortest paths to any point, but there is only one direct path (or two in some situations). This is fine; geometry need not be realistic. However, this also seems inconsistent: similar obstacles are more effective in some positions than in others. A player running away from an enemy should be able to run in any direction, increasing the distance between the two actors. However, when placing obstacles or traps between themselves and enemies, the player is best served by running in one of the six directions that don't have multiple shortest paths. Is there a way to rationalise this? Am I missing something that makes this behaviour consistent? Or is there a way to make this behaviour consistent? I am most certainly over-thinking this, but as it is one of my goals, I should do it due diligence.
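
    One small tool that may help the analysis: on an axial-coordinate hex grid the move distance has a closed form, so for any candidate cell you can compare grid distance against straight-line distance and map out exactly where the two diverge (the six "clean" directions are where they agree). A Java sketch:

        // Distance in moves between two hexes given in axial coordinates (q, r).
        static int hexDistance(int q1, int r1, int q2, int r2) {
            int dq = q1 - q2;
            int dr = r1 - r2;
            return (Math.abs(dq) + Math.abs(dr) + Math.abs(dq + dr)) / 2;
        }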

    Read the article

  • Rotate/Translate object in local space

    - by Mathias Hölzl
    I am just trying to create a movement-controller class for game entities. This class should transform the entity based on mouse and keyboard input. I am able to calculate the changed rotation and the new global position. Then I multiply:

        newGlobalMatrix = changedRotationMatrix * oldGlobalMatrix;
        newGlobalMatrix = MatrixSetPosition(newPosition);

    The problem is that the object rotates around the global axes and not around its local axes. I use XNAMath for the matrix calculation.
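
    A hedged pointer rather than a certain fix: whether a rotation composes about local or global axes depends on which side it is multiplied on, and which side means which depends on the row- versus column-vector convention of the math library. If R * world orbits the world axes, as described, the other order should rotate about the object's own axes. Illustrative sketch with a hypothetical Matrix type standing in for the XNAMath calls:

        // If this rotates about the GLOBAL axes in your convention...
        Matrix a = changedRotationMatrix.multiply(oldGlobalMatrix);

        // ...then this rotates about the LOCAL axes, and vice versa.
        Matrix b = oldGlobalMatrix.multiply(changedRotationMatrix);

    Another pattern that sidesteps the question entirely is to rotate around the origin first and re-apply the translation afterwards, which the MatrixSetPosition call is already close to doing.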

    Read the article

  • Only draw visible objects to the camera in 2D

    - by Deukalion
    I have a Map; each Map has an array of Ground, and each Ground consists of an array of VertexPositionTexture plus a texture name reference, so it renders a texture at those points (as a shape, through triangulation). Now when I render my map, I only want to get a list of the objects that are visible to the camera (so I won't loop through more than I have to). Structs:

        public struct Map
        {
            public Ground[] Ground { get; set; }
        }

        public struct Ground
        {
            public int[] Indexes { get; set; }
            public VertexPositionNormalTexture[] Points { get; set; }
            public Vector3 TopLeft { get; set; }
            public Vector3 TopRight { get; set; }
            public Vector3 BottomLeft { get; set; }
            public Vector3 BottomRight { get; set; }
        }

        public struct RenderBoundaries<T>
        {
            public BoundingBox Box;
            public T Items;
        }

    When I load a map:

        foreach (Ground ground in CurrentMap.Ground)
        {
            Boundaries.Add(new RenderBoundaries<Ground>()
            {
                Box = BoundingBox.CreateFromPoints(new Vector3[] {
                    ground.TopLeft, ground.TopRight, ground.BottomLeft, ground.BottomRight }),
                Items = ground
            });
        }

    TopLeft, TopRight, BottomLeft, and BottomRight are simply the locations of each corner of the shape: a rectangle. When I try to loop through only the visible objects, I do this in my Draw method:

        public int Draw(GraphicsDevice device, ICamera camera)
        {
            BoundingFrustum frustum = new BoundingFrustum(camera.View * camera.Projection);

            // Visible count
            int count = 0;

            EffectTexture.World = camera.World;
            EffectTexture.View = camera.View;
            EffectTexture.Projection = camera.Projection;

            foreach (EffectPass pass in EffectTexture.CurrentTechnique.Passes)
            {
                pass.Apply();
                foreach (RenderBoundaries<Ground> render in
                         Boundaries.Where(m => frustum.Contains(m.Box) != ContainmentType.Disjoint))
                {
                    // Draw ground
                    count++;
                }
            }
            return count;
        }

    When I try adding just one Ground and then moving the camera so the ground is out of frame, it still returns 1, which means it still gets drawn even though it's not within the camera's view. Am I doing something wrong, or could it be because of my camera? Any ideas why it doesn't work?
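
    A guess worth testing: the frustum is built from camera.View * camera.Projection, but the draw call also pushes camera.World into the effect as the world transform. If that matrix really moves the geometry relative to the camera, the culling test never sees that movement, so the boxes are tested against a frustum that does not match what is rendered. A quick experiment (Java-flavored pseudocode mirroring the XNA types, since this listing uses Java for sketches) is to fold the same matrix into the frustum:

        // Pseudocode (the project is C#/XNA): cull with the same combined
        // transform that the effect applies to the geometry.
        Matrix worldViewProjection = camera.getWorld()
                .multiply(camera.getView())
                .multiply(camera.getProjection());
        BoundingFrustum frustum = new BoundingFrustum(worldViewProjection);

    If camera.World is meant to be identity, the thing to verify instead is that the Ground corner points are expressed in the same space that camera.View expects.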

    Read the article

  • Ray Tracing Shadows in deferred rendering

    - by Grieverheart
    Recently I programmed a raytracer for fun and found it beautifully simple how shadows are created, compared to a rasterizer. Now, I couldn't help but think about whether it would be possible to implement something similar for ray tracing of shadows in a deferred renderer. The way I thought this could work is, after drawing to the gbuffer, in a separate pass and for each pixel, to calculate rays to the lights and draw them as lines of unique color together with the geometry (with color 0). The lines will be cut off if there is occlusion, and this fact could be used in a fragment shader to calculate which rays are occluded. I guess there must be something I'm missing; for example, I'm not sure how the fragment shader could save the occlusion results for each ray so that they are available for the pixel at the ray's origin. Has this method been tried before? Is it possible to implement it as I described, and if yes, what would be the drawbacks in performance of calculating shadows this way?

    Read the article

  • How to implement a simple bullet trajectory

    - by AirieFenix
    I searched and searched, and although it's a fairly simple question, I haven't found the proper answer, only general ideas (which I already have). I have a top-down game and I want to implement a gun which shoots bullets that follow a simple path (no physics nor change of trajectory; just a go-from-A-to-B thing). a: vector of the position of the gun/player. b: vector of the mouse position (cross-hair). w: the vector of the bullet's trajectory. So w = b - a, and the position of the bullet = [x = x0 + speed * time * normalized w.x, y = y0 + speed * time * normalized w.y]. I have the constructor:

        public Shot(int shipX, int shipY, int mouseX, int mouseY) {
            // I get the mouse with Gdx.input.getX()/getY()
            ...
            this.shotTime = TimeUtils.millis();
            this.posX = shipX;
            this.posY = shipY;

            // I used aVector = aVector.nor() here before, but for some reason it didn't work.
            float tmp = (float) (Math.pow(mouseX - shipX, 2) + Math.pow(mouseY - shipY, 2));
            tmp = (float) Math.sqrt(Math.abs(tmp));
            this.vecX = (mouseX - shipX) / tmp;
            this.vecY = (mouseY - shipY) / tmp;
        }

    And here I update the position and draw the shot:

        public void drawShot(SpriteBatch batch) {
            this.lifeTime = TimeUtils.millis() - this.shotTime;

            // position = positionBefore + v * t
            this.posX = this.posX + this.vecX * this.lifeTime * speed * Gdx.graphics.getDeltaTime();
            this.posY = this.posY + this.vecY * this.lifeTime * speed * Gdx.graphics.getDeltaTime();
            ...
        }

    Now, the behavior of the bullet seems very awkward: it does not go exactly where my mouse is (it's like the mouse is 30px off), and it moves with a random speed. I know I probably need to open the old algebra book from college, but I'd like somebody to say whether I'm at least going in the right direction (or point me to it), and whether this is a calculation problem, a code problem, or both. Also, is it possible that Gdx.input.getX() gives me a non-precise position? Because when I draw the cross-hair, it also draws off the cursor position. Sorry for the long post, and sorry if it's a very basic question. Thanks!
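
    A hedged diagnosis from the update line: the position advances by vec * lifeTime * speed * delta, so the per-frame step grows as the shot ages. That is acceleration, not constant velocity, and it would read as "random speed" since it depends on when each shot was fired. For a straight A-to-B bullet, advance by direction * speed * delta only. The 30px aim offset is also consistent with mixing coordinate origins: Gdx.input.getY() measures from the top-left while rendering measures from the bottom-left, so the y value usually needs flipping (screenHeight - getY()). Sketch:

        public void drawShot(SpriteBatch batch) {
            float dt = Gdx.graphics.getDeltaTime();
            // Constant velocity: no lifeTime factor in the position update.
            this.posX += this.vecX * speed * dt;
            this.posY += this.vecY * speed * dt;
            // ... draw the sprite at (posX, posY) ...
        }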

    Read the article

  • OpenGL-ES: clearing the alpha of the FrameBufferObject

    - by MrDatabase
    This question is a follow-up to "Texture artifacts on iPad". How does one "clear the alpha of the render texture frameBufferObject"? I've searched around here, StackOverflow, and various search engines, but no luck. I've tried a few things, for example calling glClear(GL_COLOR_BUFFER_BIT) at the beginning of my render loop, but it doesn't seem to make a difference. Any help is appreciated since I'm still new to OpenGL. Cheers! p.s. I read on SO and in Apple's documentation that glClear should always be called at the beginning of the render loop. Agree? Disagree? Here's where I read this: http://stackoverflow.com/questions/2538662/how-does-glclear-improve-performance
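
    In case it helps, the usual recipe for clearing only the alpha channel is to mask off the color channels around the clear, since glClear respects glColorMask. Sketched with the GL ES Java binding for illustration (the calls have the same names and signatures in the iOS C API); the render-texture FBO is assumed to be bound first:

        GLES20.glColorMask(false, false, false, true);  // write alpha only
        GLES20.glClearColor(0f, 0f, 0f, 0f);            // desired alpha value
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glColorMask(true, true, true, true);     // restore for normal drawing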

    Read the article

  • Cocos2d sprite's parent not reflecting true scale value

    - by Paul Renton
    I am encountering issues with determining a CCSprite's parent node's scale value. In my game I have a class that extends CCLayer and scales itself based on game triggers. Certain child sprites of this CCLayer have mathematical calculations that become inaccurate once I scale the parent CCLayer. For instance, I have a tank sprite that needs to determine its firing point within the parent node. Whenever I scale the layer and ask the layer for its scale values, they are accurate. However, when I poll the sprites contained within the layer for their parent's scale values, they always appear as one.

        // From within the sprite
        CCLOG(@"ChildSprite-> Parent's scale values are scaleX: %f, scaleY: %f",
              self.parent.scaleX, self.parent.scaleY);  // Outputs 1.0, 1.0

        // From within the layer
        CCLOG(@"Layer-> ScaleX : %f, ScaleY: %f , SCALE: %f",
              self.scaleX, self.scaleY, self.scale);  // Output is 0.80, 0.80

    Could anyone explain to me why this is the case? I don't understand why these values are different. Maybe I don't understand the inner design of Cocos2d fully. Any help is appreciated.
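
    A hedged explanation: in Cocos2d a node's scale property is relative to its parent and is never accumulated up the ancestor chain, so if the sprite's direct parent is some intermediate node (a batch node or container, for example) rather than the scaled layer, self.parent.scaleX legitimately reports 1.0. The effective world scale is the product over all ancestors. The idea, sketched in Java for illustration (the project itself is Objective-C, and the Node type here is hypothetical):

        // Accumulate scale across every ancestor, since each node's scale
        // is expressed relative to its parent.
        float worldScaleX(Node node) {
            float scale = node.getScaleX();
            for (Node n = node.getParent(); n != null; n = n.getParent()) {
                scale *= n.getScaleX();
            }
            return scale;
        }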

    Read the article

  • How should I unbind and delete OpenAL buffers?

    - by Joe Wreschnig
    I'm using OpenAL to play sounds. I'm trying to implement a fire-and-forget play function that takes a buffer ID, assigns it to a source from a pool I have previously allocated, and plays it. However, there is a problem with object lifetimes. In OpenGL, delete functions either automatically unbind things (e.g. textures) or automatically delete the thing when it is eventually unbound (e.g. shaders), so it's usually easy to manage deletion. However, alDeleteBuffers instead simply fails with AL_INVALID_OPERATION if the buffer is still bound to a source. Is there an idiomatic way to "delete" OpenAL buffers that allows them to finish playing, and then automatically unbinds and really deletes them? Do I need to tie buffer management more deeply into the source pool (e.g. deleting a buffer requires checking all the allocated sources too)? Similarly, is there an idiomatic way to unbind (but not delete) buffers when they are finished playing? It would be nice if, when I was looking for a free source, I only needed to see whether a buffer was attached at all and not bother checking the source state. (I'm using C++, although approaches for C are also fine. Approaches assuming a GC'd language and using finalizers are probably not applicable.)
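
    For the unbinding half of the question, the common pattern (sketched with the LWJGL Java binding; the C calls have the same names) is to sweep the source pool and, for every source that reports AL_STOPPED, detach its buffer by setting AL_BUFFER to 0. Deferred deletion then falls out naturally: mark a buffer as dead when "deleted", and actually call alDeleteBuffers once the sweep shows no source still holds it.

        import org.lwjgl.openal.AL10;

        // Periodic sweep over the pre-allocated source pool.
        void reclaimFinishedSources(int[] sourcePool) {
            for (int source : sourcePool) {
                if (AL10.alGetSourcei(source, AL10.AL_SOURCE_STATE) == AL10.AL_STOPPED) {
                    AL10.alSourcei(source, AL10.AL_BUFFER, 0);  // detach the buffer
                    // The source is now free for reuse, and its former buffer can
                    // be deleted once no other source still references it.
                }
            }
        }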

    Read the article

  • Implement 2x speed in tower of defense type game

    - by Siddharth
    I am currently developing a tower defense game and I want to implement a 2x speed feature. The game usually runs at 1x, the normal speed of the game. Here is what 1x and 2x mean: 1x is the normal speed of the game; at 2x, the game objects move at double speed, so the user experiences faster gameplay. I want to implement this functionality in my game. The Medieval Castle game available on the market contains the functionality I want: https://play.google.com/store/apps/details?id=com.nova.root&feature=search_result#?t=W251bGwsMSwxLDEsImNvbS5ub3ZhLnJvb3QiXQ.. The screenshots there also show the 1x and 2x buttons. I think that for 2x speed I have to increase the speed of every object in the game. Please help me with this implementation; even just the idea would be enough for me.
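
    The standard approach, for what it's worth: rather than touching every object's speed, scale the time step. Each update already receives an elapsed-time value; multiplying that one value by a global factor speeds up movement, spawn timers, and animations uniformly. A minimal sketch (the names are illustrative):

        float timeScale = 1.0f;  // set to 2.0f when the 2x button is pressed

        void onUpdate(float rawDelta) {
            float dt = rawDelta * timeScale;  // one multiply covers everything
            for (GameObject o : gameObjects) {
                o.update(dt);  // all movement and timers consume dt, never rawDelta
            }
        }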

    Read the article

  • Texture the quad with different parts of texture

    - by PolGraphic
    I have a 2D quad. Let's say its position is (5, 10) and its size is (7, 11). I want to texture it with one texture, but using three different parts of it. I want to texture the part of the quad from x = 5 to x = 7 with the part of the texture from U = 0 to U = 0.5 (repeating it after it reaches 0.5, so I will have 4 identical 0.5-length fragments). The second part gets some other part of the texture (also repeated), and the third in the same style. But how do I achieve this? I know that:

        float2 tc = fmod(input.TexCoord, textureCoordinates.zw - textureCoordinates.xy)
                  + textureCoordinates.xy;  // textureCoordinates.xy = fragment's offset

    will give me the repeating texture part.
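
    A sketch of the mapping math, written in Java for illustration (the same expression transliterates directly to HLSL): for a segment whose local coordinate t runs from 0 to 1, repeating the texture sub-range [u0, u1) n times is a scale followed by a positive fmod.

        // t in [0, 1) across one segment of the quad; returns a U in [u0, u1).
        static float repeatU(float t, float u0, float u1, int n) {
            float span = u1 - u0;
            float u = (t * n * span) % span;  // repeats n times across the segment
            if (u < 0) u += span;             // keep the result in range
            return u0 + u;
        }

    For the first strip described above (x = 5 to 7, four copies of U = 0 to 0.5) that is repeatU(t, 0.0f, 0.5f, 4). Which segment a fragment belongs to, and with which u0/u1/n, can be supplied as per-vertex data or chosen by branching on the quad-space coordinate.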

    Read the article

  • Libgdx actor bounds are wrong

    - by Undume
    The Actor's bounds are not centered on the TextButton, even though I used the setBounds() method. The higher the Y position is, the less centered the boundary is. The weird thing is that I only created and added one button to the Stage, but the screen shows two. When I click the top button, the bottom one is the one that gets highlighted. How can I fix that?

        import com.badlogic.gdx.Game;
        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.files.FileHandle;
        import com.badlogic.gdx.scenes.scene2d.Stage;
        import com.badlogic.gdx.scenes.scene2d.ui.Skin;
        import com.badlogic.gdx.scenes.scene2d.ui.TextButton;

        public class MyGame extends Game {
            Stage stage;

            @Override
            public void create() {
                stage = new Stage();
                FileHandle skinFile = new FileHandle("data/resources/uiskin/uiskin.json");
                Skin skin = new Skin(skinFile);
                TextButton sas = new TextButton("dd", skin);
                sas.setBounds(0, 500, 100, 100);
                stage.addActor(sas);
                Gdx.input.setInputProcessor(stage);
            }

            @Override
            public void dispose() {
                super.dispose();
            }

            @Override
            public void render() {
                super.render();
                stage.act(Gdx.graphics.getDeltaTime());
                stage.draw();
            }

            @Override
            public void resize(int width, int height) {
                super.resize(width, height);
            }

            @Override
            public void pause() {
                super.pause();
            }

            @Override
            public void resume() {
                super.resume();
            }
        }
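
    A hedged guess at the cause: a Stage owns its own camera/viewport, and if it is never told the real window size (note that resize() above only calls super), stage coordinates and screen coordinates drift apart, which produces exactly this offset between where an actor is drawn and where it responds to clicks. In current libGDX the fix looks like the sketch below; older releases used stage.setViewport(width, height, ...) instead, so adjust to your version:

        @Override
        public void resize(int width, int height) {
            super.resize(width, height);
            // Keep the stage's coordinate system in sync with the window so
            // that drawing and hit detection agree.
            stage.getViewport().update(width, height, true);
        }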

    Read the article

  • Typical Applications of a Linear System Solver in Game Development

    - by craftsman.don
    I am going to write a custom solver for linear systems, and I would like to survey the typical problems in games that involve solving a linear system, so that I can tailor optimizations to those problems based on the shape of the matrix. Currently I am focused on these problems: B-spline editing (I use a linear solve to resolve C0, C1, and C2 continuity) and constraints in simulation (especially position constraints, e.g. cloth). Both of these produce banded matrices. I want to hear about some other applications of linear systems in games. Thank you.
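
    Since both listed problems are banded, it may be useful that the narrowest interesting band, the tridiagonal case, has a textbook O(n) direct solver: the Thomas algorithm. Wider bands follow the same forward-elimination and back-substitution pattern. A hedged Java sketch (no pivoting, so it assumes a well-conditioned, e.g. diagonally dominant, system, which spline-continuity and constraint matrices typically are; c and d are overwritten as scratch):

        // Solves a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] for i = 0..n-1,
        // where a[0] and c[n-1] are unused.
        static double[] solveTridiagonal(double[] a, double[] b, double[] c, double[] d) {
            int n = d.length;
            c[0] /= b[0];
            d[0] /= b[0];
            for (int i = 1; i < n; i++) {
                double m = b[i] - a[i] * c[i - 1];
                c[i] /= m;                            // value at i = n-1 is never read
                d[i] = (d[i] - a[i] * d[i - 1]) / m;
            }
            double[] x = new double[n];
            x[n - 1] = d[n - 1];
            for (int i = n - 2; i >= 0; i--) {
                x[i] = d[i] - c[i] * x[i + 1];
            }
            return x;
        }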

    Read the article

  • Quadtree collapsing

    - by Caius Eugene
    Okay, so I've spent a few days learning what a quadtree is and how to implement one. So far I have a quadtree where, when I click inside a leaf, it subdivides. I'm wondering how I get the previous subdivisions to collapse back up, so that only one area is subdivided at a time. This is what mine looks like: (1. initial mouse click) (2. another mouse click) The aim is to eventually track the position of my mouse and subdivide the area it is in dynamically. The overall aim is to use this to create a terrain mesh and subdivide it based on the camera, but I've gone right back to basics to get an understanding of how this will work. Any advice would be grand! - Caius
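
    One simple way to get the collapse behavior, sketched in Java with a hypothetical Node API: instead of subdividing on click and leaving the tree alone, re-evaluate the whole tree against the point of interest whenever it changes. Nodes containing the point stay (or become) split; every other internal node collapses back into a leaf. The camera-driven terrain case has the same shape, with "contains the point" replaced by "is close enough to the camera for its size".

        void update(Node node, float px, float py) {
            if (node.contains(px, py) && node.size > MIN_SIZE) {
                if (node.isLeaf()) {
                    node.subdivide();            // create four children
                }
                for (Node child : node.children) {
                    update(child, px, py);
                }
            } else if (!node.isLeaf()) {
                node.collapse();                 // discard children, become a leaf
            }
        }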

    Read the article

  • How can I locate empty space next to polygon regions?

    - by Stephen
    Let's say I have the following area in a top-down map: The circle is the player, the black square is an obstacle, and the grey polygons with red borders are walk-able areas that will be used as a navigation mesh for enemies. Obstacles and grey polygons are always convex. The grey regions were defined using an algorithm when the world was generated at runtime. Notice the little white column. I need to figure out where any empty space like this is, if at all, after the algorithm builds the grey regions, so that I can fill the space with another region. Basically what I'm hoping for is an algorithm that can detect empty space next to a polygon.
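
    One hedged, pragmatic approach, given that the regions were generated procedurally anyway: rasterize every region and obstacle into a coarse occupancy grid, then flood-fill the unmarked cells; each connected blob of uncovered cells is a gap like the white column, and can be handed back to the region builder. A Java sketch with the two rasterization helpers assumed:

        boolean[][] covered = new boolean[gridW][gridH];
        for (Polygon p : allRegionsAndObstacles) {
            markCells(covered, p);  // assumed: rasterize p's area into the grid
        }

        boolean[][] seen = new boolean[gridW][gridH];
        List<List<Cell>> gaps = new ArrayList<>();
        for (int x = 0; x < gridW; x++) {
            for (int y = 0; y < gridH; y++) {
                if (!covered[x][y] && !seen[x][y]) {
                    gaps.add(floodFill(covered, seen, x, y));  // assumed helper
                }
            }
        }
        // Each connected gap can then be converted into one or more convex regions.

    An exact alternative is polygon boolean subtraction (the world area minus the union of regions and obstacles), which a clipping library can do, but the grid version is far easier to write and is usually good enough for navigation meshes.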

    Read the article

  • What exactly is UV and UVW Mapping?

    - by Michael Stum
    Trying to understand some basic 3D concepts; at the moment I'm trying to figure out how textures actually work. I know that UV and UVW mapping are techniques that map 2D textures onto 3D objects; Wikipedia told me as much. I googled for explanations but only found tutorials that assumed I already know what it is. From my understanding, each 3D model is made out of points, and several points create a face? Does each point or face have a secondary coordinate that maps to an x/y position in the 2D texture? Or how does unwrapping manipulate the model? Also, what does the W in UVW really do? What does it offer over UV? As I understand it, W maps to the Z coordinate, but in what situation would I have different textures for the same X/Y and a different Z? Wouldn't the Z part be invisible? Or am I completely misunderstanding this?
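
    That understanding is essentially right, and a small example may make it concrete: each vertex carries a (u, v) pair pointing into the texture (from (0, 0) to (1, 1)), and the rasterizer interpolates those pairs across every face, so a face shows whatever texture region its corners pin down. Unwrapping only edits these numbers; it never moves the 3D points. Illustrative Java data for one textured quad:

        // A textured quad: each vertex is position (x, y, z) plus texture (u, v).
        float[][] vertices = {
            // x,   y,   z,    u,  v
            { -1f, -1f, 0f,   0f, 0f },  // maps to the texture's bottom-left
            {  1f, -1f, 0f,   1f, 0f },  // bottom-right
            {  1f,  1f, 0f,   1f, 1f },  // top-right
            { -1f,  1f, 0f,   0f, 1f },  // top-left
        };

    As for W: it is simply a third texture coordinate, used when sampling 3D (volume) textures or for projective effects, not a way of stacking alternate 2D images; it has nothing to do with the model's own Z axis or visibility.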

    Read the article

  • Moving around/avoiding obstacles

    - by János Harsányi
    I would like to write a "game" where you can place an obstacle (red), and then the black dot tries to avoid it and get to the green target. I'm using a very simple way of avoiding it: if the black dot is close to the red, it changes its direction, moves for a while, and then moves forward toward the green point again. How could I create a "smooth" path for the computer-controlled "player"? Edit: Smoothness is not the main point; the goal is to avoid the red blocking "wall" without crashing into it. How could I implement some path-finding algorithm if I just have, basically, 3 points? (And would it make things much more complicated if you could place multiple obstacles?)
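
    With so few points, one sketchable approach is a tiny visibility graph: if the straight segment from the dot to the target crosses the obstacle, route through whichever obstacle corner (pushed outward a little so the dot does not scrape the wall) gives the shortest detour. With several obstacles, the same idea generalizes to A* over all corners. A Java sketch; Point is a simple (x, y) value class with an assumed dist() method, and segmentHitsObstacle() is an assumed helper:

        List<Point> planPath(Point start, Point goal, List<Point> corners) {
            if (!segmentHitsObstacle(start, goal)) {
                return List.of(start, goal);             // straight shot
            }
            Point best = null;
            double bestLen = Double.POSITIVE_INFINITY;
            for (Point c : corners) {                    // offset obstacle corners
                if (!segmentHitsObstacle(start, c) && !segmentHitsObstacle(c, goal)) {
                    double len = start.dist(c) + c.dist(goal);
                    if (len < bestLen) { bestLen = len; best = c; }
                }
            }
            return best == null ? List.of(start, goal) : List.of(start, best, goal);
        }

    Smoothing then comes almost free: steer toward the next waypoint with a capped turn rate instead of snapping the heading.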

    Read the article

  • Open source level editor for HTML5 platform game?

    - by Lai Yu-Hsuan
    A natty GUI editor is very helpful for creating level maps. I want to use an open-source option rather than building my own from scratch. I found Tiled Map Editor, but it doesn't work for what I want. Though I'm building an HTML5 game, I don't have to use an HTML5 level editor as long as it can output well-formatted map files that my JavaScript can read. Edit: Sorry for the confusion. Tiled does not work for me because, to make the player perform a 'tricky' jump, I sometimes want to set the distance between two platforms to, say, 7/3 or 8/3 tiles. But in Tiled I get only 2 or 3. If Tiled can do this, please teach me.
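
    One hedged note before ruling Tiled out entirely: besides tile layers, Tiled has object layers, whose objects sit at free pixel coordinates rather than snapping to the grid, which allows platform gaps of 7/3 tiles or any other fraction. Reading them from the JSON export looks roughly like the sketch below (Java and org.json are used for illustration to match the rest of this listing; from JavaScript the same structure falls out of JSON.parse, and the key names should be verified against an actual exported file):

        import org.json.JSONArray;
        import org.json.JSONObject;

        JSONObject map = new JSONObject(jsonText);
        JSONArray layers = map.getJSONArray("layers");
        for (int i = 0; i < layers.length(); i++) {
            JSONObject layer = layers.getJSONObject(i);
            if ("objectgroup".equals(layer.getString("type"))) {
                JSONArray objects = layer.getJSONArray("objects");
                for (int j = 0; j < objects.length(); j++) {
                    JSONObject obj = objects.getJSONObject(j);
                    double x = obj.getDouble("x");  // pixels, not tile indices
                    double y = obj.getDouble("y");
                    // spawn a platform at (x, y)
                }
            }
        }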

    Read the article

  • Unity3D - Projection matrix camera frustum

    - by MulletDevil
    I've used off-centre projection to create a custom projection matrix for my camera. When I run the game I can see the scene correctly in the game view, but in the editor view the camera frustum is not correct: it still shows the original frustum shape, not the new one. It also appears that Unity is using the original frustum for frustum culling and not the new one, as I can see objects being culled which are visible to the new frustum but would not be visible in the old one. Am I wrong in thinking that a custom projection matrix would alter the view frustum? Or am I missing something else?

    Read the article
