Search Results

Search found 35343 results on 1414 pages for 'development tools'.


  • Determining the angle to fire a shot when both the target and the shooter move, and the bullet inherits the shooter's velocity

    - by Azaral
    I saw this question: Predicting enemy position in order to have an object lead its target, and followed the link in the answer to Stack Overflow. On the Stack Overflow page I used the second answer, the one that is a large mathematical derivation. My first question is whether the answer provided on the Stack Overflow page even works to begin with, assuming the original circumstances of a moving target and a stationary shooter. My situation is a little different than that: my target moves, the shooter moves, and the bullets from the shooter start off with the shooter's x and y velocities added to their own. If you are sliding to the right, the bullets will remain in front of you as you move, as long as your velocity remains constant. What I'm trying to do is get the enemy to determine where it needs to shoot in order to hit the player. Unless both the player and the enemy are stationary, the ship's velocity added to the bullets' velocity will cause a miss, and I'd like to prevent that. I used the formula in the Stack Overflow answer and did what I thought were the appropriate adjustments. I've been banging at this for the last four hours and I just can't make it click. It is probably something really simple and boneheaded that I am missing (that seems to be a lot of my problems lately). Here is the solution presented in the Stack Overflow answer:

    It boils down to solving a quadratic equation of the form:

        a * sqr(x) + b * x + c == 0

    Note that by sqr I mean square, as opposed to square root. Use the following values:

        a := sqr(target.velocityX) + sqr(target.velocityY) - sqr(projectile_speed)
        b := 2 * (target.velocityX * (target.startX - cannon.X) + target.velocityY * (target.startY - cannon.Y))
        c := sqr(target.startX - cannon.X) + sqr(target.startY - cannon.Y)

    Now we can look at the discriminant to determine if we have a possible solution.

        disc := sqr(b) - 4 * a * c

    If the discriminant is less than 0, forget about hitting your target -- your projectile can never get there in time. Otherwise, look at two candidate solutions:

        t1 := (-b + sqrt(disc)) / (2 * a)
        t2 := (-b - sqrt(disc)) / (2 * a)

    Note that if disc == 0 then t1 and t2 are equal. If there are no other considerations such as intervening obstacles, simply choose the smaller positive value. (Negative t values would require firing backward in time to use!) Substitute the chosen t value back into the target's position equations to get the coordinates of the leading point you should be aiming at:

        aim.X := t * target.velocityX + target.startX
        aim.Y := t * target.velocityY + target.startY

    Here is my code, after being corrected by Sam Hocevar (thank you again for your help!). It still doesn't work. For some reason it never enters the section of code inside the if (disc >= 0.f) check (obviously because the discriminant is always less than zero, but...). However, if I plug the numbers from my game log for the enemy and player positions and velocities into it, it outputs a valid firing solution. I have looked at the code side by side a couple of times now and I can't find any differences. There has got to be something simple I'm missing here. If someone else could look at this code and determine what is going on, I'd appreciate it. I know it's not going through that section because if it were, shouldShoot would become true and the enemy would be blasting away at the player.

    This section calls the function in question, CalculateShootHeading():

        if (shouldMove)
        {
            UseEngines();
        }
        x += xVelocity;
        y += yVelocity;
        CalculateShootHeading();
        if (shouldShoot)
        {
            ShootWeapons();
        }
        UpdateWeapons();

    This is CalculateShootHeading(). It is inside the enemy class, so x and y are the enemy's x and y, and the same goes for the velocity. One output from my game log gives Player X = 2108, Player Y = -180.956, Player X velocity = 10.9949, Player Y velocity = -6.26017, Enemy X = 1988.31, Enemy Y = -339.051, Enemy X velocity = 1.81666, Enemy Y velocity = -9.67762, 0 enemy projectiles. The output from the console tester is Bullet position = 2210.49, -239.313 and Player position = 2210.49, -239.313. This doesn't make any sense. The only thing that could be different is the code or the input into my function in the game, and I've checked that and I don't think it is wrong, as it's updated before this and never changed.

        float const bulletSpeed = 30.f;
        float const dx = playerX - x;
        float const dy = playerY - y;
        float const vx = playerXVelocity - xVelocity;
        float const vy = playerYVelocity - yVelocity;
        float const a = vx * vx + vy * vy - bulletSpeed * bulletSpeed;
        float const b = 2.f * (vx * dx + vy * dy);
        float const c = dx * dx + dy * dy;
        float const disc = b * b - 4.f * a * c;
        shouldShoot = false;

        if (disc >= 0.f)
        {
            float t0 = (-b - std::sqrt(disc)) / (2.f * a);
            float t1 = (-b + std::sqrt(disc)) / (2.f * a);
            if (t0 < 0.f || (t1 < t0 && t1 >= 0.f))
            {
                t0 = t1;
            }
            if (t0 >= 0.f)
            {
                float shootx = vx + dx / t0;
                float shooty = vy + dy / t0;
                heading = std::atan2(shooty, shootx) * RAD2DEGREE;
            }
            shouldShoot = true;
        }
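
    A minimal, standalone intercept tester can help rule out input problems: it reproduces the quadratic from the quoted answer with the shooter's velocity subtracted out, and can be fed the logged values directly. This is only a sketch, written in Java rather than the original C++; the class, method, and parameter names are illustrative and not from the poster's project.

        public class InterceptTest {
            // Returns the firing heading in degrees, or Double.NaN if no intercept exists.
            static double solveHeading(double x, double y, double vxSelf, double vySelf,
                                       double px, double py, double pvx, double pvy,
                                       double bulletSpeed) {
                double dx = px - x, dy = py - y;             // relative position
                double vx = pvx - vxSelf, vy = pvy - vySelf; // relative velocity
                double a = vx * vx + vy * vy - bulletSpeed * bulletSpeed;
                double b = 2.0 * (vx * dx + vy * dy);
                double c = dx * dx + dy * dy;
                double disc = b * b - 4.0 * a * c;
                if (disc < 0.0) return Double.NaN;           // no solution in time
                double t0 = (-b - Math.sqrt(disc)) / (2.0 * a);
                double t1 = (-b + Math.sqrt(disc)) / (2.0 * a);
                if (t0 < 0.0 || (t1 >= 0.0 && t1 < t0)) t0 = t1;
                if (t0 < 0.0) return Double.NaN;             // both solutions lie in the past
                double shootX = vx + dx / t0;
                double shootY = vy + dy / t0;
                return Math.toDegrees(Math.atan2(shootY, shootX));
            }

            public static void main(String[] args) {
                // Values copied from the game log quoted above.
                double heading = solveHeading(1988.31, -339.051, 1.81666, -9.67762,
                                              2108.0, -180.956, 10.9949, -6.26017, 30.0);
                System.out.println("heading = " + heading);
            }
        }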

    Read the article

  • HTML5 Canvas Tileset Animation

    - by Veyha
    How do I animate an image on an HTML5 canvas? This is the code I have now: http://jsfiddle.net/WnjB6/1/ Here I can add animations, something like Animation.add('stand', [0, 1, 2, 3, 4, 5]); But how do I play this animation? My image drawing function is drawTile(canvasX, canvasY, tile, tileWidth, tileHeight); and Animation['stand'] returns 0, 1, 2, 3, 4, 5. I need something like: when I call Animation.play('stand'), it runs the animation through the frames in the 'stand' array. I've been trying to do this for about a day, but I have no more ideas. :( Thanks, and sorry for my bad English.
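
    The missing piece is usually just a frame timer that maps elapsed time to an index in the frame array. The sketch below is in Java to keep the examples on this page in one language, but the logic ports line for line to the JavaScript drawTile() call above; names such as frames and frameDuration are illustrative.

        // Minimal frame-stepping sketch: advance a clock, pick the current frame index.
        public class Animation {
            private final int[] frames;          // tile indices, e.g. {0, 1, 2, 3, 4, 5} for 'stand'
            private final double frameDuration;  // seconds each frame stays on screen
            private double time;
            private boolean playing;

            public Animation(int[] frames, double frameDuration) {
                this.frames = frames;
                this.frameDuration = frameDuration;
            }

            public void play()            { time = 0; playing = true; }
            public void update(double dt) { if (playing) time += dt; }

            // The tile to pass to drawTile() this frame; wraps around for a looping animation.
            public int currentTile() {
                int index = (int) (time / frameDuration) % frames.length;
                return frames[index];
            }
        }

    Each tick of the game loop you would call update(dt) and then drawTile(canvasX, canvasY, anim.currentTile(), tileWidth, tileHeight).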

    Read the article

  • Why do we move the world instead of the camera

    - by sharethis
    I heard that in an OpenGL game, what we do to let the player move is not to move the camera but to move the whole world around it. For example, here is an extract from this tutorial: http://open.gl/transformations "In real life you're used to moving the camera to alter the view of a certain scene, in OpenGL it's the other way around. The camera in OpenGL cannot move and is defined to be located at (0,0,0) facing the negative Z direction. That means that instead of moving and rotating the camera, the world is moved and rotated around the camera to construct the appropriate view." Why do we do that?
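
    A tiny self-contained illustration (not from the tutorial, and deliberately limited to translation) of why the two views are the same thing: "moving the camera by C" and "moving every vertex by -C" produce identical view-space coordinates, so OpenGL simply always does the latter.

        // Plain Java, no OpenGL: show that a camera offset and an inverse world offset agree.
        public class CameraDemo {
            public static void main(String[] args) {
                double[] worldPoint = { 5, 0, -20 };  // a point somewhere in the scene
                double[] camera     = { 2, 1, -3 };   // where we *think* of the camera as being

                // OpenGL-style view: the camera stays at the origin and every vertex is
                // translated by the negated camera position before projection.
                double[] viewSpace = {
                    worldPoint[0] - camera[0],
                    worldPoint[1] - camera[1],
                    worldPoint[2] - camera[2],
                };
                System.out.printf("view-space position: %.1f %.1f %.1f%n",
                        viewSpace[0], viewSpace[1], viewSpace[2]);
                // Rotation works the same way: apply the inverse of the camera's rotation
                // to the world instead of rotating the camera itself.
            }
        }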

    Read the article

  • How to program a cutting tool for a 3D model in game

    - by Jesse S
    I'm looking for a resource to figure out how to program a function that cuts a 3D model in game. Example: an enemy/NPC is sliced into two pieces with a sword. His body is not hollow: you can see a bloody texture where normally a 'polygon hole' would be. The first step is to actually cut/slice the model, then add polygons to fill the hole in the mesh. I know this can be done in 3D modelling software, but I'm not sure how to go about doing it in a game, code-wise. I do not wish to use pre-cut-up models; the code should determine where the cut is. Any pointers in the right direction would be greatly appreciated.
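
    The core of most runtime-slicing approaches is per-triangle plane splitting: classify each vertex by signed distance to the cut plane, copy whole triangles that lie on one side, and split the two crossing edges of any triangle that straddles the plane. The sketch below shows only that step (in Java, with a minimal Vec3 type); it assumes no vertex lies exactly on the plane, ignores winding/normal fix-ups, and leaves cap filling (triangulating the bloody cross-section from the new edge loop) as a separate pass.

        import java.util.List;

        record Vec3(double x, double y, double z) {
            Vec3 sub(Vec3 o)     { return new Vec3(x - o.x, y - o.y, z - o.z); }
            Vec3 add(Vec3 o)     { return new Vec3(x + o.x, y + o.y, z + o.z); }
            Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }
            double dot(Vec3 o)   { return x * o.x + y * o.y + z * o.z; }
        }

        final class MeshSplit {
            static double signedDistance(Vec3 p, Vec3 planePoint, Vec3 planeNormal) {
                return planeNormal.dot(p.sub(planePoint));
            }

            static Vec3 intersect(Vec3 a, Vec3 b, double da, double db) {
                double t = da / (da - db);                 // where the edge crosses the plane
                return a.add(b.sub(a).scale(t));
            }

            // Emits the pieces of one triangle into the "above" and "below" triangle lists.
            static void splitTriangle(Vec3 a, Vec3 b, Vec3 c, Vec3 planePoint, Vec3 n,
                                      List<Vec3[]> above, List<Vec3[]> below) {
                Vec3[] v = { a, b, c };
                double[] d = new double[3];
                for (int i = 0; i < 3; i++) d[i] = signedDistance(v[i], planePoint, n);

                if (d[0] >= 0 && d[1] >= 0 && d[2] >= 0) { above.add(v); return; }
                if (d[0] <= 0 && d[1] <= 0 && d[2] <= 0) { below.add(v); return; }

                // Find the lone vertex sitting on its own side of the plane.
                int lone = 0;
                for (int i = 0; i < 3; i++) {
                    int j = (i + 1) % 3, k = (i + 2) % 3;
                    if (d[i] * d[j] < 0 && d[i] * d[k] < 0) lone = i;
                }
                int j = (lone + 1) % 3, k = (lone + 2) % 3;
                Vec3 pj = intersect(v[lone], v[j], d[lone], d[j]);
                Vec3 pk = intersect(v[lone], v[k], d[lone], d[k]);

                List<Vec3[]> loneSide  = d[lone] > 0 ? above : below;
                List<Vec3[]> otherSide = d[lone] > 0 ? below : above;
                loneSide.add(new Vec3[] { v[lone], pj, pk });   // one triangle on the lone side
                otherSide.add(new Vec3[] { pj, v[j], v[k] });   // quad on the other side -> two tris
                otherSide.add(new Vec3[] { pj, v[k], pk });
                // pj and pk also belong to the cut edge loop used later for cap filling.
            }
        }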

    Read the article

  • Instead of the specified Texture, black circles on a green background are getting rendered. Why?

    - by vinzBad
    I'm trying to render a texture via OpenGL, but instead of the texture, black circles on a green background are rendered. (They scale depending on what the rotation of the texture is.) Example: The texture I'm trying to render is the following: This is the code I use to render the texture; it's located in my Sprite class.

        public void Render()
        {
            Matrix4 matrix = Matrix4.CreateTranslation(-OriginX, -OriginY, 0) *
                             Matrix4.CreateRotationZ(Rotation) *
                             Matrix4.CreateTranslation(X, Y, 0);

            Vector2[] corners =
            {
                new Vector2(0, 0),            //top left
                new Vector2(Width, 0),        //top right
                new Vector2(Width, Height),   //bottom right
                new Vector2(0, Height)        //bottom left
            };
            //copy the corners to the uv coordinates
            Vector2[] uv = corners.ToArray<Vector2>();
            //transform the coordinates
            for (int i = 0; i < 4; i++)
                corners[i] = new Vector2(Vector3.Transform(new Vector3(corners[i]), matrix));

            //GL.Color3(TintColor);
            GL.BindTexture(TextureTarget.Texture2D, _ID);
            GL.Begin(BeginMode.Quads);
            {
                for (int i = 0; i < 4; i++)
                {
                    GL.TexCoord2(uv[i]);
                    GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth);
                }
            }
            GL.End();

            if (EnableDebugDraw)
            {
                GL.Color3(Color.Violet);
                GL.PointSize(3);
                GL.Begin(BeginMode.Points);
                {
                    for (int i = 0; i < 4; i++)
                        GL.Vertex2(corners[i]);
                }
                GL.End();

                GL.Color3(Color.Green);
                GL.Begin(BeginMode.Points);
                GL.Vertex2(X, Y);
                GL.End();
            }
        }

    This is how I set up OpenGL.

        public static void SetupGL()
        {
            GL.Enable(EnableCap.AlphaTest);
            GL.AlphaFunc(AlphaFunction.Greater, 0.1f);
            GL.Enable(EnableCap.Texture2D);
            GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest);
        }

    With this function I load the texture:

        public static uint LoadTexture(string path)
        {
            uint id;
            GL.GenTextures(1, out id);
            GL.BindTexture(TextureTarget.Texture2D, id);

            Bitmap bitmap = new Bitmap(path);
            BitmapData data = bitmap.LockBits(
                new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height),
                ImageLockMode.ReadOnly,
                System.Drawing.Imaging.PixelFormat.Format32bppArgb);

            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
                data.Width, data.Height, 0,
                OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);

            bitmap.UnlockBits(data);

            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
            return id;
        }

    And here I call Sprite.Render():

        protected override void OnRenderFrame(FrameEventArgs e)
        {
            GL.ClearColor(Color.MidnightBlue);
            GL.Clear(ClearBufferMask.ColorBufferBit);
            _sprite.Render();
            SwapBuffers();
            base.OnRenderFrame(e);
        }

    Since I took this code from the Textures example that ships with OpenTK, I don't understand why it doesn't work.

    Read the article

  • In what kind of variable type is the player position stored in an MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack talk about it briefly... How can software track a player's position so accurately, in such a huge world, without loading screens between zones, and at a multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing the multiplayer state, I can't imagine what works best.

    Read the article

  • Are VM-based languages becoming viable for Graphics since the move to GPU computing?

    - by skiwi
    Perhaps the title is not the clearest, so let me elaborate: I am talking about VM-based languages, by which I mean languages that run on the JVM (Java) and, for example, C#. I am also talking about 3D graphics, just to be clear. Lately the trend has been that most computing is done on the GPU rather than the CPU, and the long-standing issue with programming games in a VM-based language is that garbage collection may happen at random times. So let's take a look at which part is responsible for what:

    - Showing the graphics: GPU
    - Uploading graphics to the GPU: CPU? Does it need to be done every frame?
    - Calculating physics constraints: GPU
    - Doing the real game logic (determining when to move objects, independent of physics calculations; processing AI): CPU

    Is my list actually correct? And if it is, is Java, for example, becoming more viable? Or is uploading the graphics (vertices) still the most expensive operation? I would like to get more insight into this.

    Read the article

  • 2-component color model

    - by Cyan
    RGB is the natural color model for OpenGL, but a lot of other color models exist: for example, CMY(K) for printers, YUV for JPEG, its little cousins YCbCr and YCoCg, HSL and HSV from the 70's, and so on. All these models tend to share a common property: they are based on 3 components. My question is therefore: does a 2-component color model exist? I'm surprised not to find any. I was expecting that something along the lines of hue + lightness could exist. I guess it cannot be as "complete" as a true 3-component color model, but a fine-enough approximation would be good for my use case. The end objective is to store the 2 components in a single BC5 texture (GL_COMPRESSED_RED_GREEN_RGTC2 in OpenGL). A 3rd component would require a second fetch into a second texture, which hurts performance.

    Read the article

  • Android multitouch problem

    - by Max
    I'm aware that there are a couple of posts on this matter, but I've tried all of them and none of them gets rid of my problem. I'm getting close to the end of my game, so I bought a cable to try it on a real phone, and as I expected, my multitouch doesn't work. I use 2 joysticks, one to move my character and one to change his direction so he can shoot while walking backwards etc. My update method:

        public void update(MotionEvent event) {
            if (event == null && lastEvent == null) {
                return;
            } else if (event == null && lastEvent != null) {
                event = lastEvent;
            } else {
                lastEvent = event;
            }
            int index = event.getActionIndex();
            int pointerId = event.getPointerId(index);

    Statement for the left joystick:

            if (pointerId == 0 && event.getAction() == MotionEvent.ACTION_DOWN
                    && (int) event.getX() > steeringxMesh - 50
                    && (int) event.getX() < steeringxMesh + 50
                    && (int) event.getY() > yMesh - 50
                    && (int) event.getY() < yMesh + 50) {
                dragging = true;
            } else if (event.getAction() == MotionEvent.ACTION_UP) {
                dragging = false;
            }
            if (dragging) {
                //code for moving my character

    Statement for my right joystick:

            if (pointerId == 1 && event.getAction() == MotionEvent.ACTION_DOWN
                    && (int) event.getX() > shootingxMesh - 50
                    && (int) event.getX() < shootingxMesh + 50
                    && (int) event.getY() > yMesh - 50
                    && (int) event.getY() < yMesh + 50) {
                shooting = true;
            } else if (event.getAction() == MotionEvent.ACTION_UP) {
                shooting = false;
            }
            if (shooting) {
                // code for aiming
            }

    This class is my main view's onTouchListener and is called in an update method that runs in my game loop, so it's called every frame. I'm really at a loss here; I've done a couple of tutorials and I've tried all relevant solutions from similar posts. I can post the entire class if necessary, but I think this is all the relevant code. I just hope someone can make some sense of this.
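
    For comparison, one common difference between single-touch and multitouch handling is that second and later fingers do not arrive as ACTION_DOWN/ACTION_UP at all; they arrive as ACTION_POINTER_DOWN/ACTION_POINTER_UP, and each stick has to remember which pointer id grabbed it. The sketch below illustrates that pattern only; it is not a drop-in replacement for the code above, and insideLeftStick/insideRightStick/updateLeftStick/updateRightStick are placeholders for the existing mesh hit-tests and joystick code, not real APIs.

        // Sketch of pointer-id-based handling for two on-screen sticks.
        private int leftStickPointerId = -1;
        private int rightStickPointerId = -1;

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getActionMasked()) {
                case MotionEvent.ACTION_DOWN:
                case MotionEvent.ACTION_POINTER_DOWN: {   // second and later fingers land here
                    int index = event.getActionIndex();
                    float x = event.getX(index), y = event.getY(index);
                    int id = event.getPointerId(index);
                    if (insideLeftStick(x, y))       leftStickPointerId = id;
                    else if (insideRightStick(x, y)) rightStickPointerId = id;
                    break;
                }
                case MotionEvent.ACTION_MOVE:             // one MOVE event reports all pointers
                    for (int i = 0; i < event.getPointerCount(); i++) {
                        int id = event.getPointerId(i);
                        if (id == leftStickPointerId)  updateLeftStick(event.getX(i), event.getY(i));
                        if (id == rightStickPointerId) updateRightStick(event.getX(i), event.getY(i));
                    }
                    break;
                case MotionEvent.ACTION_POINTER_UP:
                case MotionEvent.ACTION_UP:
                case MotionEvent.ACTION_CANCEL: {
                    int id = event.getPointerId(event.getActionIndex());
                    if (id == leftStickPointerId)  leftStickPointerId = -1;   // e.g. dragging = false
                    if (id == rightStickPointerId) rightStickPointerId = -1;  // e.g. shooting = false
                    break;
                }
            }
            return true;
        }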

    Read the article

  • XNA Content.Load dependency

    - by Richard
    Quick question: the project I'm building for test purposes is working fine, but I have dependencies flying around everywhere due to the XNA framework. In Update I have gameTime passed everywhere... this is okay. In Draw I have gameTime and spriteBatch passed everywhere... this is okay. My issue is with the Content.Load textures/sounds/fonts. I have them as public variables, i.e. Texture1 = Content.Load(of Texture2D)("Texture1"), and I'm passing a 'Game1' pointer into the constructor of every new class being instantiated to gain access to these variables. Am I missing an OOP trick that would prevent me having to pass a reference to Game1 to every new class?
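
    One pattern that removes the Game1 reference is a small asset registry that is filled once during loading and queried by name afterwards, so classes depend on the registry (or on the specific assets handed to their constructors) rather than on the whole game object. The sketch below is plain Java rather than C#, purely to keep the examples on this page in one language; in XNA the stored values would be the Texture2D/SpriteFont instances returned by Content.Load, and the names are illustrative.

        import java.util.HashMap;
        import java.util.Map;

        // Tiny asset registry sketch: fill it once at start-up, look assets up by name later.
        public final class Assets {
            private static final Map<String, Object> loaded = new HashMap<>();

            public static void put(String name, Object asset) { loaded.put(name, asset); }

            @SuppressWarnings("unchecked")
            public static <T> T get(String name)              { return (T) loaded.get(name); }
        }

    In LoadContent you would register each asset once (e.g. the XNA equivalent of Assets.put("Texture1", Content.Load<Texture2D>("Texture1"))), and any class that needs it asks the registry instead of Game1. The constructor-injection variant (passing only the assets a class actually needs) keeps things easier to test, at the cost of slightly longer constructors.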

    Read the article

  • Identifying connected lines drawn free-hand by a user

    - by rawrgoesthelion
    I have a series of 'images' described by a mixture of connected lines and curves. Users will draw on the screen, free-hand, and my goal is to break their drawing down into a series of lines and curves that can be matched against the 'images' in my set. For the sake of simplicity, let's assume this is happening on a touch screen and that the lines are connected. Each time the user's finger moves, the dx and dy are recorded. The drawing is considered complete, and is analyzed, when the user's finger leaves the screen. I'm having trouble figuring out a good way to break the user's drawing down into lines. Is there any well-known approach to this problem, a C++ library that solves it, or any good articles/technical papers on how to achieve this?
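
    One well-known starting point (offered as a suggestion, not as the only answer) is polyline simplification such as Ramer-Douglas-Peucker: accumulate the touch points, simplify with a small pixel tolerance, and the surviving points are good candidates for the corners between the user's straight-ish strokes; segments whose interior points still deviate a lot can then be treated as curves. The sketch below is in Java rather than C++ and uses double[2] points for brevity.

        import java.util.ArrayList;
        import java.util.List;

        // Ramer-Douglas-Peucker: keep only points that deviate from the chord by more
        // than 'epsilon'. The survivors approximate the corner points of the drawing.
        public final class Simplify {
            public static List<double[]> rdp(List<double[]> pts, double epsilon) {
                if (pts.size() < 3) return new ArrayList<>(pts);
                double[] a = pts.get(0), b = pts.get(pts.size() - 1);
                int splitIndex = -1;
                double maxDist = 0;
                for (int i = 1; i < pts.size() - 1; i++) {
                    double d = distanceToSegment(pts.get(i), a, b);
                    if (d > maxDist) { maxDist = d; splitIndex = i; }
                }
                if (maxDist <= epsilon) {           // everything hugs the chord: keep endpoints only
                    List<double[]> out = new ArrayList<>();
                    out.add(a);
                    out.add(b);
                    return out;
                }
                List<double[]> left  = rdp(pts.subList(0, splitIndex + 1), epsilon);
                List<double[]> right = rdp(pts.subList(splitIndex, pts.size()), epsilon);
                left.remove(left.size() - 1);       // drop the duplicated split point
                left.addAll(right);
                return left;
            }

            static double distanceToSegment(double[] p, double[] a, double[] b) {
                double dx = b[0] - a[0], dy = b[1] - a[1];
                double len2 = dx * dx + dy * dy;
                if (len2 == 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
                double t = Math.max(0, Math.min(1, ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / len2));
                return Math.hypot(p[0] - (a[0] + t * dx), p[1] - (a[1] + t * dy));
            }
        }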

    Read the article

  • After one has made many grid-based puzzles, how does one make them into a PDF ready for printing?

    - by alan ross
    Once one has generated many grid-based puzzles like sudoku, kakuro, or even plain crosswords, and now has to print them in a book, how does one make a PDF (book file) from them automatically? To explain the question better: the puzzle is ready in a computer format like ..35.6.89 for each of the nine rows, the dot being an empty cell. How does one convert these into a picture on a PDF page, complete with the grid box, automatically, without doing them individually, and then print a book from the PDF file? As can be seen, other things are also printed on the page; all of this is done automatically.

    Read the article

  • How do I create tileable solid noise for map generation?

    - by nickbadal
    Hey guys, I'm trying to figure out how to generate tileable fractals in code (for game maps, but that's irrelevant). I've been trying to modify the Solid Noise plug-in shipped with GIMP (with my extremely limited understanding of how the code works), but I can't get mine to work correctly. My modified code so far (Java); GIMP's solid noise module that I'm basing my code on (C). Here is what I'm trying to achieve, but this is what I'm getting. So if anybody can see what I've done wrong, or has a suggestion for how I could go about doing it differently, I'd greatly appreciate it. Thanks in advance. And if I'm asking way too much, or am just a huge failure at life, I apologize.
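
    One alternative to patching the GIMP plug-in is to generate the noise from a lattice whose indices wrap modulo the noise period: the left/right and top/bottom edges then sample identical lattice values, so the result tiles seamlessly. The sketch below is simple fractal value noise in Java (not derived from the GIMP code); the hashing scheme and parameter names are illustrative.

        import java.util.Random;

        // Tileable fractal value noise: lattice coordinates wrap modulo the period,
        // so opposite edges of the generated image match when tiled.
        public final class TileableNoise {
            static double lattice(int x, int y, int period, long seed) {
                x = Math.floorMod(x, period);
                y = Math.floorMod(y, period);
                // Hash the wrapped lattice coordinates into a repeatable value in [0, 1).
                return new Random(seed ^ (x * 73856093L) ^ (y * 19349663L)).nextDouble();
            }

            static double smooth(double t) { return t * t * (3 - 2 * t); }  // smoothstep

            static double noise(double x, double y, int period, long seed) {
                int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
                double fx = smooth(x - x0), fy = smooth(y - y0);
                double v00 = lattice(x0, y0, period, seed),     v10 = lattice(x0 + 1, y0, period, seed);
                double v01 = lattice(x0, y0 + 1, period, seed), v11 = lattice(x0 + 1, y0 + 1, period, seed);
                double top = v00 + (v10 - v00) * fx, bottom = v01 + (v11 - v01) * fx;
                return top + (bottom - top) * fy;
            }

            // width x height values in [0,1]; 'cells' lattice cells across, 'octaves' layers of detail.
            public static double[][] generate(int width, int height, int cells, int octaves, long seed) {
                double[][] img = new double[height][width];
                for (int y = 0; y < height; y++)
                    for (int x = 0; x < width; x++) {
                        double sum = 0, amp = 1, norm = 0;
                        for (int o = 0; o < octaves; o++) {
                            int period = cells << o;
                            sum += amp * noise((double) x / width * period,
                                               (double) y / height * period, period, seed + o);
                            norm += amp;
                            amp *= 0.5;
                        }
                        img[y][x] = sum / norm;
                    }
                return img;
            }
        }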

    Read the article

  • How to run around another football player

    - by Lumis
    I have finished a simple 2D one-on-one indoor football Android game. The thing that seemed so simple to me, a human being, turned out to be difficult for a computer: how to go around the opponent... At the moment the game logic of the computer player is that if it bumps into the human player, it steps back a few points on the pixel grid and then tries again to go towards the ball. The problem is that if the human player is in between, the computer player will oscillate in one place, which does not look very nice, and the human opponent can use this weakness to control the game. You can see this in the photo: at the moment the computer will go along the red line indefinitely. I tried a few ideas, but it proved not easy when both the human player and the ball are constantly moving, so at each step the computer would change direction and "oscillate" again. Once the computer player reaches the ball, it kicks it with a random amount of strength and direction towards the human's goal. The question here is how to formulate the logic of going around the ever-moving human opponent, and how to translate it into the coordinate system and frame-by-frame animation... any suggestions welcome.
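
    One possible formulation (a sketch of a common steering idea, not the only answer) is to recompute a movement direction every frame: head for the ball, and if the opponent sits inside a narrow corridor along that heading and closer than the ball, blend in a sideways push away from the opponent. Because it is re-evaluated each frame it adapts as both the opponent and the ball keep moving. Java sketch with 2D points as double[2]; corridorRadius and avoidStrength are tuning values, all names illustrative.

        // Returns a unit direction for the computer player to move this frame.
        static double[] steer(double[] me, double[] ball, double[] opponent,
                              double corridorRadius, double avoidStrength) {
            double[] toBall = norm(sub(ball, me));
            double[] toOpp  = sub(opponent, me);
            double ahead = dot(toOpp, toBall);                 // how far ahead of us the opponent is
            if (ahead > 0 && ahead < dist(me, ball)) {
                // Perpendicular offset of the opponent from the line we want to walk.
                double side = toOpp[0] * toBall[1] - toOpp[1] * toBall[0];
                if (Math.abs(side) < corridorRadius) {
                    // Push toward whichever side the opponent is NOT on.
                    double[] perp = { -toBall[1], toBall[0] };
                    double sign = side > 0 ? 1 : -1;
                    toBall = norm(new double[] { toBall[0] + sign * avoidStrength * perp[0],
                                                 toBall[1] + sign * avoidStrength * perp[1] });
                }
            }
            return toBall;
        }

        static double[] sub(double[] a, double[] b) { return new double[] { a[0] - b[0], a[1] - b[1] }; }
        static double   dot(double[] a, double[] b) { return a[0] * b[0] + a[1] * b[1]; }
        static double   dist(double[] a, double[] b) { return Math.hypot(a[0] - b[0], a[1] - b[1]); }
        static double[] norm(double[] a) { double l = Math.hypot(a[0], a[1]); return new double[] { a[0] / l, a[1] / l }; }

    Per frame, the computer player would then move a fixed step along the returned direction, which replaces the "step back and retry" rule and removes the oscillation point.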

    Read the article

  • Shader compile log depending on hardware

    - by dreta
    I'm done with the core of my graphics engine and I'm testing it on every platform I can get my hands on. What I noticed is that different drivers return different shader and program compile log content. For example, on my friend's laptop, if you successfully compile a shader then the log is simply empty. However, on my PC I get some useful information along with it. So if I compile a vertex shader, I'll get: "Vertex shader was successfully compiled to run on hardware." Which isn't that impressive, but here is what happens when I compile a program. On my friend's computer the log is empty, since the program compiles. However, on my own computer I get: "Vertex shader(s) linked, fragment shader(s) linked." Which is awesome, because I'm attaching a geometry shader with 0 (I have a geometry shader file with trash in it, so it doesn't compile and the pointer is set to 0), and the compiler just tells me which shaders linked. Now it got me thinking: if I were going to buy a graphics card, is there a way for me to find out whether I'll get this "extended" compile information? Maybe it's vendor-specific? I don't really expect an answer, to be honest; this seems a bit obscure, but maybe somebody has experience with this and could post it.

    Read the article

  • Could I be going crazy with Event Handlers? Am I going the "wrong way" with my design?

    - by sensae
    I guess I've decided that I really like event handlers. I may be suffering a bit from analysis paralysis, but I'm concerned about making my design unwieldy or running into some other unforeseen consequence of my design decisions. My game engine currently does basic sprite-based rendering with a panning overhead camera. My design looks a bit like this:

    - SceneHandler: contains a list of classes that implement the SceneListener interface (currently only Sprites). Calls render() once per tick, and sends onCameraUpdate() messages to SceneListeners.
    - InputHandler: polls the input once per tick, and sends a simple "onKeyPressed" message to InputListeners. I have a Camera InputListener which holds a SceneHandler instance and triggers updateCamera() events based on what the input is.
    - AgentHandler: calls default actions on any Agents (AI) once per tick, and will check a stack for any new events that are registered, dispatching them to specific Agents as needed.

    So I have basic sprite objects that can move around a scene and use rudimentary steering behaviors to travel. I've gotten onto collision detection, and this is where I'm not sure the direction my design is going is good. Is it a good practice to have many small event handlers? I imagine that, going the way I am, I'd have to implement some kind of CollisionHandler. Would I be better off with a more consolidated EntityHandler which handles AI, collision updates, and other entity interactions in one class? Or will I be fine just implementing many different event-handling subsystems which pass messages to each other based on what kind of event it is? Should I write an EntityHandler which is simply responsible for coordinating all these sub event handlers? I realize in some cases, such as my InputHandler and SceneHandler, those are very specific types of events. A large portion of my game code won't care about input, and a large portion won't care about updates that happen purely in the rendering of the scene. Thus I feel my isolation of those systems is justified. However, I'm asking this question specifically about game-logic-type events.

    Read the article

  • XNA: draw a sprite in 3d, is that possible?

    - by Heisenbug
    Until now I have always used sprites to draw in 2D: spriteBatch.Draw(myTexture, rectangle, color); (I suppose the texture is bound internally to 2 triangles and then scaled.) Now I'm porting my game to 3D and I have to draw several planes (walls, floor, roof, ...). Do I need to manually bind a texture to geometry (for example using VertexPositionColorTexture with a VertexBuffer and an IndexBuffer), or is there a simpler way to do it? I'm looking for something like spriteBatch.Draw with the rectangle clip specified in 3D space: spriteBatch.Draw(myTexture, rectangleIn3D, color);

    Read the article

  • How can I make this arcade-highscore game more fun/interesting?

    - by j-a
    I'm having difficulties getting the fun factor into this iPhone game, and I am looking for some ideas or advice. I was asked to generalize the question a bit. What are some techniques for arcade high-score games that can be applied to this game in order to: make each second of the game fun and challenging, from the first second to the end of the game, regardless of skill level; and make the player want to try again and again to beat the high score. Briefly about the game: you aim using your finger, pull the bowstring, and release by lifting your finger. That part feels quite nice, how the bow interacts with the finger. The game idea: hearts fall down and you get 1 point for each heart you shoot. You start with a few arrows, and every now and then a bag of arrows comes down which, if you hit it, gives you more arrows. Once you're out of arrows the game is over. So it is all about beating your previous high score or your friends' high scores. Unfortunately I don't find it that fun. Thankful for any ideas/suggestions/thoughts on how to make it more fun/interesting.

    Read the article

  • Is there any guarantee about the graphical output of different GPUs in DirectX?

    - by cloudraven
    Let's say that I run the same game on two different computers with different GPUs, and, for example, both are certified for DirectX 10. Is there a guarantee that the output for a given program (game) is going to be the same regardless of the manufacturer or model of the GPU? I am assuming the configurable settings are exactly the same in both cases. I heard that this is not the case for DirectX 9 and older, but that it is true for DirectX 10. If someone could provide a source confirming or denying it, that would be great. Also, what exactly is the guarantee offered? Will the output be exactly the same, or just perceptually the same to the human eye?

    Read the article

  • How do I add Different Screens to my C#/XNA Game?

    - by Ramses Brown
    I'm working on a Pong clone in XNA. Gameplay-wise, I have it where I want it to be. I want to add a title screen and some other screens to it, like a menu, as well as a screen for the winning/losing results. I've tried the Game State Management example on the App Hub site, but it's very complicated and I haven't been able to make sense of it. Is there a simpler way? I'm hoping for a solution that can be used in other projects too. Plus, I'd like to know how to actually create menu items (basically, how do I display the different options, highlight them, etc.).
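
    A much lighter alternative to the full Game State Management sample is an enum plus a switch in Update and Draw. The sketch below is Java-style rather than C#, only to keep the examples on this page in one language; the structure maps one-to-one onto XNA's Update/Draw, and the Input/Renderer interfaces here are placeholders for Keyboard/GamePad polling and SpriteBatch.DrawString.

        // Minimal screen switch: one enum, one switch in update, one in draw.
        enum GameState { TITLE, MENU, PLAYING, RESULTS }

        interface Input    { boolean anyKeyPressed(); boolean pressed(String key); }
        interface Renderer { void drawText(String text, int x, int y, boolean highlighted); }

        class ScreenDemo {
            GameState state = GameState.TITLE;
            int selectedItem = 0;
            final String[] menuItems = { "Start Game", "Options", "Exit" };

            void update(Input input) {
                switch (state) {
                    case TITLE:
                        if (input.anyKeyPressed()) state = GameState.MENU;
                        break;
                    case MENU:
                        if (input.pressed("Down")) selectedItem = (selectedItem + 1) % menuItems.length;
                        if (input.pressed("Up"))   selectedItem = (selectedItem + menuItems.length - 1) % menuItems.length;
                        if (input.pressed("Enter") && selectedItem == 0) state = GameState.PLAYING;
                        break;
                    case PLAYING:
                        // existing Pong update goes here; on win/lose set state = GameState.RESULTS
                        break;
                    case RESULTS:
                        if (input.anyKeyPressed()) state = GameState.MENU;
                        break;
                }
            }

            void draw(Renderer r) {
                switch (state) {
                    case MENU:
                        for (int i = 0; i < menuItems.length; i++)
                            // Highlight the selected entry by drawing it differently (colour, scale, ...).
                            r.drawText(menuItems[i], 100, 100 + i * 40, i == selectedItem);
                        break;
                    default:
                        // TITLE, PLAYING and RESULTS draw their own content here.
                        break;
                }
            }
        }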

    Read the article

  • Rendering projectiles

    - by Chris
    I'm working on a simple game that has the user control a spaceship that shoots small circular projectiles. However, I'm not sure how to render these. Right now I know how to create an LPDIRECT3DSURFACE9 for a sprite and render it onto an LPDIRECT3DDEVICE9, but that's only for a single sprite. I assume I don't need to constantly create new surfaces and devices. How should projectile generation/rendering be handled? Thanks in advance.
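
    The usual pattern is exactly that assumption: one shared bullet surface/texture, and a plain list of lightweight projectile records (position and velocity) that is updated every frame and drawn with the same sprite once per projectile. A small sketch of that bookkeeping follows, in Java rather than C++; the SpriteDrawer callback stands in for whatever single-sprite draw call the renderer exposes (e.g. the existing D3D9 surface blit), and the names are illustrative.

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        // Many cheap projectile records, one shared bullet image.
        class Projectile {
            float x, y, vx, vy;
            Projectile(float x, float y, float vx, float vy) { this.x = x; this.y = y; this.vx = vx; this.vy = vy; }
        }

        class ProjectileSystem {
            interface SpriteDrawer { void draw(float x, float y); }

            private final List<Projectile> bullets = new ArrayList<>();

            void spawn(float x, float y, float vx, float vy) { bullets.add(new Projectile(x, y, vx, vy)); }

            void update(float dt, float screenW, float screenH) {
                for (Iterator<Projectile> it = bullets.iterator(); it.hasNext();) {
                    Projectile p = it.next();
                    p.x += p.vx * dt;
                    p.y += p.vy * dt;
                    if (p.x < 0 || p.y < 0 || p.x > screenW || p.y > screenH) it.remove(); // cull off-screen
                }
            }

            void draw(SpriteDrawer drawSprite) {
                for (Projectile p : bullets) drawSprite.draw(p.x, p.y);  // same sprite, many positions
            }
        }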

    Read the article

  • Using texture() in combination with JBox2D

    - by Valentino Ru
    I'm having some trouble using the texture() method inside a beginShape()/endShape() block. In the display() method of my class TowerElement (a bar which is DYNAMIC), I draw the object as follows:

        void display(){
            Vec2 pos = level.getLevel().getBodyPixelCoord(body);
            float a = body.getAngle(); // needed for rotation

            pushMatrix();
            translate(pos.x, pos.y);
            rotate(-a);
            fill(temp); // temp is a color defined in the constructor
            stroke(0);

            beginShape();
            vertex(-w/2,-h/2);
            vertex(w/2,-h/2);
            vertex(w/2,h-h/2);
            vertex(-w/2,h-h/2);
            endShape(CLOSE);

            popMatrix();
        }

    Now, according to the API, I can use the texture() method inside the shape definition. But when I remove the fill(temp) and put in texture(img) (img is a PImage defined in the constructor), the stroke gets drawn, the bar isn't filled, and I get the warning

        texture() is not available with this renderer

    What can I do in order to use textures anyway? I don't even understand the error message, since I don't know much about the different renderers.
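
    That warning appears to come from the default JAVA2D renderer, which does not support texture(); the P2D and P3D renderers do, and each vertex() call also needs u/v coordinates. Below is a minimal standalone Processing sketch (separate from the JBox2D code above) showing the combination; "bar.png" is a placeholder image name and the sizes are illustrative.

        // Standalone Processing sketch: texture() needs P2D (or P3D), and vertex()
        // must be given u/v coordinates for the texture to be applied.
        PImage img;

        void setup() {
          size(400, 300, P2D);            // the default JAVA2D renderer rejects texture()
          img = loadImage("bar.png");
          textureMode(NORMAL);            // u/v in 0..1 instead of pixels
        }

        void draw() {
          background(255);
          float w = 120, h = 30;
          pushMatrix();
          translate(width / 2, height / 2);
          rotate(0.3);
          beginShape();
          texture(img);
          vertex(-w / 2, -h / 2, 0, 0);
          vertex( w / 2, -h / 2, 1, 0);
          vertex( w / 2,  h / 2, 1, 1);
          vertex(-w / 2,  h / 2, 0, 1);
          endShape(CLOSE);
          popMatrix();
        }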

    Read the article

  • Increase moving speed of body

    - by Siddharth
    How can I make the ball move faster on the screen using Box2D in libGDX?

        public class Box2DDemo implements ApplicationListener {

            private SpriteBatch batch;
            private TextureRegion texture;
            private World world;
            private Body groundDownBody, groundUpBody, groundLeftBody, groundRightBody, ballBody;
            private BodyDef groundBodyDef1, groundBodyDef2, groundBodyDef3, groundBodyDef4, ballBodyDef;
            private PolygonShape groundDownPoly, groundUpPoly, groundLeftPoly, groundRightPoly;
            private CircleShape ballPoly;
            private Sprite sprite;
            private FixtureDef fixtureDef;
            private Vector2 ballPosition;
            private Box2DDebugRenderer renderer;
            Vector2 vector2;

            @Override
            public void create() {
                texture = new TextureRegion(new Texture(Gdx.files.internal("img/red_ring.png")));
                sprite = new Sprite(texture);
                sprite.setOrigin(sprite.getWidth() / 2, sprite.getHeight() / 2);
                batch = new SpriteBatch();

                world = new World(new Vector2(0.0f, -10.0f), false);

                groundBodyDef1 = new BodyDef();
                groundBodyDef1.type = BodyType.StaticBody;
                groundBodyDef1.position.x = 0.0f;
                groundBodyDef1.position.y = 0.0f;
                groundDownBody = world.createBody(groundBodyDef1);

                groundBodyDef2 = new BodyDef();
                groundBodyDef2.type = BodyType.StaticBody;
                groundBodyDef2.position.x = 0f;
                groundBodyDef2.position.y = Gdx.graphics.getHeight();
                groundUpBody = world.createBody(groundBodyDef2);

                groundBodyDef3 = new BodyDef();
                groundBodyDef3.type = BodyType.StaticBody;
                groundBodyDef3.position.x = 0f;
                groundBodyDef3.position.y = 0f;
                groundLeftBody = world.createBody(groundBodyDef3);

                groundBodyDef4 = new BodyDef();
                groundBodyDef4.type = BodyType.StaticBody;
                groundBodyDef4.position.x = Gdx.graphics.getWidth();
                groundBodyDef4.position.y = 0f;
                groundRightBody = world.createBody(groundBodyDef4);

                groundDownPoly = new PolygonShape();
                groundDownPoly.setAsBox(480.0f, 10f);
                fixtureDef = new FixtureDef();
                fixtureDef.density = 0f;
                fixtureDef.restitution = 1f;
                fixtureDef.friction = 0f;
                fixtureDef.shape = groundDownPoly;
                fixtureDef.filter.groupIndex = 0;
                groundDownBody.createFixture(fixtureDef);

                groundUpPoly = new PolygonShape();
                groundUpPoly.setAsBox(480.0f, 10f);
                fixtureDef = new FixtureDef();
                fixtureDef.friction = 0f;
                fixtureDef.restitution = 0f;
                fixtureDef.density = 0f;
                fixtureDef.shape = groundUpPoly;
                fixtureDef.filter.groupIndex = 0;
                groundUpBody.createFixture(fixtureDef);

                groundLeftPoly = new PolygonShape();
                groundLeftPoly.setAsBox(10f, 320f);
                fixtureDef = new FixtureDef();
                fixtureDef.friction = 0f;
                fixtureDef.restitution = 0f;
                fixtureDef.density = 0f;
                fixtureDef.shape = groundLeftPoly;
                fixtureDef.filter.groupIndex = 0;
                groundLeftBody.createFixture(fixtureDef);

                groundRightPoly = new PolygonShape();
                groundRightPoly.setAsBox(10f, 320f);
                fixtureDef = new FixtureDef();
                fixtureDef.friction = 0f;
                fixtureDef.restitution = 0f;
                fixtureDef.density = 0f;
                fixtureDef.shape = groundRightPoly;
                fixtureDef.filter.groupIndex = 0;
                groundRightBody.createFixture(fixtureDef);

                ballPoly = new CircleShape();
                ballPoly.setRadius(16f);
                fixtureDef = new FixtureDef();
                fixtureDef.shape = ballPoly;
                fixtureDef.density = 1f;
                fixtureDef.friction = 1f;
                fixtureDef.restitution = 1f;
                ballBodyDef = new BodyDef();
                ballBodyDef.type = BodyType.DynamicBody;
                ballBodyDef.position.x = (int) 200;
                ballBodyDef.position.y = (int) 200;
                ballBody = world.createBody(ballBodyDef);
                // ballBody.setLinearVelocity(200f, 200f);
                // ballBody.applyLinearImpulse(new Vector2(250f, 250f), ballBody.getLocalCenter());
                ballBody.createFixture(fixtureDef);

                renderer = new Box2DDebugRenderer(true, false, false);
            }

            @Override
            public void dispose() {
                ballPoly.dispose();
                groundLeftPoly.dispose();
                groundUpPoly.dispose();
                groundDownPoly.dispose();
                groundRightPoly.dispose();
                world.destroyBody(ballBody);
                world.dispose();
            }

            @Override
            public void pause() {
            }

            @Override
            public void render() {
                world.step(1f/30f, 3, 3);
                Gdx.gl.glClearColor(1f, 1f, 1f, 1f);
                Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
                batch.begin();
                vector2 = ballBody.getLinearVelocity();
                System.out.println("X=" + vector2.x + " Y=" + vector2.y);
                ballPosition = ballBody.getPosition();
                renderer.render(world, batch.getProjectionMatrix());
                // int preX = (int) (vector2.x / Math.abs(vector2.x));
                // int preY = (int) (vector2.y / Math.abs(vector2.y));
                //
                // if (Math.abs(vector2.x) == 0.0f)
                //     ballBody1.setLinearVelocity(1.4142137f, vector2.y);
                // else if (Math.abs(vector2.x) < 1.4142137f)
                //     ballBody1.setLinearVelocity(preX * 5, vector2.y);
                //
                // if (Math.abs(vector2.y) == 0.0f)
                //     ballBody1.setLinearVelocity(vector2.x, 1.4142137f);
                // else if (Math.abs(vector2.y) < 1.4142137f)
                //     ballBody1.setLinearVelocity(vector2.x, preY * 5);
                batch.draw(sprite, (ballPosition.x - (texture.getRegionWidth() / 2)),
                        (ballPosition.y - (texture.getRegionHeight() / 2)));
                batch.end();
            }

            @Override
            public void resize(int arg0, int arg1) {
            }

            @Override
            public void resume() {
            }
        }

    I implemented the code above, but I cannot get the ball to move any faster.
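
    One frequent cause of "slow" Box2D bodies (offered as a possibility, not a diagnosis) is working in pixel units: Box2D caps how far a body may travel per step (about 2 world units), so a world that is hundreds of "metres" across makes every velocity look tiny on screen. A hedged sketch of the usual fix is below; it reuses the variable names from the code above, PIXELS_PER_METER and the velocity values are illustrative, and in older libGDX versions applyLinearImpulse omits the final wake argument.

        // Sketch (not a drop-in fix): simulate in metres, scale only when drawing.
        static final float PIXELS_PER_METER = 32f;

        // Body set-up in metres:
        //   ballBodyDef.position.set(200f / PIXELS_PER_METER, 200f / PIXELS_PER_METER);
        //   ballPoly.setRadius(16f / PIXELS_PER_METER);

        // Give the ball its speed once it exists (either call works; the impulse respects mass):
        ballBody.setLinearVelocity(15f, 15f);                                // metres per second
        // ballBody.applyLinearImpulse(new Vector2(5f, 5f), ballBody.getWorldCenter(), true);

        // When drawing, convert the position back to pixels:
        Vector2 p = ballBody.getPosition();
        batch.draw(sprite, p.x * PIXELS_PER_METER - texture.getRegionWidth() / 2f,
                           p.y * PIXELS_PER_METER - texture.getRegionHeight() / 2f);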

    Read the article

  • Problem when texturing triangles using glVertexPointer()

    - by tigrou
    I'm having a problem displaying a single quad; here is how I do it:

        float tex_coord[] = { 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0 }; //how many coords should i give ?
        int indexes[] = { 3, 2, 0, 0, 1, 3 };
        float vertexes[] = { -37, 0, 30, -38, 0, 29, -41, 0, 32, -42, 0, 31 };

        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertexes);
        glTexCoordPointer(2, GL_FLOAT, 0, tex_coord);
        glDrawElements(GL_TRIANGLES, 2, GL_UNSIGNED_INT, indices);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    The result: (with 2 triangles) (with 4 triangles)

    Read the article

  • Maya and Game Engines (i.e. Environment Testing)

    - by DiscreteGenius
    What does it mean when I'm designing an environment and I want to test it in the game engine, to see what it's like to "run" (or fly) around my environment? I heard an instructor say exactly that in a Maya training video, and I'd like to know more about how game engines and Maya are related to each other. He stated this would be done to see how things look in terms of size (e.g., I assume he meant: how big is the cathedral, bridge, wall, building, etc.). I've tried to research this, but the information I found was too complicated and detailed. I just want a simple answer to my query. Thanks to everyone willing to help and not criticize my question.

    Read the article
