Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • Nice function for "rolling score up"?

    - by bobobobo
    I'm adding to the player's score, and I'm using a per-frame formula like:

        int score, displayedScore; // score is the ACTUAL score the player has,
                                   // displayedScore is what is shown this frame
                                   // (the creeping/"rolling" number)
        float disparity = score - displayedScore;
        int d = disparity * .1f;           // add 1/10 of the difference,
        if( !d ) d = signum( disparity );  // last 10 go by 1's
        displayedScore += d;               // creep the displayed value toward score

    Where

        inline int signum( float val )
        {
          if( val > 0 ) return 1;
          else if( val < 0 ) return -1;
          else return 0;
        }

    So it kind of works: it makes big changes rapidly, then creeps in the last few one at a time. But I'm looking for better (or possibly well-known?) score-creeping functions. Anyone?
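
    A well-known alternative is exponential smoothing toward the target, made frame-rate independent by folding in the elapsed time. A minimal C++ sketch of the idea; the halfLife constant, the dt parameter, and keeping the displayed value as a float are illustrative assumptions, not from the post:

        #include <cmath>

        // Move the displayed score a fixed fraction per SECOND toward the real
        // score, so the creep speed doesn't depend on the frame rate.
        // halfLife: seconds for the remaining gap to shrink by half (assumed).
        int rollScore(int score, float& displayed, float dt, float halfLife = 0.25f)
        {
            float t = 1.0f - std::exp2(-dt / halfLife); // fraction of the gap to close
            displayed += (score - displayed) * t;
            if (std::fabs(score - displayed) < 1.0f)    // snap when close enough,
                displayed = (float)score;               // so it actually arrives
            return (int)displayed;                      // value to draw this frame
        }

    Keeping displayed as a float lets the sub-integer remainder accumulate between frames; truncating to int every frame would stall near the target.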

    Read the article

  • Changes to myApp.js are reverted when the project is built - Cocos2d-x

    - by Mansoor
    I am trying to make some changes to the myApp.js file of a Cocos2d-x project for Android in Eclipse, but I am not able to. I am actually trying to change the default background image of my app, but when I run the project all the changes revert to their previous values. For example, this is the default line where we set our background image:

        this.sprite = cc.Sprite.create("res/HelloWorld.png");

    I am changing it to the following line:

        this.sprite = cc.Sprite.create("res/CloseNormal.png");

    But when I run my project, CloseNormal.png goes back to HelloWorld.png. I am using: OS: Win7; Cocos2d version: cocos2d-x 2.2.2. Why is this happening? Can anybody help me?

    Read the article

  • Storing a Hex Grid

    - by Pedro Caetano
    I've been creating a small hex grid framework for Unity3D and have come to the following dilemma. This is my coordinate system (taken from a blog post; I can't include the link because I'm a new user). It all works pretty nicely except for the fact that I have no idea how to store it. I originally intended to store this in a 2D array and use images to generate my maps. One problem was that it had negative values (this was easily fixed by offsetting the coordinates a bit). However, due to this coordinate system, such an image or bitmap would have to be diamond shaped, and since these structures are square shaped, this would cause a lot of headaches even if I hack something together. Is there anything I'm missing that could fix this? I recall seeing a forum post regarding this in the Unity forums, but I can no longer find the link. Is writing a set of coordinate translators the best solution here? If you think it would be helpful, I can post code and images of my problem.
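
    A common answer is indeed a pair of coordinate translators between axial hex coordinates and rectangular array indices, so the diamond-shaped axial space packs into a square array. A minimal C++ sketch under assumed names (HexMap, the int tile type, and the "odd-r" offset scheme are illustrative, and it assumes non-negative coordinates):

        #include <vector>

        struct HexMap {
            int width, height;
            std::vector<int> tiles;                 // one tile id per cell

            HexMap(int w, int h) : width(w), height(h), tiles(w * h, 0) {}

            // axial (q, r) -> rectangular (col, row): undo the diagonal skew
            // by shifting every other row half a cell ("odd-r" offset).
            int& at(int q, int r) {
                int col = q + (r - (r & 1)) / 2;
                int row = r;
                return tiles[row * width + col];
            }
        };

    The same two lines, inverted, convert square bitmap pixels back to axial coordinates when generating maps from images.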

    Read the article

  • TGA loader: reverse height

    - by aVoX
    I wrote a TGA image loader in Java which is working perfectly for files created with GIMP as long as they are saved with the option "origin" set to "Top Left" (Note: Actually TGA files are meant to be stored upside down - "Bottom Left" in GIMP). My problem is that I want my image loader to be capable of reading all different kinds of TGAs, so my question is, how do I flip the image upside down? Note that I store all image data inside a one-dimensional byte array, because OpenGL (glTexImage2D to be specific) requires it that way. Thanks in advance.
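
    Flipping is a straight row swap: the pixel data is rows laid end to end, so you exchange row y with row (height - 1 - y). The post's loader is Java, but the operation is the same in any language; a minimal C++ sketch, with bytesPerPixel and the names as assumptions:

        #include <algorithm>
        #include <vector>

        // Flip an image stored row-by-row in a flat byte buffer, in place.
        void flipVertically(std::vector<unsigned char>& pixels,
                            int width, int height, int bytesPerPixel)
        {
            const int rowSize = width * bytesPerPixel;
            for (int y = 0; y < height / 2; ++y) {
                unsigned char* top    = pixels.data() + y * rowSize;
                unsigned char* bottom = pixels.data() + (height - 1 - y) * rowSize;
                std::swap_ranges(top, top + rowSize, bottom);
            }
        }

    Whether to flip at all can be read from byte 17 of the TGA header: bit 5 is the screen-origin flag, clear for the usual bottom-left origin and set for top-left.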

    Read the article

  • Collision detection of player larger than clipping tile

    - by user1306322
    I want to know how to check for collisions efficiently in the case where the player's box is larger than a map tile. On the left is my usual case, where I make 8 checks against every surrounding tile; with the right one it would be much more inefficient. (Picture of the two cases: on the left is the simple case, on the right is the one I need help with.) http://i.stack.imgur.com/k7q0l.png How should I handle the right case?
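
    One way to make the large-box case as cheap as the small one is to stop thinking in "surrounding tiles" and instead derive the rectangle of tiles the player's bounding box overlaps, then test only those. A minimal C++ sketch; tileSize and isSolid() are assumed stand-ins for the map in the post:

        #include <cmath>

        const int tileSize = 32;        // assumed tile size in pixels
        bool isSolid(int tx, int ty);   // assumed map lookup, defined elsewhere

        bool collides(float x, float y, float w, float h)
        {
            // Range of tile columns/rows that [x, x+w) x [y, y+h) touches.
            int left   = (int)std::floor(x / tileSize);
            int right  = (int)std::floor((x + w - 1) / tileSize);
            int top    = (int)std::floor(y / tileSize);
            int bottom = (int)std::floor((y + h - 1) / tileSize);

            for (int ty = top; ty <= bottom; ++ty)
                for (int tx = left; tx <= right; ++tx)
                    if (isSolid(tx, ty))
                        return true;
            return false;
        }

    The cost scales with the number of overlapped tiles, which is the minimum any correct check has to look at anyway.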

    Read the article

  • Viewing FBX files in Windows via XNA 4.0

    - by user17753
    I've made some models in Blender and exported them in Autodesk FBX format. I'm trying to view them using XNA 4.0 Refresh. Loading them isn't much of an issue, but I'm not familiar enough with XNA 4.0 for the rest: basically I want to load the model at the world origin (0,0,0), and then rotate and/or zoom the camera about the origin so that I can inspect the model; typically with the mouse, and maybe some arrow keys for zooming/rotating the camera. Anyway, this seems like a simple task and I shouldn't have to re-invent it. Isn't there skeleton code somewhere for this kind of thing for XNA 4.0? I couldn't find a solid example on the web; I found a couple that seemed like they might work on Xbox, but I'm trying to do this on Windows only. Just looking to be pointed in the right direction on this one, thanks.
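
    The usual skeleton here is an orbit camera: keep a yaw, a pitch, and a distance as state, and rebuild the camera position from them every frame. A minimal C++ sketch of just the math (the names are illustrative, not XNA API; the result would feed the equivalent of Matrix.CreateLookAt(eye, target, up) with the origin as target):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Eye position for a camera orbiting the world origin.
        // yaw/pitch typically come from mouse deltas, radius from the wheel.
        Vec3 orbitEye(float yaw, float pitch, float radius)
        {
            return Vec3{
                radius * std::cos(pitch) * std::sin(yaw),
                radius * std::sin(pitch),
                radius * std::cos(pitch) * std::cos(yaw)
            };
        }

    Clamping pitch just short of +/-90 degrees avoids the flip at the poles.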

    Read the article

  • Leg animation not working

    - by Monacraft
    I am making a simple animation in XNA C# of a leg moving. This is the logic code for the thigh. It is meant to swing from 25° to 335°; however, instead it hits a point and then keeps on spinning in the other direction. Please help, here's the code:

        private void Thigh_method()
        {
            if (Legdata.Left == true) signvalue = -0.05f;
            else signvalue = 0.05f;

            if (Legdata.ToMid == true) Thighturn_ang += signvalue;
            if (Legdata.ToMid == false) Thighturn_ang -= signvalue;

            if (Thighturn_ang <= 25 || Thighturn_ang <= 335 && Thighturn_ang <= 180)
                Legdata.Left = true;
            if (Thighturn_ang >= 25 || Thighturn_ang >= 335 && Thighturn_ang >= 180)
                Legdata.Left = false;

            if (Thighturn_ang == 0) Legdata.ToMid = false;
            if (Math.Abs(Thighturn_ang) >= 25f) Legdata.ToMid = true;
        }

    Thanks in advance, Yours: Mona
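
    A swing between two fixed angles is often simpler to drive from a single time value than from direction/side flags, which can get stuck when the boundary conditions overlap. A minimal C++ sketch; the amplitude and speed constants are assumptions:

        #include <cmath>

        // Swing smoothly between -25 and +25 degrees (i.e. 335..25) as a pure
        // function of elapsed time; there is no state that can get stuck.
        float thighAngle(float timeSeconds)
        {
            const float amplitude = 25.0f; // degrees of swing (assumed)
            const float speed     = 2.0f;  // radians per second (assumed)
            return amplitude * std::sin(speed * timeSeconds);
        }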

    Read the article

  • Bump mapping with 2 normal maps

    - by DorkMonstuh
    I was wondering if it's actually possible to do bump mapping with 2 normal maps. I have tried doing it this way, however I get a function overload on max and dot:

        uniform sampler2D n_mapTex;
        uniform sampler2D n_mapTex2;
        uniform sampler2D refTex;

        varying mediump vec2 TexCoord;
        varying mediump float vTime;

        void main()
        {
            mediump vec4 wave  = texture2D(n_mapTex,  TexCoord - vTime);
            mediump vec4 wave2 = texture2D(n_mapTex2, TexCoord + vTime);
            mediump vec4 bump  = mix(wave2, wave, 0.5);

            //this extracts the normals from the combined normal maps
            mediump vec4 normal = normalize(bump.xyzw * 2.0 - 1.0);

            //determines light position
            mediump vec3 lightPos = normalize(vec3(0.0, 1.0, 3.0));
            mediump float diffuse = max(dot(normal, lightPos), 0.0);

            gl_FragColor = mix(texture2D(refTex, TexCoord), bump, 0.5);
        }

    Read the article

  • Mesh with Alpha Texture doesn't blend properly

    - by faulty
    I've followed examples from various places on setting the OutputMerger's BlendState to enable alpha/transparent textures on a mesh. The setup is as follows:

        var transParentOp = new BlendStateDescription
        {
            SourceBlend = BlendOption.SourceAlpha,
            DestinationBlend = BlendOption.InverseDestinationAlpha,
            BlendOperation = BlendOperation.Add,
            SourceAlphaBlend = BlendOption.Zero,
            DestinationAlphaBlend = BlendOption.Zero,
            AlphaBlendOperation = BlendOperation.Add,
        };

    I've made a sample that displays 3 meshes A, B and C, where each overlaps another. They are drawn sequentially, A to C, with A nearest the camera and C furthest. So the expected output is that A is see-through, showing parts of B and C behind it, and B is see-through, showing part of C. But what I get is that none of them are see-through in that order. If I move C closer to the camera, it becomes semi-transparent and shows A and B through it; B, if moved closer to the camera, shows A but not C. Sort of reversed. So it seems that I need to draw them in reverse order, where the furthest from the camera is drawn first and the nearest is drawn last. Is it supposed to be done this way, or can I actually configure the blend state so it works no matter in which order I draw them? Thanks
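
    Yes: with standard alpha blending, transparent geometry is expected to be drawn back to front. The blend equation combines each fragment with whatever is already in the framebuffer, so draw order is baked into the result and no blend-state setting removes that dependency (that is what order-independent transparency techniques exist for). The usual per-frame fix is a sort; a minimal C++ sketch with assumed names:

        #include <algorithm>
        #include <vector>

        struct Mesh { float distanceToCamera; /* ... */ };

        // Draw opaque geometry first, then transparent meshes sorted so the
        // farthest is drawn first and the nearest last.
        void sortBackToFront(std::vector<Mesh*>& transparent)
        {
            std::sort(transparent.begin(), transparent.end(),
                      [](const Mesh* a, const Mesh* b) {
                          return a->distanceToCamera > b->distanceToCamera;
                      });
        }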

    Read the article

  • Pre-baked fractures and explosions: I need an answer for C++

    - by Ken
    What are pre-baked or precomputed explosions or fractures from a programmer's viewpoint? I would like to know how to achieve this in C++ and how these things are usually represented (are they animations? textures?). It would be perfect if there were some examples available, or someone who could paint a broad picture of this. I need to add really small support for this in my code and I need a hint about how to start; I would like to do this on my own, without other libraries.
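
    From a programmer's viewpoint, "pre-baked" usually means the fracture is data, not simulation: the mesh is split into fragment pieces offline, and the explosion is a recorded rigid transform per fragment per keyframe that is simply played back. A minimal C++ sketch of that data layout; every name here is an illustrative assumption:

        #include <vector>

        // One rigid transform per fragment per keyframe, baked offline.
        struct FragmentKey { float pos[3]; float rot[4]; }; // position + quaternion

        struct BakedExplosion {
            int fragmentCount;
            float frameRate;                 // baked keys per second
            std::vector<FragmentKey> keys;   // frame-major: keys[frame*count + i]

            // Fragment i at time t, nearest key (a real player would interpolate).
            const FragmentKey& sample(int i, float t) const {
                int frame = (int)(t * frameRate);
                int last = (int)(keys.size() / fragmentCount) - 1;
                if (frame > last) frame = last;
                return keys[frame * fragmentCount + i];
            }
        };

    At render time each fragment mesh is drawn with its sampled transform, so the runtime cost is just animation playback.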

    Read the article

  • Problems when rendering code on Nvidia GPU

    - by 2am
    I am following the OpenGL GLSL Cookbook 4.0. I have rendered a tessellated quad, as you see in the screenshot below, and I am moving the Y coordinate of every vertex using a time-based sin function, as given in the code in the book. This program, as you see from the text in the image, runs perfectly on the built-in Intel HD graphics of my processor, but I have Nvidia GT 555M graphics in my laptop (which, by the way, has switchable graphics). When I run the program on the graphics card, the OpenGL shader compilation fails on the following instruction:

        pos.y = sin.waveAmp * sin(u);

    giving the error:

        Error C1105: Cannot call a non-function

    I know this error is reported on the sin(u) call which you see in the instruction, but I am not able to understand why. When I removed sin(u) from the code, the program ran fine on the Nvidia card, and it runs fine with sin(u) on the Intel HD 3000 graphics. Also, if you notice, the program is almost unusable on the Intel HD 3000: I am getting only 9 FPS, which is not enough; it's too much load for the Intel HD 3000. So, is the sin(x) function not defined in the GLSL implementation of the Nvidia drivers, or is it something else?

    Read the article

  • HTML5 Canvas: converting between Cartesian and isometric coordinates

    - by Amir
    I'm having issues wrapping my head around the Cartesian-to-isometric coordinate conversion in HTML5 canvas. As I understand it, the process is twofold:

    (1) Scale down the y-axis by 0.5, i.e.

        ctx.scale(1, 0.5); // or: ctx.setTransform(1, 0, 0, 0.5, 0, 0);

    This supposedly produces the following matrix:

        [x; y] x [1, 0; 0, 0.5]

    (2) Rotate the context by 45 degrees, i.e.

        ctx.rotate(Math.PI/4);

    This should produce the following matrix:

        [x; y] x [cos(45), -sin(45); sin(45), cos(45)]

    This (somehow) results in the final matrix of

        ctx.setTransform(2, -1, 1, 0.5, 0, 0);

    which I cannot seem to understand... How is this matrix derived? I cannot seem to produce it by multiplying the scaling and rotation matrices produced earlier. Also, if I write out the equation for the final transformation matrix, I get:

        newX = 2x + y
        newY = -x + y/2

    But this doesn't seem to be correct. For example, the following code draws an isometric tile at Cartesian coordinates (500, 100):

        ctx.setTransform(2, -1, 1, 0.5, 0, 0);
        ctx.fillRect(500, 100, width*2, height);

    When I check the result on the screen, the actual coordinates are (285, 215), which do not satisfy the equations I produced earlier... So what is going on here? I would be very grateful if you could: (1) help me understand how the final isometric transformation matrix is derived; (2) help me produce the correct equation for finding the on-screen coordinates of an isometric projection. Many thanks and kind regards
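
    One way to sanity-check the derivation is to multiply the two matrices out numerically. A small C++ sketch (recall that canvas setTransform(a, b, c, d, e, f) maps x' = a*x + c*y + e and y' = b*x + d*y + f, and that the transform specified last is applied to the point first):

        #include <cmath>
        #include <cstdio>

        int main()
        {
            const double pi = 3.141592653589793;
            const double co = std::cos(pi / 4), si = std::sin(pi / 4);

            // M = S * R with S = [1 0; 0 0.5] (scale) and R = [co -si; si co]
            double a = 1.0 * co;   // m00
            double b = 0.5 * si;   // m10
            double c = 1.0 * -si;  // m01
            double d = 0.5 * co;   // m11

            std::printf("setTransform(%.3f, %.3f, %.3f, %.3f, 0, 0)\n", a, b, c, d);
            // Prints roughly (0.707, 0.354, -0.707, 0.354), which is not
            // (2, -1, 1, 0.5); that supports the poster's observation that the
            // quoted "final matrix" is not the product of these two steps.
        }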

    Read the article

  • Can I change the order of these OpenGL / Win32 calls?

    - by Adam Naylor
    I've been adapting the NeHe OpenGL/Win32 code to be more object oriented, and I don't like the way some of the calls are structured. The example has the following pseudo-structure:

        Register window class
        Change display settings with a DEVMODE
        Adjust window rect
        Create window
        Get DC
        Find closest matching pixel format
        Set the pixel format to closest match
        Create rendering context
        Make that context current
        Show the window
        Set it to foreground
        Set it to having focus
        Resize the GL scene
        Init GL

    The points in bold are what I want to move into a rendering class (the rest are what I see as pure Win32 calls), but I'm not sure if I can call them after the Win32 calls. Essentially what I'm aiming for is to encapsulate the Win32 calls into a Platform::Initiate() type method and the rest into a sort of Renderer::Initiate() method. So my question essentially boils down to: "Would OpenGL allow these methods to be called in this order?"

        Register window class
        Adjust window rect
        Create window
        Get DC
        Show the window
        Set it to foreground
        Set it to having focus
        Change display settings with a DEVMODE
        Find closest matching pixel format
        Set the pixel format to closest match
        Create rendering context
        Make that context current
        Resize the GL scene
        Init GL

    (obviously passing through the appropriate window handles and device contexts.) Thanks in advance.

    Read the article

  • How can I render a semi transparent model with OpenGL correctly?

    - by phobitor
    I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders, and I wrote a simple diffuse shader for the model without any issues, but I don't know how to add transparency to it. I tried to set my fragment shader's output (gl_FragColor) to a non-opaque alpha value, but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong, so please watch this short video I recorded: http://www.youtube.com/watch?v=s0JqA0rZabE

    I thought this was a depth testing issue, so I tried playing around with enabling/disabling depth testing and back face culling. Enabling back face culling changes the output slightly, but the problem in the video is still there. Enabling/disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order-independent transparency implementations.

    edit:

    Vertex Shader:

        // color varying for fragment shader
        varying mediump vec3 LightIntensity;
        varying highp vec3 VertexInModelSpace;

        void main()
        {
            // vec4 LightPosition = vec4(0.0, 0.0, 0.0, 1.0);
            vec3 LightColor = vec3(1.0, 1.0, 1.0);
            vec3 DiffuseColor = vec3(1.0, 0.25, 0.0);

            // find the vector from the given vertex to the light source
            vec4 vertexInWorldSpace = gl_ModelViewMatrix * vec4(gl_Vertex);
            vec3 normalInWorldSpace = normalize(gl_NormalMatrix * gl_Normal);
            vec3 lightDirn = normalize(vec3(LightPosition - vertexInWorldSpace));

            // save vertexInWorldSpace
            VertexInModelSpace = vec3(gl_Vertex);

            // calculate light intensity
            LightIntensity = LightColor * DiffuseColor * max(dot(lightDirn, normalInWorldSpace), 0.0);

            // calculate projected vertex position
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    Fragment Shader:

        // varying to define color
        varying vec3 LightIntensity;
        varying vec3 VertexInModelSpace;

        void main()
        {
            gl_FragColor = vec4(LightIntensity, 0.5);
        }
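
    What the video shows is typical of depth-buffer ordering rather than blending: with depth writes enabled, whichever of the model's own triangles draws first blocks the ones behind it, so visibility flips with the camera angle. A common minimal fix is to draw opaque geometry first, then draw the transparent model with blending on and depth writes (not depth testing) off. A C++ sketch of the GL state, with the two draw helpers assumed:

        #include <GLES2/gl2.h>

        void drawOpaqueGeometry();    // assumed to exist in the app
        void drawTransparentModel();  // assumed to exist in the app

        void renderFrame()
        {
            // Opaque pass: normal depth testing and writing.
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_TRUE);
            drawOpaqueGeometry();

            // Transparent pass: still test against opaque depth, but don't
            // write depth, so the model's triangles can't mask each other.
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDepthMask(GL_FALSE);
            drawTransparentModel();

            // Restore state.
            glDepthMask(GL_TRUE);
            glDisable(GL_BLEND);
        }

    Self-overlap within the model can still blend in triangle order, but for simple models this is usually good enough and avoids full order-independent transparency.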

    Read the article

  • Rotate object Up/Down/Left/Right in any orientation

    - by George Duckett
    I'm rendering a model at the origin, with a fixed camera looking at it, positioned on the z axis. I want to be able to rotate the model up/down and left/right. Currently I have two variables, HorizontalRotation and VerticalRotation. When calculating the world matrix, I rotate about the Y axis by HorizontalRotation and about the X axis by VerticalRotation. The ...Rotation variables are controlled by pressing the up/down/left/right arrow keys. The problem I'm having is that the rotations happen relative to the object. Let's say it's a model of the Earth. Pressing Up a bit lets me look at the north pole. Currently, when I press Right, the Earth spins in front of the camera on its own axis; I'm still looking at the north pole. How can I get it so that, no matter what rotations are currently applied, I can always rotate my model relative to the camera/world axes?
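
    The usual fix is to stop storing two accumulated angles and instead accumulate a single orientation, applying each new rotation about the fixed world axes by multiplying on the correct side. A minimal C++ sketch with a hand-rolled quaternion (the names are illustrative, not XNA API; XNA's Quaternion type offers the same operations):

        #include <cmath>

        // Minimal quaternion, just enough to accumulate rotations.
        struct Quat {
            float w = 1, x = 0, y = 0, z = 0;

            static Quat axisAngle(float ax, float ay, float az, float angle) {
                float h = angle * 0.5f, s = std::sin(h);
                return { std::cos(h), ax * s, ay * s, az * s };
            }
            Quat operator*(const Quat& q) const {
                return { w*q.w - x*q.x - y*q.y - z*q.z,
                         w*q.x + x*q.w + y*q.z - z*q.y,
                         w*q.y - x*q.z + y*q.w + z*q.x,
                         w*q.z + x*q.y - y*q.x + z*q.w };
            }
        };

        Quat orientation; // accumulated model orientation, starts at identity

        // Multiplying the new rotation on the LEFT rotates about the fixed
        // world axes; on the right it would rotate about the model's own
        // axes, which is the behaviour described above.
        void rotateLeftRight(float amount) {
            orientation = Quat::axisAngle(0, 1, 0, amount) * orientation;
        }
        void rotateUpDown(float amount) {
            orientation = Quat::axisAngle(1, 0, 0, amount) * orientation;
        }

    Each frame the world matrix is then built from orientation instead of from the two angle variables.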

    Read the article

  • Cocos2d: Adding a CCSequence to a CCArray

    - by Axort
    I have a problem with an action performed by a sprite. I have one CCSequence in a CCArray, and I have a scheduled method (called every 5 seconds) that makes the sprite run the action. The action is performed correctly only the first time (the first 5 seconds); after that, the action does whatever it wants lol. Here is the code:

    In .h:

        @interface PowerUpLayer : CCLayer
        {
            PowerUp *powerUp;
            CCArray *trajectories;
        }
        @property (nonatomic, retain) CCArray *trajectories;

    In .mm:

        @implementation PowerUpLayer
        @synthesize trajectories;

        -(id)init
        {
            if((self = [super init]))
            {
                [self createTrajectories];
                self.isTouchEnabled = YES;
                [self schedule:@selector(spawn:) interval:5];
            }
            return self;
        }

        -(void)createTrajectories
        {
            self.trajectories = [CCArray arrayWithCapacity:1];

            //Wave trajectory
            ccBezierConfig firstWave, secondWave;
            firstWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width + 30,
                                                   [[CCDirector sharedDirector] winSize].height / 2); //powerUp.sprite.position.x, powerUp.sprite.position.y);
            firstWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width - ([[CCDirector sharedDirector] winSize].width / 4), 0);
            firstWave.endPosition = CGPointMake([[CCDirector sharedDirector] winSize].width / 2,
                                                [[CCDirector sharedDirector] winSize].height / 2);

            secondWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width / 2,
                                                    [[CCDirector sharedDirector] winSize].height / 2);
            secondWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width / 4,
                                                    [[CCDirector sharedDirector] winSize].height);
            secondWave.endPosition = CGPointMake(-30, [[CCDirector sharedDirector] winSize].height / 2);

            id bezierWave1 = [CCBezierTo actionWithDuration:1 bezier:firstWave];
            id bezierWave2 = [CCBezierTo actionWithDuration:1 bezier:secondWave];
            id waveTrajectory = [CCSequence actions:bezierWave1, bezierWave2,
                                 [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)], nil];
            [self.trajectories addObject:waveTrajectory];

            //[powerUp.sprite runAction:bezierForward];
            // [CCMoveBy actionWithDuration:3 position:CGPointMake(-[[CCDirector sharedDirector] winSize].width - powerUp.sprite.contentSize.width, 0)]
            //[powerUp.sprite runAction:[CCSequence actions:bezierWave1, bezierWave2, [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)], nil]];
        }

        -(void)setInvisible:(id)sender
        {
            if(powerUp != nil)
            {
                [self removeChild:sender cleanup:YES];
                powerUp = nil;
            }
        }

    This is the scheduled method:

        -(void)spawn:(ccTime)dt
        {
            if(powerUp == nil)
            {
                powerUp = [[PowerUp alloc] initWithType:0];
                powerUp.sprite.position = CGPointMake([[CCDirector sharedDirector] winSize].width + powerUp.sprite.contentSize.width,
                                                      [[CCDirector sharedDirector] winSize].height / 2);
                [self addChild:powerUp.sprite z:-1];
                [powerUp.sprite runAction:((CCSequence *)[self.trajectories objectAtIndex:0])];
            }
        }

    I don't know what is happening; I never modify the content of the CCSequence after the first time. Thanks!

    Read the article

  • Lerping to a center point while in motion

    - by Fibericon
    I have an enemy that initially flies in a circular motion while facing away from the center point. This is how I achieve that:

        position.Y = (float)(Math.Cos(timeAlive * MathHelper.PiOver4) * radius + origin.Y);
        position.X = (float)(Math.Sin(timeAlive * MathHelper.PiOver4) * radius + origin.X);

        if (timeAlive < 5)
        {
            angle = (float)Math.Atan((0 - position.X) / (0 - position.Y));
            if (0 < position.Y)
                RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(-1 * angle);
            else
                RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi - angle);
        }

    That part works just fine. After five seconds of this, I want the enemy to turn inward, facing the center point. However, I've been trying to lerp to that point, since I don't want it to simply jump to the new rotation. Here's my code for trying to do that:

        else
        {
            float newAngle = -1 * (float)Math.Atan((0 - position.X) / (0 - position.Y));
            angle = MathHelper.Lerp(angle, newAngle, (float)gameTime.ElapsedGameTime.Milliseconds / 1000);
            if (0 < position.Y)
                RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(MathHelper.Pi - angle);
            else
                RotationMatrix = Matrix.CreateRotationX(MathHelper.PiOver2) * Matrix.CreateRotationZ(-1 * angle);
        }

    That doesn't work so well. It seems like it's going to at first, but then it just sort of skips around. How can I achieve what I want here?
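
    The skipping is usually an angle-wrapping problem: Math.Atan jumps by pi when the enemy crosses an axis, and a plain Lerp between the old and new angle then takes the long way around (the two position.Y branches also swap which rotation formula is used mid-interpolation). A common fix is to interpolate along the shortest angular difference; a minimal C++ sketch of an assumed helper, not XNA's MathHelper.Lerp:

        #include <cmath>

        // Lerp from angle a to angle b (radians, roughly within [-pi, pi])
        // taking the shortest way around the circle.
        float lerpAngle(float a, float b, float t)
        {
            const float twoPi = 6.28318530718f;
            float diff = std::fmod(b - a + 1.5f * twoPi, twoPi) - 0.5f * twoPi;
            return a + diff * t;
        }

    Deriving both headings with a two-argument arctangent (atan2 style) instead of Atan of a quotient also removes the need for the position.Y sign branches.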

    Read the article

  • What are the steps taken by this GLSL code?

    - by user827992
        void main(void)
        {
            vec2 pos = mod(gl_FragCoord.xy, vec2(50.0)) - vec2(25.0);
            float dist_squared = dot(pos, pos);

            gl_FragColor = (dist_squared < 400.0)
                ? vec4(.90, .90, .90, 1.0)
                : vec4(.20, .20, .40, 1.0);
        }

    (taken from http://people.freedesktop.org/~idr/OpenGL_tutorials/03-fragment-intro.html)

    Now, this looks really trivial and simple, but my problem is with the mod function. This function is taking two vec2s as inputs, but is supposed to take just two atomic arguments according to the official documentation; also, this function makes implicit use of the floor function, which again only accepts one atomic argument. Can someone explain this to me step by step and point out what I'm not getting here? Is it some kind of OpenGL trick? An OpenGL math trick? In the GLSL docs I always find an explicit reference to the types accepted by a function, and vec2 isn't there.

    Read the article

  • Trying to create a sphere in UDK on which I can stand

    - by Dave
    Trying to build a globe in UDK, but when I create a sphere, my player falls straight through it. How do I make a sphere that I can walk on? Every other shape (cube, cone, etc.) works just fine.

    Edit: Specifically, I want to build a CSG/brush sphere, not a mesh sphere. It appears to work just fine if I set the "sphere extrapolation" to 1 or 2, but if I bump it up to 3 or higher, I fall right through. I literally created two spheres next to each other, one set to "2" and one to "3"; I can walk on top of the "2" sphere and jump onto the "3" sphere, but I fall right through it.

    Read the article

  • How to check battery usage of an iPhone/Android app?

    - by Gajoo
    I think the title says enough. For example, Unity can generate a report of how much CPU/GPU power it's using, or how fast it's going to drain the device's battery, but what about applications developed with Cocos2d, or ones you develop directly with OpenGL? How should you profile them? In general, what should you profile? Or should I simply run the application and wait for its battery to run out?

    Read the article

  • Transform 3d viewport vector to 2d vector

    - by learning_sam
    I am playing around with 3D transformations and ran into an issue. I have a 3D vector already within the viewport and need to transform it to a 2D vector. (Let's say my screen is 10x10.) Does that work just like a regular transformation, or is something different here? For example: I have the vector a = (2, 1, 0) within the viewport and want the 2D vector. Does it work like this, and if yes, how do I handle the "0" in the 3rd component?
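
    Going from 3D to 2D screen coordinates is a regular matrix transform plus two steps a 2D transform doesn't have: the perspective divide by w and the viewport mapping. For a point already in normalized device coordinates, only the mapping remains, and z simply drops out of the 2D result. A minimal C++ sketch for an assumed 10x10 screen:

        struct Vec3 { float x, y, z; };
        struct Vec2 { float x, y; };

        // Map NDC (x, y in [-1, 1]) to pixels; z is kept only for depth
        // ordering, so a z of 0 is just dropped here.
        Vec2 ndcToScreen(Vec3 ndc, float screenW = 10, float screenH = 10)
        {
            return Vec2{
                (ndc.x * 0.5f + 0.5f) * screenW,
                (1.0f - (ndc.y * 0.5f + 0.5f)) * screenH // y grows down on screen
            };
        }

    Note that a = (2, 1, 0) lies outside the [-1, 1] NDC range; if "within the viewport" means viewport pixel coordinates instead, then the x and y components already are the 2D vector.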

    Read the article

  • Smoothing rotation

    - by Lewis
    I've spent the last three days trying to work out how to rotate a sprite smoothly depending on the velocity.x value of the sprite. I'm using this:

        float Proportion = 9.5;
        float maxDiff = 200;
        float rotation = fmaxf(fminf(playerVelocity.x * Proportion, maxDiff), -maxDiff);
        player.rotation = rotation;

    The behaviour is what I required, but if the velocity changes rapidly then the sprite will look like it jumps to face left or right. I'll go into the behaviour in a little more detail:

        0 velocity = sprite faces forwards
        negative velocity = sprite faces left, depending on the value
        positive velocity = sprite faces right (the higher the velocity, the more it faces right)

    I've read about using interpolation rather than an absolute angle to rotate it, but I don't know how to implement that. I have a physics engine available. There is one other way to get around this: to use += on the rotation angle. The thing is that I would then have to change the equation to produce positive and negative values, and then make sure the sprite faces 0 once the velocity reaches 0 again. If I add that in now, it keeps the previous angle even after the velocity has dropped / is dropping. Any ideas/code snippets would be greatly appreciated.
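
    One standard way to implement the interpolation is to keep the current rotation as state and move it toward the velocity-derived target at a limited turn rate, so sudden velocity changes cannot snap the sprite. A minimal C++ sketch; the turn-rate constant is an assumption:

        #include <algorithm>

        // Step `current` toward `target` by at most maxTurnPerSecond * dt.
        float approach(float current, float target, float dt)
        {
            const float maxTurnPerSecond = 540.0f; // degrees/sec (assumed)
            float maxStep = maxTurnPerSecond * dt;
            float diff = std::clamp(target - current, -maxStep, maxStep);
            return current + diff;
        }

    Here target is the clamped playerVelocity.x * Proportion value from above; because the angle always chases the target, it returns to 0 on its own when the velocity does.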

    Read the article

  • Box2D Difference Between WorldCenter and Position

    - by Free Lancer
    So this problem has been bothering me for a couple of days now. First off, what is the difference between, say, Body.getWorldCenter() and Body.getPosition()? I heard that WorldCenter might have to do with the center of gravity or something. Second, when I create a Box2D Body for a sprite, the Body is always at the lower left corner. I check it by printing a rectangle of 1 pixel around box.getWorldCenter(). From what I understand, the Body should be in the center of the Sprite and its bounding box should wrap around the Sprite, correct? Here's an image of what I mean (the Sprite is red, the Body blue):

    Here's some code:

    Body creator:

        public static Body createBoxBody( final World pPhysicsWorld, final BodyType pBodyType,
                                          final FixtureDef pFixtureDef, Sprite pSprite ) {
            float pRotation = 0;
            float pCenterX = pSprite.getX() + pSprite.getWidth() / 2;
            float pCenterY = pSprite.getY() + pSprite.getHeight() / 2;
            float pWidth = pSprite.getWidth();
            float pHeight = pSprite.getHeight();

            final BodyDef boxBodyDef = new BodyDef();
            boxBodyDef.type = pBodyType;
            //boxBodyDef.position.x = pCenterX / Constants.PIXEL_METER_RATIO;
            //boxBodyDef.position.y = pCenterY / Constants.PIXEL_METER_RATIO;
            boxBodyDef.position.x = pSprite.getX() / Constants.PIXEL_METER_RATIO;
            boxBodyDef.position.y = pSprite.getY() / Constants.PIXEL_METER_RATIO;

            Vector2 v = new Vector2( boxBodyDef.position.x * Constants.PIXEL_METER_RATIO,
                                     boxBodyDef.position.y * Constants.PIXEL_METER_RATIO );
            Gdx.app.log("@Physics", "createBoxBody():: Box Position: " + v);

            // Temporary Box shape of the Body
            final PolygonShape boxPoly = new PolygonShape();
            final float halfWidth = pWidth * 0.5f / Constants.PIXEL_METER_RATIO;
            final float halfHeight = pHeight * 0.5f / Constants.PIXEL_METER_RATIO;
            boxPoly.setAsBox( halfWidth, halfHeight ); // set the anchor point to be the center of the sprite
            pFixtureDef.shape = boxPoly;

            final Body boxBody = pPhysicsWorld.createBody(boxBodyDef);
            Gdx.app.log("@Physics", "createBoxBody():: Box Center: " + boxBody.getPosition().mul(Constants.PIXEL_METER_RATIO));
            boxBody.createFixture(pFixtureDef);
            boxBody.setTransform( boxBody.getWorldCenter(), MathUtils.degreesToRadians * pRotation );
            boxPoly.dispose();

            return boxBody;
        }

    Making the Sprite:

        public Car( Texture texture, float pX, float pY, World world )
        {
            super( "Car" );
            mSprite = new Sprite( texture );
            mSprite.setSize( mSprite.getWidth() / 6, mSprite.getHeight() / 6 );
            mSprite.setPosition( pX, pY );
            mSprite.setOrigin( mSprite.getWidth()/2, mSprite.getHeight()/2 );

            FixtureDef carFixtureDef = new FixtureDef();
            // Set the Fixture's properties, like friction, using the car's shape
            carFixtureDef.restitution = 1f;
            carFixtureDef.friction = 1f;
            carFixtureDef.density = 1f; // needed to rotate body using applyTorque

            mBody = Physics.createBoxBody( world, BodyDef.BodyType.DynamicBody, carFixtureDef, mSprite );
        }

    Read the article

  • Bukkit inventory saving: crashing somewhere

    - by HcgRandon
    I'm working on a command for a Bukkit plugin that lets you transfer worlds. In the section about saving the player's inventory, I'm getting a runtime error. My question is: why is the error happening, and how can I prevent it?

    The plugin code:

        public void savePlayerInv(Player p, World w){
            File playerInvConfigFile = new File(plugin.getDataFolder() + File.separator + "players" + File.separator + p.getName(), "inventory.yml");
            FileConfiguration pInv = YamlConfiguration.loadConfiguration(playerInvConfigFile);
            PlayerInventory inv = p.getInventory();

            int i = 0;
            for (ItemStack stack : inv.getContents()) {
                //increment integer
                i++;
                String startInventory = w.getName() + ".inv." + Integer.toString(i);
                //save inv
                pInv.set(startInventory + ".amount", stack.getAmount());
                pInv.set(startInventory + ".durability", Short.toString(stack.getDurability()));
                pInv.set(startInventory + ".type", stack.getTypeId());
                //pInv.set(startInventory + ".enchantment", stack.getEnchantments()); //TODO add enchant saving
            }

            i = 0;
            for (ItemStack armor : inv.getArmorContents()){
                i++;
                String startArmor = w.getName() + ".armor." + Integer.toString(i);
                //save armor
                pInv.set(startArmor + ".amount", armor.getAmount());
                pInv.set(startArmor + ".durability", armor.getDurability());
                pInv.set(startArmor + ".type", armor.getTypeId());
                //pInv.set(startArmor + ".enchantment", armor.getEnchantments());
            }

            //save exp
            if (p.getExp() != 0) {
                pInv.set(w.getName() + ".exp", p.getExp());
            }
        }

    The offending line (CommandWorldtp.java:130 in the stack trace):

        pInv.set(startInventory + ".amount", stack.getAmount());

    The stack trace:

        2012-03-21 13:23:25 [SEVERE] null
        org.bukkit.command.CommandException: Unhandled exception executing command 'wtp' in plugin Needs v1.0
            at org.bukkit.command.PluginCommand.execute(PluginCommand.java:42)
            at org.bukkit.command.SimpleCommandMap.dispatch(SimpleCommandMap.java:166)
            at org.bukkit.craftbukkit.CraftServer.dispatchCommand(CraftServer.java:461)
            at net.minecraft.server.NetServerHandler.handleCommand(NetServerHandler.java:818)
            at net.minecraft.server.NetServerHandler.chat(NetServerHandler.java:778)
            at net.minecraft.server.NetServerHandler.a(NetServerHandler.java:761)
            at net.minecraft.server.Packet3Chat.handle(Packet3Chat.java:33)
            at net.minecraft.server.NetworkManager.b(NetworkManager.java:229)
            at net.minecraft.server.NetServerHandler.a(NetServerHandler.java:112)
            at net.minecraft.server.NetworkListenThread.a(NetworkListenThread.java:78)
            at net.minecraft.server.MinecraftServer.w(MinecraftServer.java:554)
            at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:452)
            at net.minecraft.server.ThreadServerApplication.run(SourceFile:490)
        Caused by: java.lang.NullPointerException
            at com.devoverflow.improved.needs.commands.CommandWorldtp.savePlayerInv(CommandWorldtp.java:130)
            at com.devoverflow.improved.needs.commands.CommandWorldtp.onCommand(CommandWorldtp.java:60)
            at org.bukkit.command.PluginCommand.execute(PluginCommand.java:40)
            ... 12 more

    Read the article

  • Zooming to point of interest

    - by user1010005
    I have the following variables: Point of interest which is the position(x,y) in pixels of the place to focus. Screen width,height which are the dimensions of the window. Zoom level which sets the zoom level of the camera. And this is the code I have so far. void Zoom(int pointOfInterestX,int pointOfInterstY,int screenWidth, int screenHeight,int zoomLevel) { glTranslatef( (pointOfInterestX/2 - screenWidth/2), (pointOfInterestY/2 - screenHeight/2),0); glScalef(zoomLevel,zoomLevel,zoomLevel); } And I want to do zoom in/out but keep the point of interest in the middle of the screen. but so far all of my attempts have failed and I would like to ask for some help.

    Read the article
