Search Results

  • Unity: parallel vectors and cross product, how to compare vectors

    - by Heisenbug
    I read this post explaining a method to determine whether the angle between 2 given vectors, relative to the normal of the plane they describe, is clockwise or anticlockwise:

        public static AngleDir GetAngleDirection(Vector3 beginDir, Vector3 endDir, Vector3 upDir)
        {
            Vector3 cross = Vector3.Cross(beginDir, endDir);
            float dot = Vector3.Dot(cross, upDir);
            if (dot > 0.0f)
                return AngleDir.CLOCK;
            else if (dot < 0.0f)
                return AngleDir.ANTICLOCK;
            return AngleDir.PARALLEL;
        }

    After having used it a little bit, I think it's wrong. If I supply the same vector as both inputs (beginDir equal to endDir), the cross product is zero, but the dot product is a little more than zero. I think I could fix this by checking whether the cross product is zero, which means the two vectors are parallel, but my code doesn't work. I tried the following solution:

        Vector3 cross = Vector3.Cross(beginDir, endDir);
        if (cross == Vector3.zero)
            return AngleDir.PARALLEL;

    It doesn't work, because the comparison between Vector3.zero and cross always says they differ (even when cross is actually [0.0f, 0.0f, 0.0f]). I also tried this:

        Vector3 cross = Vector3.Cross(beginDir, endDir);
        if (cross.magnitude == 0.0f)
            return AngleDir.PARALLEL;

    It also fails, because the magnitude is slightly more than zero. So my question is: given 2 Vector3 in Unity, how do I compare them? I need the elegant equivalent of this:

        if (beginDir.x == endDir.x && beginDir.y == endDir.y && beginDir.z == endDir.z)
            return true;
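
    A minimal sketch of the usual fix, assuming the UnityEngine API: compare against a small tolerance instead of exact equality, since the floating-point cross product of nearly parallel vectors lands near zero but rarely at it. The tolerance value here is an assumption to tune.

        using UnityEngine;

        public static class VectorCompare
        {
            // Tolerance is an assumption; tune it to the scale of your vectors.
            const float Epsilon = 1e-6f;

            // True when two vectors are equal within tolerance.
            public static bool Approximately(Vector3 a, Vector3 b)
            {
                return (a - b).sqrMagnitude < Epsilon; // sqrMagnitude avoids a sqrt
            }

            // True when two directions are parallel (cross product near zero).
            public static bool AreParallel(Vector3 beginDir, Vector3 endDir)
            {
                return Vector3.Cross(beginDir, endDir).sqrMagnitude < Epsilon;
            }
        }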

  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect when you enter a special region. Let's assume there is a map so big that simply checking every region would become a huge performance issue. I've seen Bukkit (a modding API for Minecraft servers) fire an event on every single move. I don't think larger games do the same, because even if you only have a few coordinates you are interested in, you have to loop through several trigger zones to see whether the player is inside your region - for every player. This seems like an extremely CPU-intensive operation to me, even though I've never developed something like it. Is there a special algorithm that larger games use to accomplish this? The only thing I can imagine is to split the world into multiple parts, register the event not on the movement itself but on all the parts covered by your area, and only check the areas registered in the current part. And another thing I would like to know: how could you detect that someone must have entered a trigger when you never saw him directly inside it, because his client only sent you a move packet shortly before entering and shortly after leaving the trigger area? Drawing a line and calculating all colliding parts seems rather CPU-intensive if you have to do it on every move.
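
    A minimal sketch of the partitioning idea from the question, in C# with made-up types and a 2D world for brevity: triggers register themselves in every grid cell they overlap, so each move packet only tests the handful of triggers in the player's current cell.

        using System.Collections.Generic;

        // Axis-aligned trigger volume (2D here for brevity).
        public record Trigger(string Name, float MinX, float MinY, float MaxX, float MaxY);

        public sealed class TriggerGrid
        {
            const float CellSize = 64f; // an assumption; size cells near trigger scale
            readonly Dictionary<(int, int), List<Trigger>> _cells = new();

            public void Register(Trigger t)
            {
                // Insert the trigger into every cell its box overlaps.
                // ((int) truncation assumes non-negative coordinates; use floor otherwise.)
                for (int cx = (int)(t.MinX / CellSize); cx <= (int)(t.MaxX / CellSize); cx++)
                    for (int cy = (int)(t.MinY / CellSize); cy <= (int)(t.MaxY / CellSize); cy++)
                    {
                        if (!_cells.TryGetValue((cx, cy), out var list))
                            _cells[(cx, cy)] = list = new List<Trigger>();
                        list.Add(t);
                    }
            }

            // Called per move packet: only the local cell's triggers are tested.
            public IEnumerable<Trigger> TriggersAt(float x, float y)
            {
                if (!_cells.TryGetValue(((int)(x / CellSize), (int)(y / CellSize)), out var list))
                    yield break;
                foreach (var t in list)
                    if (x >= t.MinX && x <= t.MaxX && y >= t.MinY && y <= t.MaxY)
                        yield return t;
            }
        }

    For the fast-mover case in the second question, one common approach is to step along the segment between the last two reported positions at cell granularity and run the same lookup at each step, which is far cheaper than testing the segment against every trigger.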

  • Missing asset problem in XNA

    - by ChocoMan
    I'm using VS2010 with XNA 4.0, and I'm trying to load an FBX model with a texture onto the screen. The problem I'm having is this error: Missing Asset: C:\Users\ChocoMan\Documents\Visual Studio 2010\Projects\XNAGame\Documents\Visual Studio\Projects\XNAGame\XNAGameContent\Textures\texture.bmp but the actual path to the texture is C:\Users\ChocoMan\Documents\Visual Studio\Projects\XNAGame\XNAGameContent\Textures\texture.bmp Also, when I linked the texture in Maya, I used the above address. Does anyone know why VS is looking for an incorrect address that doesn't exist?

  • Any reliable polygon normal calculation code?

    - by Jenko
    I'm currently calculating the normal vector of a polygon using the code below, but for some faces here and there it calculates a wrong normal. I don't really know what's going on or where it fails, but it's not reliable. Do you have any polygon normal calculation that's tested and found to be reliable?

        // calculate normal of a polygon using all points
        var n:int = points.length;
        var x:Number = 0;
        var y:Number = 0;
        var z:Number = 0;

        // ensure all points above 0
        var minx:Number = 0, miny:Number = 0, minz:Number = 0;
        for (var p:int = 0, pl:int = points.length; p < pl; p++) {
            var po:_Point3D = points[p] = points[p].clone();
            if (po.x < minx) { minx = po.x; }
            if (po.y < miny) { miny = po.y; }
            if (po.z < minz) { minz = po.z; }
        }
        for (p = 0; p < pl; p++) {
            po = points[p];
            po.x -= minx;
            po.y -= miny;
            po.z -= minz;
        }

        var cur:int = 1, prev:int = 0, next:int = 2;
        for (var i:int = 1; i <= n; i++) {
            // using Newell method
            x += points[cur].y * (points[next].z - points[prev].z);
            y += points[cur].z * (points[next].x - points[prev].x);
            z += points[cur].x * (points[next].y - points[prev].y);
            cur = (cur + 1) % n;
            next = (next + 1) % n;
            prev = (prev + 1) % n;
        }

        // length of the normal
        var length:Number = Math.sqrt(x * x + y * y + z * z);

        // turn large values into a unit vector
        if (length != 0) {
            x = x / length;
            y = y / length;
            z = z / length;
        } else {
            throw new Error("Cannot calculate normal since triangle has an area of 0");
        }
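
    For reference, a compact Newell's-method sketch in C# (the vector type is an assumption): it sums over consecutive edge pairs with wrap-around and normalizes at the end. Newell's formula is translation-invariant, so the min-offset shifting pass in the question's code is unnecessary.

        using System;

        public struct Vec3
        {
            public double X, Y, Z;
            public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
        }

        public static class PolygonNormal
        {
            // Newell's method: robust for convex, concave, and slightly
            // non-planar polygons, unlike a single edge cross product.
            public static Vec3 Compute(Vec3[] pts)
            {
                double x = 0, y = 0, z = 0;
                for (int i = 0; i < pts.Length; i++)
                {
                    Vec3 a = pts[i];
                    Vec3 b = pts[(i + 1) % pts.Length];
                    x += (a.Y - b.Y) * (a.Z + b.Z);
                    y += (a.Z - b.Z) * (a.X + b.X);
                    z += (a.X - b.X) * (a.Y + b.Y);
                }
                double len = Math.Sqrt(x * x + y * y + z * z);
                if (len == 0)
                    throw new InvalidOperationException("Degenerate polygon (zero area)");
                return new Vec3(x / len, y / len, z / len);
            }
        }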

  • Shader optimization: Cg/HLSL pseudo-bool via multiplication

    - by teodron
    Since HLSL/Cg do not allow texture fetching inside conditional blocks, I first check a variable and perform some computations, then set a float flag to 0.0 or 1.0 depending on the results. I'd like to trigger a texture fetch only if the flag is 1.0, or non-zero for that matter. I kind of hoped this would do the trick:

        float4 TU0_atlas_colour = pseudoBool * tex2Dlod(TU0_texture, float4(tileCoord, 0, mipLevel));

    That is, if pseudoBool is 0, will the texture fetch function still be called and produce overhead? I was hoping to prevent it from being executed via this trick, which usually works in plain C/C++.

  • OpenGL: Want to keep gun on top of car and be able to control angle. Having difficulties.

    - by Blair
    So I am making a simple game. I want to put a gun on top of a car; I'm modelling it right now as a long rod mounted in the middle of a block. I want to be able to control the angle of the gun: it can point all the way forward so that it is parallel to the ground, facing the direction the car is moving, or it can point behind the car, and any angle in between those positions. I have something like the following right now, but it's not really working. Is there a better way to do this that I am not seeing?

        # This will place the car
        glPushMatrix()
        glTranslatef(self.position.x, 1.5, self.position.z)
        glRotated(self.rotation, 0.0, 1.0, 0.0)
        glScaled(0.5, 0.5, 0.5)
        glCallList(self.model.gl_list)
        glPopMatrix()

        # This will place the gun on top
        glPushMatrix()
        glTranslatef(self.position.x, 2.5, self.position.z)
        glRotated(self.tube_angle, self.direction.z, 0.0, self.direction.x)
        print self.direction.z
        glRotated(45, self.position.z, 0.0, self.position.x)
        glScaled(1.0, 0.5, 1.0)
        glCallList(self.tube.gl_list)
        glPopMatrix()

    This almost works. It moves the gun up and down, but when the car moves around, the angle of the gun changes. Not what I want.
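
    The usual cure is to make the gun a child of the car's transform, so the gun's pitch is expressed in the car's local space rather than in world space. A sketch of the composition in XNA-style C# (the question's code is Python/OpenGL; field names here are assumptions, the matrix ordering is the point):

        // Car: scale, then yaw, then move to its world position.
        Matrix carWorld = Matrix.CreateScale(0.5f)
                        * Matrix.CreateRotationY(MathHelper.ToRadians(carYaw))
                        * Matrix.CreateTranslation(carPosition);

        // Gun: pitch around the car's local X axis, then offset to the roof.
        Matrix gunLocal = Matrix.CreateRotationX(MathHelper.ToRadians(gunPitch))
                        * Matrix.CreateTranslation(0f, 1f, 0f);

        // Child * parent: the pitch now follows the car wherever it turns.
        Matrix gunWorld = gunLocal * carWorld;

    In fixed-function OpenGL the same nesting is done by issuing the gun's rotation inside the car's glPushMatrix/glPopMatrix pair, after the car's translate and rotate calls, instead of rebuilding the gun's transform from world-space positions.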

  • Artifacts when using SamplerState.LinearClamp in SpriteBatch

    - by Raymond Holmboe
    I'm using XNA 4.0 and VS2010 Express for Windows Phone with the Windows Phone SDK 7.1. This is a platform game, and I have a map made up of 16x16 textures that is drawn dynamically, tile by tile. When using SpriteBatch to draw my map with LinearClamp, I get artifacts that look like blurry thin lines. They become visible when the camera moves from one pixel to another; when the camera is still, the artifacts disappear. Here's a small sample of what I mean: Here's how I draw with the SpriteBatch:

        SBWorld.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied,
            SamplerState.LinearClamp, DepthStencilState.Default,
            RasterizerState.CullNone, null, camera.View);

    When using SamplerState.PointClamp, the game looks horrible (IMHO), so I cannot use that. Why do these lines appear, and how do I get rid of them?
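
    A common fix, sketched here under the assumption that the lines come from sampling between texels while the camera sits at a fractional position: snap the camera translation to whole pixels before building the view matrix, and keep linear filtering for everything else. Field names are made up.

        // XNA-style sketch; cameraPosition is an assumed field on the camera.
        Vector2 snapped = new Vector2(
            (float)Math.Round(cameraPosition.X),
            (float)Math.Round(cameraPosition.Y));

        // An integral translation keeps tile edges texel-aligned under LinearClamp.
        Matrix view = Matrix.CreateTranslation(-snapped.X, -snapped.Y, 0f);

    If seams persist, the other usual suspect is atlas bleeding, addressed by padding each 16x16 tile with a one-pixel border of duplicated edge texels.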

  • Lighting gets darker when a texture is applied

    - by noah
    I'm using OpenGL ES 1.1 for iPhone. I'm attempting to implement a skybox in my 3D world, and started out by following one of Jeff LaMarche's tutorials on creating textures. Here's the tutorial: iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html I've successfully added the image to my 3D world, but I'm not sure why the lighting on the other shapes has changed so much. I want the shapes to keep their original color and have the image in the background. Before: https://www.dropbox.com/s/ojmb8793vj514h0/Screen%20Shot%202012-10-01%20at%205.34.44%20PM.png After: https://www.dropbox.com/s/8v6yvur8amgudia/Screen%20Shot%202012-10-01%20at%205.35.31%20PM.png Here's the OpenGL init:

        - (void)initOpenGLES1 {
            glShadeModel(GL_SMOOTH);

            // Enable lighting
            glEnable(GL_LIGHTING);
            // Turn the first light on
            glEnable(GL_LIGHT0);

            const GLfloat lightAmbient[] = {0.2, 0.2, 0.2, 1.0};
            const GLfloat lightDiffuse[] = {0.8, 0.8, 0.8, 1.0};
            const GLfloat matAmbient[] = {0.3, 0.3, 0.3, 0.5};
            const GLfloat matDiffuse[] = {1.0, 1.0, 1.0, 1.0};
            const GLfloat matSpecular[] = {1.0, 1.0, 1.0, 1.0};
            const GLfloat lightPosition[] = {0.0, 0.0, 1.0, 0.0};
            const GLfloat lightShininess = 100.0;

            // Configure OpenGL lighting
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, matAmbient);
            glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, matDiffuse);
            glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, matSpecular);
            glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, lightShininess);
            glLightfv(GL_LIGHT0, GL_AMBIENT, lightAmbient);
            glLightfv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse);
            glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
            // Define a cutoff angle
            glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 40.0);

            // Set the clear color
            glClearColor(0, 0, 0, 1.0f);

            // Projection Matrix config
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            CGSize layerSize = self.view.layer.frame.size;
            // Swapped height and width for landscape mode
            gluPerspective(45.0f, (GLfloat)layerSize.height / (GLfloat)layerSize.width, 0.1f, 750.0f);

            [self initSkyBox];

            // Modelview Matrix config
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            // This next line is not really needed as it is the default for OpenGL ES
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDisable(GL_BLEND);

            // Enable depth testing
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LESS);
            glDepthMask(GL_TRUE);
        }

    Here's the drawSkyBox method that gets called from drawFrame:

        -(void)drawSkyBox {
            glDisable(GL_LIGHTING);
            glDisable(GL_DEPTH_TEST);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            static const SSVertex3D vertices[] = {
                {-1.0,  1.0, -0.0},
                { 1.0,  1.0, -0.0},
                {-1.0, -1.0, -0.0},
                { 1.0, -1.0, -0.0}
            };
            static const SSVertex3D normals[] = {
                {0.0, 0.0, 1.0},
                {0.0, 0.0, 1.0},
                {0.0, 0.0, 1.0},
                {0.0, 0.0, 1.0}
            };
            static const GLfloat texCoords[] = {
                0.0, 0.5,
                0.5, 0.5,
                0.0, 0.0,
                0.5, 0.0
            };

            glLoadIdentity();
            glTranslatef(0.0, 0.0, -3.0);
            glBindTexture(GL_TEXTURE_2D, texture[0]);
            glVertexPointer(3, GL_FLOAT, 0, vertices);
            glNormalPointer(GL_FLOAT, 0, normals);
            glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_LIGHTING);
            glEnable(GL_DEPTH_TEST);
        }

    Here's the skybox init:

        -(void)initSkyBox {
            // Turn necessary features on
            glEnable(GL_TEXTURE_2D);
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_SRC_COLOR);

            // Bind the number of textures we need, in this case one.
            glGenTextures(1, &texture[0]); // create a texture obj, give unique ID
            glBindTexture(GL_TEXTURE_2D, texture[0]); // load our new texture name into the current texture
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            NSString *path = [[NSBundle mainBundle] pathForResource:@"space" ofType:@"jpg"];
            NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
            UIImage *image = [[UIImage alloc] initWithData:texData];

            GLuint width = CGImageGetWidth(image.CGImage);
            GLuint height = CGImageGetHeight(image.CGImage);
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            void *imageData = malloc(height * width * 4); // times 4 because will write one byte for rgb and alpha
            CGContextRef cgContext = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace,
                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

            // Flip the Y-axis
            CGContextTranslateCTM(cgContext, 0, height);
            CGContextScaleCTM(cgContext, 1.0, -1.0);

            CGColorSpaceRelease(colorSpace);
            CGContextClearRect(cgContext, CGRectMake(0, 0, width, height));
            CGContextDrawImage(cgContext, CGRectMake(0, 0, width, height), image.CGImage);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

            CGContextRelease(cgContext);
            free(imageData);
            [image release];
            [texData release];
        }

    Any help is greatly appreciated.

  • Largest sphere inside a frustum

    - by Will
    How do you find the largest sphere that you can draw in perspective? Viewed from the top, it'd be this: Added: on the frustum on the right, I've marked four points I think we know something about. We can unproject all eight corners of the frustum, and the centres of the near and far ends. So we know points 1, 3 and 4. We also know that point 2 is the same distance from 3 as 4 is from 3. So then we can compute the nearest point on the line from 1 to 4 to point 2 in order to get the centre? But the actual math and code escape me. I want to draw models (which are approximately spherical, and which I have a miniball bounding sphere for) as large as possible. Update: I've tried to implement the incircle-on-two-planes approach as suggested by bobobobo and Nathan Reed:

        function getFrustumsInsphere(viewport, invMvpMatrix) {
            var midX = viewport[0]+viewport[2]/2,
                midY = viewport[1]+viewport[3]/2,
                centre = unproject(midX, midY, null, null, viewport, invMvpMatrix),
                incircle = function(a, b) {
                    var c = ray_ray_closest_point_3(a, b);
                    a = a[1]; // far clip plane
                    b = b[1]; // far clip plane
                    c = c[1]; // camera
                    var A = vec3_length(vec3_sub(b, c)),
                        B = vec3_length(vec3_sub(a, c)),
                        C = vec3_length(vec3_sub(a, b)),
                        P = 1/(A+B+C),
                        x = ((A*a[0])+(B*a[1])+(C*a[2]))*P,
                        y = ((A*b[0])+(B*b[1])+(C*b[2]))*P,
                        z = ((A*c[0])+(B*c[1])+(C*c[2]))*P;
                    c = [x, y, z]; // now the centre of the incircle
                    c.push(vec3_length(vec3_sub(centre[1], c))); // add its radius
                    return c;
                },
                left = unproject(viewport[0], midY, null, null, viewport, invMvpMatrix),
                right = unproject(viewport[2], midY, null, null, viewport, invMvpMatrix),
                horiz = incircle(left, right),
                top = unproject(midX, viewport[1], null, null, viewport, invMvpMatrix),
                bottom = unproject(midX, viewport[3], null, null, viewport, invMvpMatrix),
                vert = incircle(top, bottom);
            return horiz[3] < vert[3] ? horiz : vert;
        }

    I admit I'm winging it; I'm trying to adapt 2D code by extending it into 3 dimensions. It doesn't compute the insphere correctly; the centre-point of the sphere seems to be on the line between the camera and the top-left each time, and it's too big (or too close). Are there any obvious mistakes in my code? Does the approach, if fixed, work?
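
    A back-of-envelope check, assuming a symmetric frustum and ignoring the near plane (which rarely binds): treat the frustum as a cone with half-angle theta, where theta is the smaller of the horizontal and vertical half-FOVs. A sphere centred on the view axis at distance d from the eye touches the side planes when r = d sin(theta), and touches the far plane at distance f when r = f - d. The largest inscribed sphere makes both constraints tight:

        d \sin\theta = f - d
        \;\Rightarrow\;
        d = \frac{f}{1 + \sin\theta},
        \qquad
        r = \frac{f \sin\theta}{1 + \sin\theta}

    Any implementation can be sanity-checked against this closed form before tackling the asymmetric case.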

  • Switching my collision detection to array lists caused it to stop working

    - by Charlton Santana
    I have made a collision detection system which worked before I switched to array lists and block generation. It's weird that it's not working now, but here's the code; if anyone could help I would be very grateful :) The first piece of code is the block generation:

        private static final List<Block> BLOCKS = new ArrayList<Block>();
        Random rnd = new Random(System.currentTimeMillis());
        int randomx = 400;
        int randomy = 400;
        int blocknum = 100;
        String Title = "blocktitle" + blocknum;
        private Block block;

        public void generateBlocks() {
            if (blocknum > 0) {
                int offset = rnd.nextInt(250) + 100; // 500 is the maximum offset, this is a constant
                randomx += offset; // offset will be between 100 and 400
                int randomyoff = rnd.nextInt(80); // 500 is the maximum offset, this is a constant
                randomy = platformheighttwo - 6 - randomyoff; // offset will be between 100 and 400
                block = new Block(BitmapFactory.decodeResource(getResources(), R.drawable.block2), randomx, randomy);
                BLOCKS.add(block);
                blocknum -= 1;
            }
        }

    The second is where the collision detection takes place (note: block.draw(canvas) works perfectly; it's the blocks that don't work):

        for (Block block : BLOCKS) {
            block.draw(canvas);
            // bottom right touching block?
            if (sprite.bottomrx < block.bottomrx && sprite.bottomrx > block.bottomlx
                    && sprite.bottomry < block.bottommy && sprite.bottomry > block.topry) {
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
            // bottom left touching block?
            if (sprite.bottomlx < block.bottomrx && sprite.bottomlx > block.bottomlx
                    && sprite.bottomly < block.bottommy && sprite.bottomly > block.topry) {
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
            // top right touching block?
            if (sprite.toprx < block.bottomrx && sprite.toprx > block.bottomlx
                    && sprite.topry < block.bottommy && sprite.topry > block.topry) {
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
            // top left touching block?
            if (sprite.toprx < block.bottomrx && sprite.toprx > block.bottomlx
                    && sprite.topry < block.bottommy && sprite.topry > block.topry) {
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
        }

    The values (e.g. bottomrx) are in the Block.java file.

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game we have to set up a scene with a minimum of 30 bone-animated models (a shooter). The problem is that the update process for the animated models takes too long. Here's what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with its new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time it takes to build the animation isn't that bad (0.2 ms for 30 bones, 6 ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix for transformation, so it's necessary to convert it. Code for matrix conversion (~0.005 ms):

        btTransform CLEAR_PHYSICS_API Mat_to_btTransform(Mat mat)
        {
            btMatrix3x3 bulletRotation;
            btVector3 bulletPosition;
            XMFLOAT4X4 matData = mat.GetStorage();
            // copy rotation matrix
            for (int row = 0; row < 3; ++row)
                for (int column = 0; column < 3; ++column)
                    bulletRotation[row][column] = matData.m[column][row];
            for (int column = 0; column < 3; ++column)
                bulletPosition[column] = matData.m[3][column];
            return btTransform(bulletRotation, bulletPosition);
        }

    The function for updating the transform (physics side):

        void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove(Mat mat, ActorId aid)
        {
            if (btRigidBody * const body = FindActorBody(aid))
            {
                btTransform tmp = Mat_to_btTransform(mat);
                body->setWorldTransform(tmp);
            }
        }

    The real problem is the function FindActorBody(id):

        ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find(id);
        if (found != m_actorBodies.end())
            return found->second;

    All physics actors are stored in m_actorBodies, and that's why the updating process takes too long. But I have no idea how I could avoid this. Friendly greetings, Mathias
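
    One standard way out, sketched in C# with placeholder types (the question's engine is C++/Bullet, but the idea carries over): resolve the body once when the bone is registered and keep the reference on the bone itself, so the per-frame kinematic path never touches the actor-to-body map.

        using System.Collections.Generic;

        // Placeholder stand-in for Bullet's btRigidBody.
        public sealed class RigidBody
        {
            float[,] _world = new float[4, 4];
            public void SetWorldTransform(float[,] m) => _world = m;
        }

        // The bone keeps the reference it got at registration time, so the
        // per-tick update is a direct call with no dictionary lookup.
        public sealed class Bone
        {
            public RigidBody Body;
            public void KinematicMove(float[,] world) => Body.SetWorldTransform(world);
        }

        public sealed class PhysicsWorld
        {
            readonly Dictionary<int, RigidBody> _bodies = new Dictionary<int, RigidBody>();

            public RigidBody Register(int actorId)
            {
                var body = new RigidBody();
                _bodies[actorId] = body; // the map stays available for rare queries
                return body;             // hand the reference to the bone once
            }
        }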

  • Best practices for implementing collectible virtual item "packs"?

    - by Glenn Barnett
    I'm in the process of building a game in which virtual items can be obtained either by in-game play (defeating enemies, gaining levels), or by purchasing "packs" via microtransactions. Looking at an existing example like Duels.com's item packs, it looks like a lot of thought went into their implementation, including:

      - Setting clear player expectations as to what can be obtained in the pack
      - Limiting pack supply to increase demand and control inflation

    Are there other considerations that should be taken into account? For example, should the contents of the packs be pre-generated to guarantee the advertised drop rates, or is each drop rate just a random chance, and you could end up with higher or lower supply?
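
    For the pre-generation question, a small C# sketch of both models (item names and the 5% rare rate are made-up examples): independent rolls honor the advertised rate only in expectation, while a pre-shuffled "deck" guarantees it exactly over the run.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class Packs
        {
            static readonly Random Rng = new Random();

            // Independent rolls: a print run can end up rare-heavy or rare-light.
            public static string DrawIndependent()
                => Rng.NextDouble() < 0.05 ? "rare" : "common";

            // Deck model: exactly 5 rares per 100 draws, guaranteed.
            public static Queue<string> BuildDeck()
            {
                var deck = Enumerable.Repeat("rare", 5)
                    .Concat(Enumerable.Repeat("common", 95))
                    .OrderBy(_ => Rng.Next()) // quick shuffle; Fisher-Yates in production
                    .ToList();
                return new Queue<string>(deck);
            }
        }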

  • Strange 3D game engine camera with X,Y,Zoom instead of X,Y,Z

    - by Jenko
    I'm using a 3D game engine that uses a 4x4 matrix to modify the camera projection, in this format:

        r r r x
        r r r y
        r r r z
        - - - zoom

    Strangely though, the camera does not respond to the Z translation parameter, so you're forced to use X, Y and zoom to move the camera around. Technically this is plausible for isometric-style games such as Age of Empires III. But this is a 3D engine, so why would they have designed the camera to ignore Z and respond only to zoom? Am I missing something here? I've tried every method of setting the camera, and it really seems to ignore Z. So currently I have to resort to moving the main object in the scene graph instead of moving the camera in relation to the objects. My question: do you have any idea why the engine would use such a scheme? Is it common? Why? Or does it seem like I'm missing something, and the SetProjection(Matrix) function is broken and somehow ignores the Z translation in the matrix? (Unlikely, but possible.) Anyhow, what are the workarounds? Is moving objects around the only way? Edit: I'm sorry, I cannot reveal much about the engine because we're in a binding contract. It's a locally developed engine (Australia), written in managed C# and used for data visualizations. Edit: The default mode of the engine is orthographic, although I've switched it into perspective mode. It's probably more effective to use X, Y and zoom in orthographic mode, but I need perspective mode to render everyday objects as well.
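
    One detail worth noting, with a one-line XNA-style illustration (variable names are assumptions): under an orthographic projection, translating the camera along Z cannot change the size of anything on screen, so an engine that is orthographic by default has a reason to expose zoom instead. Zoom has to scale the view volume itself.

        // Hypothetical values: shrinking the ortho volume by 1/zoom enlarges the scene.
        Matrix projection = Matrix.CreateOrthographic(
            viewportWidth / zoom, viewportHeight / zoom, 0.1f, 1000f);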

  • Cool examples of procedural pixel shader effects?

    - by Robert Fraser
    What are some good examples of procedural/screen-space pixel shader effects? No code necessary; just looking for inspiration. In particular, I'm looking for effects that are not dependent on geometry or the rest of the scene (would look okay rendered alone on a quad) and are not image processing (don't require a "base image", though they can incorporate textures). Multi-pass or single-pass is fine. Screenshots or videos would be ideal, but ideas work too. Here are a few examples of what I'm looking for (all from the RenderMonkey samples): PS - I'm aware of this question; I'm not asking for a source of actual shader implementations but instead for some inspirational ideas -- and the ones at the NVIDIA Shader Library mostly require a scene or are image processing effects. EDIT: this is an open-ended question and I wish there was a good way to split the bounty. I'll award the rep to the best answer on the last day.

  • Rendering Unity across multiple monitors

    - by N0xus
    At the moment I am trying to get Unity to run across 2 monitors. I've done some research and know that this is, strictly speaking, possible. There is a workaround where you basically have to fluff your window size in order to get Unity to render across both monitors. What I've done is create a new custom screen resolution that takes in the width of both of my monitors, as seen in the following image; it's the 3840 x 1080 one. However, when I go to run my Unity game's exe, that size isn't available. All I get is the following: My custom size should be at the very bottom, but isn't. Is there something I haven't done, or missed, that will get Unity to take in my custom screen size when it comes to running my game through its exe? Oddly enough, inside the Unity editor my custom screen size is picked up, and I can have the game window set to it. Is there something that I have forgotten to do when I build and run the game from the file menu? Has someone ever beaten this issue before?

  • LibGdx: weird drawing behaviour

    - by Ryckes
    I am finding strange behaviour while rendering TextureRegions in my game, but only when pausing it. I am making a game for Android, in Java with LibGdx. When I comment out the line drawLevelPaused(), everything seems to work fine, both running and paused. When it's not commented out, everything works fine until I pause the screen: then it draws those two rectangles, but sometimes the ships are not shown; and if I comment out drawShips() and drawTarget() (just trying things), sometimes one of the planets disappears; or if I change the order, other things disappear and things that disappeared before are rendered again. I can't find a way to fix this behaviour. I beg your help, and I hope it's my mistake and not a LibGdx issue. I use OpenGL ES 2.0, stated in AndroidManifest.xml, if it is of any help. My Screen render method (the game loop) is as follows:

        @Override
        public void render(float delta) {
            Gdx.gl.glClearColor(0.1f, 0.1f, 0.1f, 1);
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            controller.update(delta);
            renderer.render();
        }

    When the world state is PAUSED, controller.update does nothing at all (there is a switch in it). And renderer.render() is as follows:

        public void render() {
            int worldState = this.world.getWorldState();
            updateCamera();
            spriteBatch.begin();
            drawPlanets();
            drawTarget();
            drawShips();
            if (worldState == World.PAUSED) {
                drawLevelPaused();
            } else if (worldState == World.LEVEL_WON) {
                drawLevelWin();
            }
            spriteBatch.end();
        }

    And those methods are:

        private void updateCamera() {
            this.offset = world.getCameraOffset();
        }

        private void drawPlanets() {
            for (Planet planet : this.world.getPlanets()) {
                this.spriteBatch.draw(this.textures.getTexture(planet.getTexture()),
                    (planet.getPosition().x - this.offset[0]) * ppuX,
                    (planet.getPosition().y - this.offset[1]) * ppuY);
            }
        }

        private void drawTarget() {
            Target target = this.world.getTarget();
            this.spriteBatch.draw(this.textures.getTexture(target.getTexture()),
                (target.getPosition().x - this.offset[0]) * ppuX,
                (target.getPosition().y - this.offset[1]) * ppuY);
        }

        private void drawShips() {
            for (Ship ship : this.world.getShips()) {
                this.spriteBatch.draw(this.textures.getTexture(ship.getTexture()),
                    (ship.getPosition().x - this.offset[0]) * ppuX,
                    (ship.getPosition().y - this.offset[1]) * ppuY,
                    ship.getBounds().width * ppuX / 2,
                    ship.getBounds().height * ppuY / 2,
                    ship.getBounds().width * ppuX,
                    ship.getBounds().height * ppuY,
                    1.0f, 1.0f, ship.getAngle() - 90.0f);
            }
            if (this.world.getStillShipVisibility()) {
                Ship ship = this.world.getStillShip();
                Arrow arrow = this.world.getArrow();
                this.spriteBatch.draw(this.textures.getTexture(ship.getTexture()),
                    (ship.getPosition().x - this.offset[0]) * ppuX,
                    (ship.getPosition().y - this.offset[1]) * ppuY,
                    ship.getBounds().width * ppuX / 2,
                    ship.getBounds().height * ppuY / 2,
                    ship.getBounds().width * ppuX,
                    ship.getBounds().height * ppuY,
                    1f, 1f, ship.getAngle() - 90f);
                this.spriteBatch.draw(this.textures.getTexture(arrow.getTexture()),
                    (ship.getCenter().x - this.offset[0] - arrow.getBounds().width / 2) * ppuX,
                    (ship.getCenter().y - this.offset[1]) * ppuY,
                    arrow.getBounds().width * ppuX / 2, 0,
                    arrow.getBounds().width * ppuX,
                    arrow.getBounds().height * ppuY,
                    1f, arrow.getRate(), ship.getAngle() - 90f);
            }
        }

        private void drawLevelPaused() {
            this.shapeRenderer.begin(ShapeType.FilledRectangle);
            this.shapeRenderer.setColor(0f, 0f, 0f, 0.8f);
            this.shapeRenderer.filledRect(0, 0,
                this.width / this.ppuX, PAUSE_MARGIN_HEIGHT / this.ppuY);
            this.shapeRenderer.filledRect(0,
                (this.height - PAUSE_MARGIN_HEIGHT) / this.ppuY,
                this.width / this.ppuX, PAUSE_MARGIN_HEIGHT / this.ppuY);
            this.shapeRenderer.end();
            for (Button button : this.world.getPauseButtons()) {
                this.spriteBatch.draw(this.textures.getTexture(button.getTexture()),
                    (button.getPosition().x - this.offset[0]) * this.ppuX,
                    (button.getPosition().y - this.offset[1]) * this.ppuY);
            }
        }

  • Audio Panning using RtAudio

    - by user1801724
    I use the RtAudio library. I would like to implement an audio program where I can control the panning (e.g. shifting the sound from the left channel to the right channel). In my specific case, I use duplex mode (you can find an example here: duplex mode). That means I link the microphone input to the speaker output. I searched the web but did not find anything useful. Should I apply a filter on the output buffer? What kind of filter? Can anyone help me? Thanks
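
    Not a filter in the DSP sense, just per-sample gains: a constant-power pan law applied to each buffer before it is handed to the output. The question's code is C++/RtAudio; this C# sketch shows the math, and the interleaved-stereo buffer layout is an assumption.

        using System;

        public static class Panner
        {
            // Equal-power pan: keeps perceived loudness steady across the sweep.
            // pan = 0 is hard left, 1 is hard right, 0.5 is centre.
            public static void Pan(float[] interleaved, int frames, float pan)
            {
                float left  = (float)Math.Cos(pan * Math.PI / 2.0);
                float right = (float)Math.Sin(pan * Math.PI / 2.0);
                for (int i = 0; i < frames; i++)
                {
                    // Collapse the incoming stereo frame to mono, then weight it.
                    float mono = 0.5f * (interleaved[2 * i] + interleaved[2 * i + 1]);
                    interleaved[2 * i]     = mono * left;
                    interleaved[2 * i + 1] = mono * right;
                }
            }
        }

    In duplex mode this would run inside the audio callback, on the buffer copied from input to output.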

  • Making organic 2D tilemaps for tile-based games

    - by Codejoy
    So I have always wondered how one makes a nice (not so squarish) 2D tile map. Is it possible? All games nowadays, I think, use textured polygons... but my game engine doesn't support that, to my knowledge. It does, however, support the nice TMX files generated by mapeditor.org's Tiled map editor. In my game I want nice twisting and turning caverns to traverse, so I was wondering about some ideas for such a process... Is it in the art style? The type of tile engine? Both? What are some common techniques?
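
    One common technique, sketched below under the usual "marching squares" autotiling convention (the index-to-tile mapping is an assumption): keep the square grid, but pick the drawn tile from a 16-entry transition set based on which corners of the cell are solid. The art supplies the organic curves; the engine stays a plain tile engine.

        // Pack the four corner solidity flags into a 4-bit index into a
        // 16-tile transition set painted by the artist.
        static int TransitionIndex(bool topLeft, bool topRight,
                                   bool bottomLeft, bool bottomRight)
        {
            int index = 0;
            if (topLeft)     index |= 1;
            if (topRight)    index |= 2;
            if (bottomLeft)  index |= 4;
            if (bottomRight) index |= 8;
            return index; // 0..15, e.g. 15 = fully solid, 0 = fully open
        }

    Tiled's terrain brushes produce exactly this kind of transition tiling, so the TMX pipeline mentioned in the question can carry it without engine changes.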

  • How to fix bad Collada produced by FBX?

    - by David
    I tried to use the FBX SDK (2011.3.1) to load FBX files and save them as Collada files, in order to be able to import FBX files in Panda3D. Unfortunately the resulting Collada files are not usable, for several reasons, among them:

      - There's a Maya-specific extra technique in the diffuse element:

            <diffuse>
              <texture texture="Map__2-image" texcoord="CHANNEL0">
                <extra>
                  <technique profile="MAYA">
                    <wrapU sid="wrapU0">TRUE</wrapU>
                    <wrapV sid="wrapV0">TRUE</wrapV>
                    <blend_mode>ADD</blend_mode>
                  </technique>
                </extra>
              </texture>
            </diffuse>

      - It assigns a texcoord channel name that isn't referenced anywhere else in the file (in the previous code sample, no geometry uses "CHANNEL0").
      - Every polygon is exported twice: first with a basic material (only diffuse color, specular color, etc.) and a second time with a textured material. This doubles the number of polygons of each model without any valuable reason.

    Anyway, the resulting Collada file cannot be opened correctly with either OpenCOLLADA or Panda3D's "dae2egg". Does anyone have experience with how to "fix" it and make it understandable by common and well-reputed Collada importers such as OpenCOLLADA?

  • GLM Velocity Vectors - Basic Maths to Simulate Steering

    - by Reanimation
    UPDATE - Code updated below, but I still need help adjusting my math. I have a cube rendered on the screen which represents a car (or similar). Using projection/model matrices and GLM, I am able to move it back and forth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter what its current orientation is (i.e. if it's rotated right 30 degrees, when it moves forwards it should travel along the 30-degree angle on a new axis). I hope I've explained that correctly. This is what I've managed to do so far in terms of using GLM to move the cube:

        glm::vec3 vel; // velocity vector

        void renderMovingCube() {
            glUseProgram(movingCubeShader.handle());
            GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix");
            glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

            glm::mat4 viewMatrixMovingCube;
            viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);

            vel.x = cos(rotX);
            vel.y = sin(rotX);
            vel *= moveCube;

            // move cube
            ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos * vel);
            // bring ground and cube to bottom of screen
            ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0, -48, 0));
            ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)); // manually turn

            glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"),
                               1, GL_FALSE, &ModelViewMatrix[0][0]); // pass matrix to shader
            movingCube.render(); // draw
            glUseProgram(0);
        }

    Keyboard input:

        void keyboard() {
            char BACKWARD = keys['S'];
            char FORWARD = keys['W'];
            char ROT_LEFT = keys['A'];
            char ROT_RIGHT = keys['D'];

            if (FORWARD) // W - move forwards
            {
                globalPos += vel;
                // globalPos.z -= moveCube;
                BACKWARD = false;
            }
            if (BACKWARD) // S - move backwards
            {
                globalPos.z += moveCube;
                FORWARD = false;
            }
            if (ROT_LEFT) // A - turn left
            {
                rotX += 0.01f;
                ROT_LEFT = false;
            }
            if (ROT_RIGHT) // D - turn right
            {
                rotX -= 0.01f;
                ROT_RIGHT = false;
            }
        }

    Where am I going wrong with my vectors? I would like to change the direction of the cube (which it does), but then move forwards in that direction.
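
    A sketch of the usual heading-vector pattern, written in C# against XNA-style math (the question uses C++/GLM, where glm::vec3 plays the same role; all field names are assumptions): recompute the forward vector from the yaw each frame, then integrate position along it.

        // All fields are assumptions: yaw/turnRate in radians, speed in units/sec.
        void UpdateVehicle(float dt, bool forward, bool backward, bool left, bool right)
        {
            if (left)  yaw += turnRate * dt;
            if (right) yaw -= turnRate * dt;

            // Recompute the heading from yaw every frame; XZ is the ground plane.
            Vector3 heading = new Vector3((float)Math.Sin(yaw), 0f, (float)Math.Cos(yaw));

            if (forward)  position += heading * speed * dt;
            if (backward) position -= heading * speed * dt;
        }

    The important detail is that the velocity is derived from the heading every frame rather than stored once and reused after the orientation has changed.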

  • Implementing Light Volume Front Faces

    - by cubrman
    I recently read an article about light indexed deferred rendering from here: http://code.google.com/p/lightindexed-deferredrender/ It explains its ideas in a clear way, but there was one point that I failed to understand. It is in fact one of the most interesting ones, as it explains how to implement transparency with this approach:

        Typically when rendering light volumes in deferred rendering, only surfaces
        that intersect the light volume are marked and lit. This is generally
        accomplished by a "shadow volume like" technique of rendering back faces -
        incrementing stencil where depth is greater than - then rendering front
        faces and only accepting when depth is less than and stencil is not zero.
        By only rendering front faces where depth is less than, all future lookups
        by fragments in the forward rendering pass will get all possible lights
        that could hit the fragment.

    Can anyone explain how exactly you need to render only front faces? Another question: why do you need the front faces at all? Why can't we simply render all the lights and store the ones that overlap at this pixel in a texture? Does this approach serve as a cut-off plane to discard lights blocked by opaque geometry?

  • What are the cons of using DrawableGameComponent for every instance of a game object?

    - by Kensai
    I've read in many places that DrawableGameComponents should be saved for things like "levels" or some kind of manager, rather than used for, say, characters or tiles (like this guy says here). But I don't understand why this is so. I read this post and it made a lot of sense to me, but views like that are in the minority. I usually wouldn't pay too much attention to things like this, but in this case I would like to know why the apparent majority believes this is not the way to go. Maybe I'm missing something.

  • My grid-based collision detection is slow

    - by Fibericon
    Something about my implementation of a basic 2x4 grid for collision detection is slow - so slow, in fact, that it's actually faster to simply check every bullet from every enemy to see if its BoundingSphere intersects with that of my ship. It becomes noticeably slow when I have approximately 1000 bullets on the screen (36 enemies shooting 3 bullets every .5 seconds). By commenting it out bit by bit, I've determined that the code used to add them to the grid is what's slowest. Here's how I add them to the grid:

        for (int i = 0; i < enemy[x].gun.NumBullets; i++)
        {
            if (enemy[x].gun.bulletList[i].isActive)
            {
                enemy[x].gun.bulletList[i].Update(timeDelta);
                int bulletPosition = 0;
                if (enemy[x].gun.bulletList[i].position.Y < 0)
                {
                    bulletPosition = (int)Math.Floor((enemy[x].gun.bulletList[i].position.X + 900) / 450);
                }
                else
                {
                    bulletPosition = (int)Math.Floor((enemy[x].gun.bulletList[i].position.X + 900) / 450) + 4;
                }
                GridItem bulletItem = new GridItem();
                bulletItem.index = i;
                bulletItem.type = 5;
                bulletItem.parentIndex = x;
                if (bulletPosition > -1 && bulletPosition < 8)
                {
                    if (!grid[bulletPosition].Contains(bulletItem))
                    {
                        for (int j = 0; j < grid.Length; j++)
                        {
                            grid[j].Remove(bulletItem);
                        }
                        grid[bulletPosition].Add(bulletItem);
                    }
                }
            }
        }

    And here's how I check whether a bullet collides with the ship:

        if (ship.isActive && !ship.invincible)
        {
            BoundingSphere shipSphere = new BoundingSphere(
                ship.Position, ship.Model.Meshes[0].BoundingSphere.Radius * 9.0f);
            for (int i = 0; i < grid.Length; i++)
            {
                if (grid[i].Contains(shipItem))
                {
                    for (int j = 0; j < grid[i].Count; j++)
                    {
                        // Other collision types omitted
                        else if (grid[i][j].type == 5)
                        {
                            if (enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].isActive)
                            {
                                BoundingSphere bulletSphere = new BoundingSphere(
                                    enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].position,
                                    enemy[grid[i][j].parentIndex].gun.bulletModel.Meshes[0].BoundingSphere.Radius);
                                if (shipSphere.Intersects(bulletSphere))
                                {
                                    ship.health -= enemy[grid[i][j].parentIndex].gun.damage;
                                    enemy[grid[i][j].parentIndex].gun.bulletList[grid[i][j].index].isActive = false;
                                    grid[i].RemoveAt(j);
                                    break; // no need to check other bullets
                                }
                            }
                            else
                            {
                                grid[i].RemoveAt(j);
                            }

    What am I doing wrong here? I thought a grid implementation would be faster than checking each one.
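
    A minimal C# sketch of the usual alternative (types simplified, names made up): rebuild the buckets once per frame with plain Adds. List.Contains and List.Remove are linear scans, and if GridItem is a class without an Equals override, Contains compares references, so a freshly constructed GridItem is never found and the remove loop sweeps all eight lists for every bullet, every frame.

        using System.Collections.Generic;

        // Simplified stand-ins for the question's types.
        public sealed class Bullet { public float X, Y; public bool IsActive; }

        public sealed class BulletGrid
        {
            const int Cells = 8; // the question's 2x4 layout
            readonly List<Bullet>[] _cells = new List<Bullet>[Cells];

            public BulletGrid()
            {
                for (int i = 0; i < Cells; i++) _cells[i] = new List<Bullet>();
            }

            // Once per frame: clear and re-add. No Contains/Remove scans at all.
            public void Rebuild(IEnumerable<Bullet> bullets)
            {
                foreach (var cell in _cells) cell.Clear();
                foreach (var b in bullets)
                {
                    if (!b.IsActive) continue;
                    int index = CellIndex(b.X, b.Y);
                    if (index >= 0 && index < Cells) _cells[index].Add(b);
                }
            }

            // Same mapping as the question: four 450-unit columns, two rows.
            static int CellIndex(float x, float y)
            {
                int column = (int)System.Math.Floor((x + 900f) / 450f);
                return y < 0 ? column : column + 4;
            }

            // The ship then tests only the bullets in its own cell.
            public List<Bullet> CellFor(float x, float y)
            {
                int index = CellIndex(x, y);
                return index >= 0 && index < Cells ? _cells[index] : new List<Bullet>();
            }
        }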

  • Interesting/Innovative Open Source tools for indie games [closed]

    - by Gastón
    Just out of curiosity, I want to know about open source tools or projects that can add some interesting features to indie games, preferably features that would otherwise only be found in big-budget games. EDIT: As suggested by The Communist Duck and Joe Wreschnig, I'm putting the examples as answers. EDIT 2: Please do not post tools like PyGame, Inkscape, Gimp, Audacity, Slick2D, Phys2D, Blender (except for interesting plugins) and the like. I know they are great tools/libraries, and some would argue essential for developing good games, but I'm looking for rarer projects. It could be something really specific or niche, like generating realistic trees and plants, or realistic AI for animals.

  • Using XNA for a 2D isometric game, but wanna move on

    - by Daniel Ribeiro
    I've been building a 2D isometric game (for learning purposes) in C# using XNA. I found it's really easy to manage sprite sheet loading, collision, basic physics and such with the XNA API. The thing is, I want to move on. My real goal is to learn C++ and develop a game using that language. What engine/library would you guys recommend for me to keep going in that same 2D isometric direction, using pretty much just sprite sheets for the graphical part of the game?
