Search Results

Search found 22499 results on 900 pages for 'game level'.


  • Need help drawing planets in Java

    - by d33j
    I am looking for help/links/notes/algorithms/URLs/examples on drawing/rendering spheres in pure Java (so that I can hopefully, one day, generate/render planets with various surfaces and atmospheres). For the moment, I'd be pretty happy to start off with just drawing a wireframe sphere. P.S. I don't want to use external libraries like Java3D or JOGL, or aftermarket engines like jMonkeyEngine; I'd rather keep it as straight Java.
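
    Since the question allows plain Java, here is a minimal, hedged sketch using only Java2D (all names, counts and the tilt angle are illustrative, not from any library): the sphere is drawn as latitude rings and meridians, rotated about the X axis and projected orthographically.

        import javax.swing.*;
        import java.awt.*;

        // Illustrative wireframe sphere in pure Java2D: latitude rings and
        // meridians, tilted about the X axis, orthographic projection.
        public class WireSphere extends JPanel {
            static final double TILT = Math.toRadians(25);

            private Point project(double x, double y, double z, int cx, int cy) {
                // rotate about the X axis, then drop z (orthographic projection)
                double y2 = y * Math.cos(TILT) - z * Math.sin(TILT);
                return new Point(cx + (int) Math.round(x), cy - (int) Math.round(y2));
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                ((Graphics2D) g).setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                                                  RenderingHints.VALUE_ANTIALIAS_ON);
                int cx = getWidth() / 2, cy = getHeight() / 2, r = 150, steps = 60;
                for (int ring = 1; ring < 8; ring++) {           // latitude rings
                    double lat = Math.PI * ring / 8 - Math.PI / 2;
                    Point prev = null;
                    for (int i = 0; i <= steps; i++) {
                        double lon = 2 * Math.PI * i / steps;
                        Point p = project(r * Math.cos(lat) * Math.cos(lon),
                                          r * Math.sin(lat),
                                          r * Math.cos(lat) * Math.sin(lon), cx, cy);
                        if (prev != null) g.drawLine(prev.x, prev.y, p.x, p.y);
                        prev = p;
                    }
                }
                for (int m = 0; m < 12; m++) {                   // meridians
                    double a = Math.PI * m / 12;
                    Point prev = null;
                    for (int i = 0; i <= steps; i++) {
                        double t = 2 * Math.PI * i / steps;
                        Point p = project(r * Math.sin(t) * Math.cos(a),
                                          r * Math.cos(t),
                                          r * Math.sin(t) * Math.sin(a), cx, cy);
                        if (prev != null) g.drawLine(prev.x, prev.y, p.x, p.y);
                        prev = p;
                    }
                }
            }

            public static void main(String[] args) {
                JFrame f = new JFrame("Wireframe sphere");
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.add(new WireSphere());
                f.setSize(500, 500);
                f.setVisible(true);
            }
        }

    Swapping the fixed TILT for an angle that advances each frame gives a simple spinning planet; a perspective divide can replace the orthographic projection later.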


  • Biome Transition in a Grid & Borderless World

    - by API-Beast
    I have a universe: a list of "Systems", each with its own center, type, and radius. A small part of such a universe could look like this. Systems:

    - can be very close to a different system, e.g. overlapping
    - can be inside another, much bigger system
    - can be very far away from any other system
    - spawn system-specific entities and particles inside the system radius
    - have some properties like background color

    So far so good. However, the player can fly around freely, inside and outside of systems, in real time. How do I interpolate and determine things like the background color, depending on the camera position? E.g. if you are halfway between a green and a red system you should see a background halfway between red and green, or if you are near the center of a lilac system and at the border of a green system, you should get a mostly lilac background, etc.
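
    One hedged way to read this (a sketch, not from the question): let each system contribute a weight that is 1 at its center and falls to 0 at its radius, then normalize the weights and blend the colors, mixing in a "deep space" color when the total weight is under 1. The class and field names below are assumptions.

        import java.awt.Color;
        import java.util.List;

        // Illustrative inverse-falloff blend of system background colors.
        class BackgroundBlender {
            static Color blend(List<StarSystem> systems, float camX, float camY, Color deepSpace) {
                float r = 0, g = 0, b = 0, total = 0;
                for (StarSystem s : systems) {
                    float dist = (float) Math.hypot(camX - s.centerX, camY - s.centerY);
                    float w = Math.max(0f, 1f - dist / s.radius); // 1 at center, 0 at edge
                    r += s.background.getRed() * w;
                    g += s.background.getGreen() * w;
                    b += s.background.getBlue() * w;
                    total += w;
                }
                if (total <= 0f) return deepSpace;                // outside every system
                if (total < 1f) {                                 // partially in deep space
                    r += deepSpace.getRed() * (1f - total);
                    g += deepSpace.getGreen() * (1f - total);
                    b += deepSpace.getBlue() * (1f - total);
                    total = 1f;
                }
                return new Color((int) (r / total), (int) (g / total), (int) (b / total));
            }
        }

        // Assumed data holder (not from the question):
        class StarSystem {
            float centerX, centerY, radius;
            Color background;
        }

    At the center of a lilac system overlapping a green border, the lilac weight dominates after normalization, which matches the behaviour the question asks for.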


  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shadows in my WebGL application. I know that there are more techniques (like Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. So far, I have adapted a simple shadow mapping technique that works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture; see the "glitchy shadow mapping" screenshot). This is how I understand the usage of cubemaps in shadow mapping so far:

    1. Set up a framebuffer (in the case of cubemaps, 6 framebuffers: 6 instead of 1, because every use of framebufferTexture2D slows down execution, which is nicely described elsewhere) and a cubemap texture. Also, depth components are not well supported in WebGL, so I need to render to RGBA first.

        this.texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        for (var face = 0; face < 6; face++)
            gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA,
                this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);

        this.framebuffer = [];
        for (face = 0; face < 6; face++) {
            this.framebuffer[face] = gl.createFramebuffer();
            gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]);
            gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0);
            gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                gl.RENDERBUFFER, this.depthbuffer);
            var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors
            if (e !== gl.FRAMEBUFFER_COMPLETE)
                throw "Cubemap framebuffer object is incomplete: " + e.toString();
        }

    2. Set up the light and the camera (I'm not sure if I should store all 6 view matrices and send them to the shaders later, or if there is a way to do it with just one view matrix).

    3. Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z):

        for (var face = 0; face < 6; face++) {
            gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]);
            gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size);
            gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
            camera.lookAt( light.position.add( cubeMapDirections[face] ) );
            scene.draw(shadow.program);
        }

    4. In a second pass, calculate the projection of the current vertex using the light's projection and view matrix. I don't know if I should calculate 6 of them, one per cubemap face. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 range:

        vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;

    5. In the fragment shader, calculate the distance between the current vertex and the light position, and check if it's deeper than the depth read from the earlier rendered shadow map. I know how to do this with a 2D texture, but I have no idea how to use a cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex? For now, my code for this part looks like this (not working yet):

        float shadow = 1.0;
        vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
        depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
        float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
        if (depth.z > shadowDepth)
            shadow = 0.5;

    Could you give me some hints or examples (preferably in WebGL code) of how I should build it?


  • Character Jump Control

    - by Abdullah Sorathia
    I would like to know how I can control the jump movement of a character in Unity3D. Basically I am trying to program the jump in such a way that, while a jump is in progress, the character is allowed to move either left or right in mid-air when the corresponding keys are pressed. With my script the character will correctly move to the left when, for example, the left key is pressed, but when the right key is pressed afterwards, the character moves to the right before the movement to the left is completed. Following is the script (the final closing brace of Update was missing in the original and has been restored):

        void Update () {
            if (touchingPlatform && Input.GetButtonDown("Jump")) {
                rigidbody.AddForce(jumpVelocity, ForceMode.VelocityChange);
                touchingPlatform = false;
                isJump = true;
            }

            // left & right movement
            Vector3 moveDir = Vector3.zero;
            if (Input.GetKey("right") || Input.GetKey("left")) {
                moveDir.x = Input.GetAxis("Horizontal"); // get result of A/D keys in X
                if (ShipCurrentSpeed == 0) {
                    transform.position += moveDir * 3f * Time.deltaTime;
                } else if (ShipCurrentSpeed <= 15) {
                    transform.position += moveDir * ShipCurrentSpeed * 2f * Time.deltaTime;
                } else {
                    transform.position += moveDir * 30f * Time.deltaTime;
                }
            }
        }


  • Simple 3D Physics engine as a part of graduation project [on hold]

    - by Eugene Kolesnikov
    I am working on my graduation project, and one part of it is to simulate the motion of a rigid body in 3D space. I can use either an already written physics engine or write one myself. It's quite an interesting challenge for me, so I would like to do it myself. I can use either C++ or Java for programming (I prefer C++). I am using Mac OS X and Debian 7. Could you suggest any guides or tutorials on how to do it? I can't find any. More precisely, I need a very simple engine, without collision detection and the many other things that I do not know about; I just need to calculate the forces and move my body depending on the resultant force. If you think that this task is still very difficult, or that there is no such tutorial, please suggest a good and simple engine.
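
    As a hedged illustration of the core loop such an engine needs (no collision detection, just force accumulation and integration), here is a minimal semi-implicit Euler sketch in Java; all names are made up for the example, and rotation is omitted for brevity (a full rigid body would also integrate angular velocity from accumulated torque).

        // Minimal rigid-body sketch: accumulate forces, integrate velocity,
        // then position (semi-implicit Euler).
        class Vec3 {
            double x, y, z;
            Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
            Vec3 add(Vec3 o)     { return new Vec3(x + o.x, y + o.y, z + o.z); }
            Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }
        }

        class Body {
            double mass = 1.0;
            Vec3 position = new Vec3(0, 0, 0);
            Vec3 velocity = new Vec3(0, 0, 0);
            Vec3 forceAccum = new Vec3(0, 0, 0);  // resultant force this step

            void applyForce(Vec3 f) { forceAccum = forceAccum.add(f); }

            void integrate(double dt) {
                Vec3 acceleration = forceAccum.scale(1.0 / mass); // a = F / m
                velocity = velocity.add(acceleration.scale(dt));  // v += a * dt
                position = position.add(velocity.scale(dt));      // x += v * dt
                forceAccum = new Vec3(0, 0, 0);                   // clear for next step
            }
        }

        public class Sim {
            public static void main(String[] args) {
                Body b = new Body();
                for (int step = 0; step < 100; step++) {
                    b.applyForce(new Vec3(0, -9.81 * b.mass, 0)); // gravity
                    b.integrate(1.0 / 60.0);
                }
                System.out.printf("y after ~1.67s of free fall: %.2f%n", b.position.y);
            }
        }

    Integrating velocity before position (rather than the reverse) keeps the integrator noticeably more stable for game-style simulations at no extra cost.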


  • Why does my program stutter so much?

    - by user36322
    I've been very frustrated trying to solve this. I've looked it up, and all the answers are the same: set IsFixedTimeStep = false. This doesn't help me at all; the program is still jittery and stutters. I have absolutely no idea what is going on. Can you guys help? Code for the movement (objects is a list):

        speed = Math.Min(speed + (speedIncrement * gameTime.ElapsedGameTime.Milliseconds / 200), maxSpeed);
        for (int i = objects.Count - 1; i >= 0; i--)
        {
            objects[i].rect.Y += (int)(speed * gameTime.ElapsedGameTime.Milliseconds);

            //Check if the object is past the screen. If it is, remove it
            if (objects[i].rect.Y > screenHeight)
            {
                objects.Remove(objects[i]);
            }
        }
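
    Not from the question, but one common source of this kind of stutter is the (int) cast above: it throws away the fractional part of each frame's movement, so objects advance by uneven whole-pixel steps. A hedged sketch of the usual fix, in Java for illustration: keep the position in floating point between frames and round only when drawing.

        // Illustrative: integrate movement in floating point and round only
        // at draw time, so fractional per-frame movement isn't truncated away.
        class FallingObject {
            float y;                                  // position kept as a float
            int drawY() { return Math.round(y); }     // round once, when rendering

            void update(float speedPerMs, float elapsedMs) {
                y += speedPerMs * elapsedMs;          // no (int) cast here
            }
        }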


  • Java Slick2D - How to take the camera into account for mouse picking

    - by Corey
    When I move the camera it obviously changes the viewport, so my mouse picking is off. My camera is just a float x and y, and I use g.translate(-cam.cameraX+400, -cam.cameraY+300); to translate the graphics. I have the numbers hard-coded just for testing purposes. How would I take the camera into account so that my mouse picking works correctly? This is my mouse picking code:

        double mousetileX = Math.floor((double)mouseX/tiles.tileWidth);
        double mousetileY = Math.floor((double)mouseY/tiles.tileHeight);
        double playertileX = Math.floor(playerX/tiles.tileWidth);
        double playertileY = Math.floor(playerY/tiles.tileHeight);

        double lengthX = Math.abs((float)playertileX - mousetileX);
        double lengthY = Math.abs((float)playertileY - mousetileY);
        double distance = Math.sqrt((lengthX*lengthX)+(lengthY*lengthY));

        if(input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) {
            if(tiles.map[(int)mousetileX][(int)mousetileY] == 1) {
                tiles.map[(int)mousetileX][(int)mousetileY] = 0;
            }
        }
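
    A hedged sketch of the usual fix (not confirmed by the question, but it follows from the translate call shown): since the graphics are shifted by (-cameraX+400, -cameraY+300), a screen-space mouse position maps back to world space by applying the opposite offset before dividing into tiles. Variable names follow the question; the rest is assumption.

        // Undo the camera translate before converting to tile coordinates.
        // g.translate(-cam.cameraX + 400, -cam.cameraY + 300) means:
        //   screen = world - camera + offset  =>  world = screen + camera - offset
        float worldMouseX = mouseX + cam.cameraX - 400;
        float worldMouseY = mouseY + cam.cameraY - 300;

        double mousetileX = Math.floor(worldMouseX / tiles.tileWidth);
        double mousetileY = Math.floor(worldMouseY / tiles.tileHeight);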


  • One Step-Ahead A-Star

    - by Jonathan Dickinson
    I am attempting to create a server-centric RTS (as opposed to the usual parallel synchronised-simulation route of most RTS games today) - however I am still leveraging the discrete N-turns-ahead paradigm discussed by one of the AOE developers on Gamasutra. I have [possibly questionably?] decided that the pathfinding should only ever find the next cell the entity needs to move to, and was wondering if anyone has any clever ideas on how to optimize the algorithm for this specific scenario - or any other ideas on how to keep the pathfinding as lean as possible on the server. I have investigated a few possible algorithms but could only come up with one adaptation: tiered A-Star - relatively large T1 tiles, work out (and cache) each cell as you enter it. Other than that: doing the full A-Star pass and caching the entire path, which might use too much memory if a large number of units are present. I know about the existence of naive progressive pathfinding algorithms (if you hit a block, turn in the direction closer to your target, etc.) but they suffer from infinite feedback loops - and very poor pathing even if visited blocks are memorised. Not an option. Many thanks.
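
    One technique not mentioned in the question but that fits the "only the next cell" constraint well is a flow field: a single breadth-first (or Dijkstra) pass outward from the goal labels every reachable cell with its next step toward that goal, so any number of units heading to the same target share one cached field, recomputed only when the map or goal changes. A hedged Java sketch on a 4-connected grid:

        import java.util.ArrayDeque;

        // Illustrative flow-field build: one BFS from the goal gives every
        // cell the direction of its next step, making per-unit pathing an
        // O(1) lookup per turn.
        class FlowField {
            static final int[] DX = { 1, -1, 0, 0 };
            static final int[] DY = { 0, 0, 1, -1 };

            // next[y][x] = direction index (0..3) toward the goal, -1 if unreachable
            static int[][] build(boolean[][] walkable, int goalX, int goalY) {
                int h = walkable.length, w = walkable[0].length;
                int[][] next = new int[h][w];
                for (int[] row : next) java.util.Arrays.fill(row, -1);
                boolean[][] seen = new boolean[h][w];
                ArrayDeque<int[]> queue = new ArrayDeque<>();
                queue.add(new int[] { goalX, goalY });
                seen[goalY][goalX] = true;
                while (!queue.isEmpty()) {
                    int[] c = queue.poll();
                    for (int d = 0; d < 4; d++) {
                        int nx = c[0] + DX[d], ny = c[1] + DY[d];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        if (seen[ny][nx] || !walkable[ny][nx]) continue;
                        seen[ny][nx] = true;
                        next[ny][nx] = d ^ 1;  // opposite direction points back toward goal
                        queue.add(new int[] { nx, ny });
                    }
                }
                return next;
            }
        }

    Memory is one int per cell per active goal, independent of unit count, which is often cheaper than caching a full A-Star path per unit.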


  • Isometric layer moving inside map

    - by gronzzz
    I've created an isometric map and am now trying to limit the layer movement. The main idea is that I have left-bottom, right-bottom, left-top and right-top points that the camera can not move outside of, so the player will not see the map out of bounds. But I can not work out an algorithm to do that. This is my layer scale/moving code:

        - (void)touchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            _isTouchBegin = YES;
        }

        - (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event {
            NSArray *allTouches = [[event allTouches] allObjects];
            UITouch *touchOne = [allTouches objectAtIndex:0];
            CGPoint touchLocationOne = [touchOne locationInView:[touchOne view]];
            CGPoint previousLocationOne = [touchOne previousLocationInView:[touchOne view]];

            // Scaling
            if ([allTouches count] == 2) {
                _isDragging = NO;
                UITouch *touchTwo = [allTouches objectAtIndex:1];
                CGPoint touchLocationTwo = [touchTwo locationInView:[touchTwo view]];
                CGPoint previousLocationTwo = [touchTwo previousLocationInView:[touchTwo view]];

                CGFloat currentDistance = sqrt(
                    pow(touchLocationOne.x - touchLocationTwo.x, 2.0f) +
                    pow(touchLocationOne.y - touchLocationTwo.y, 2.0f));
                CGFloat previousDistance = sqrt(
                    pow(previousLocationOne.x - previousLocationTwo.x, 2.0f) +
                    pow(previousLocationOne.y - previousLocationTwo.y, 2.0f));
                CGFloat distanceDelta = currentDistance - previousDistance;

                CGPoint pinchCenter = ccpMidpoint(touchLocationOne, touchLocationTwo);
                pinchCenter = [self convertToNodeSpace:pinchCenter];
                CGFloat predictionScale = self.scale + (distanceDelta * PINCH_ZOOM_MULTIPLIER);
                if ([self predictionScaleInBounds:predictionScale]) {
                    [self scale:predictionScale scaleCenter:pinchCenter];
                }
            } else {
                // Dragging
                _isDragging = YES;
                CGPoint previous = [[CCDirector sharedDirector] convertToGL:previousLocationOne];
                CGPoint current = [[CCDirector sharedDirector] convertToGL:touchLocationOne];
                CGPoint delta = ccpSub(current, previous);
                self.position = ccpAdd(self.position, delta);
            }
        }

        - (void)touchEnded:(UITouch *)touch withEvent:(UIEvent *)event {
            _isDragging = NO;
            _isTouchBegin = NO;
            // Check if i need to bounce
            _touchLoc = [touch locationInNode:self];
        }

        #pragma mark - Update

        - (void)update:(CCTime)delta {
            CGPoint position = self.position;
            float scale = self.scale;
            static float friction = 0.92f; //0.96f;

            if (_isDragging && !_isScaleBounce) {
                _velocity = ccp((position.x - _lastPos.x)/2, (position.y - _lastPos.y)/2);
                _lastPos = position;
            } else {
                _velocity = ccp(_velocity.x * friction, _velocity.y * friction);
                position = ccpAdd(position, _velocity);
                self.position = position;
            }

            if (_isScaleBounce && !_isTouchBegin) {
                float min = fabsf(self.scale - MIN_SCALE);
                float max = fabsf(self.scale - MAX_SCALE);
                int dif = max > min ? 1 : -1;
                if ((scale > MAX_SCALE - SCALE_BOUNCE_AREA) ||
                    (scale < MIN_SCALE + SCALE_BOUNCE_AREA)) {
                    CGFloat newSscale = scale + dif * (delta * friction);
                    [self scale:newSscale scaleCenter:_touchLoc];
                } else {
                    _isScaleBounce = NO;
                }
            }
        }
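
    Not from the question, but the usual approach to the bounds problem itself is to clamp the layer position after every pan/zoom so the visible window can never leave the map's corner points. A hedged Java sketch of the math for a rectangular map (a diamond-shaped iso map needs its bounds derived from the four corner points at the current scale, but the clamping idea is the same; the cocos2d version would clamp self.position identically):

        // Illustrative: clamp a layer offset so the view never shows past the
        // map edges. mapWidth/mapHeight are the *scaled* map extents in the
        // same space as the position; viewWidth/viewHeight is the screen size.
        static float clamp(float v, float min, float max) {
            return Math.max(min, Math.min(max, v));
        }

        static float[] clampLayerPosition(float posX, float posY,
                                          float mapWidth, float mapHeight,
                                          float viewWidth, float viewHeight) {
            // A layer dragged opposite to the camera must stay in [view - map, 0].
            float x = clamp(posX, viewWidth - mapWidth, 0f);
            float y = clamp(posY, viewHeight - mapHeight, 0f);
            return new float[] { x, y };
        }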


  • How to calculate vertex normals for a mesh in Java in an OpenGL ES application?

    - by alan mc
    Can someone point me to Java code (in Java, not C or C++) that calculates all the normals for all the vertices of a mesh, for an OpenGL ES application? I need this for lighting. Let's say I have a cube with the following vertices and indices:

        float vertices[] = {
            -width, -height, -depth, // 0
             width, -height, -depth, // 1
             width,  height, -depth, // 2
            -width,  height, -depth, // 3
            -width, -height,  depth, // 4
             width, -height,  depth, // 5
             width,  height,  depth, // 6
            -width,  height,  depth  // 7
        };

        short indices[] = { 0, 2, 1,   0, 3, 2,
                            1, 2, 6,   6, 5, 1,
                            4, 5, 6,   6, 7, 4,
                            2, 3, 6,   6, 3, 7,
                            0, 7, 3,   0, 4, 7,
                            0, 1, 5,   0, 5, 4 };

    In the above specific example, how many normals do we need?
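
    A hedged sketch of the standard approach: one normal per vertex entry (so 8 for this layout), computed by accumulating each triangle's face normal into its three vertices and normalizing at the end. Note that averaged normals give a cube soft, rounded shading; crisp hard edges need duplicated vertices (24 in total), one copy per face.

        // Illustrative: per-vertex normals by averaging adjacent face normals.
        // vertices = packed x,y,z triples; indices = triangles, as in the question.
        static float[] computeNormals(float[] vertices, short[] indices) {
            float[] normals = new float[vertices.length];
            for (int i = 0; i < indices.length; i += 3) {
                int a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
                // two edge vectors of the triangle
                float e1x = vertices[b]   - vertices[a];
                float e1y = vertices[b+1] - vertices[a+1];
                float e1z = vertices[b+2] - vertices[a+2];
                float e2x = vertices[c]   - vertices[a];
                float e2y = vertices[c+1] - vertices[a+1];
                float e2z = vertices[c+2] - vertices[a+2];
                // face normal = e1 x e2 (the winding order sets its direction)
                float nx = e1y * e2z - e1z * e2y;
                float ny = e1z * e2x - e1x * e2z;
                float nz = e1x * e2y - e1y * e2x;
                for (int v : new int[] { a, b, c }) { // accumulate on all 3 vertices
                    normals[v]     += nx;
                    normals[v + 1] += ny;
                    normals[v + 2] += nz;
                }
            }
            for (int v = 0; v < normals.length; v += 3) { // normalize each sum
                float len = (float) Math.sqrt(normals[v] * normals[v]
                          + normals[v+1] * normals[v+1] + normals[v+2] * normals[v+2]);
                if (len > 1e-6f) {
                    normals[v] /= len; normals[v+1] /= len; normals[v+2] /= len;
                }
            }
            return normals;
        }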


  • How can I get a rotation vector from a Matrix4x4 in XNA?

    - by mr.Smyle
    I want to get a rotation vector from a matrix, to implement a parent-children system for models.

        Matrix bonePos = link.Bone.Transform * World;
        Matrix m = Matrix.CreateTranslation(link.Offset)
            * Matrix.CreateScale(link.gameObj.Scale.X, link.gameObj.Scale.Y, link.gameObj.Scale.Z)
            * Matrix.CreateFromYawPitchRoll(
                MathHelper.ToRadians(link.gameObj.Rotation.Y),
                MathHelper.ToRadians(link.gameObj.Rotation.X),
                MathHelper.ToRadians(link.gameObj.Rotation.Z))
            // need rotation vector from bone matrix here
            // (now it's the global model rotation vector)
            * Matrix.CreateFromYawPitchRoll(
                MathHelper.ToRadians(Rotation.Y),
                MathHelper.ToRadians(Rotation.X),
                MathHelper.ToRadians(Rotation.Z))
            * Matrix.CreateTranslation(bonePos.Translation);
        link.gameObj.World = m;

    Here link is a struct with the child model's settings (position, rotation, etc.), and link.Bone is the parent bone.
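
    For illustration only, here is one way to extract yaw/pitch/roll angles from a rotation matrix, in Java to show the math. The big caveat: this assumes column vectors and the composition M = Ry(yaw) * Rx(pitch) * Rz(roll); XNA and other frameworks may use row vectors or a different order, so the indices and signs must be checked against the convention actually in use.

        // Hedged sketch: Euler angles from a 3x3 rotation matrix, assuming
        // column vectors and M = Ry(yaw) * Rx(pitch) * Rz(roll).
        static float[] yawPitchRoll(float[][] m) {
            float pitch = (float) Math.asin(-m[1][2]);       // m12 = -sin(pitch)
            float yaw, roll;
            if (Math.abs(m[1][2]) < 0.9999f) {
                yaw  = (float) Math.atan2(m[0][2], m[2][2]); // sin(yaw)cos(p) / cos(yaw)cos(p)
                roll = (float) Math.atan2(m[1][0], m[1][1]); // cos(p)sin(r) / cos(p)cos(r)
            } else {                                         // gimbal lock: cos(pitch) ~ 0
                yaw  = (float) Math.atan2(-m[2][0], m[0][0]);
                roll = 0f;                                   // yaw/roll are coupled here
            }
            return new float[] { yaw, pitch, roll };
        }

    A cleaner route in practice is often to avoid Euler angles entirely and carry the parent bone's rotation as a quaternion or as the matrix itself, composing child transforms by matrix multiplication rather than re-extracting angles.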


  • Is there a way to use the Facebook SDK with libgdx?

    - by Rudy_TM
    I have tried to use the Facebook SDK in libgdx with callbacks, but it never enters the authentication listeners, so the user is never logged in. It permits the authorization for the Facebook app, but it never invokes the authentication interfaces :( Is there a way to use it?

        public MyFbClass() {
            facebook = new Facebook(APPID);
            mAsyncRunner = new AsyncFacebookRunner(facebook);
            SessionStore.restore(facebook, this);
            FB.init(this, 0, facebook, this.permissions);
        }

        /// Method to init the permissions and my listener for authentication
        public void init(final Activity activity, final Facebook fb, final String[] permissions) {
            mActivity = activity;
            this.fb = fb;
            mPermissions = permissions;
            mHandler = new Handler();
            async = new AsyncFacebookRunner(mFb);
            params = new Bundle();
            SessionEvents.addAuthListener(auth);
        }

        /// I start the authentication process; I call it with a callback from libgdx
        public void facebookAction() {
            fb.authenticate();
        }

        /// It only allows the app permission; it doesn't invoke the listeners
        public void authenticate() {
            if (mFb.isSessionValid()) {
                SessionEvents.onLogoutBegin();
                AsyncFacebookRunner asyncRunner = new AsyncFacebookRunner(mFb);
                asyncRunner.logout(getContext(), new LogoutRequestListener());
                //SessionStore.save(this.mFb, getContext());
            } else {
                mFb.authorize(mActivity, mPermissions, 0, new DialogListener());
            }
        }

        public class SessionListener implements AuthListener, LogoutListener {
            @Override
            public void onAuthSucceed() {
                SessionStore.save(mFb, getContext());
            }

            @Override
            public void onAuthFail(String error) { }

            @Override
            public void onLogoutBegin() { }

            @Override
            public void onLogoutFinish() {
                SessionStore.clear(getContext());
            }
        }

        // This listener declaration was dangling in the original post; a field
        // name has been added here so the snippet parses:
        private final DialogListener loginDialogListener = new DialogListener() {
            @Override
            public void onComplete(Bundle values) {
                SessionEvents.onLoginSuccess();
            }

            @Override
            public void onFacebookError(FacebookError error) {
                SessionEvents.onLoginError(error.getMessage());
            }

            @Override
            public void onError(DialogError error) {
                SessionEvents.onLoginError(error.getMessage());
            }

            @Override
            public void onCancel() {
                SessionEvents.onLoginError("Action Canceled");
            }
        };


  • Creating a voxel chunk with a VBO - How to translate the coordinates of each block and add it to the VBO chunk?

    - by sunsunsunsunsun
    I'm trying to make a voxel engine similar to Minecraft as a little learning experience and a way to learn some OpenGL. I have created a chunk class, and I want to put all of the vertices for the whole chunk into a single VBO. Previously I was putting each block into its own VBO and making a call to render each block. Anyways, I am a bit confused about how to translate the coordinates of each block in the chunk when I'm putting all vertices into one VBO. This is what I have at the moment:

        public void putVertices(float tx, float ty, float tz) {
            float l_length = 1.0f;
            float l_height = 1.0f;
            float l_width  = 1.0f;
            vertexPositionData.put(new float[]{
                // top face (+y)
                xOffset +  l_length + tx,  l_height + ty, zOffset + -l_width + tz,
                xOffset + -l_length + tx,  l_height + ty, zOffset + -l_width + tz,
                xOffset + -l_length + tx,  l_height + ty, zOffset +  l_width + tz,
                xOffset +  l_length + tx,  l_height + ty, zOffset +  l_width + tz,
                // bottom face (-y)
                xOffset +  l_length + tx, -l_height + ty, zOffset +  l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset +  l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset + -l_width + tz,
                xOffset +  l_length + tx, -l_height + ty, zOffset + -l_width + tz,
                // front face (+z)
                xOffset +  l_length + tx,  l_height + ty, zOffset +  l_width + tz,
                xOffset + -l_length + tx,  l_height + ty, zOffset +  l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset +  l_width + tz,
                xOffset +  l_length + tx, -l_height + ty, zOffset +  l_width + tz,
                // back face (-z)
                xOffset +  l_length + tx, -l_height + ty, zOffset + -l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset + -l_width + tz,
                xOffset + -l_length + tx,  l_height + ty, zOffset + -l_width + tz,
                xOffset +  l_length + tx,  l_height + ty, zOffset + -l_width + tz,
                // left face (-x)
                xOffset + -l_length + tx,  l_height + ty, zOffset +  l_width + tz,
                xOffset + -l_length + tx,  l_height + ty, zOffset + -l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset + -l_width + tz,
                xOffset + -l_length + tx, -l_height + ty, zOffset +  l_width + tz,
                // right face (+x)
                xOffset +  l_length + tx,  l_height + ty, zOffset + -l_width + tz,
                xOffset +  l_length + tx,  l_height + ty, zOffset +  l_width + tz,
                xOffset +  l_length + tx, -l_height + ty, zOffset +  l_width + tz,
                xOffset +  l_length + tx, -l_height + ty, zOffset + -l_width + tz
            });
        }

        public void createChunk() {
            vertexPositionData = BufferUtils.createFloatBuffer((24 * 3) * activateBlocks);
            Random random = new Random();
            for (int x = 0; x < CHUNK_SIZE; x++) {
                for (int y = 0; y < CHUNK_SIZE; y++) {
                    for (int z = 0; z < CHUNK_SIZE; z++) {
                        if (blocks[x][y][z].getActive()) {
                            putVertices(x * 2.0f, y * 2.0f, z * 2.0f);
                        }
                    }
                }
            }
        }

    (The closing braces of createChunk were missing in the original post and have been restored.) What's an easy way to translate the vertices of each block into its correct position? I was previously using glTranslatef with each call to render a block, but that won't work now. What I am doing now also does not work; the blocks all render in stacks on top of each other, and it looks like this: http://i.imgur.com/NyFtBTI.png Thanks


  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector view and projection matrix:

        Matrix projection = Matrix.CreateOrthographicOffCenter(
            -halfWidth * Scale, halfWidth * Scale,
            -halfHeight * Scale, halfHeight * Scale,
            1, 100000);
        Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);

    Here halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position, and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader:

        float4x4 InvViewProjection;
        texture2D DepthTexture;
        texture2D NormalTexture;
        texture2D ProjectorTexture;
        float4x4 ProjectorViewProjection;

        sampler2D depthSampler = sampler_state {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D projectorSampler = sampler_state {
            texture = <ProjectorTexture>;
            AddressU = Clamp;
            AddressV = Clamp;
        };

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position) {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel() {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

        struct VertexShaderInput {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput {
            float4 Position : POSITION0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input) {
            VertexShaderOutput output;
            output.Position = input.Position;
            output.PositionCopy = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 {
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);
            //return float4(depth.r, 0, 0, 1);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // Compute the projection
            float3 projection = tex2D(projectorSampler,
                postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
            return float4(projection, 1);
        }

    In the first part of the pixel shader, the position is recovered from the G-buffer (I use this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: the green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for advice, and sorry for my English. EDIT: The shader above is working, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appeared. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.


  • Permanently sync a Wiimote with a computer

    - by Adam Geisweit
    I have tried to look up many ways to sync my Wiimotes with my computer so that I can program games with them, but every time it only syncs them up temporarily; or, if a tool says it can permanently sync them, it doesn't actually do it. It gets tiresome having to reconnect them every time I want to save battery life. How can I sync my Wiimote with my computer so that, if I turn the Wiimote off, I can just hit any button and it will automatically sync up again?


  • Scaling along an arbitrary axis (Dealing with non-uniform scale)

    - by Jon
    I'm trying to build my own little engine to get more familiar with the concepts of 3D programming. I have a transform class that, on each frame, creates a scaling matrix (S) and a rotation matrix from a quaternion (R), and concatenates them together (S*R). Once I have SR, I insert the translation values into the bottom row of the three columns. So I end up with a transformation matrix that looks like:

        [SR SR SR 0]
        [SR SR SR 0]
        [SR SR SR 0]
        [tx ty tz 1]

    This works perfectly in all cases except when rotating an object that has a non-uniform scale. For example, a unit cube with ScaleX = 4, ScaleY = 2, ScaleZ = 1 will give me a rectangular box that is 4 times as wide as the depth and twice as high as the depth. If I then translate this around, the box stays the same and looks normal. The problem happens whenever I try to rotate this scaled box: the shape itself becomes distorted, and it appears as though the scale factors are affecting the object along the world X, Y, Z axes rather than the local X, Y, Z axes of the object. I've done some pretty extensive research through a variety of textbooks (Eberly, Moller/Hoffman, Pharr, etc.) and there isn't a ton there to go off of. Online, most of the answers say to avoid non-uniform scaling, and I understand the desire to avoid it, but I'd still like to figure out how to support it. The only thing I can think of is that when constructing a scale matrix:

        [sx  0  0  0]
        [ 0 sy  0  0]
        [ 0  0 sz  0]
        [ 0  0  0  1]

    this is scaling along the world axes instead of the object's local direction, up and right vectors, or its local Z, Y, X axes. Does anyone have any tips or ideas on how to construct a transformation matrix that allows for non-uniform scaling and rotation? Thanks!
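
    Since the title asks about scaling along an arbitrary axis, one standard construction (not from the question) is worth writing down: to scale by factor k along a unit axis n, split a vector into its component along n and the rest, and scale only the parallel part. In matrix form, S = I + (k - 1) n nT. A hedged Java sketch of the 3x3 block:

        // Illustrative: 3x3 matrix scaling by factor k along unit axis (nx,ny,nz).
        // Derivation: S v = v + (k - 1) * (n . v) * n, i.e. S = I + (k-1) n nT.
        static float[][] scaleAlongAxis(float nx, float ny, float nz, float k) {
            float[] n = { nx, ny, nz };
            float[][] s = new float[3][3];
            for (int i = 0; i < 3; i++) {
                for (int j = 0; j < 3; j++) {
                    s[i][j] = (i == j ? 1f : 0f) + (k - 1f) * n[i] * n[j];
                }
            }
            return s;
        }

    When n is a coordinate axis this reduces to the familiar diagonal scale matrix, and composing three such scales along an object's local right/up/forward vectors scales in local space regardless of the object's orientation.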


  • Maya Animated Character export for XNA 4.0 problem

    - by FahidK
    To begin with, I'm trying to export an animated character in .fbx format from Maya 2013 to XNA 4.0. In Maya, the model has a basic rig, and the animations are in clips made in the Trax editor. The issue I'm having: after selecting the model and the root joint and hitting export in .fbx format, for some reason when I open the exported .fbx file, the joint system is detached from the model, with no animation. By the way, I have the animations in clips so that they can be called in code, for example "run", "walk", "attack". So, what can I do to solve this problem? Thank you.


  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self-education, I'm writing a 2D platformer engine in C++ using SDL/OpenGL. I initially began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode, but I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates form a right-handed system with values from 0 to 1 -- flipping the texture vertically before rendering (well, flipping the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice, or has done this themselves and can point out future pitfalls, that would be great, but really any thoughts would be appreciated.


  • OpenGL setup on Windows

    - by kevin james
    I have been trying to use OpenGL for two days now: first on Mac, then on Windows. The problem with Mac is that it doesn't support the newer versions of OpenGL. I ran a tutorial that actually did get some things working, but it only works in Xcode (i.e., I can't create a new file, paste in the same code, and get it to work). Because of these issues, I moved to Windows. My Windows 7 machine has OpenGL 4.3, which is the same version used in a lot of other tutorials. However, not one of these tutorials gives any instruction on how to set it up for the first time. I have come across some vague posts saying that some libraries need to be linked. But WHICH libraries, and HOW do I link them? Please help. I am pretty desperate to set this up, as this project is due for work soon. I have actually used OpenGL before at my university, but the computers there already had everything set up. The project itself is very easy, but setting up OpenGL is not something I know how to do.


  • Drawing texture does not work anymore with a small amount of triangles

    - by Paul
    When I draw lines, the vertices are well connected. But when I draw the texture inside the triangles, it only works with i<4 in the for loop; with i<5, for example, there is an EXC_BAD_ACCESS message at @synthesize textureImage = _textureImage. I don't understand why. (The generatePolygons method seems to work fine, as I tried to draw lines with many vertices, as in the second image below. And textureImage remains the same for i<4 or i<5: it's a 512px square image.) Here are the images. What I want to achieve is to put the red points, connect them to the y-axis (the green points), and color the area (the green triangles). If I only draw lines, it works fine. Then, with a texture color, it works for i<4 in the loop (the red points in my first image, plus the fifth one to connect the last y). But if I set i<5, the debug tool reports EXC_BAD_ACCESS at the synthesize of _textureImage. Here is my code. I set a texture color in HelloWordLayer.mm with:

        CCSprite *textureImage = [self spriteWithColor:color3 textureSize:512];
        _terrain.textureImage = textureImage;

    Then in the Terrain class, I create the vertices and draw the texture in the draw method:

        @implementation Terrain

        @synthesize textureImage = _textureImage; // EXC_BAD_ACCESS for i<5

        - (void)generatePath2 {
            CGSize winSize = [CCDirector sharedDirector].winSize;
            float x = 40;
            float y = 0;
            for (int i = 0; i < kMaxKeyPoints + 1; ++i) {
                _hillKeyPoints[i] = CGPointMake(x, y);
                x = 150 + (random() % (int) 30);
                y += 30;
            }
        }

        - (void)generatePolygons {
            _nPolyVertices = 0;
            float x1 = 0;
            float y1 = 0;
            int keyPoints = 0;
            for (int i = 0; i < 4; i++) { /* HERE: 4 = OK / 5 = crash */
                // V0: at (0,0)
                _polyVertices[_nPolyVertices] = CGPointMake(x1, y1);
                _polyTexCoords[_nPolyVertices++] = CGPointMake(x1, y1);

                // V1: to the first "point"
                _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                _polyTexCoords[_nPolyVertices++] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                keyPoints++; // from point at index 0 to 1

                // V2, same y as point n°2:
                _polyVertices[_nPolyVertices] = CGPointMake(0, _hillKeyPoints[keyPoints].y);
                _polyTexCoords[_nPolyVertices++] = CGPointMake(0, _hillKeyPoints[keyPoints].y);

                // V1 again
                _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices - 2];
                _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices - 2];

                // V2 again
                _polyVertices[_nPolyVertices] = _polyVertices[_nPolyVertices - 2];
                _polyTexCoords[_nPolyVertices++] = _polyVertices[_nPolyVertices - 2];

                // V3 = same x,y as point at index 1
                _polyVertices[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                _polyTexCoords[_nPolyVertices] = CGPointMake(_hillKeyPoints[keyPoints].x, _hillKeyPoints[keyPoints].y);
                y1 = _polyVertices[_nPolyVertices].y;
                _nPolyVertices++;
            }
        }

        - (id)init {
            if ((self = [super init])) {
                [self generatePath2];
                [self generatePolygons];
            }
            return self;
        }

        - (void)draw {
            //glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_COLOR_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);

            glBindTexture(GL_TEXTURE_2D, _textureImage.texture.name);
            glColor4f(1, 1, 1, 1);
            glVertexPointer(2, GL_FLOAT, 0, _polyVertices);
            glTexCoordPointer(2, GL_FLOAT, 0, _polyTexCoords);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nPolyVertices);

            glColor4f(1, 1, 1, 1);
            for (int i = 1; i < 40; ++i) {
                ccDrawLine(_polyVertices[i - 1], _polyVertices[i]);
            }

            // restore default GL states
            glEnable(GL_TEXTURE_2D);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        }

    Do you see anything wrong in this code? Thanks for your help.


  • What data should be cached in a multiplayer server, relative to AI and players?

    - by DevilWithin
    In a virtual place, fully network driven, with an arbitrary number of players and an arbitrary number of enemies, what data should be cached in the server's memory in order to optimize smooth AI simulation? To explain: let's say player A sees players B to E, and enemies A to G. Each of those players sees player A, but not necessarily each other. The same applies to enemies. Please think of this question from a top-down perspective. In many cases, for example, when a player shoots his gun, the server handles the sound as a radial "signal" that every other entity within reach "hears" and reacts upon. Doing these searches all the time for a whole area, containing possibly a lot of unrelated players and enemies, seems to be an issue when the budget for each AI agent is so small. Should every entity cache whatever enters and exits its radius of awareness? Is there a good way to track nearby entities without flooding the memory with such caches? And what about other AI-related problems that may arise, assuming the previous one works well? We're talking about environments with possibly hundreds of enemies, a swarm.
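
    One common answer (an assumption here, not from the question) is to avoid per-entity caches entirely and keep a uniform spatial grid on the server: each entity lives in one cell, and a radial "signal" only queries the handful of cells the radius overlaps. A hedged Java sketch:

        import java.util.*;

        // Illustrative uniform-grid broadphase: entities are bucketed by cell,
        // and a radius query visits only the cells the circle overlaps instead
        // of scanning every entity in the area.
        class SpatialGrid<E> {
            private final float cellSize;
            private final Map<Long, List<E>> cells = new HashMap<>();

            SpatialGrid(float cellSize) { this.cellSize = cellSize; }

            private long key(long cx, long cy) {
                return (cx << 32) ^ (cy & 0xffffffffL);
            }

            void insert(E entity, float x, float y) {
                long cx = (long) Math.floor(x / cellSize);
                long cy = (long) Math.floor(y / cellSize);
                cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(entity);
            }

            // Collect candidates whose cells intersect the query circle; the
            // caller still does an exact distance check per candidate.
            List<E> query(float x, float y, float radius) {
                List<E> result = new ArrayList<>();
                long minX = (long) Math.floor((x - radius) / cellSize);
                long maxX = (long) Math.floor((x + radius) / cellSize);
                long minY = (long) Math.floor((y - radius) / cellSize);
                long maxY = (long) Math.floor((y + radius) / cellSize);
                for (long cx = minX; cx <= maxX; cx++)
                    for (long cy = minY; cy <= maxY; cy++) {
                        List<E> bucket = cells.get(key(cx, cy));
                        if (bucket != null) result.addAll(bucket);
                    }
                return result;
            }
        }

    Moving entities re-bucket only when they cross a cell boundary, so the memory cost is one bucket entry per entity regardless of how many awareness radii overlap it.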


  • Sorting objects before rendering

    - by dreta
    I'm trying to implement a scene graph, and in all the articles I've come across there is talk about object sorting. So you'd sort your objects by "material", for example. Now, until I sat down and started implementing it, I kind of took this for granted, because it made sense. But now I'm wondering: what does sorting actually change? In my engine, I have a manager for UBOs; I use those to store data that'll be shared between programs. At the moment that only involves time, camera and projection matrices, and lights (I'm not worrying about managing which lights affect which objects at the moment). Now, for each model I have to change the model-to-world matrix uniform; no sorting is going to change that. So is the jump from changing this matrix to also setting a material for each object that bad? I vaguely remember reading somewhere that each time you change something in the pipeline it has to get flushed, and that can cause performance issues. But since I'm setting up a model-to-world matrix for each draw call anyway, why should I ever be concerned about this? By the way, is there any information on whether changing a uniform or calling glBufferSubData is more (or less) expensive?
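
    For reference, the sorting being discussed is usually as simple as ordering the draw list by a material key, so expensive state changes (program and texture binds) happen once per group rather than once per object, while the cheap per-object uniform still changes every draw. A hedged Java sketch with assumed names:

        import java.util.*;

        // Illustrative draw-list sort: bind material state once per group;
        // only the model-to-world uniform changes inside a group.
        class Renderer {
            static class Renderable {
                int materialId;        // stands in for shader + texture state
                float[] modelToWorld;  // per-object uniform, set every draw anyway
            }

            void drawAll(List<Renderable> drawList) {
                drawList.sort(Comparator.comparingInt((Renderable r) -> r.materialId));
                int boundMaterial = -1;
                for (Renderable r : drawList) {
                    if (r.materialId != boundMaterial) {
                        bindMaterial(r.materialId); // expensive: program/texture binds
                        boundMaterial = r.materialId;
                    }
                    setModelMatrix(r.modelToWorld); // cheap: one uniform upload
                    issueDraw(r);
                }
            }

            void bindMaterial(int id)      { /* e.g. glUseProgram, glBindTexture */ }
            void setModelMatrix(float[] m) { /* e.g. glUniformMatrix4fv */ }
            void issueDraw(Renderable r)   { /* e.g. glDrawElements */ }
        }

    The win is not that any one change is catastrophic, but that N objects sharing M materials pay for M binds instead of N.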


  • Calculate the intersection depth between a rectangle and a right triangle

    - by Celarix
    Hi all. I'm working on a 2D platformer built in C#/XNA, and I'm having a lot of problems calculating the intersection depth between a standard rectangle (used for sprites) and a right triangle (used for sloping tiles). Ideally, the rectangle will collide with the solid edges of the triangle, and its bottom-center point will collide with the sloped edge. I've been fighting with this for a couple of days now, and I can't make sense of it. So far, the method detects intersections (somewhat), but it reports wildly wrong depths. How does one properly calculate the depth? Is there something I'm missing? Thanks!
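
    Not the poster's code, but for the bottom-center-vs-hypotenuse rule described, the vertical intersection depth reduces to "height of the slope surface at the point's x, minus the point's y". A hedged Java sketch of the math, written in y-up coordinates (with y-down screen coordinates, as in XNA's SpriteBatch space, the sign flips: depth = bottomCenterY - surfaceY):

        // Illustrative, y-up: a slope tile whose surface runs from (x0, y0)
        // to (x1, y1), assuming x0 < x1. The penetration of the rectangle's
        // bottom-center point is how far it sits below the surface at its x.
        static float slopePenetration(float bottomCenterX, float bottomCenterY,
                                      float x0, float y0, float x1, float y1) {
            if (bottomCenterX < x0 || bottomCenterX > x1) return 0f; // off the tile
            float t = (bottomCenterX - x0) / (x1 - x0);   // 0..1 across the tile
            float surfaceY = y0 + t * (y1 - y0);          // slope height at that x
            float depth = surfaceY - bottomCenterY;       // > 0 means penetrating
            return Math.max(0f, depth);                   // push-out distance along +y
        }

    Resolving slope contacts purely vertically like this (rather than along the surface normal) is what keeps a sprite from sliding down slopes while standing still.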


  • How to create a copy of an instance without having access to private variables

    - by Jamie
    I'm having a bit of a problem. Let me show you the code first:

        public class Direction {
            private CircularList xSpeed, zSpeed;
            private int[] dirSquare = {-1, 0, 1, 0};

            public Direction(int xSpeed, int zSpeed) {
                this.xSpeed = new CircularList(dirSquare, xSpeed);
                this.zSpeed = new CircularList(dirSquare, zSpeed);
            }

            public Direction(Point dirs) {
                this(dirs.x, dirs.y);
            }

            public void shiftLeft() {
                xSpeed.shiftLeft();
                zSpeed.shiftRight();
            }

            public void shiftRight() {
                xSpeed.shiftRight();
                zSpeed.shiftLeft();
            }

            public int getXSpeed() {
                return this.xSpeed.currentValue();
            }

            public int getZSpeed() {
                return this.zSpeed.currentValue();
            }
        }

    Now let's say I have an instance of Direction:

        Direction dir = new Direction(0, 0);

    As you can see in the code of Direction, the arguments fed to the constructor are passed directly to some other class. One cannot be sure they stay the same, because the methods shiftRight() and shiftLeft() could have been called, which changes those numbers. My question is: how do I create a completely new instance of Direction that is basically a copy (not by reference) of dir? The only way I can see is to create public methods in both CircularList (I can post the code of this class, but it's not relevant) and Direction that return the variables needed to create a copy of the instance, but this solution seems really dirty, since those numbers are not supposed to be touched after being fed to the constructor, and therefore they are private.
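
    One detail worth noting (a general Java fact, not specific to this code): private access in Java is per-class, not per-instance, so a copy constructor can read another instance's private fields directly, without exposing anything publicly. A hedged sketch, assuming a matching copy constructor is also added to CircularList (whose field names below are guesses, since that class wasn't posted):

        // Inside Direction: other.xSpeed is accessible here even though it is
        // private, because this code lives in the Direction class itself.
        public Direction(Direction other) {
            this.xSpeed = new CircularList(other.xSpeed); // assumes CircularList(CircularList)
            this.zSpeed = new CircularList(other.zSpeed);
        }

        // Inside CircularList, the matching copy constructor (field names are
        // assumptions):
        public CircularList(CircularList other) {
            this.values = other.values.clone(); // deep-copy the backing array
            this.index  = other.index;          // copy the current position
        }

    Usage would then simply be: Direction copy = new Direction(dir); and no getters for the internal state need to exist at all.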


  • Tile-wide extent tracing on a grid.

    - by Larolaro
    I'm currently working on A* pathfinding on a grid, and I'm looking to smooth the generated path while also considering the extent of the character moving along it. I'm using a grid for the pathfinding; however, character movement is free-roaming, not strict tile-to-tile movement. To achieve a smoother, more efficient path, I'm doing line traces on the grid to determine whether there are unwalkable tiles between tiles, to shave off unnecessary corners. However, because a line trace has zero extent, it doesn't consider the extent of the character and gives bad results (not returning unwalkable tiles just missed by the line, causing unwanted collisions). So, rather than a line algorithm that determines the tiles under it, I'm looking for one that determines the tiles under a tile-wide extent line. Here is an image to help visualise my problem! Does anyone have any ideas? I've been working with Bresenham's line and other alternatives, but I haven't yet figured out how to nail this specific problem.

