Search Results

  • Car-like Physics - Basic Maths to Simulate Steering

    - by Reanimation
    As my program stands, I have a cube which I can control with keyboard input. I can move it left, right, up, down, back and forth along the axes only, and I can rotate it left or right. All translations and rotations are implemented with glm:

        if (keys[VK_LEFT])   // move cube along x-axis, negative
        {
            globalPos.x -= moveCube;
            keys[VK_RIGHT] = false;
        }
        if (keys[VK_RIGHT])  // move cube along x-axis, positive
        {
            globalPos.x += moveCube;
            keys[VK_LEFT] = false;
        }
        if (keys[VK_UP])     // move cube along y-axis, positive
        {
            globalPos.y += moveCube;
            keys[VK_DOWN] = false;
        }
        if (keys[VK_DOWN])   // move cube along y-axis, negative
        {
            globalPos.y -= moveCube;
            keys[VK_UP] = false;
        }
        if (FORWARD)         // W - move cube along z-axis, positive
        {
            globalPos.z += moveCube;
            BACKWARD = false;
        }
        if (BACKWARD)        // S - move cube along z-axis, negative
        {
            globalPos.z -= moveCube;
            FORWARD = false;
        }
        if (ROT_LEFT)        // rotate cube left
        {
            rotX += 0.01f;
            ROT_LEFT = false;
        }
        if (ROT_RIGHT)       // rotate cube right
        {
            rotX -= 0.01f;
            ROT_RIGHT = false;
        }

    I render the cube with this function, which handles the shader and the position on screen:

        void renderMovingCube()
        {
            glUseProgram(myShader.handle());

            GLuint matrixLoc4MovingCube = glGetUniformLocation(myShader.handle(), "ProjectionMatrix");
            glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

            glm::mat4 viewMatrixMovingCube;
            viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);

            ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos);
            ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)); // manual rotation
            glUniformMatrix4fv(glGetUniformLocation(myShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]);

            movingCube.render();
            glUseProgram(0);
        }

    The glm::lookAt call always points at the screen's centre (0, 0, 0). globalPos is declared as glm::vec3 globalPos(0, 0, 0), so when the program starts, the cube is rendered at the centre of the view; the keyboard input above adjusts the moving cube's globalPos, and glm::rotate handles the manual rotation.

    My question is: how can I make the cube move forwards according to the direction it is facing? Once I've rotated the cube a few degrees using glm, the forwards direction relative to the cube is no longer the z-axis. How can I store the cube's forwards direction and then use it to move forwards no matter which way the cube is facing (either with vectors that can be applied to my code, or some handy maths)? Thanks.
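
    The usual answer (a sketch of the maths; the question's code is C++/glm, but the idea is language-independent, and CubeMover is a hypothetical name): derive the forward vector from the current yaw angle rotX rather than storing it separately, since rotating the +Z axis around Y by rotX gives (sin rotX, 0, cos rotX).

        // C# sketch: move along the facing direction on the ground plane.
        using System;

        class CubeMover
        {
            public float X, Z;    // position on the XZ plane
            public float Yaw;     // rotation around the Y axis, in radians (the question's rotX)

            public void MoveForward(float distance)
            {
                // Forward is the +Z axis rotated by Yaw around Y.
                X += distance * (float)Math.Sin(Yaw);
                Z += distance * (float)Math.Cos(Yaw);
            }
        }

    In the question's own terms: rotate the vector (0, 0, 1) by rotX about (0, 1, 0) with glm each frame and add moveCube times the result to globalPos, instead of adding moveCube to globalPos.z directly.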

    Read the article

  • Developing for Chrome App/Android?

    - by Johnny Quest
    I have been developing for Windows Phone 7 (XNA/Silverlight, and I will continue to do so; I love everything about it), but I wanted to branch a few of my more polished games out to the Chrome Web Store, and perhaps Android (though I'm not sure, as all the different versions make learning and loading applications a bit tricky). What is the most versatile language to start learning for Chrome apps and Android? Java would be excellent for Android, but could I port it to a web app for Chrome? (And it's close to C#.) Flash would work for a web app, as I can just embed it in an HTML page (I have done ActionScript before, though I didn't care much for the IDE), but would it also work on Android? Or I guess there is always C/C++, but I haven't heard much about that, though I think it works for both (and C++ does interest me). Any advice would be excellent, thanks.

    Read the article

  • Algorithm to reduce a bitmap mask to a list of rectangles?

    - by mos
    Before I go spend an afternoon writing this myself, I thought I'd ask whether an implementation is already available, even just as a reference. The first image is an example of a bitmap mask that I would like to turn into a list of rectangles. A bad algorithm would return every set pixel as a 1x1 rectangle. A good algorithm would look like the second image, where it returns the coordinates of the orange and red rectangles. The fact that the rectangles overlap doesn't matter; what matters is that only two are returned. To summarize, the ideal result would be these two rectangles (x, y, w, h): [ { 3, 1, 2, 6 }, { 1, 3, 6, 2 } ]

    Read the article

  • Moving sprites on a graph in libGDX

    - by nosferat
    In my game I'd like to move sprites along a fixed path. Until this point I was trying to stick with the tools already provided by libGDX, like the Tiled map renderer classes, so I'm looking for a solution nearly as convenient as that; for example, I'd like to avoid creating the adjacency matrix by hand. Tiled has the functionality to add objects to the map, but I'm not sure if I can use it for this purpose. Any ideas?

    Read the article

  • Resolution independent physics

    - by user46877
    I'm making a game like Doodle Jump but don't know how to make the physics scale across multiple resolutions, and I can't find anything about this on Google. Right now I'm scaling the game using letterboxing, and I tested scaling the jump height with this code:

        gravity = graphics.getHeight() * 0.001f;
        jumpVel = graphics.getHeight() * -0.04f;
        ...
        velY += gravity;
        y += velY;

    But if I test this on my smartphone or in emulators with different resolutions, I always get a slightly different jump height. I know that Farseer is resolution independent. How can I replicate that in my game? Thanks in advance.
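
    One approach that makes the behaviour identical everywhere (a sketch, and an assumption about the cause: per-frame integration ties jump height to frame rate as well as to screen height) is to run the physics in a fixed virtual resolution with a fixed timestep, and convert to real pixels only when drawing. The class and constants below are illustrative, not any library's API.

        // C# sketch, assuming a 480x800 virtual design resolution and 60 Hz physics.
        class JumpPhysics
        {
            const float VirtualHeight = 800f;
            const float FixedDt = 1f / 60f;

            public float Y;                 // position in virtual units
            float velY;
            float accumulator;

            public void Jump() { velY = VirtualHeight * -1.2f; }  // virtual units/s, tune to taste

            public void Update(float frameDt)
            {
                float gravity = VirtualHeight * 1.5f;             // virtual units/s^2, tune to taste
                accumulator += frameDt;
                while (accumulator >= FixedDt)                    // identical steps on every device
                {
                    velY += gravity * FixedDt;
                    Y += velY * FixedDt;
                    accumulator -= FixedDt;
                }
            }

            // At draw time convert to pixels: screenY = Y * screenHeight / VirtualHeight,
            // so the letterboxed scale stays purely a rendering concern.
        }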

    Read the article

  • Cocos2d-x 3.0 animation frame by frame

    - by Narek
    As I understand it, animations are actions. Now I need to play an animation frame by frame. Say I have an animation of N frames, where each frame should be played after a delay of t. I want to play the animation frame by frame, with each frame advancing the animation's state. How can I do this? And what about stepping actions frame by frame, advancing their state, in general? I ask because I use an ECS, and I deal in frames. P.S. I want to do something like this:

        Action *a = MoveTo(initialPoint, finalPoint, durationOfAnimation);
        a->play(0.001 seconds);
        a->play(0.003 seconds);
        a->play(0.02 seconds);
        a->play(0.67 seconds);
        a->play(0.06 seconds);

    ...and see the animation.
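
    A sketch of the idea in C# (the question's code is C++/cocos2d-x; MoveTween and Advance are hypothetical names, not engine API): an interpolator you tick yourself with arbitrary time slices, which is what an action system does internally each frame.

        using System;
        using System.Numerics;

        class MoveTween
        {
            readonly Vector2 start, end;
            readonly float duration;
            float elapsed;

            public MoveTween(Vector2 start, Vector2 end, float duration)
            {
                this.start = start; this.end = end; this.duration = duration;
            }

            // Advance the tween by dt seconds and return the current position.
            public Vector2 Advance(float dt)
            {
                elapsed = Math.Min(elapsed + dt, duration);
                return Vector2.Lerp(start, end, elapsed / duration);
            }
        }

    An ECS system can call Advance with whatever slice of time the current frame happens to provide, exactly like the a->play(...) calls in the question.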

    Read the article

  • Resources for game networking in Java

    - by pudelhund
    I am currently working on a Java multiplayer game. The game itself (single player) already works perfectly fine, and so does the chat. The only thing really missing is the multiplayer part, but I am absolutely clueless about where to start with it. I roughly know that I will have to work with packets, and I also know many things about streams (the chat is already working). Oh, and according to this article it should be a UDP server. My problem is that I can't find any resources on how to do this. A tutorial (book or website) would be perfect; alternatively, a good example of an open source client/server (in Java, of course) would be fine as well. If you feel like doing something helpful, I'd also really appreciate someone "privately" teaching me via email or a chat program :) Thank you!

    Read the article

  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points:

    Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light?

    Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc.? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
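
    One convention that comes up often (a sketch of the idea, not a claim about any particular engine's tooling): let artists pick an ordinary 8-bit sRGB color in a standard picker, pair it with an exposure-value (EV) multiplier where each +1 EV doubles the intensity, and combine the two in linear space before the renderer sees the value.

        // C# sketch: LDR picker color + EV multiplier -> linear HDR color.
        using System;
        using System.Numerics;

        static class HdrAuthoring
        {
            // Approximate sRGB decode; a production pipeline might use the exact
            // piecewise sRGB curve instead of the 2.2 power approximation.
            static float SrgbToLinear(byte c) => (float)Math.Pow(c / 255f, 2.2);

            public static Vector3 LightColor(byte r, byte g, byte b, float ev)
            {
                float scale = (float)Math.Pow(2.0, ev);   // +1 EV = twice as bright
                return new Vector3(SrgbToLinear(r), SrgbToLinear(g), SrgbToLinear(b)) * scale;
            }
        }

    For example, LightColor(255, 244, 214, 3f) would describe a warm light eight times brighter than LDR white, while keeping the familiar picker workflow.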

    Read the article

  • Car engine sound simulation

    - by Petteri Hietavirta
    I have been thinking about how to create realistic sound for a car. The main sound is the engine, then all kinds of wind, road and suspension sounds. Are there any open source projects for engine sound simulation? Simply pitching up a sample does not sound too great. The ideal would be something that lets me pick the type of engine (i.e. inline-4 vs V8), add extras like turbo/supercharger whine, and finally set the load and RPM. Edit: something like http://www.sonory.org/examples.html

    Read the article

  • Omni-directional shadow mapping

    - by gridzbi
    What is a good (or the best) way to fill a cube map with depth values that will give me the least amount of trouble with floating-point imprecision? To get up and running I'm just writing the raw depth to the buffer, and as you can imagine it's pretty terrible; I need to improve it, but I'm not sure how. A few tutorials on directional lights divide the depth by W and store the Z/W value in the cube map - how would I perform the depth comparison in my shadow-mapping step? The NVIDIA article at http://http.developer.nvidia.com/GPUGems/gpugems_ch12.html appears to do something completely different and use the dot product with the light vector, presumably to counter the depth precision worsening with distance. It also scales the geometry so that it fits into the range -0.5 to +0.5. The article looks a bit dated, though - is this technique still reasonable? Shader code: http://pastebin.com/kNBzX4xU Screenshot: http://imgur.com/54wFI

    Read the article

  • Collisions not working as intended

    - by Stan
    I'm making a terraria-like game with 20x20 blocks, where you can place and remove blocks. Now I am trying to write collisions, but it isn't working as I want. The collision successfully stops the player from going through the ground, but when I press a combination like A + S, meaning I walk down and left (noclip is on at the moment), my player goes into the ground, bugs out, and exits the ground somewhere else in the level. I made a video of it; the red text shows which buttons I am pressing: http://www.youtube.com/watch?v=mo4frZyNwOs As you can see, if I press A and S together, I go into the ground. Here is my collision code:

        Vector2 collisionDist, normal;

        private bool IsColliding(Rectangle body1, Rectangle body2)
        {
            normal = Vector2.Zero;

            Vector2 body1Centre = new Vector2(body1.X + (body1.Width / 2), body1.Y + (body1.Height / 2));
            Vector2 body2Centre = new Vector2(body2.X + (body2.Width / 2), body2.Y + (body2.Height / 2));

            Vector2 distance, absDistance;
            float xMag, yMag;

            distance = body1Centre - body2Centre;
            float xAdd = ((body1.Width) + (body2.Width)) / 2.0f;
            float yAdd = ((body1.Height) + (body2.Height)) / 2.0f;

            absDistance.X = (distance.X < 0) ? -distance.X : distance.X;
            absDistance.Y = (distance.Y < 0) ? -distance.Y : distance.Y;

            if (!((absDistance.X < xAdd) && (absDistance.Y < yAdd)))
                return false;

            xMag = xAdd - absDistance.X;
            yMag = yAdd - absDistance.Y;

            if (xMag < yMag)
                normal.X = (distance.X > 0) ? xMag : -xMag;
            else
                normal.Y = (distance.Y > 0) ? yMag : -yMag;

            return true;
        }

        private void PlayerCollisions()
        {
            foreach (Block blocks in allTiles)
            {
                collisionDist = Vector2.Zero;
                if (blocks.Texture != airTile && blocks.Texture != stoneDarkTexture
                    && blocks.Texture != stoneDarkTextureSelected && blocks.Texture != airTileSelected
                    && blocks.Texture != doorTexture && blocks.Texture != doorTextureSelected)
                {
                    if (IsColliding(player.plyRect, blocks.tileRect))
                    {
                        if (normal.Length() > collisionDist.Length())
                        {
                            collisionDist = normal;
                        }
                        player.Position.X += collisionDist.X;
                        player.Position.Y += collisionDist.Y;
                        break;
                    }
                }
            }
        }

    PlayerCollisions() runs in my Update method. As you can see it partly works, but I have no idea how to fix this problem. Help would be greatly appreciated.

    EDIT: If I remove the break; it partly works, but then the player spasms when it hits two or more blocks at once: if I touch two or three blocks at once, it applies the upward force twice. How can I make it apply the force for only one block, so it stays correct and does not spasm? Thanks.
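
    A common fix (a sketch with hypothetical names, not the asker's classes; SolidBlocksNear stands in for whatever returns the solid tiles around the player): move and resolve one axis at a time. Because the position is snapped to the block face rather than pushed by an accumulated offset, touching two or three tiles at once resolves exactly the same as touching one, which also addresses the EDIT.

        // C# sketch: axis-separated movement and collision resolution (XNA Rectangle).
        void MovePlayer(int dx, int dy)
        {
            playerRect.X += dx;
            foreach (Rectangle block in SolidBlocksNear(playerRect))
                if (playerRect.Intersects(block))
                    playerRect.X = dx > 0 ? block.Left - playerRect.Width : block.Right;

            playerRect.Y += dy;
            foreach (Rectangle block in SolidBlocksNear(playerRect))
                if (playerRect.Intersects(block))
                    playerRect.Y = dy > 0 ? block.Top - playerRect.Height : block.Bottom;
        }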

    Read the article

  • Wait till all CCActions have completed

    - by tGilani
    I am developing a simple cocos2d game in which I want to animate two CCSprites simultaneously. For this purpose I simply run CCActions on the respective CCSprites as follows:

        [first runAction:[CCMoveTo actionWithDuration:1 position:secondPosition]];
        [second runAction:[CCMoveTo actionWithDuration:1 position:firstPosition]];

    Now I want to wait until the animations are complete, so I can perform the next step. How should I wait for these animations to finish? There are actually two method calls: the first one animates the objects via the code above, and the second call does the other animation. I need to delay the second method call until the animations in the first are complete. (I would rather not use CCCallFunc blocks, because I want to call the second method from the same caller as the first one.)

    Read the article

  • Android, how important is deltaTime?

    - by iQue
    I'm making a game that is getting pretty big, and sometimes my thread has to skip a frame. So far I'm not using deltaTime to set the speed of the different objects in the game, because in my opinion the game wasn't big enough for it to matter. But it's getting bigger than I planned, so my question is: how important is delta time? If I should use delta time, there is a problem: since speedX and speedY are integers (they have to be, or Eclipse won't let me build a Rect out of them), I can't apply delta time very cleanly, as far as I understand, but I might be wrong. I've tried adding deltaTime to the code below, and sometimes my enemies just don't move after spawning; they just stand there and run in place. Here is some code for how I set and use the speed:

        public void update(int dx, int dy) {
            double theta = 180.0 / Math.PI * Math.atan2(-(y - controls.pointerPosition.y),
                    controls.pointerPosition.x - x);
            x += dx * Math.cos(Math.toRadians(theta));
            y += dy * Math.sin(Math.toRadians(theta));
            currentFrame = ++currentFrame % BMP_COLUMNS;
        }

        public void draw(Canvas canvas) {
            int srcX = currentFrame * width;
            int srcY = 1 * height;
            Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
            Rect dst = new Rect(x, y, x + width, y + height);
            canvas.drawBitmap(bitmap, src, dst, null);
        }

    So if someone with some experience with this has any thoughts, please share. Thank you! Changed code:

        public void update(int dx, int dy, float delta) {
            double theta = 180.0 / Math.PI * Math.atan2(-(y - controls.pointerPosition.y),
                    controls.pointerPosition.x - x);
            double speedX = delta * dx * Math.cos(Math.toRadians(theta));
            double speedY = delta * dy * Math.sin(Math.toRadians(theta));
            x += speedX;
            y += speedY;
            currentFrame = ++currentFrame % BMP_COLUMNS;
        }

        // draw(Canvas) is unchanged from above.

    With this code my enemies move like before, except that they won't move to the right (x never increments); all other directions work.
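
    A likely cause of the "won't move right" symptom (an assumption from the description, not from running the code): x is an int, so x += speedX truncates the sum toward zero. A positive sub-pixel speed such as 0.7 truncates back to the old value every frame, while a negative one still crosses the next integer boundary. A sketch of the usual fix in C# (the question is Java, but the idea carries over directly): keep the true position in floats and round only where the drawing API needs integers.

        // C# sketch: float positions, integer coordinates only at draw time.
        using System;

        class Enemy
        {
            public float X, Y;              // true position, in pixels
            public float SpeedX, SpeedY;    // pixels per second

            public void Update(float deltaSeconds)
            {
                X += SpeedX * deltaSeconds; // fractional movement accumulates correctly
                Y += SpeedY * deltaSeconds;
            }

            // Round only when building the destination rectangle.
            public int DrawX => (int)Math.Round(X);
            public int DrawY => (int)Math.Round(Y);
        }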

    Read the article

  • Transform between two 3D Cartesian coordinate systems

    - by Pris
    I'd like to know how to get the rotation matrix for the transformation from one Cartesian coordinate system (X, Y, Z) to another (X', Y', Z'). Both systems are defined by three orthogonal vectors, as one would expect. No scaling or translation occurs. I'm using OpenSceneGraph, and it offers a Matrix convenience class, if that makes finding the matrix easier: http://www.openscenegraph.org/documentation/OpenSceneGraphReferenceDocs/a00403.html
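
    For orthonormal bases there is a direct construction (a sketch; the jagged-array layout and helper name are illustrative, not OpenSceneGraph API): put the source basis vectors in the columns of a matrix A and the target basis vectors in the columns of B, both expressed in a common frame; the rotation taking the first system onto the second is then R = B * A^T, because A^T = A^(-1) for an orthonormal matrix.

        // C# sketch: R = B * transpose(A) for two orthonormal 3D bases.
        // srcBasis[k] and dstBasis[k] are the k-th basis vectors (3 components each).
        static float[,] RotationBetween(float[][] srcBasis, float[][] dstBasis)
        {
            var r = new float[3, 3];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    for (int k = 0; k < 3; k++)
                        r[i, j] += dstBasis[k][i] * srcBasis[k][j];
            return r;
        }

    As a sanity check: applying R to the k-th source basis vector returns the k-th target basis vector.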

    Read the article

  • How to stop a tap event from propagating in an XNA / Silverlight game

    - by Mech0z
    I have a Silverlight/XNA game where text and buttons are created in Silverlight, while the 3D is done in XNA. The Silverlight controls are drawn on top of the 3D, and I don't want a click on a button to interact with the 3D underneath. So I have:

        private void ButtonPlaceBrick_Tap(object sender, GestureEventArgs e)
        {
            e.Handled = true;

    But my gesture handling on the 3D objects still runs, even though I have set Handled to true:

        private void OnUpdate(object sender, GameTimerEventArgs e)
        {
            while (TouchPanel.IsGestureAvailable)
            {
                // Read the next gesture
                GestureSample gesture = TouchPanel.ReadGesture();
                switch (gesture.GestureType)

    How am I supposed to stop it from propagating?

    Read the article

  • Interpolating Matrices

    - by sebf
    Hello. Apologies if I am missing something very obvious (likely!), but is there anything wrong with interpolating between two matrices like this?

        float d = (float)(targetTime.Ticks - keyframe_start.ticks) /
                  (float)(keyframe_end.ticks - keyframe_start.ticks);
        return ((keyframe_start.Transform * (1 - d)) + (keyframe_end.Transform * d));

    In my app, when I try to use this to interpolate between two keyframes, the model begins to 'shrink'. The severity depends on how far between the two keyframes the target time is; it's worst when the transform split is ~50/50.
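
    There is: a weighted sum of two rotation matrices is generally not a rotation matrix, and between the endpoints its scale dips below one, which is exactly the shrink that is worst at the 50/50 split. A sketch of the standard fix using the XNA-style API (Matrix.Decompose, Quaternion.Slerp and the Matrix.Create* helpers): decompose each keyframe, interpolate the parts separately, and recompose.

        // C# sketch: lerp scale and translation, slerp rotation, then recompose.
        static Matrix Interpolate(Matrix a, Matrix b, float d)
        {
            Vector3 scaleA, transA, scaleB, transB;
            Quaternion rotA, rotB;
            a.Decompose(out scaleA, out rotA, out transA);
            b.Decompose(out scaleB, out rotB, out transB);

            return Matrix.CreateScale(Vector3.Lerp(scaleA, scaleB, d))
                 * Matrix.CreateFromQuaternion(Quaternion.Slerp(rotA, rotB, d))
                 * Matrix.CreateTranslation(Vector3.Lerp(transA, transB, d));
        }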

    Read the article

  • Kepler orbit: get position on the orbit over time

    - by Artefact2
    I'm developing a space-simulation related game, and I am having some trouble implementing the movement of binary stars: the two stars orbit their centroid, and their trajectories are ellipses. I basically know how to determine the angular velocity at any position, but not the angular velocity over time. So for a given angle I can very easily compute the stars' positions (cf. http://en.wikipedia.org/wiki/Orbit_equation), but what I want is the stars' positions over time. The parametric equations of the ellipse work, but they don't give the correct speed: { X(t) = a×cos(t); Y(t) = b×sin(t) }. Is it possible, and how can it be done?
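
    This is the classical Kepler problem, and it has no closed-form solution in time. The standard approach (a sketch, assuming a fixed orbital period and eccentricity): advance the mean anomaly uniformly, solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E numerically, then evaluate the ellipse at E.

        // C# sketch: position on a Kepler orbit at time t (focus at the origin).
        using System;

        static class Kepler
        {
            public static (double X, double Y) PositionAt(double t, double period,
                                                          double a, double e)
            {
                double M = 2.0 * Math.PI * t / period;   // mean anomaly, uniform in time
                double E = M;                            // initial guess
                for (int i = 0; i < 8; i++)              // Newton's method, converges fast for e < 1
                    E -= (E - e * Math.Sin(E) - M) / (1.0 - e * Math.Cos(E));

                double b = a * Math.Sqrt(1.0 - e * e);   // semi-minor axis
                return (a * (Math.Cos(E) - e), b * Math.Sin(E));
            }
        }

    For the binary, each star moves on its own ellipse around the barycentre; evaluate the orbit once per step and place the two stars on opposite sides of the barycentre, scaled by the inverse mass ratio.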

    Read the article

  • Creating predefined camera views - How do I move the camera to make sense while using a controller?

    - by Deukalion
    I'm trying to understand 3D, but the one thing I can't seem to understand is the camera. Right now I'm rendering four 3D cubes with textures, and I set the projection matrix like this:

        public BasicCamera3D(float fieldOfView, float aspectRatio, float clipStart, float clipEnd,
                             Vector3 cameraPosition, Vector3 cameraLookAt)
        {
            projection_fieldOfView = MathHelper.ToRadians(fieldOfView);
            projection_aspectRatio = aspectRatio;
            projection_clipstart = clipStart;
            projection_clipend = clipEnd;
            matrix_projection = Matrix.CreatePerspectiveFieldOfView(projection_fieldOfView,
                aspectRatio, clipStart, clipEnd);

            view_cameraposition = cameraPosition;
            view_cameralookat = cameraLookAt;
            matrix_view = Matrix.CreateLookAt(cameraPosition, cameraLookAt, Vector3.Up);
        }

        BasicCamera3D gameCamera = new BasicCamera3D(45f, GraphicsDevice.Viewport.AspectRatio,
            1.0f, 1000f, new Vector3(0, 0, 8), new Vector3(0, 0, 0));

    This creates a sort of top-down camera at a distance of 8 (I still don't get the unit type here; it's not pixels, I guess?). But if I try to position the camera at the side, to make a side view or reverse side view, the camera rotates too far, until it has turned around a couple of times. I render the boxes at:

        new Vector3(-1, 0, 0)
        new Vector3(0, 0, 0)
        new Vector3(1, 0, 0)
        new Vector3(1, 0, 1)

    With the top-down camera they show up fine, but I don't get how I can make the camera show the side, or 45 degrees from the top (like third-person action games), because the logic doesn't make sense to me. Also, since every object you render needs a new BasicEffect with a new projection/view/world, can you still use the "same" camera, so you don't have to create a new view matrix and such for each object? It seems weird. If someone could help me get the camera to navigate around my objects "naturally", so I can set a few predetermined views to choose from, it would be really helpful. Is there some sort of algorithm to calculate the view for this, rather than a single value?

    Examples:

    Top-down view: I have an object at (0, 0, 0). When I turn the right stick on the Xbox 360 controller, the camera should rotate around that object, not flip, turn upside down, disappear and then magically reappear as a tiny dot somewhere I have no clue about, like it feels like it does now.

    Side view: I have an object at (0, 0, 0). When I rotate to the sides or up and down, the camera should show a little more of the periphery on each side (depending on which way I look), and the same while moving up or down.
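
    One way to make the views predictable (a sketch with a hypothetical helper, not part of BasicCamera3D): derive the camera position from a yaw/pitch/distance triple around the target, so top-down, side and 45-degree views are just different angle presets, and the right stick simply adds to yaw and pitch.

        // C# sketch: place the camera on a sphere around the target.
        static Vector3 OrbitPosition(Vector3 target, float yaw, float pitch, float distance)
        {
            Vector3 offset = Vector3.Transform(
                new Vector3(0f, 0f, distance),
                Matrix.CreateRotationX(pitch) * Matrix.CreateRotationY(yaw));
            return target + offset;
        }

        // Presets (pitch just short of straight down avoids the up-vector singularity):
        //   top-down: OrbitPosition(target, 0f, -MathHelper.PiOver2 + 0.01f, 8f)
        //   side:     OrbitPosition(target, MathHelper.PiOver2, 0f, 8f)
        //   45-deg:   OrbitPosition(target, 0f, -MathHelper.PiOver4, 8f)

    Pair the result with Matrix.CreateLookAt(position, target, Vector3.Up), as the constructor already does, and one camera can be shared by every object's BasicEffect.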

    Read the article

  • How do I import service references to Unity3D?

    - by Timothy Williams
    I'm attempting to access a service reference in Unity. I need two: the SOAP framework and a separate service called ContentVault. The respective service URLs are: SOAP: http://api.microsofttranslator.com/V2/Soap.svc ContentVault: http://ioun.wizards.com/ContentVault.svc Both services import fine into Visual Studio. I've tried everything I can think of, but they won't work with Unity; I just get various errors, changing depending on which solution I'm trying out. I've attempted using svcutil to export the services as external scripts, but all I got was a bunch of using errors. I've tried converting the code to work with .NET 2.0 to no avail; I've even tried making the services into .DLLs, with no success. How can I get these services working with Unity?

    Read the article

  • Game Engines with real time lighting

    - by Maik Klein
    I have been studying computer graphics for three semesters, and we just started with OpenGL. I really enjoy it and want to create my own little engine for learning purposes. I have already read tons of forum posts and looked at the following engines: Panda3D, Ogre3D, NeoAxis, Irrlicht and Horde3D (graphics only). Now, I don't want to use something like Unity or CryEngine, because I want to start more low-level. Which of those engines is suited for real-time lighting - something CryEngine offers, with no baked lightmaps? Or which at least gives me the option to add a real-time renderer?

    Read the article

  • Low-level GPU code and Shader Compilation

    - by ktodisco
    Bear with me, because I will raise several questions at once. I still feel, though, that overall this can be treated as one question that may be answered succinctly. I recently dove into solidifying my understanding of the assembly language, low-level memory operations, CPU structure, and program optimizations. This also sparked my interest in how higher-level shading languages, GLSL and HLSL in particular, are compiled and optimized, as well as what formats they are reduced to before machine code is generated (assuming they are not converted directly into machine code). After a bit of research into this, the best resource I've found is this presentation from ATI about the compilation of and optimizations for HLSL. I also found sample ARB assembly code. This sort of addressed my original curiosity, but it raised several other questions. The assembler code in the ATI presentation seems like it contains instructions specifically targeted for the GPU, but is this merely a hypothetical example created for the purpose of conceptual understanding, or is this code really generated during shader compilation? If so, is it possible to inspect it, or even write it in place of the higher-level syntax? My initial searches for an answer to the last question tell me that this may be disallowed, but I have not dug too deep yet. Also, along the same lines, are GLSL shader programs compiled into ARB assembly code before machine code is generated, and is it possible to write direct ARB assembly? Lastly, and perhaps what I am most interested in finding out: are there comprehensive resources on shader compilation and low-level GPU code? I have been unable to find any thus far. I ask simply because I am curious :)

    Read the article

  • Phone crashes when trying to use vibration on Android

    - by Diego Unanue
    I'm developing an app where, when you click a button, the phone has to vibrate. The issue is that the phone just crashes, saying that I need permissions to vibrate. I've already set this permission in build.settings (the Android manifest). Here is the build.settings code:

        settings = {
            orientation = {
                default = "portrait",
                supported = { "portrait" },
            },
            iphone = {
                plist = {
                    CoronaUseIOS7LandscapeOnlyWorkaround = true,
                    CoronaUseIOS7IPadPhotoPickerLandscapeOnlyWorkaround = true,
                    CoronaUseIOS6LandscapeOnlyWorkaround = true,
                    CoronaUseIOS6IPadPhotoPickerLandscapeOnlyWorkaround = true,
                    UIApplicationExitsOnSuspend = false,
                    UIPrerenderedIcon = true,
                    UIStatusBarHidden = false,
                    CFBundleIconFile = "Icon.png",
                    CFBundleIconFiles = {
                        "Icon.png", "Icon@2x.png",
                        "Icon-60.png", "Icon-60@2x.png",
                        "Icon-72.png", "Icon-72@2x.png",
                        "Icon-76.png", "Icon-76@2x.png",
                        "Icon-Small.png", "Icon-Small@2x.png",
                        "Icon-Small-40.png", "Icon-Small-40@2x.png",
                        "Icon-Small-50.png", "Icon-Small-50@2x.png",
                    },
                },
            },
            android = {
                permissions = {
                    { name = ".permission.C2D_MESSAGE", protectionLevel = "signature" },
                },
                usesPermissions = {
                    "android.permission.INTERNET",
                    "android.permission.VIBRATE",
                },
            },
        }

    The file that uses the vibration is:

        local onButtonEvent = function (event)
            system.vibrate()
        end

    I've read all the posts on the Corona page without success. Can I see the generated Android manifest, to check whether the permissions are there? I've read that this is a Corona issue, but I'm not sure.

    Read the article

  • OpenGL - Calculating camera view matrix

    - by Karle
    Problem: I am calculating the model, view and projection matrices independently, to be used in my shader as follows:

        gl_Position = projection * view * model * vec4(in_Position, 1.0);

    When I try to calculate my camera's view matrix, the Z axis is flipped and my camera seems like it is looking backwards. My program is written in C# using the OpenTK library.

    Translation (working): I've created a test scene; from my understanding of the OpenGL coordinate system, the objects are positioned correctly. The model matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(modelPosition);
        Matrix4 model = translation;

    The view matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(-cameraPosition);
        Matrix4 view = translation;

    Rotation (not working): I now want to create the camera's rotation matrix. To do this I use the camera's right, up and forward vectors:

        // Hard-coded example orientation:
        // normally calculated from up and forward,
        // similar to a look-at camera.
        Vector3 r = Vector3.UnitX;
        Vector3 u = Vector3.UnitY;
        Vector3 f = -Vector3.UnitZ;

        Matrix4 rot = new Matrix4(
            r.X, r.Y, r.Z, 0,
            u.X, u.Y, u.Z, 0,
            f.X, f.Y, f.Z, 0,
            0.0f, 0.0f, 0.0f, 1.0f);

    I know that multiplying by the identity matrix would produce no rotation. This is clearly not the identity matrix, and therefore it will apply some rotation. I thought that because this is aligned with the OpenGL coordinate system, it should produce no rotation. Is this the wrong way to calculate the rotation matrix? I then create my view matrix as:

        // OpenTK is row-major, so the order of operations is reversed:
        Matrix4 view = translation * rot;

    Rotation almost works now, but the -Z/+Z axis has been flipped, with the green cube now appearing closer to the camera. It seems like the camera is looking backwards, especially if I move it around. My goal is to store the position and orientation of all objects (including the camera) as:

        Vector3 position;
        Vector3 up;
        Vector3 forward;

    Apologies for writing such a long question, and thank you in advance. I've tried following tutorials and guides from many sites, but I keep ending up with something wrong.

    Edit - projection matrix set-up:

        Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(
            (float)(0.5 * Math.PI),
            (float)display.Width / display.Height,
            0.1f, 1000.0f);
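
    A likely explanation (a sketch of the reasoning, not a verified fix for this exact code): under OpenTK's row-vector convention, a matrix whose rows are the camera's right/up/forward vectors maps camera space into world space, which is the camera's orientation, not its inverse. The view matrix needs the inverse rotation, which for an orthonormal matrix is the transpose, and the look direction belongs in the third column negated, so the camera looks down -Z as Matrix4.CreateLookAt does:

        // C# sketch (OpenTK types): basis vectors in the columns, forward negated.
        Matrix4 rot = new Matrix4(
            r.X, u.X, -f.X, 0f,
            r.Y, u.Y, -f.Y, 0f,
            r.Z, u.Z, -f.Z, 0f,
            0f,  0f,   0f,  1f);
        Matrix4 view = Matrix4.CreateTranslation(-position) * rot;

    With the hard-coded example above (f = -UnitZ, i.e. looking down -Z), this rotation becomes the identity, which is what a camera aligned with the OpenGL coordinate system should produce.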

    Read the article

  • A* Jump Point Search - how does pruning really work?

    - by DeadMG
    I've come across Jump Point Search, and it seems pretty sweet to me. However, I'm unsure how its pruning rules actually work. More specifically, Figure 1 states that we can immediately prune all grey neighbours, as these can be reached optimally from the parent of x without ever going through node x. However, this seems somewhat contradictory: in the second image, node 5 could be reached by first going through node 7 and skipping x entirely via a symmetrical path; that is, 6 -> x -> 5 seems to be symmetrical to 6 -> 7 -> 5. This would be the same as how node 3 can be reached without going through x in the first image. As such, I don't understand how these two images are not entirely equivalent, rather than just rotated versions of each other. Secondly, I'd like to understand how this algorithm could be generalized to a three-dimensional search volume.

    Read the article

  • XNA orbit camera troubles

    - by user17753
    I have a Model named cube, which I load in LoadContent():

        cube = Content.Load<Model>("untitled");

    In the Draw method I call DrawModel:

        private void DrawModel(Model m, Matrix world)
        {
            foreach (ModelMesh mesh in m.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.View = camera.View;
                    effect.Projection = camera.Projection;
                    effect.World = world;
                }
                mesh.Draw();
            }
        }

    camera is of the Camera type, a class I've set up. Right now it is instantiated in the initialization section with the graphics aspect ratio and the translation (world) vector of the model, and the Draw loop calls camera.UpdateCamera() before drawing the models.

        class Camera
        {
            #region Fields
            private Matrix view;          // View matrix for camera
            private Matrix projection;    // Projection matrix for camera
            private Vector3 position;     // Position of camera
            private Vector3 target;       // Point camera is "aimed" at
            private float aspectRatio;    // Aspect ratio for projection
            private float speed;          // Speed of camera
            private Vector3 camup = Vector3.Up;
            #endregion

            #region Accessors
            /// <summary>
            /// View matrix of the camera -- read only.
            /// </summary>
            public Matrix View
            {
                get { return view; }
            }

            /// <summary>
            /// Projection matrix of the camera -- read only.
            /// </summary>
            public Matrix Projection
            {
                get { return projection; }
            }
            #endregion

            /// <summary>
            /// Creates a new Camera.
            /// </summary>
            /// <param name="AspectRatio">Aspect ratio to use for the projection.</param>
            /// <param name="Target">Target coord to aim camera at.</param>
            public Camera(float AspectRatio, Vector3 Target)
            {
                target = Target;
                aspectRatio = AspectRatio;
                ResetCamera();
            }

            private void Rotate(Vector3 Axis, float Amount)
            {
                position = Vector3.Transform(position - target,
                    Matrix.CreateFromAxisAngle(Axis, Amount)) + target;
            }

            /// <summary>
            /// Resets default values of the camera.
            /// </summary>
            private void ResetCamera()
            {
                speed = 0.05f;
                position = target + new Vector3(0f, 20f, 20f);
                projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
                    aspectRatio, 0.5f, 100f);
                CalculateViewMatrix();
            }

            /// <summary>
            /// Updates the camera. Should be the first thing done in the Draw loop.
            /// </summary>
            public void UpdateCamera()
            {
                Rotate(Vector3.Right, speed);
                CalculateViewMatrix();
            }

            /// <summary>
            /// Calculates the view matrix for the camera.
            /// </summary>
            private void CalculateViewMatrix()
            {
                view = Matrix.CreateLookAt(position, target, camup);
            }
        }

    I'm trying to create the camera so that it can orbit the centre of the model. As a test I am calling Rotate(Vector3.Right, speed), but it rotates almost all the way round and then gets to a point where it "flips". If I rotate along a different axis, Rotate(Vector3.Up, speed), everything seems OK in that direction. So I guess: can someone tell me what I'm not accounting for in the above code? Or point me to an example of an orbiting camera that can be fixed on an arbitrary point?
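
    The flip happens when the orbit crosses the pole above the target: the fixed up vector passed to Matrix.CreateLookAt becomes parallel to the view direction and the camera turns over. A sketch of one common fix (hypothetical additions to the class, not existing members): accumulate the pitch angle explicitly and clamp it just short of straight up and down, applying only the allowed delta.

        // C# sketch: clamped pitch accumulation for the orbit camera.
        private float pitch = MathHelper.PiOver4;   // initialise to match the (0, 20, 20) start

        public void RotatePitch(float amount)
        {
            float limit = MathHelper.PiOver2 - 0.05f;              // stay off the poles
            float newPitch = MathHelper.Clamp(pitch + amount, -limit, limit);
            Rotate(Vector3.Right, newPitch - pitch);               // apply only the allowed delta
            pitch = newPitch;
            CalculateViewMatrix();
        }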

    Read the article
