Search Results

Search found 43935 results on 1758 pages for 'development process'.

Page 485 of 1758

  • Creating blur with an alpha channel, incorrect inclusion of black

    - by edA-qa mort-ora-y
    I'm trying to do a blur on a texture with an alpha channel. Using a typical approach (two-pass, Gaussian weighting) I end up with a very dark blur. The reason is that the blurring does not properly account for the alpha channel: it happily blurs in the invisible part of the image, which happens to be black, and thus produces a very dark result. Is there a blur technique that properly accounts for the alpha channel?
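
    The usual fix is to blur premultiplied colour: weight each sample's colour by its alpha, then divide by the accumulated alpha at the end. A minimal sketch in C++, assuming a normalized kernel; the srcR/srcA samplers and writePixel are hypothetical stand-ins:

        // One horizontal tap-loop of the blur at pixel (x, y).
        // kernel[], radius, srcR/srcA and writePixel are hypothetical stand-ins.
        void blurPixel(int x, int y, const float* kernel, int radius)
        {
            float accumR = 0.0f, accumA = 0.0f;
            for (int k = -radius; k <= radius; ++k) {
                float w = kernel[k + radius];       // Gaussian weight, kernel sums to 1
                float a = srcA(x + k, y);
                accumR += w * srcR(x + k, y) * a;   // colour premultiplied by alpha
                accumA += w * a;
            }
            float outA = accumA;                                    // blurred alpha
            float outR = (accumA > 0.0f) ? accumR / accumA : 0.0f;  // un-premultiply
            writePixel(x, y, outR, outA);           // repeat for G and B as for R
        }

    Invisible black texels then contribute nothing to the colour, only to the (correctly fading) alpha.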

    Read the article

  • How do I detect multiple sprite collisions when there are >10 sprites?

    - by yao jiang
    I'm making a small program to animate the A* algorithm. If you look at the image, there are lots of yellow cars moving around. Those can collide at any moment; it could be just one collision, or all of them could stupidly crash into each other. How do I detect all of those collisions? How do I find out which specific car has crashed into which other car? I understand that pygame has a collision function, but it only detects one collision at a time and I'd have to specify which sprites. Right now I am just iterating through each sprite to see if there is a collision:

        for car1 in carlist:
            for car2 in carlist:
                collide(car1, car2)

    This can't be the proper way to do it; if the car list grows to a huge number, a double loop will be too slow.
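
    For reference, pygame.sprite.groupcollide reports every colliding pair between two groups in a single call. As a language-agnostic alternative, here is a minimal sketch (C++ for illustration; the Car type and the overlap/onCrash helpers are hypothetical) that tests each unordered pair exactly once, avoiding both self-tests and duplicate pairs:

        #include <cstddef>
        #include <vector>

        // Car, overlap() and onCrash() are hypothetical stand-ins.
        void detectCollisions(std::vector<Car>& cars)
        {
            for (std::size_t i = 0; i < cars.size(); ++i)
                for (std::size_t j = i + 1; j < cars.size(); ++j)  // each pair once
                    if (overlap(cars[i], cars[j]))
                        onCrash(cars[i], cars[j]);
        }

    For very large car counts, the usual next step is a broad phase: hash cars into a coarse grid and only test pairs that share a cell.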

    Read the article

  • Game engine help

    - by Nick
    So, I am looking to start designing a video game. My biggest problem right now is choosing the right game engine. I am hiring a programmer, so the language doesn't really matter as much. What I need is an engine with these features, for very, very cheap:

    - The ability to create very realistic AI
    - The ability to display hundreds, possibly thousands, of characters

    Also, if anyone has any experience with DarkBASIC Pro, could they give me a basic run-through and review of it? Thanks a lot!

    Read the article

  • Wait till all CCActions have completed

    - by tGilani
    I am developing a simple cocos2d game in which I want to animate two CCSprites simultaneously, and for this purpose I simply run CCActions on the respective CCSprites as follows:

        [first runAction:[CCMoveTo actionWithDuration:1 position:secondPosition]];
        [second runAction:[CCMoveTo actionWithDuration:1 position:firstPosition]];

    Now I want to wait until the animations are complete so I can perform the next step. How should I wait for these animations to finish? There are actually two method calls: the first one animates the objects via the code above, and the second call does the other animation. I need to delay the second method call until the animations in the first are complete. (I would prefer not to use CCCallFunc blocks, as I want to call the second method from the same caller as the first one.)

    Read the article

  • A* Jump Point Search - how does pruning really work?

    - by DeadMG
    I've come across Jump Point Search, and it seems pretty sweet to me. However, I'm unsure how its pruning rules actually work. More specifically, Figure 1 states that we can immediately prune all grey neighbours, as these can be reached optimally from the parent of x without ever going through node x. However, this seems somewhat at odds with the second image: node 5 could be reached by first going through node 7 and skipping x entirely through a symmetrical path; that is, 6 -> x -> 5 seems to be symmetrical to 6 -> 7 -> 5. This would be the same as how node 3 can be reached without going through x in the first image. As such, I don't understand how these two images are not entirely equivalent, rather than just rotated versions of each other. Secondly, I'd like to understand how this algorithm could be generalized to a three-dimensional search volume.
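
    In the paper's definitions, straight moves prune any neighbour whose alternative path around x is no longer than (<=) the path through x, while diagonal moves prune only strictly shorter (<) alternatives; that tie-breaking asymmetry is why the straight and diagonal figures are not just rotations of each other. The other piece implementations need is the forced-neighbour test; a minimal sketch in C++ for a horizontal step (the Grid type and its walkable() helper are hypothetical):

        // Arriving at (x, y) moving horizontally in direction dx (+1 or -1):
        // a diagonal neighbour is forced when the cell beside us is blocked
        // but the cell diagonally ahead of it is open.
        bool hasForcedNeighbour(const Grid& g, int x, int y, int dx)
        {
            return (!g.walkable(x, y + 1) && g.walkable(x + dx, y + 1)) ||
                   (!g.walkable(x, y - 1) && g.walkable(x + dx, y - 1));
        }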

    Read the article

  • How can I draw an arrow at the edge of the screen pointing to an object that is off screen?

    - by Adam Henderson
    I am trying to do what is described in this topic: http://www.allegro.cc/forums/print-thread/283220

    I have attempted a variety of the methods mentioned there. First I tried the method described by Carrus85:

        Just take the ratio of the two triangle hypotenuses (it doesn't matter which triangle you use for the other; I suggest point 1 and point 2 as the distance you calculate). This will give you the aspect ratio percentage of the triangle in the corner from the larger triangle. Then you simply multiply deltax by that value to get the x-coordinate offset, and deltay by that value to get the y-coordinate offset.

    But I could not find a way to calculate how far the object is from the edge of the screen. I then tried ray casting (which I have never done before), suggested by 23yrold3yrold:

        Fire a ray from the center of the screen to the offscreen object. Calculate where on the rectangle the ray intersects. There are your coordinates.

    I first calculated the hypotenuse of the triangle formed by the difference in x and y positions of the two points. I used this to create a unit vector along that line. I looped along that vector until either the x coordinate or the y coordinate was off screen. The two current x and y values then form the x and y of the arrow. Here is the code for my ray casting method (written in C++ and Allegro 5):

        void renderArrows(Object* i)
        {
            float x1 = i->getX() + (i->getWidth() / 2);
            float y1 = i->getY() + (i->getHeight() / 2);
            float x2 = screenCentreX;
            float y2 = screenCentreY;

            float dx = x2 - x1;
            float dy = y2 - y1;

            float hypotSquared = (dx * dx) + (dy * dy);
            float hypot = sqrt(hypotSquared);

            float unitX = dx / hypot;
            float unitY = dy / hypot;

            float rayX = x2 - view->getViewportX();
            float rayY = y2 - view->getViewportY();

            float arrowX = 0;
            float arrowY = 0;

            bool posFound = false;
            while (!posFound)
            {
                rayX += unitX;
                rayY += unitY;

                if (rayX <= 0 || rayX >= screenWidth ||
                    rayY <= 0 || rayY >= screenHeight)
                {
                    arrowX = rayX;
                    arrowY = rayY;
                    posFound = true;
                }
            }

            al_draw_bitmap(sprite, arrowX - spriteWidth, arrowY - spriteHeight, 0);
        }

    This was relatively successful. However, arrows are displayed in the bottom-right section of the screen when objects are located above and to the left of the screen, as if the locations where the arrows are drawn have been rotated 180 degrees around the center of the screen. I assumed this was because when I calculate the hypotenuse of the triangle, it is always positive regardless of whether the difference in x or y is negative. Thinking about it, ray casting does not seem like a good way of solving the problem (due to the fact that it involves using sqrt() and a large loop). Any help finding a suitable solution would be greatly appreciated. Thanks, Adam
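
    For what it's worth, the 180-degree flip is consistent with the marching direction: dx and dy point from the object toward the screen centre, but the ray starts at the centre, so it exits on the opposite side. A closed-form sketch (C++; no loop needed, all names illustrative) that follows the centre-to-object direction straight to the nearest screen edge:

        #include <algorithm>

        struct Vec2 { float x, y; };

        // halfW/halfH are half the screen dimensions; the screen is the
        // rectangle centred on `centre`. Assumes `target` is off screen.
        Vec2 arrowOnEdge(Vec2 centre, Vec2 target, float halfW, float halfH)
        {
            float dx = target.x - centre.x;   // centre *toward* the object
            float dy = target.y - centre.y;
            const float big = 1e30f;
            float tx = (dx != 0.0f) ? halfW / (dx < 0 ? -dx : dx) : big;
            float ty = (dy != 0.0f) ? halfH / (dy < 0 ? -dy : dy) : big;
            float t = std::min(tx, ty);       // first edge the direction hits
            return { centre.x + t * dx, centre.y + t * dy };
        }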

    Read the article

  • How to properly render a Frame Buffer to the BackBuffer in Stage3D / AGAL

    - by bigp
    After doing a render pass with RenderToTarget (RTT), how do you properly render that texture buffer to the screen while maintaining the original scale/proportions, so it doesn't stretch or lose quality? Can an AGAL vertex shader and fragment shader be written so they adapt to any texture size and viewport dimensions? I find I'm getting some "blocky" effects in my first attempts at "ping-ponging" between two texture buffers (to create trailing effects). Perhaps I'm not using the UVs correctly between rendering-to-target and/or the backbuffer? Is there a simpler way to just "splash" the texture onto the backbuffer, or is a quad (4 vertices, 2 triangles) absolutely necessary? If a quad is needed, should the texture buffer be drawn fully (0.0 to 1.0 for both the vertical and horizontal UVs), or only a percentage of it, like the example below?

        Texture Buffer U: 0.0 to viewport.width  / texturebuffer.width
        Texture Buffer V: 0.0 to viewport.height / texturebuffer.height

    Thanks!

    Read the article

  • Cocos2d-x 3.0 animation frame by frame

    - by Narek
    As I understand it, animations are actions. Now I need to play an animation frame by frame. Say I have an animation of N frames, where each frame should be played after a delay of t. I want to play the animation frame by frame, each frame advancing the animation's state. How can I do this? And, in general, what about playing actions frame by frame, advancing the state each time? I ask because I use an ECS, and I deal with frames. P.S. I want to do something like this:

        Action* a = MoveTo(initialPoint, finalPoint, durationOfAnimation);
        a->play(0.001 seconds);
        a->play(0.003 seconds);
        a->play(0.02 seconds);
        a->play(0.67 seconds);
        a->play(0.06 seconds);

    and see the animation.
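
    A minimal sketch against the cocos2d-x 3.x Action API, driving a MoveTo by hand instead of through runAction (lifetime handling kept deliberately simple; duration, finalPoint and sprite stand in for your own values):

        auto move = cocos2d::MoveTo::create(duration, finalPoint);
        move->retain();                   // we manage it ourselves, no ActionManager
        move->startWithTarget(sprite);    // bind to the node without runAction()

        // Then, once per ECS frame:
        move->step(dt);                   // advances internal time and applies it
        if (move->isDone())
            move->release();

    step(dt) takes a raw delta in seconds and internally converts it to the normalized progress that update() consumes, which matches the frame-driven pattern in the question.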

    Read the article

  • Phone crashes when trying to use vibration on Android

    - by Diego Unanue
    I'm developing an app where the phone has to vibrate when you click a button, but instead the phone just crashes, saying that I need permission to vibrate. I've already set this permission in build.settings (the Android manifest). Here is the build.settings code:

        settings =
        {
            orientation =
            {
                default = "portrait",
                supported = { "portrait" },
            },
            iphone =
            {
                plist =
                {
                    CoronaUseIOS7LandscapeOnlyWorkaround = true,
                    CoronaUseIOS7IPadPhotoPickerLandscapeOnlyWorkaround = true,
                    CoronaUseIOS6LandscapeOnlyWorkaround = true,
                    CoronaUseIOS6IPadPhotoPickerLandscapeOnlyWorkaround = true,
                    UIApplicationExitsOnSuspend = false,
                    UIPrerenderedIcon = true,
                    UIStatusBarHidden = false,
                    CFBundleIconFile = "Icon.png",
                    CFBundleIconFiles =
                    {
                        "Icon.png", "Icon@2x.png",
                        "Icon-60.png", "Icon-60@2x.png",
                        "Icon-72.png", "Icon-72@2x.png",
                        "Icon-76.png", "Icon-76@2x.png",
                        "Icon-Small.png", "Icon-Small@2x.png",
                        "Icon-Small-40.png", "Icon-Small-40@2x.png",
                        "Icon-Small-50.png", "Icon-Small-50@2x.png",
                    },
                },
            },
            android =
            {
                permissions =
                {
                    { name = ".permission.C2D_MESSAGE", protectionLevel = "signature" },
                },
                usesPermissions =
                {
                    "android.permission.INTERNET",
                    "android.permission.VIBRATE",
                },
            },
        }

    The file that uses the vibration is:

        local onButtonEvent = function(event)
            system.vibrate()
        end

    I read all the posts on the Corona page without success. Can I see the generated Android manifest to check whether the permissions are there? I've read that this is a Corona issue, but I'm not sure.

    Read the article

  • Rotate view matrix based on touch coordinates

    - by user1055947
    I'm working on an Android game where I need to rotate the camera around the origin based on the user dragging their finger. My view matrix initially sits on the negative z-axis, facing the origin. I have succeeded in rotating the camera left/right and up/down based on finger drags, but my problem is this: after I drag my finger up/down and rotate, say, 90 degrees, so that my initial position on -z is now +y (still facing the origin), dragging left/right should rotate me from +y towards +x, but instead the camera still rotates around the +y pole. This is to be expected, as I am mapping 2D touch drag coords to 3D space, but I don't know where to start doing what I actually want. Perhaps someone can point me in the right direction; I've been googling for a while now, but I didn't know what the thing I want to do is called! Edit: What I was looking for is called an arcball; google it for lots of info.
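
    For anyone landing here, a minimal sketch of the arcball mapping (C++; assumes the touch coordinates x and y have already been normalized to [-1, 1] across the viewport):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Project a 2D screen point onto the unit sphere (or its rim).
        Vec3 mapToSphere(float x, float y)
        {
            float d2 = x * x + y * y;
            if (d2 <= 1.0f)
                return { x, y, std::sqrt(1.0f - d2) };  // touch hits the sphere
            float inv = 1.0f / std::sqrt(d2);
            return { x * inv, y * inv, 0.0f };          // outside: clamp to rim
        }

        // For two successive touch samples mapped to p0 and p1, rotate by:
        //   axis  = cross(p0, p1)
        //   angle = acos(dot(p0, p1))

    Because the rotation axis comes from the mapped points themselves rather than from a fixed world axis, the camera keeps turning the way the finger moves regardless of its current orientation.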

    Read the article

  • (Libgdx) Move Vector2 along angle?

    - by gemurdock
    I have seen several answers on here about moving along an angle, but I can't seem to get this to work properly, and I am new to LibGDX... just trying to learn. These are the Vector2s that I am using for this function:

        public Vector2 position = new Vector2();
        public Vector2 velocity = new Vector2();
        public Vector2 movement = new Vector2();
        public Vector2 direction = new Vector2();

    Here is the function that I use to move the position vector along an angle. setLocation() just sets the new location of the image.

        public void move(float delta, float degrees) {
            position.set(image.getX() + image.getWidth() / 2, image.getY() + image.getHeight() / 2);
            direction.set((float) Math.cos(degrees), (float) Math.sin(degrees)).nor();
            velocity.set(direction).scl(speed);
            movement.set(velocity).scl(delta);
            position.add(movement);
            setLocation(position.x, position.y); // Sets location of image
        }

    I get a lot of different angles with this, just not the correct angles. How should I change this function to move a Vector2 along an angle, using the Vector2 class from com.badlogic.gdx.math.Vector2 within the LibGDX library? I found this answer, but I'm not sure how to implement it.

    Update: I figured out part of the issue: degrees should be converted to radians. However, an angle of 0 degrees points to the right. Is there any way to fix this? I shouldn't have to add 90 to degrees to get the correct heading. The new code is below:

        public void move(float delta, float degrees) {
            degrees += 90; // Correct the heading; shouldn't have to do this
            position.set(image.getX() + image.getWidth() / 2, image.getY() + image.getHeight() / 2);
            direction.set(MathUtils.cos(degrees * MathUtils.degreesToRadians),
                          MathUtils.sin(degrees * MathUtils.degreesToRadians)).nor();
            velocity.set(direction).scl(speed);
            movement.set(velocity).scl(delta);
            position.add(movement);
            setLocation(position.x, position.y);
        }

    Read the article

  • Cannot convert parameter 1 from 'short *' to 'int *' [closed]

    - by Torben Carrington
    I'm trying to learn pointers, and since I recently learned that a short int takes up less memory (2 bytes, as opposed to the long int's 4 bytes, which is the default for int), I wanted to create a pointer that uses the memory address of a short integer. I'm following a tutorial in my book about pointers, and it uses a Swap function. The problem is that I receive this error the moment I change everything from int to short int:

        error C2664: 'Swap' : cannot convert parameter 1 from 'short *' to 'int *'
        Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

    Since my code is so small, here is the whole thing:

        void Swap(short int *sipX, short int *sipY)
        {
            short int siTemp = *sipX;
            *sipX = *sipY;
            *sipY = siTemp;
        }

        int main()
        {
            short int siBig = 100;
            short int siSmall = 1;

            std::cout << "Pre-Swap: " << siBig << " " << siSmall << std::endl;
            Swap(&siBig, &siSmall);
            std::cout << "Post-Swap: " << siBig << " " << siSmall << std::endl;

            return 0;
        }
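
    The error suggests the compiler is still seeing a declaration of Swap taking int* somewhere. A hedged side note: one way to avoid keeping a separate overload per integer width is a function template, which is essentially what the standard library's std::swap already does:

        #include <utility>

        // One definition covers short, int, long, or any copyable type.
        template <typename T>
        void Swap(T* a, T* b)
        {
            T tmp = *a;
            *a = *b;
            *b = tmp;
        }

        // Usage: Swap(&siBig, &siSmall);    // T deduced as short int
        // Or simply: std::swap(siBig, siSmall);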

    Read the article

  • Cocos2D 2.0 - masking a sprite

    - by Desperate Developer
    I have read this tutorial about how to mask sprites using Cocos2D 2.0: http://www.raywenderlich.com/4428/how-to-mask-a-sprite-with-cocos2d-2-0 But the author talks about OpenGL ES textures and vertices as if they were common knowledge. My knowledge about OpenGL is zero raised to infinity. All I want is to use a rectangle to mask a sprite, the way I would use a rectangle as a mask in Photoshop (yes, I want to clip a sprite to the rectangle's bounds, and no, I do not want to use the ClippingNode solution, which does not work with animation/scaling etc.). So, can you guys translate the Klingon language used in this tutorial and tell me how a solid rectangle can be used to mask a sprite in Cocos2D? I am desperate, as my username states. I have been searching for a week and have tried several solutions without satisfactory results. Please help me. Thanks!

    Read the article

  • What kind of steering behaviour or logic can I use to get mobiles to surround another?

    - by Vaughan Hilts
    I'm using pathfinding in my game to lead a mob to another player (to pursue them). This works to get them on top of the player, but I want them to stop slightly before their destination (so picking the penultimate node works fine). However, when multiple mobs pursue the same target they sometimes "stack on top of each other". What's the best way to avoid this? I don't want to treat the mobs as opaque and blocking (because they're not; you can walk through them), but I want the mobs to have some sense of structure. Example: imagine that each snake guided itself to me and should surround "Setsuna". Notice how both snakes have chosen to prong me? This is not a strict requirement; even being slightly offset is okay. But they should "surround" Setsuna.
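
    A common approach is slot assignment: spread the N pursuers over a ring of points around the target and have each mob path to its own slot instead of to the target itself. A minimal sketch in C++ (names illustrative):

        #include <cmath>

        struct Vec2 { float x, y; };

        // Slot `index` of `count` pursuers, `radius` tiles from the target.
        Vec2 surroundSlot(Vec2 target, int index, int count, float radius)
        {
            const float pi = 3.14159265f;
            float angle = 2.0f * pi * index / count;
            return { target.x + radius * std::cos(angle),
                     target.y + radius * std::sin(angle) };
        }

    Giving each pursuer a distinct destination removes the stacking without making the mobs block one another.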

    Read the article

  • Moving sprites on a graph in libGDX

    - by nosferat
    In my game I'd like to move sprites along a fixed path. Until now I have been trying to stick with the tools already provided by libGDX, like the Tiled map renderer classes, so I'm looking for a solution nearly as convenient as those; e.g. I'd like to avoid creating the adjacency matrix by hand. Tiled has the functionality to add objects to a map, but I'm not sure if I can use it for this purpose. Any idea?

    Read the article

  • XNA - Error while rendering a texture to a 2D render target via SpriteBatch

    - by Jared B
    I've got this simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D:

        private void drawScene(GameTime g)
        {
            GraphicsDevice.Clear(skyColor);
            GraphicsDevice.SetRenderTarget(targetScene);
            drawSunAndMoon();
            effect.Fog = true;
            GraphicsDevice.SetVertexBuffer(line);
            effect.MainEffect.CurrentTechnique.Passes[0].Apply();
            GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
            GraphicsDevice.SetRenderTarget(null);
            SceneTexture = targetScene;
        }

        private void drawPostProcessing(GameTime g)
        {
            effect.SceneTexture = SceneTexture;
            GraphicsDevice.SetRenderTarget(targetBloom);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null);
            {
                if (Bloom)
                    effect.BlurEffect.CurrentTechnique.Passes[0].Apply();
                spriteBatch.Draw(
                    targetScene,
                    new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height),
                    Color.White);
            }
            spriteBatch.End();
            BloomTexture = targetBloom;
            GraphicsDevice.SetRenderTarget(null);
        }

    Both methods are called from my Draw(GameTime gameTime) function: first drawScene, then drawPostProcessing. The thing is, when I run this code I get an error on the spriteBatch.Draw call:

        The render target must not be set on the device when it is used as a texture.

    I already found the solution, which is to draw the actual render target (targetScene) to a texture so it doesn't keep a reference to the loaded render target. However, to my knowledge, the only way of doing this is to write:

        GraphicsDevice.SetRenderTarget(outputTarget);
        SpriteBatch.Draw(inputTarget, ...);
        GraphicsDevice.SetRenderTarget(null);

    which runs into exactly the same problem I'm having right now. So, the question I'm asking is: how would I render inputTarget to outputTarget without reference issues?

    Read the article

  • GLSL Bokeh using Quads and Textures

    - by Notoriousaur
    I'm trying to create a depth-of-field effect with bokeh sprites in GLSL. Specifically, what I would like to do for each pixel is: see if the pixel is out of the focal range, and if it is, draw a quad and apply a texture to it to provide a bokeh sprite. This kind of implementation is seen in the Unreal Engine and in work by Matt Pettineo; however, both implementations are in DX11, and I'm using OpenGL. I'm a bit stuck on the "draw a quad and apply a texture" bit. Does anyone know how I can do this, or can anyone provide relevant links? Thanks

    Read the article

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movie clips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is:

    - the bitmap images that were used in Flash, like the head and body
    - the links between the images, like where the head connects to the body
    - the motion data from the Flash animation, like move, rotate (at what speed), shear, etc. for the head or arms or whatever

    Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). This would take too much space in my case, so I'd like to avoid that; re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format, but NOT the individual components like the movie clips. So, does anyone know of a library I can use, or how I can learn more about the movie-clip components and whatnot? (Better tags: transform, export, convert)

    Read the article

  • Simple heart container script for 2D game (Unity)?

    - by N1ghtshade3
    I'm attempting to create a simple mobile game (C#) that involves a simple three-heart life system. After searching for hours online, many of the solutions use OnGUI (which is apparently horrible for performance) and the rest are too complicated for me to understand and add to my code. The other solutions involve using a single texture and just hiding part of it when damage is taken. In my game, however, the player should be able to go over three hearts (for example, every 100 points). Sebastian Lague's Zelda-style health is what I'm looking for, but even though it's a tutorial, there is way too much going on that I don't need or can't customize to fit into mine. What I have so far is a script called HealthScript.cs, which contains a variable lives, and another script, PlayerPhysics.cs, which calls HealthScript and subtracts a life when an enemy is hit. The part I don't get is actually drawing the hearts. I think I understand what needs to happen; I'm just not experienced enough with Unity to know how:

    - The Start function should draw three (or whatever lives is set to) hearts in the top-right corner. Since the game should be resolution-independent to accommodate the various sizes of Android devices, I'd rather use scaling than PixelInset.
    - When the player hits an enemy, as detected by PlayerPhysics.cs, it should subtract from lives. I think I have this working using this.GetComponent<HealthScript>().lives -= 1, but I'm not sure if it actually works. This should trigger a redraw of the hearts so that there are now two hearts.
    - The same principle would apply for adding hearts when a score is reached, except that when lives > maxHeartsPerRow, the new hearts should be drawn below the old ones.

    I realise I don't have much code to show, but believe me; I've tried for quite some time to figure this out and have little to show for it. Any help at all would be welcome; it seems like it shouldn't take that much code to put an image on the screen for each life there is, but I haven't found anything yet. Thanks!

    Read the article

  • Xna Loading Screens

    - by Cyral
    I'm making a 2D XNA game. I'd like to implement loading screens when things take a while to load, like when I log in to an account, connect to the server, and generate worlds. I'm pretty sure it needs to be multithreaded, because I want to be able to show something like "Generating World 10%...11%...":

        GenerateWorld()
        {
            // Call StartLoading("Generating World") or something
            // Start generating, updating progress...
            // End loading screen and fade into world
        }

    Help appreciated, I'm new.

    Read the article

  • Prototype experience: Unity3D vs UDK

    - by LukeN
    Has anyone yet prototyped a game in both Unity3D and UDK? If so, which features made prototyping the game easier or more difficult in each toolkit? Was one prototype demonstrably better than the other (given the same starting assets)? I'm looking for specific answers with regard to using the toolkit features, not a comparison of available features, e.g. "destructible terrain is easier in toolkit X for reasons Y and Z". I can code, so the limitations of the built-in scripting languages are not a problem.

    Read the article

  • GameMaker: Making a bullet spawn at the enemy it was fired from

    - by Strokes
    I'm making a GameMaker game with GML. In this game I have multiple enemies (the same object) on screen at the same time. I want them all to spawn a bullet at their own location, but instead every enemy spawns its bullet at one single enemy. They all shoot, but the bullets appear in the wrong location. I want the bullet to spawn at the location of the instance it was fired from. How do I do this? Thank you for reading my question. Code: obj_carrier is the enemy I want to spawn from; obj_carrier_bullet is the bullet I want to spawn at the carrier's location. There are multiple carriers around the stage. The following is inside an if statement in the carrier's step event:

        instance_create(obj_carrier.x, obj_carrier.y, obj_carrier_bullet)

    Read the article

  • Entity component system -> handling components that depend on one another

    - by jtedit
    I really like the idea of an entity component system and feel it has great flexibility, but I have a question: how should dependent components be handled? I'm not talking about how components should communicate with the components they depend on (I have that sorted), but rather how to ensure those components are present. For example, an entity cannot have a "velocity" component if it doesn't have a "position" component, in the same way it can't have an "acceleration" component if it doesn't have a "velocity" component. My first idea was that every component class overrides an "onAddedToEntity(Entity ent)" function and, in that function, checks that its prerequisite components have also been added to the entity, e.g.:

        struct EntCompVelocity : public EntityComponent {
            // member variables here
            void onAddedToEntity(Entity ent) {
                if (!ent.hasComponent(EntCompPosition::Id)) {
                    ent.addComponent(new EntCompPosition());
                }
            }
        };

    This has the nice property that dependency "trees" sort themselves out: if the acceleration component adds the velocity component, the velocity component will itself add the position component. However, my concern is that components will silently be added with default values and, in the example of adding position, many entities will appear at the origin. Another idea was to simply have the Entity.addComponent() function return false if the component's prerequisite components aren't already on the entity; this would force you to manually add the position component (and set its value) before adding the velocity component. Finally, I could simply not ensure that a component's prerequisites are present at all: the UpdatePosition system only deals with entities that have both a position and a velocity component, so adding a velocity component without a position component won't cause crashes due to null pointers etc., but it does mean entities will carry useless, unused data if you add components without their prerequisites. Does anyone have experience with this problem and/or any of these methods to solve it? How did you solve the problem?
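
    For the second option, a minimal sketch of what the guarded add could look like (ComponentId, the components container and the prerequisites() hook are hypothetical, not from any particular library):

        bool Entity::addComponent(EntityComponent* comp)
        {
            // Refuse the add unless every prerequisite is already present,
            // forcing callers to build entities bottom-up with explicit values.
            for (ComponentId dep : comp->prerequisites())
                if (!hasComponent(dep))
                    return false;
            components.push_back(comp);
            return true;
        }

    The trade-off versus the onAddedToEntity approach is explicitness: nothing is ever added with a silent default, at the cost of making callers spell out the full dependency chain.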

    Read the article

  • xna orbit camera troubles

    - by user17753
    I have a Model named cube, which I load in LoadContent(): cube = Content.Load<Model>("untitled");. In the Draw method I call DrawModel:

        private void DrawModel(Model m, Matrix world)
        {
            foreach (ModelMesh mesh in m.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.View = camera.View;
                    effect.Projection = camera.Projection;
                    effect.World = world;
                }
                mesh.Draw();
            }
        }

    camera is of the Camera type, a class I've set up. Right now it is instantiated in the initialization section with the graphics aspect ratio and the translation (world) vector of the model, and the Draw loop calls camera.UpdateCamera(); before drawing the models.

        class Camera
        {
            #region Fields
            private Matrix view;        // View matrix for camera
            private Matrix projection;  // Projection matrix for camera
            private Vector3 position;   // Position of camera
            private Vector3 target;     // Point camera is "aimed" at
            private float aspectRatio;  // Aspect ratio for projection
            private float speed;        // Speed of camera
            private Vector3 camup = Vector3.Up;
            #endregion

            #region Accessors
            /// <summary>
            /// View matrix of the camera -- read only
            /// </summary>
            public Matrix View
            {
                get { return view; }
            }

            /// <summary>
            /// Projection matrix of the camera -- read only
            /// </summary>
            public Matrix Projection
            {
                get { return projection; }
            }
            #endregion

            /// <summary>
            /// Creates a new camera.
            /// </summary>
            /// <param name="AspectRatio">Aspect ratio to use for the projection.</param>
            /// <param name="Target">Target coord to aim camera at.</param>
            public Camera(float AspectRatio, Vector3 Target)
            {
                target = Target;
                aspectRatio = AspectRatio;
                ResetCamera();
            }

            private void Rotate(Vector3 Axis, float Amount)
            {
                position = Vector3.Transform(position - target,
                    Matrix.CreateFromAxisAngle(Axis, Amount)) + target;
            }

            /// <summary>
            /// Resets default values of the camera.
            /// </summary>
            private void ResetCamera()
            {
                speed = 0.05f;
                position = target + new Vector3(0f, 20f, 20f);
                projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.PiOver4, aspectRatio, 0.5f, 100f);
                CalculateViewMatrix();
            }

            /// <summary>
            /// Updates the camera. Should be the first thing done in the Draw loop.
            /// </summary>
            public void UpdateCamera()
            {
                Rotate(Vector3.Right, speed);
                CalculateViewMatrix();
            }

            /// <summary>
            /// Calculates the view matrix for the camera.
            /// </summary>
            private void CalculateViewMatrix()
            {
                view = Matrix.CreateLookAt(position, target, camup);
            }
        }

    I'm trying to create the camera so that it can orbit the center of the model. As a test I am calling Rotate(Vector3.Right, speed);, but it rotates almost all the way around and then reaches a point where it "flips". If I rotate along a different axis, Rotate(Vector3.Up, speed);, everything seems OK in that direction. So I guess, can someone tell me what I'm not accounting for in the above code? Or point me to an example of an orbiting camera that can be fixed on an arbitrary point?
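
    The flip is the classic symptom of a fixed up vector: once the camera passes over the pole, Vector3.Up is no longer a valid "up" for CreateLookAt. A generic sketch of the usual fix (plain math, not tied to XNA): accumulate yaw and pitch as spherical angles and clamp pitch just short of 90 degrees:

        #include <algorithm>

        // Hypothetical orbit state: spherical angles around the target.
        void orbit(float& yaw, float& pitch, float dYaw, float dPitch)
        {
            const float limit = 1.553f;   // ~89 degrees; keeps us off the pole
            yaw += dYaw;
            pitch = std::clamp(pitch + dPitch, -limit, limit);
        }

        // Camera position, at distance r from the target:
        //   x = target.x + r * cos(pitch) * sin(yaw)
        //   y = target.y + r * sin(pitch)
        //   z = target.z + r * cos(pitch) * cos(yaw)

    Rebuilding the position from the angles each frame (instead of repeatedly rotating the previous position) also avoids accumulated drift.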

    Read the article

  • Transform between two 3d cartesian coordinate systems

    - by Pris
    I'd like to know how to get the rotation matrix for the transformation from one Cartesian coordinate system (X, Y, Z) to another (X', Y', Z'). Both systems are defined by three orthogonal vectors, as one would expect. No scaling or translation occurs. I'm using OpenSceneGraph, and it offers a Matrix convenience class, if that makes finding the matrix easier: http://www.openscenegraph.org/documentation/OpenSceneGraphReferenceDocs/a00403.html
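
    For orthonormal bases this is a change-of-basis matrix whose entries are dot products of the axes. A minimal self-contained sketch in C++ (the basis vectors are assumed to be unit length and mutually orthogonal, all expressed in a common frame):

        struct Vec3 { float x, y, z; };

        float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // a[0..2] = source axes (X, Y, Z), b[0..2] = target axes (X', Y', Z').
        // R maps coordinates in the source basis to coordinates in the target
        // basis: R[i][j] = dot(b[i], a[j]).
        void basisRotation(const Vec3 a[3], const Vec3 b[3], float R[3][3])
        {
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    R[i][j] = dot(b[i], a[j]);
        }

    With orthonormal bases no inversion is needed; the transpose of R performs the reverse transformation.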

    Read the article
