Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 493/1039 | < Previous Page | 489 490 491 492 493 494 495 496 497 498 499 500  | Next Page >

  • How to shift a vector based on the rotation of another vector?

    - by bpierre
    I'm learning 2D programming, so excuse my approximations, and please don't hesitate to correct me. I am just trying to fire a bullet from a player. I'm using HTML canvas (top-left origin). Here is a representation of my problem: The black vector represents the position of the player (the grey square). The green vector represents its direction. The red disc represents the target. The red vector represents the direction of a bullet, which will move toward the target (red dotted line). The blue cross represents the point from where I really want to fire the bullet (and the blue dotted line represents its movement). This is how I draw the player (this is the player object; position, direction and dimensions are 2D vectors): ctx.save(); ctx.translate(this.position.x, this.position.y); ctx.rotate(this.direction.getAngle()); ctx.drawImage(this.image, Math.round(-this.dimensions.x/2), Math.round(-this.dimensions.y/2), this.dimensions.x, this.dimensions.y); ctx.restore(); This is how I instantiate a new bullet: var bulletPosition = playerPosition.clone(); // Copy of the player position var bulletDirection = Vector2D.substract(targetPosition, playerPosition).normalize(); // Difference between the player and the target, normalized new Bullet(bulletPosition, bulletDirection); This is how I move the bullet (this is the bullet object): var speed = 5; this.position.add(Vector2D.multiply(this.direction, speed)); And this is how I draw the bullet (this is the bullet object): ctx.save(); ctx.translate(this.position.x, this.position.y); ctx.rotate(this.direction.getAngle()); ctx.fillRect(0, 0, 3, 3); ctx.restore(); How can I change the direction and position vectors of the bullet to ensure it is on the blue dotted line? I think I should represent the shift with a vector, but I can't see how to use it.
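
    One way to place the bullet on the blue dotted line, sketched below in Java rather than the asker's JavaScript/Vector2D code: express the blue cross as a fixed offset in the player's local space, rotate that offset by the player's facing angle, and add it to the player position when spawning the bullet. The bullet's direction can either stay as originally computed (parallel to the red line) or be recomputed from the new spawn point toward the target, depending on the desired behaviour. The offset values a caller would pass are assumptions for illustration; nothing below comes from the asker's classes.

      // A minimal sketch (Java for illustration; the original is JavaScript).
      final class BulletSpawn {
          /** Returns {x, y} of the spawn point for a muzzle offset given in the player's local space. */
          static double[] spawnPoint(double playerX, double playerY, double facingAngle,
                                     double offsetForward, double offsetSide) {
              // Standard 2D rotation of the local offset by the facing angle.
              double offX = offsetForward * Math.cos(facingAngle) - offsetSide * Math.sin(facingAngle);
              double offY = offsetForward * Math.sin(facingAngle) + offsetSide * Math.cos(facingAngle);
              return new double[] { playerX + offX, playerY + offY };
          }

          /** Returns a normalized {x, y} direction from the spawn point to the target. */
          static double[] direction(double fromX, double fromY, double targetX, double targetY) {
              double dx = targetX - fromX, dy = targetY - fromY;
              double len = Math.sqrt(dx * dx + dy * dy);
              return new double[] { dx / len, dy / len };
          }
      }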

    Read the article

  • HLSL 5 interpolation issues

    - by metredigm
    I'm having issues with the depth components of my shadowmapping shaders. The shadow map rendering shader is fine, and works very well. The world rendering shader is more problematic. The only value which seems to definitely be off is the pixel's position from the light's perspective, which I pass in parallel to the position. struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; The reason that I used the semantic 'TEXCOORD2' on the light's pixel position is because I believe that the problem lies with Direct3D's interpolation of values between shaders, and I started trying random semantics and also forcing linear and noperspective interpolations. In the world rendering shader, I observed in the pixel shader that the Z value of light_pos was always extremely close to, but less than the W value. This resulted in a depth result of 0.999 or similar for every pixel. Here is the vertex shader code : struct Vertex { float3 position : POSITION; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; struct Pixel { float4 position : SV_Position; float4 light_pos : TEXCOORD2; float3 normal : NORMAL; float2 texcoord : TEXCOORD; }; cbuffer Camera : register (b0) { matrix world; matrix view; matrix projection; }; cbuffer Light : register (b1) { matrix light_world; matrix light_view; matrix light_projection; }; Pixel RenderVertexShader(Vertex input) { Pixel output; output.position = mul(float4(input.position, 1.0f), world); output.position = mul(output.position, view); output.position = mul(output.position, projection); output.world_pos = mul(float4(input.position, 1.0f), world); output.world_pos = mul(output.world_pos, light_view); output.world_pos = mul(output.world_pos, light_projection); output.texcoord = input.texcoord; output.normal = input.normal; return output; } I suspect interpolation to be the culprit, as I used the camera matrices in place of the light matrices in the vertex shader, and had the same problem. The problem is evident as both of the same vectors were passed to a pixel from the VS, but only one of them showed a change in the PS. I have already thoroughly debugged the matrices' validity, the cbuffers' validity, and the multiplicative validity. I'm very stumped and have been trying to solve this for quite some time. Misc info : The light projection matrix and the camera projection matrix are the same, generated from D3DXMatrixPerspectiveFovLH(), with an FOV of 60.0f * 3.141f / 180.0f, a near clipping plane of 0.1f, and a far clipping plane of 1000.0f. Any ideas on what is happening? (This is a repost from my question on Stack Overflow)

    Read the article

  • NVidia control panel SSAO not working

    - by János Turánszki
    I am just about to implement screen-space ambient occlusion in my game, but first I wanted to try enabling it from the NVidia control panel, only to find out that it is greyed out so I cannot enable it. I could enable SSAO this way for some other games, but not all of them. I know this technique requires the depth buffer and (optionally) a normal texture to sample information from, both of which I already have access to, given that I have a deferred renderer working. I then tried rolling back to a previous version of my game which still uses forward rendering, so the depth buffer is bound to the backbuffer I render to from the get-go, hoping that the NVidia control panel could somehow make use of it. It was not working with forward rendering either. (I also tried FXAA in the control panel and that works, but it doesn't need any depth or normal texture.) So my question is: how can I get this option to work when I enable it in the NVidia control panel?

    Read the article

  • How does this snippet of code create a ray direction vector?

    - by Isaac Waller
    In the Minecraft source code, this code is used to create a direction vector for a ray from pitch and yaw: float f1 = MathHelper.cos(-rotationYaw * 0.01745329F - 3.141593F); float f3 = MathHelper.sin(-rotationYaw * 0.01745329F - 3.141593F); float f5 = -MathHelper.cos(-rotationPitch * 0.01745329F); float f7 = MathHelper.sin(-rotationPitch * 0.01745329F); return Vec3D.createVector(f3 * f5, f7, f1 * f5); I was wondering how it works, and what the constant 0.01745329F is?
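
    For what it's worth, 0.01745329 is pi/180, i.e. the degrees-to-radians factor, and 3.141593 is pi; the snippet is the usual spherical-coordinates conversion from yaw/pitch angles to a unit direction vector. A sketch of the same math in plain Java, without Minecraft's MathHelper/Vec3D classes:

      // A sketch of the same conversion using plain java.lang.Math. 0.01745329 is
      // PI / 180 (degrees to radians); the extra -PI on the yaw flips the vector
      // to match the convention used in the original snippet.
      static double[] rayDirection(float yawDegrees, float pitchDegrees) {
          double yaw = -Math.toRadians(yawDegrees) - Math.PI;
          double pitch = -Math.toRadians(pitchDegrees);
          double cosPitch = -Math.cos(pitch); // f5 in the original snippet
          return new double[] {
              Math.sin(yaw) * cosPitch, // x  (f3 * f5)
              Math.sin(pitch),          // y  (f7)
              Math.cos(yaw) * cosPitch  // z  (f1 * f5)
          };
      }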

    Read the article

  • iOS Game that Runs Continuously in Background

    - by user2913669
    I'm trying to understand the most logical way of creating an iOS game that runs continuously in the background. For example, you have towers and enemy waves. The game has endless enemy waves even when the app is closed. When you open the game again, it retrieves the data for what happened while the app was closed. I assume a database on a server would be the best solution: the values continuously increment on the server, and the game connects to the server and retrieves the specific user's updated game data.

    Read the article

  • Flixel: tile doesn't light up

    - by Arno
    I'm creating a game with Flixel, and I want to have an effect when you mouse over a tile. I tried implementing it; this is my GameState class: public class GameState extends FlxState { private var block:EmptyBlock; public function GameState() { } override public function create():void { for (var i:Number = 0; i < 30; i++) { block = new EmptyBlock(i, 20); block.create(); } } override public function update():void { block.update(); super.update(); } } } And here is the EmptyBlock class: public class EmptyBlock { private var x:int; private var y:int; private var row:FlxRect public function EmptyBlock(x:int, y:int ) { this.x = x; this.y = y; } public function create():void { row = new FlxRect(x, y, 32, 32); trace ("Created block at" + x + y); } public function update():void { if (FlxG.mouse.screenX == row.x) { if (FlxG.mouse.screenY == row.y) { var outline:FlxSprite = new FlxSprite(row.x, row.y).makeGraphic(row.width, row.height, 0x002525); } } } } }
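
    One thing that stands out, offered as a guess rather than a confirmed fix: the hover test compares the mouse position with == against the tile's top-left corner, so it can only fire on a single pixel, and the outline FlxSprite is created but never added to the state. A bounds test is the usual approach; a minimal Java-style sketch of the check (the project itself is ActionScript/Flixel, so treat this as pseudocode for the idea, not Flixel API):

      // A sketch of a bounds-based hover test (Java for illustration). The tile
      // size of 32 matches the FlxRect in the question; everything else is assumed.
      static boolean isHovered(int mouseX, int mouseY, int tileX, int tileY, int tileSize) {
          return mouseX >= tileX && mouseX < tileX + tileSize
              && mouseY >= tileY && mouseY < tileY + tileSize;
      }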

    Read the article

  • How to achieve a smoother lighting effect

    - by Cyral
    I'm making a tile-based game in XNA. Currently my lighting looks like this: How can I get it to look like this? Instead of each block having its own tint, it has a smooth overlay. I'm assuming I need some sort of shader, and to tell it the lighting and have it blur somehow, but I'm not an expert with shaders. My current lighting calculates the light and then passes it to a SpriteBatch, which draws with a color parameter. EDIT: It no longer uses the SpriteBatch tint; I was testing, and I now pass parameters to set the light values. But I'm still looking for a way to smooth it.

    Read the article

  • How do I find the angle required to point to another object?

    - by Ginamin
    I am making an air combat game where you can fly a ship in 3D space. There is an opponent that flies around as well. When the opponent is not on screen, I want to display an arrow pointing in the direction the user should turn, as such: So I took the camera location and the opponent location and did this: double newDirection = atan2(activeCamera.location.y-ship_wrap.location.y, activeCamera.location.x-ship_wrap.location.x); After which, I get the position on the circumference of a circle which surrounds my crosshairs, like such: trackingArrow.position = point((60*sin(angle)+240),60*cos(angle)+160); It all works fine, except it's the wrong angle! I assume my calculation of the new direction is incorrect. Can anyone help?
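
    A hedged sketch of the usual convention (Java for illustration): atan2 takes the y difference first, the delta is normally target minus observer, and the point on the circle is then center + radius * (cos, sin) using the same angle for both axes. Whether the y axis needs flipping depends on the engine's screen-space convention, so treat the signs as assumptions rather than a definitive fix.

      // The screen center (240, 160) and radius 60 are taken from the question;
      // the sign of the y axis depends on the engine and may need flipping.
      static double angleToTarget(double camX, double camY, double targetX, double targetY) {
          return Math.atan2(targetY - camY, targetX - camX); // target minus observer
      }

      static double[] arrowPosition(double angle) {
          double centerX = 240, centerY = 160, radius = 60;
          return new double[] {
              centerX + radius * Math.cos(angle),
              centerY + radius * Math.sin(angle)
          };
      }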

    Read the article

  • Import 3ds into JMonkeyEngine 3

    - by Yanick Rochon
    I have asked this question on SO, but I think it will be more suitable here. Basically, we are trying to import an animated character body (with skeleton) from 3D Studio Max to JMonkeyEngine 3, but while we succeeded at importing some animations, we cannot seem to export the skeleton to .skeleton.xml using OgreXML format. Since OgreXML seems to be the favored way to import models into JME, we dropped .obj files and such. Any help appreciated.

    Read the article

  • two-part dice pool mechanic

    - by bythenumbers
    I'm working on a dice mechanic/resolution system based off of the Ghost/Echo (hereafter shortened to G/E) tabletop RPG. Specifically, since G/E can be a little harsh with dealing out consequences and failure, I was hoping to soften the system and add a little more player control, as well as offer the chance for players to evolve their characters into something unique, right from creation. So, here's the mechanic: Players roll 2d12 against the two statistics for their character (each is a number from 2-11, and may be rolled above or below depending on the nature of the action attempted; rolling your stat exactly always fails). Depending on the success of that roll, they add dice to the pool rolled for a modified G/E style action. The acting player gets two dice anyhow, and I am debating offering a bonus die for each success, or a single bonus die for succeeding on both of the statistic-compared rolls. Once the size of the dice pool is set, the entire pool is rolled, and the players are allowed to assign rolled dice to a goal and a danger. Assigned results are judged as follows: 1-4 means the attempted goal fails, or the danger comes true. 5-8 is a partial success at the goal, or partially avoiding the danger. 9-12 means the goal is achieved, or the danger avoided. My concerns are twofold: firstly, that the two-stage action is too complicated, with two rolls to judge separately before anything can happen; secondly, that the statistics involved go too far in softening the game. I've run some basic simulations, and the approximate statistics follow:

                 2 dice    (up to) 3 dice    (up to) 4 dice
      failure    ~33%      ~25%              ~20%
      partial    ~33%      ~35%              ~35%
      success    ~33%      ~40%              ~45%

    I'd appreciate any advice that addresses my concerns or offers to refine my simulation (right now the first roll is statistically modeled as sign(1d12-1d12), where 0 is a success).
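
    To help with the simulation request, here is a hedged Monte Carlo sketch in Java. It assumes one particular reading of the pool stage (one bonus die per stat-roll success, and the highest die in the pool is the one assigned to the goal; the danger die is ignored for this table) and models the stat rolls exactly as described above, sign(1d12-1d12) with ties counted as successes. Change those assumptions to match the intended rules.

      import java.util.Random;

      public class DicePoolSim {
          public static void main(String[] args) {
              Random rng = new Random();
              int trials = 1_000_000;
              int[] bands = new int[3]; // 0 = failure (1-4), 1 = partial (5-8), 2 = success (9-12)
              for (int t = 0; t < trials; t++) {
                  int bonus = 0;
                  for (int s = 0; s < 2; s++) {
                      // sign(1d12 - 1d12), ties counted as success, as in the question.
                      if (rng.nextInt(12) - rng.nextInt(12) >= 0) bonus++;
                  }
                  int pool = 2 + bonus;
                  int best = 0;
                  for (int d = 0; d < pool; d++) best = Math.max(best, rng.nextInt(12) + 1);
                  bands[(best - 1) / 4]++;
              }
              System.out.printf("failure %.1f%%, partial %.1f%%, success %.1f%%%n",
                      100.0 * bands[0] / trials, 100.0 * bands[1] / trials, 100.0 * bands[2] / trials);
          }
      }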

    Read the article

  • Whole map design vs. tiles array design

    - by Mikalichov
    I am working on a 2D RPG, which will feature the usual dungeon/town maps (pre-generated). I am using tiles that I will then combine to make the maps. My original plan was to assemble the tiles using Photoshop, or some other graphics program, in order to have one bigger picture that I could then use as a map. However, I have read in several places about people using arrays to build their map in the engine (so you give an array of x tiles to your engine, and it assembles them as a map). I can understand how it's done, but it seems a lot more complicated to implement, and I can't see obvious advantages. What is the most common method, and what are the advantages/disadvantages of each?

    Read the article

  • OpenGL + Allegro. Moving from software drawing X Y to openGL is confusing

    - by Aaron
    Having a fair bit of trouble. I'm used to Allegro and drawing sprites on a bitmap buffer at X Y coords. Now I've started a test project with OpenGL and it's weird. Basically, as far as I know, there are many ways to draw stuff in OpenGL. At the moment, I think I'm creating a quad (whatever that is), and I think I've given it a texture from a bitmap, and then I'm drawing that: GLuint gl_image; bitmap = load_bitmap("cat.bmp", NULL); gl_image = allegro_gl_make_texture_ex(AGL_TEXTURE_MASKED, bitmap, GL_RGBA); glBindTexture(GL_TEXTURE_2D, gl_image); glBegin(GL_QUADS); glColor4ub(255, 255, 255, 255); glTexCoord2f(0, 0); glVertex3f(-0.5, 0.5, 0); glTexCoord2f(1, 0); glVertex3f(0.5, 0.5, 0); glTexCoord2f(1, 1); glVertex3f(0.5, -0.5, 0); glTexCoord2f(0, 1); glVertex3f(-0.5, -0.5, 0); glEnd(); So yeah. I have a few questions: Is this the best way of drawing a sprite? Is it suitable? The big question: can anyone help, or does anyone know any tutorials on this weird coordinate thing, if that's even what it is? It's vastly different from XY, but I want to learn it. I was thinking maybe I could learn how this weird positioning stuff works, and then write a function to try and translate it to X and Y coords. That's about it. I'm still trying to figure it all out on my own, but any contributions you guys can make would be greatly appreciated =D Thanks!
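
    For reference, with identity matrices those vertex coordinates are interpreted in OpenGL's normalized device space, which runs from -1 to 1 across the window. A common way to get pixel-style XY back is to set an orthographic projection matching the window size. A hedged sketch below, written with LWJGL's Java bindings of the same fixed-function calls (the question's project is C with AllegroGL, where the calls have the same names):

      import org.lwjgl.opengl.GL11;

      // With this projection, glVertex2f takes pixel coordinates with the origin
      // at the top-left, like a software blitter.
      final class PixelProjection {
          static void set(int windowWidth, int windowHeight) {
              GL11.glMatrixMode(GL11.GL_PROJECTION);
              GL11.glLoadIdentity();
              // left, right, bottom, top, near, far: top = 0 puts Y down the screen.
              GL11.glOrtho(0, windowWidth, windowHeight, 0, -1, 1);
              GL11.glMatrixMode(GL11.GL_MODELVIEW);
              GL11.glLoadIdentity();
          }
      }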

    Read the article

  • Trouble with speed and vectors

    - by Eegabooga
    I'm working on adding bullets to my game. Right now I can shoot bullets in the direction that I would like from a ship by using the ship's angle: int speed = 5; int dx = -(cos(degreesToRadians(ship.angle)) * speed); // rate of change in the x direction int dy = -(sin(degreesToRadians(ship.angle)) * speed); // rate of change in the y direction bulletPosition.addX(dx); // addX(dx) is simply bulletPosition.x += dx bulletPosition.addY(dy); The ship is pretty much the exact same thing, except I use the += operator: int dx += -(cos(degreesToRadians(angle)) * 0.15) int dy += -(sin(degreesToRadians(angle)) * 0.15); shipPosition.addX(dx); shipPosition.addY(dy); I would like to be able to add the ship's velocity to the bullet's velocity, but I'm a little confused as to how I should get the speed from the ship's vector. I thought that adding the ship's dx to the bullet's dx like int dx = -(cos(degreesToRadians(ship.angle)) * speed * dx) would work, because I'm adding the rate of change of the ship to the rate of change of the bullet, but that doesn't work. So here's the final question: how can I get the speed of my ship and apply it to my bullet's speed? Thanks in advance for all help :)
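
    A hedged sketch of the usual approach: a bullet that inherits the shooter's motion gets the ship's velocity vector added to its own muzzle velocity, i.e. bulletVel = shipVel + aimDirection * bulletSpeed, and the components are kept as floats/doubles so the small per-frame increments aren't truncated to int as in the snippets above.

      // A sketch (Java for illustration). shipVelX/shipVelY are the ship's current
      // per-frame velocity components; they are assumed to be tracked somewhere,
      // since the question's ship code only shows the increments.
      static double[] bulletVelocity(double shipVelX, double shipVelY,
                                     double shipAngleRadians, double bulletSpeed) {
          // Muzzle velocity along the ship's facing, same sign convention as the question.
          double muzzleX = -Math.cos(shipAngleRadians) * bulletSpeed;
          double muzzleY = -Math.sin(shipAngleRadians) * bulletSpeed;
          // Inherit the ship's motion by simple vector addition.
          return new double[] { shipVelX + muzzleX, shipVelY + muzzleY };
      }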

    Read the article

  • Hashing 3D position into 2D position

    - by notabene
    I am doing volumetric raycasting and currently working on depth jitter. I have a 3D position on the ray and want to sample a 2D noise texture to jitter the depth. The function for converting (or hashing) a 3D position to 2D has to produce completely different numbers for very small changes (especially because I am sampling in texture space, so sample values differ very, very little) and has to be shader-friendly - so forget about branches, loops, etc. I'm looking forward to your nice and fast solutions.
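
    One widely used pattern for this kind of thing is the fract(sin(dot(position, constants)) * bigNumber) one-liner; a hedged sketch below, ported to Java only so it can be read outside a shader. The first set of constants is the commonly seen 12.9898 / 78.233 variant; the second set is arbitrary, chosen just to decorrelate the two outputs, and neither is a reference implementation.

      // A sketch of a branch-free 3D-to-2D hash (Java for readability; in a shader
      // this would be the usual fract(sin(dot(...)) * constant) one-liner).
      static float fract(double v) {
          return (float) (v - Math.floor(v));
      }

      static float[] hash3to2(float x, float y, float z) {
          float u = fract(Math.sin(x * 12.9898 + y * 78.233 + z * 45.164) * 43758.5453);
          float v = fract(Math.sin(x * 26.6459 + y * 63.737 + z * 17.531) * 21693.3127);
          return new float[] { u, v };
      }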

    Read the article

  • How to pass one float as four unsigned chars to shader by glVertexPointAttrib?

    - by Kog
    For each vertex I use two floats as position and four unsigned bytes as color. I want to store all of them in one array, so I tried casting those four unsigned bytes to one float, but I am unable to do that correctly... All in all, my tests came down to this: GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 }; glEnableVertexAttribArray(0); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices); // VER1 - draws red triangle // unsigned char colors[] = { 0xff, 0, 0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0, 0, // 0xff }; // glEnableVertexAttribArray(1); // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), // colors); // VER2 - draws greenish triangle (not "pure" green) // float f = 255 << 24 | 255; //Hex:0xff0000ff // float colors2[] = { f, f, f }; // glEnableVertexAttribArray(1); // glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), // colors2); // VER3 - draws red triangle int i = 255 << 24 | 255; //Hex:0xff0000ff int colors3[] = { i, i, i }; glEnableVertexAttribArray(1); glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3); glDrawArrays(GL_TRIANGLES, 0, 3); The above code draws one simple red triangle. My question is - why do versions 1 and 3 work correctly, while version 2 draws a greenish triangle? The hex values are the ones I read by inspecting the variables while debugging. They are equal for versions 2 and 3 - so what causes the difference?
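
    A possible explanation, offered as a hedged note: in version 2 the integer expression 255 << 24 | 255 is converted to float numerically, so what gets stored is a float whose own bit pattern has nothing to do with the 0xFF0000FF byte layout that the int array in version 3 keeps, and the GL_UNSIGNED_BYTE attribute then reads those unrelated bytes as a color. What the float route would need is a reinterpretation of the bits rather than a conversion of the value. A small Java sketch of the distinction (in C or C++ the equivalent of the reinterpretation is copying the bytes, e.g. with memcpy, rather than a numeric cast):

      // A sketch of value conversion vs. bit reinterpretation (Java for illustration).
      public class BitsVsValue {
          public static void main(String[] args) {
              int packed = 255 << 24 | 255;                       // 0xFF0000FF as raw bytes

              float converted = (float) packed;                   // numeric conversion of the value
              float reinterpreted = Float.intBitsToFloat(packed); // same 32 bits, viewed as a float

              System.out.printf("int bits:           %08X%n", packed);
              System.out.printf("converted bits:     %08X%n", Float.floatToRawIntBits(converted));
              System.out.printf("reinterpreted bits: %08X%n", Float.floatToRawIntBits(reinterpreted));
          }
      }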

    Read the article

  • Updates for IOS AppStore Multiplayer Game

    - by TobiHeidi
    I am developing a multiplayer game for the web, Android and iOS. For the web and Android I can instantly push out new versions of my game because they support executing remotely loaded code. But with iOS I need to wait for Apple approval, which takes about 10 days. I want to push updates more than weekly. What if my server code changes so that the client MUST update? Do I run an old version of the server code just for iOS? How do other multiplayer devs handle this?

    Read the article

  • Is there a good book or articles to learn about 2D Game Design and Effects?

    - by user28015
    I am not looking for a read on how to develop games and how to implement one. I am looking for something general about possible effects in 2D games and about the general design of modern 2D gaming. I have programmed several smaller games over the years and also read books like "Golden Rules of Game Programming" by Martin Brownlow, so I know how to implement games. What I am looking for are 2 things: Finishing touches such as effects like explosions, particles etc. Not how to make them, but how to design them so they look right and cool. How to make a 2D game feel "more right" so that users get a satisfying gaming experience. I played a lot of 2D games but I could use some more advice.

    Read the article

  • Sprite/Tile Sheets Vs Single Textures

    - by Reanimation
    I'm making a race circuit which is constructed from various textures. To provide some background, I'm writing it in C++ and creating quads with OpenGL, to which I assign a loaded .raw texture. Currently I use 23 textures of 500 x 500 px, which are all loaded and freed individually. I have now combined them all into a single sprite/tile sheet, making it 3000 x 2000 pixels, since the number of textures/tiles I'm using is increasing. Now I'm wondering if it's more efficient to load them individually or to write extra code to extract a certain tile from the sheet. Is it better to load the sheet and then extract and store the 23 tiles from it, or to load the sheet each time and crop it to the correct tile? There seem to be a number of ways to implement it... Thanks in advance.
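
    If the atlas route is taken, the per-tile work mostly comes down to computing each sub-rectangle's texture coordinates once and reusing the single bound texture. A hedged Java sketch of that lookup; the 6 x 4 grid of 500 px tiles in a 3000 x 2000 sheet is inferred from the sizes in the question, and the project itself is C++.

      final class Atlas {
          static final int SHEET_W = 3000, SHEET_H = 2000, TILE = 500, COLS = SHEET_W / TILE;

          /** Returns {u0, v0, u1, v1} for the tile at the given index (row-major). */
          static float[] uvForTile(int index) {
              int col = index % COLS;
              int row = index / COLS;
              float u0 = (col * TILE) / (float) SHEET_W;
              float v0 = (row * TILE) / (float) SHEET_H;
              float u1 = ((col + 1) * TILE) / (float) SHEET_W;
              float v1 = ((row + 1) * TILE) / (float) SHEET_H;
              return new float[] { u0, v0, u1, v1 };
          }
      }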

    Read the article

  • Recommended 2D Game Engine for prototyping

    - by Thomas Dufour
    What high-level game engine would you recommend to develop a 2D game prototype on windows? (or mac/linux if you wish) The kind of things I mean by "high-level" includes (but is definitely not limited to): not having to manage low-level stuff like screen buffers, graphics contexts having an API to draw geometric shapes well, I was going to omit it but I guess being based on an actual "high-level" language is a plus (automatic resource management and the existence a reasonable set of data structures in the standard library come to mind). It seems to me that Flash is the proverbial elephant in the room for this query but I'd very much like to see different answers based on all kinds of languages or SDKs.

    Read the article

  • State changes in entities or components

    - by GriffinHeart
    I'm having some trouble figuring out how to deal with state management in my entities. I don't have trouble with game state management, like pause and menus, since those are not handled as an entity component system; just with state in entities/components. Drawing from Orcs Must Die as an example, I have my MainCharacter and Trap entities, which only have components like PositionComponent, RenderComponent, PhysicsComponent. On each update the entity calls update on its components. I also have a generic EventManager with listeners for different event types. Now I need to be able to place the traps: first select the trap and the trap position, then place the trap. While being placed, a trap should appear in front of the MainCharacter, be rendered in a different way, and follow it around. When placed, it should just respond to collisions and be rendered in the normal way. How is this usually handled in component-based systems? (This example is specific but should help figure out the general way to deal with entity states.)

    Read the article

  • How exactly to implement multiple threads in a game

    - by xerwin
    So I recently started learning Java, and having an interest in playing games as well as developing them, naturally I want to create a game in Java. I have experience with games in C# and C++, but all of them were single-threaded simple games. But now that I've learned how easy it is to make threads in Java, I want to take things to the next level. I started thinking about how I would actually implement threading in a game. I read a couple of articles that say the same thing, "Usually you have a thread for rendering, one for updating game logic, one for AI, ...", but I haven't found (or didn't look hard enough for) an example implementation. My idea of how to implement it is something like this (example for AI): public class AIThread implements Runnable{ private List<AI> ai; private Player player; /*...*/ public void run() { for (int i = 0; i < ai.size(); i++){ ai.get(i).update(player); } Thread.sleep(/* sleep until the next game "tick" */); } } I think this could work, if I also kept the list of AI in the rendering and updating threads, since I need to draw the AI and I need to calculate the logic between the player and the AI (but that could be moved to AIThread; this is just an example). Coming from C++ I'm used to doing things elegantly and efficiently, and this seems like neither of those. So what would be the correct way to handle this? Should I just keep multiple copies of resources in each thread, or should I have the resources in one spot, declared with the synchronized keyword? I'm afraid that could cause deadlocks, but I'm not yet qualified enough to know when code will produce a deadlock.
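
    For comparison, a minimal runnable sketch of one way to structure such a thread: an explicit loop, InterruptedException handled so Thread.sleep compiles, and access to the shared list synchronized on a common lock. This is only one of several reasonable designs (others snapshot the state once per tick instead of locking), and the AI and Player types are placeholders taken from the question.

      import java.util.List;

      // A sketch of a fixed-rate AI thread (assumed AI/Player types; one design
      // among several). All threads that touch the list must synchronize on it.
      public class AIThread implements Runnable {
          private final List<AI> ai;       // shared with the update/render threads
          private final Player player;
          private final long tickMillis;
          private volatile boolean running = true;

          public AIThread(List<AI> ai, Player player, long tickMillis) {
              this.ai = ai;
              this.player = player;
              this.tickMillis = tickMillis;
          }

          public void stop() { running = false; }

          @Override
          public void run() {
              while (running) {
                  synchronized (ai) {                 // guard against concurrent modification
                      for (AI a : ai) {
                          a.update(player);
                      }
                  }
                  try {
                      Thread.sleep(tickMillis);       // wait until the next game "tick"
                  } catch (InterruptedException e) {
                      Thread.currentThread().interrupt();
                      return;
                  }
              }
          }
      }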

    Read the article

  • Infinite terrain shadows

    - by user35399
    I'm creating an infinite terrain engine which generates the terrain with either fractals or noise. How can I make dynamic shadows for the sun on this terrain, if I don't know in advance what will be rendered in front of the sun? My terrain: the sun is the only light and it is directional; my terrain is generated on a plane which is positioned in front of the camera, frustum culled, and fits the size of the viewing frustum. It is height-mapped with a generated noise texture, and I'm using tessellation shaders on it. Video: http://www.youtube.com/watch?v=tk6yFwYusOs Dynamic shadows with the infinite terrain.

    Read the article

  • What time to display in text messages in multiplayer game?

    - by Krom Stern
    Say I have a multiplayer RTS game. There's a main server for each individual game and several clients connected to it. All packets are sent to the server first, and then the server retransmits them back to the clients. Say the server is located in one time zone and the clients are all in different time zones. Client A sends a text message in chat at 12:03; what times should be stamped for the other clients? Should the message be uniformly timestamped by the server (12:02), or should each client timestamp the message whenever it is received (12:04, 16:04, 03:03, etc.)? Bear in mind that all the messages are to be in the same order on all clients; the server takes care of that. So that's the question: use local time for each client, or use global server time to timestamp chat messages?
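
    A common pattern, offered as a hedged sketch: the server stamps each message once in UTC (which also matches the single ordering it already enforces), and each client converts that instant to its own zone only for display. The Java time API makes the split explicit:

      import java.time.Instant;
      import java.time.ZoneId;
      import java.time.format.DateTimeFormatter;

      // A sketch of "stamp once on the server in UTC, format per client".
      public class ChatTimestamps {
          public static void main(String[] args) {
              Instant serverStamp = Instant.now();          // attached to the message by the server

              DateTimeFormatter fmt = DateTimeFormatter.ofPattern("HH:mm");
              // Each client formats the same instant in its own time zone for display.
              System.out.println(fmt.format(serverStamp.atZone(ZoneId.of("UTC"))));
              System.out.println(fmt.format(serverStamp.atZone(ZoneId.of("Europe/Moscow"))));
              System.out.println(fmt.format(serverStamp.atZone(ZoneId.systemDefault())));
          }
      }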

    Read the article

  • Color Picking Troubles - LWJGL/OpenGL

    - by Tom Johnson
    I'm attempting to check which object the user is hovering over. While everything seems to be just how I'd think it should be, I'm not able to get the correct color, due to the second time I draw (without picking colors). Here is my rendering code: public void render() { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); camera.applyTranslations(); scene.pick(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); camera.applyTranslations(); scene.render(); } And here is what gets called on each block/tile by "scene.pick()": public void pick() { glColor3ub((byte) pickingColor.x, (byte) pickingColor.y, (byte) pickingColor.z); draw(); glReadBuffer(GL_FRONT); ByteBuffer buffer = BufferUtils.createByteBuffer(4); glReadPixels(Mouse.getX(), Mouse.getY(), 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, buffer); int r = buffer.get(0) & 0xFF; int g = buffer.get(1) & 0xFF; int b = buffer.get(2) & 0xFF; if(r == pickingColor.x && g == pickingColor.y && b == pickingColor.z) { hovered = true; } else { hovered = false; } } I believe the problem is that the pick method of each tile/block, called by scene.pick(), is somehow reading the color from the regular drawing pass. I believe this because when I remove the "glReadBuffer(GL_FRONT)" line from the pick method, it seems to almost fix it, but then it also selects blocks behind the one you are hovering over, as it is not only looking at the front. If you have any ideas of what to do, please be sure to reply! EDIT: Adding scene.render(), tile.render(), and tile.draw(): scene.render: public void render() { for(int x = 0; x < tiles.length; x++) { for(int z = 0; z < tiles.length; z++) { tiles[x][z].render(); } } } tile.render: public void render() { glColor3f(color.x, color.y, color.z); draw(); if(hovered) { glColor3f(1, 1, 1); glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); draw(); glPolygonMode(GL_FRONT_AND_BACK, GL_FILL); } } tile.draw: public void draw() { float x = position.x, y = position.y, z = position.z; //Top glBegin(GL_QUADS); glVertex3f(x, y + size, z); glVertex3f(x + size, y + size, z); glVertex3f(x + size, y + size, z + size); glVertex3f(x, y + size, z + size); glEnd(); //Left glBegin(GL_QUADS); glVertex3f(x, y, z); glVertex3f(x + size, y, z); glVertex3f(x + size, y + size, z); glVertex3f(x, y + size, z); glEnd(); //Right glBegin(GL_QUADS); glVertex3f(x + size, y, z); glVertex3f(x + size, y + size, z); glVertex3f(x + size, y + size, z + size); glVertex3f(x + size, y, z + size); glEnd(); } (The game is like an isometric game. That's why I only draw 3 faces.)

    Read the article

  • Issues implementing arcball viewer

    - by Pris
    My scene has a simple cube, and a camera built with the lookAt function (I'm using OpenGL). The scene renders fine, and I'm sure I have my model/view/projection matrices set up correctly. Now I'm trying to implement arcball rotation for my camera, but I'm having some trouble. I've got it down to calculating the angle/axis rotation for a virtual sphere in normalized screen coordinates. That means when I move my mouse left to right, I get an angle around the Y axis... and moving my mouse up/down gets me an angle about X. I'm not sure where to go from here - what do I need to do with my axis so I can apply the angle to simulate camera rotation about its viewpoint? If I directly apply the axis/angle rotation to the camera/view transform, I get what you'd expect: the view is rotated about the world axes that the mouse movement over the virtual sphere on the screen corresponds to. So if I move the mouse up/down, the view rotates about the world's X axis (what I get reminds me of a first-person view)... but this isn't what I want. I think I need the axis to be transformed so it passes through the camera viewpoint and is oriented correctly relative to the camera, but I don't know if that's right or how to do that.

    Read the article
