Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • How to get a collision callback for two specific objects using Bullet Physics?

    - by sebap123
    I have a problem implementing a collision callback in my project. I would like to detect collisions between two specific objects. Normal collision handling works, but I want one object to stop, change colour, or similar when it collides with another. I wrote this code from the Bullet wiki:

      int numManifolds = dynamicsWorld->getDispatcher()->getNumManifolds();
      for (int i = 0; i < numManifolds; i++)
      {
          btPersistentManifold* contactManifold = dynamicsWorld->getDispatcher()->getManifoldByIndexInternal(i);
          btCollisionObject* obA = static_cast<btCollisionObject*>(contactManifold->getBody0());
          btCollisionObject* obB = static_cast<btCollisionObject*>(contactManifold->getBody1());

          int numContacts = contactManifold->getNumContacts();
          for (int j = 0; j < numContacts; j++)
          {
              btManifoldPoint& pt = contactManifold->getContactPoint(j);
              if (pt.getDistance() < 0.f)
              {
                  const btVector3& ptA = pt.getPositionWorldOnA();
                  const btVector3& ptB = pt.getPositionWorldOnB();
                  const btVector3& normalOnB = pt.m_normalWorldOnB;
                  bool x = (ContactProcessedCallback)(pt, fallRigidBody, earthRigidBody);
                  if (x)
                      printf("collision\n");
              }
          }
      }

    Here fallRigidBody is a dynamic body (a sphere) and earthRigidBody is a static body (a StaticPlaneShape), and the sphere is not touching earthRigidBody all the time. I also have other objects that collide with the sphere, and those work fine. But the program detects a collision all the time, whether or not the objects are actually colliding. After the rigid body declarations I also added:

      earthRigidBody->setCollisionFlags(earthRigidBody->getCollisionFlags() | btCollisionObject::CF_CUSTOM_MATERIAL_CALLBACK);
      fallRigidBody->setCollisionFlags(fallRigidBody->getCollisionFlags() | btCollisionObject::CF_CUSTOM_MATERIAL_CALLBACK);

    So can someone tell me what I am doing wrong? Maybe it is something simple?
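
    A minimal sketch of one way to restrict the check to just those two bodies: rather than relying on the (ContactProcessedCallback)(...) expression, which, if ContactProcessedCallback is Bullet's function-pointer typedef, is a cast rather than a call and so reports true for any contact, compare the manifold's bodies against the pointers you care about. Variable names follow the question; this is an illustration under those assumptions, not a drop-in fix.

      int numManifolds = dynamicsWorld->getDispatcher()->getNumManifolds();
      for (int i = 0; i < numManifolds; i++)
      {
          btPersistentManifold* m = dynamicsWorld->getDispatcher()->getManifoldByIndexInternal(i);
          const btCollisionObject* obA = static_cast<const btCollisionObject*>(m->getBody0());
          const btCollisionObject* obB = static_cast<const btCollisionObject*>(m->getBody1());

          // Only consider the sphere / ground-plane pair.
          bool pairMatches = (obA == fallRigidBody && obB == earthRigidBody) ||
                             (obA == earthRigidBody && obB == fallRigidBody);
          if (!pairMatches)
              continue;

          for (int j = 0; j < m->getNumContacts(); j++)
          {
              if (m->getContactPoint(j).getDistance() < 0.f)
              {
                  printf("sphere touches the ground\n");
                  break;
              }
          }
      }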

    Read the article

  • Efficient skeletal animation

    - by Will
    I am looking at adopting a skeletal animation format (as prompted here) for an RTS game. Each model will be small on-screen, but there will be lots of them! In skeletal animation formats such as MD5, each individual vertex can be attached to an arbitrary number of joints. How can you support this efficiently while doing the interpolation in GLSL? Or do engines do their animation on the CPU? Or do engines set an arbitrary limit on joints per vertex and issue no-op multiplies for vertices that don't use the maximum number? Are there games that use skeletal animation in an RTS-like setting, which would prove that I have nothing to worry about on integrated graphics cards if I go the bones route?
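
    For what it's worth, a common compromise is exactly the fixed-cap approach described above: each vertex stores up to four bone indices and weights, and unused slots carry a weight of 0 so the extra multiply-adds in the shader contribute nothing. A minimal sketch of such a vertex layout (field names are illustrative):

      #include <cstdint>

      struct SkinnedVertex {
          float   position[3];
          float   normal[3];
          float   uv[2];
          uint8_t boneIndices[4];   // indices into the bone matrix palette
          float   boneWeights[4];   // normalized; unused influences hold 0.0f
      };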

    Read the article

  • Architectural advice - websockets javascript/php integration

    - by Ewan Vaentine
    A friend and I have started making a game. He's likely to be using impact.js for the user interaction etc., but we need multiplayer functionality, so some form of websockets for TCP connections. We were thinking of impact.js talking to socket.io and node.js. However, user accounts, ecommerce, session handling and social media integration will all be handled with CodeIgniter (PHP). My question is: is it wise to have node.js running in parallel with CodeIgniter, and is this even possible? If not, and you were to create a multiplayer online game using ecommerce to buy credits and user accounts, how would you go about it structurally, and what engines/frameworks would you recommend? I'm new to this site so I apologise in advance if I'm posting something inappropriate. Cheers, Ewan

    Read the article

  • point to rectangle distance

    - by john smith
    I have a 2D rectangle with an x, y position plus its width and height, and a randomly positioned point nearby. Is there a way to check whether this point is closer to the rectangle than a certain distance, i.e. whether it might collide with it? Imagine an invisible radius around that point colliding with the rectangle. I'm struggling simply because it is not a square; it would be so much easier if it were! Any help? Thanks in advance.
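
    One standard way to do this, sketched below: clamp the point to the rectangle to find the closest point on (or in) the rectangle, then compare the squared distance against the squared radius. It assumes x, y is the rectangle's top-left corner; adjust if yours is the centre.

      #include <algorithm>

      bool circleIntersectsRect(float px, float py, float radius,
                                float rx, float ry, float rw, float rh)
      {
          // Closest point on the rectangle to the circle's centre.
          float closestX = std::max(rx, std::min(px, rx + rw));
          float closestY = std::max(ry, std::min(py, ry + rh));
          float dx = px - closestX;
          float dy = py - closestY;
          return dx * dx + dy * dy <= radius * radius;
      }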

    Read the article

  • Direct3D - Zooming into Mouse Position

    - by roohan
    I'm trying to implement my camera class for a simulation, but I can't figure out how to zoom into my world based on the mouse position. I mean that the object under the mouse cursor should remain at the same screen position. My zooming looks like this:

      VOID ZoomIn(D3DXMATRIX& WorldMatrix, FLOAT const& MouseX, FLOAT const& MouseY)
      {
          this->Position.z = this->Position.z * 0.9f;
          D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);
      }

    I passed the world matrix to the function because I had the idea of moving my drawing origin according to the mouse position, but I can't figure out how to calculate the offset needed to move my drawing origin. Anyone got an idea how to calculate this? Thanks in advance.

    SOLVED: OK, I solved my problem. Here is the code if anyone is interested:

      VOID CAMERA2D::ZoomIn(FLOAT const& MouseX, FLOAT const& MouseY)
      {
          // Get the settings of the current view port.
          D3DVIEWPORT9 ViewPort;
          this->Direct3DDevice->GetViewport(&ViewPort);

          // Convert the screen coordinates of the mouse to world space coordinates.
          D3DXVECTOR3 VectorOne;
          D3DXVECTOR3 VectorTwo;
          D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort, &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
          D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort, &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

          // Calculate the resulting vector components.
          float WorldZ = 0.0f;
          float WorldX = ((WorldZ - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
          float WorldY = ((WorldZ - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

          // Move the camera into the screen.
          this->Position.z = this->Position.z * 0.9f;
          D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);

          // Calculate the world space vectors again based on the new view matrix.
          D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort, &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
          D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort, &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

          // Calculate the resulting vector components.
          float WorldZ2 = 0.0f;
          float WorldX2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
          float WorldY2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

          // Create a temporary translation matrix for calculating the origin offset.
          D3DXMATRIX TranslationMatrix;
          D3DXMatrixIdentity(&TranslationMatrix);

          // Calculate the origin offset.
          D3DXMatrixTranslation(&TranslationMatrix, WorldX2 - WorldX, WorldY2 - WorldY, 0.0f);

          // Add the offset to the camera's world matrix.
          this->WorldMatrix = this->WorldMatrix * TranslationMatrix;
      }

    Maybe someone has an even better solution than mine.

    Read the article

  • DirectX 10 Instancing Problem (objects cannot be seen)

    - by Riffraff
    I'm trying to fill an area with vegetation. I tried a mesh-based version, and now I'm trying to implement an instanced version, but I cannot make it work: no objects are visible. I checked the buffer creation calls for failures with FAILED() and created the device with D3D10_CREATE_DEVICE_DEBUG, but neither turned up anything. At this point I don't even know which part of my code to share to explain the problem.

    Read the article

  • How can I change this isometric engine to make it so that you could distinguish between blocks that are on different planes?

    - by l5p4ngl312
    I have been working on an isometric minecraft-esque game engine for a strategy game I plan on making. As you can see, it really needs some sort of shading. It is difficult to distinguish between separate elevations when the camera is facing away from the slope because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block. I am using LWJGL. Are there any other approaches to take? Here's a link to a screenshot from the engine: http://i44.tinypic.com/qxqlix.jpg
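
    One approach other isometric engines use, sketched below in fixed-function OpenGL (LWJGL exposes the same GL calls from Java): split each block into one quad per visible face and modulate each face with a different tint, so faces turned away from the light read darker. drawFace and the quad variables are hypothetical; the point is only the per-face colour modulation.

      glColor3f(1.0f, 1.0f, 1.0f);   // top face: full brightness
      drawFace(topQuad);
      glColor3f(0.8f, 0.8f, 0.8f);   // left face: slightly darker
      drawFace(leftQuad);
      glColor3f(0.6f, 0.6f, 0.6f);   // right face: darkest
      drawFace(rightQuad);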

    Read the article

  • Box2D `ApplyLinearImpulse` is not working whereas `SetLinearVelocity` works

    - by Narek
    I need to implement jumping behaviour for the player in my game. The player consists of two fixtures with circle and rectangle shapes; the rectangle is a sensor I use to detect the ground. At the point of jumping I do this:

      float impulseY = body->GetMass() * PLAYER_JUMPING_VEOCITY / PTM_RATIO * std::sin(PLAYER_JUMPING_ANGLE * PI / 180);
      body->ApplyLinearImpulse(b2Vec2(0, impulseY), body->GetWorldCenter(), true);

    and the player does not jump. But when I do this:

      body->SetLinearVelocity(b2Vec2(0, PLAYER_JUMPING_VEOCITY / PTM_RATIO * std::sin(PLAYER_JUMPING_ANGLE * PI / 180)));

    my player jumps. Also, when I change the rectangle shape to be a normal (non-sensor) shape, it works again. Why? Just in case, here are the parameters of my rectangular sensor:

      b2PolygonShape boxShape;
      boxShape.SetAsBox(width * 0.5/2/PTM_RATIO, height * 0.2/2/PTM_RATIO, b2Vec2(0, -height * 0.4 /PTM_RATIO), 0);

      b2FixtureDef boxFixtureDef;
      boxFixtureDef.friction = 0;
      boxFixtureDef.restitution = 0;
      boxFixtureDef.density = 1;
      boxFixtureDef.isSensor = true;
      boxFixtureDef.userData = static_cast<void*>(PLAYER_GROUP);
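
    Since an impulse of mass times delta-velocity should, starting from rest, produce roughly the same result as setting the velocity directly, one quick sanity check (a sketch using the names from the question, not a fix) is to print the mass and the velocity right after applying the impulse; if either is not what you expect, the impulse path is the problem rather than the sensor fixture:

      float vy = PLAYER_JUMPING_VEOCITY / PTM_RATIO * std::sin(PLAYER_JUMPING_ANGLE * PI / 180);
      body->ApplyLinearImpulse(b2Vec2(0, body->GetMass() * vy), body->GetWorldCenter(), true);
      printf("mass=%f, vy after impulse=%f, expected about %f\n",
             body->GetMass(), body->GetLinearVelocity().y, vy);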

    Read the article

  • XNA 4: RenderTarget2D textures getting transparent on fullscreen

    - by Shashwat
    I'm generating a Texture2D object using RenderTarget2D, as in the following code:

      public static Texture2D GetTextTexture(string text, Vector2 position, SpriteFont font, Color foreColor, Color backColor, Texture2D background=null)
      {
          int width = (int)font.MeasureString(text).X;
          int height = (int)font.MeasureString(text).Y;

          GraphicsDevice device = Settings.game.GraphicsDevice;
          SpriteBatch spriteBatch = Settings.game.spriteBatch;

          RenderTarget2D renderTarget = new RenderTarget2D(device, width, height, false, SurfaceFormat.Color, DepthFormat.Depth24Stencil8, device.PresentationParameters.MultiSampleCount, RenderTargetUsage.DiscardContents);

          device.SetRenderTarget(renderTarget);
          device.Clear(backColor);

          spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
          if (background != null)
              spriteBatch.Draw(background, new Rectangle(0, 0, 70, 70), Color.White);
          spriteBatch.End();

          spriteBatch.Begin();
          spriteBatch.DrawString(font, text, position, foreColor, 0, new Vector2(0), 0.8f, SpriteEffects.None, 0);
          spriteBatch.End();

          device.SetRenderTarget(null);
          ResetGraphicsDeviceSettings();

          return (Texture2D)renderTarget;
      }

    It works fine. But when I ToggleFullScreen() (and vice versa), the textures generated earlier become transparent, while the textures generated after the switch come out correctly. What can be the reason for this?

    Read the article

  • Collision with half of a semi-circle

    - by heitortsergent
    I am trying to port a game I made using Flash/AS3 to the Windows Phone using C#/XNA 4.0. You can see it here: http://goo.gl/gzFiE In the Flash version I used pixel-perfect collision between meteors (rectangles, usually rotated) that spawn off-screen and move towards the centre, and a shield in the centre of the screen (half of a semi-circle, rotated by the player). When they collided, the meteor bounced back in the opposite direction it came from. My goal now is to make the meteors bounce at different angles depending on where they hit the shield (much like Pong, where hitting near the edges changes the ball's angle). So, these are the three options I thought of:
      1. Pixel-perfect collision (Microsoft has a sample), but then I wouldn't know how to change the meteor's angle after the collision.
      2. Three BoundingCircles to represent the half semi-circle shield, but then I would have to somehow move them as I rotate the shield.
      3. Farseer Physics: I could make a shape composed of three lines and use that as the collision object for the shield.
    Is there any other way besides those? Which would be the best way to do it (it's aimed at mobile devices, so pixel-perfect is probably not a good choice)? Most of the time there's an easier/better way than what we first think of...
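
    For the deflection itself, a sketch of a geometry-only alternative to pixel tests: take the normal from the shield's centre of rotation to the meteor and reflect the meteor's velocity about it, which naturally varies the bounce angle with the hit position. Vec2, dot, normalize and length are hypothetical helpers; shieldFacingDir and shieldHalfAngle describe which half of the circle the shield currently covers.

      Vec2 n = normalize(meteorPos - shieldCenter);              // contact normal at the hit point
      bool onShieldArc = dot(n, shieldFacingDir) > std::cos(shieldHalfAngle);
      bool closeEnough = length(meteorPos - shieldCenter) <= shieldRadius + meteorRadius;
      if (onShieldArc && closeEnough)
          meteorVel = meteorVel - 2.0f * dot(meteorVel, n) * n;  // reflect about the normal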

    Read the article

  • Is there an AIR native extension to use GameCenter APIs for turn-based games?

    - by Phil
    I'm planning a turn-based game using the iOS 5 GameCenter (GameKit) turn-based functions. Ideally I would program the game with AIR (I'm a Flash dev), but so far I can't seem to find an existing native extension that offers that (only basic GameCenter functions), so my questions are: does anyone know if one already exists? And secondly, how complex a task would it be to create an extension that does this? Are there any pitfalls I should be aware of, etc.? ** UPDATE ** There does not seem to be a solution to the above from Adobe. For anyone who is interested, check out the Adobe Gaming SDK. It contains a GameCenter ANE which, from what I've read, contains options for multiplayer but not turn-based multiplayer; at least it's a start. It comes a bit late for me as I've already learned Obj-C!

    Read the article

  • Textures selectively not applying in Unity

    - by user46790
    On certain imported objects (FBX) in Unity, upon applying a material, only the base colour of the material is applied, with none of the tiled texture showing. This isn't universal; on a test model only some submeshes didn't show the texture, while others did. I have tried every combination of importing/calculating normals and tangents to no avail. FYI, I'm not exactly experienced with the software or gamedev in general; this is to make a small static scene with 3-4 objects at most. One model tested was created in 3DS Max, the other in Blender. I've had this happen on every export from Blender, but only on some submeshes of the 3DS Max model (sourced from the internet to test the problem).

    Read the article

  • Converting a Windows Store App to Android

    - by pm_2
    I cross-posted this from SO. I'm very new to Xamarin. I have a few published Windows Store apps and want to convert them to Android, and I'm attempting to use Xamarin for this (just the free version). Here's where I am so far: I am trying two apps - one was built with MonoGame and one is just built on the WinRT framework. I have managed to get them both into Xamarin Studio, basically by hacking the csproj files. I'm getting build errors because it's missing references. There do appear to be some equivalent Mono/.NET 4 libraries, but things like Microsoft.Xna.Framework seem to be missing. So, my question is: am I going about this the right way and, if so, am I missing a step ("convert dependencies" or something)? If I'm not going about this the right way, then how should I be doing this? (I found very few online resources on this subject.)

    Read the article

  • FPS camera specification

    - by user1095108
    I remember I once composed an FPS viewing transformation as a composition of three rotations, each with an angle as a parameter. The first angle specified the left/right rotation around the y-axis, the second angle the up/down rotation around the x-axis, and the third the rotation around the z-axis. The viewing transformation was therefore specified by three angles. Naturally, this transformation had a gimbal lock, depending on the order in which the rotations were performed. What should I look at to derive my viewing transformation without the gimbal lock? I know the "lookAt" method already, but I consider that cumbersome. EDIT: My first guess is to do the first two rotations to get a viewing direction and then an axis-angle rotation about this axis.
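
    If it helps, a common way to sidestep the gimbal lock while keeping per-frame angle parameters is to store only yaw and pitch (no roll) and rebuild the orientation from scratch every frame: yaw about the world up axis first, then pitch about the resulting right axis. A rough sketch with hypothetical math-library helpers (Quat, Vec3, axisAngle, rotate):

      Quat yawQ   = axisAngle(Vec3(0, 1, 0), yaw);          // left/right, about world Y
      Vec3 right  = rotate(yawQ, Vec3(1, 0, 0));            // camera-local X after yawing
      Quat pitchQ = axisAngle(right, pitch);                 // up/down, about that axis
      Quat orientation = pitchQ * yawQ;                      // roll never accumulates
      Vec3 forward = rotate(orientation, Vec3(0, 0, -1));    // feed eye + forward into the view matrix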

    Read the article

  • Problem with SAT collision detection overlap checking code

    - by handyface
    I'm trying to implement a script for my game that detects whether two rotated rectangles collide, using SAT (the Separating Axis Theorem). I used the method explained in the article "2D Rotated Rectangle Collision" for my implementation in Google Dart. Basically, from what I understood: I have two rectangles, and these two rectangles produce four axes (two per rectangle) by subtracting adjacent corner coordinates. Then all the corners of both rectangles need to be projected onto each axis, taking the dot product of each corner and the axis (point.x*axis.x + point.y*axis.y) to get a scalar value, and checking whether the ranges of both rectangles' projections overlap. When all the axes have overlapping projections, there's a collision. First of all, I'm wondering whether my understanding of this algorithm is correct. If so, I'd like some pointers on where my implementation (written in Dart, which is very readable for anyone comfortable with C syntax) goes wrong. Thanks! EDIT: The question has been solved. For those interested in the working implementation: Click here
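
    For reference, a minimal sketch of the projection/overlap step described above: project every corner of both rectangles onto an axis with a dot product, keep the min/max, and declare the shapes separated on that axis if the two intervals don't overlap. Written here in C++ rather than Dart; the structure is the same.

      #include <algorithm>
      #include <vector>

      struct Vec2 { float x, y; };
      static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

      // True if the projections of the two corner sets overlap on this axis.
      bool overlapOnAxis(const std::vector<Vec2>& cornersA,
                         const std::vector<Vec2>& cornersB, Vec2 axis)
      {
          float minA = 1e30f, maxA = -1e30f, minB = 1e30f, maxB = -1e30f;
          for (Vec2 c : cornersA) { float p = dot(c, axis); minA = std::min(minA, p); maxA = std::max(maxA, p); }
          for (Vec2 c : cornersB) { float p = dot(c, axis); minB = std::min(minB, p); maxB = std::max(maxB, p); }
          return maxA >= minB && maxB >= minA;   // no gap between the intervals
      }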

    Read the article

  • Unable to create suitable graphics device?

    - by kraze
    I've been following the Eye of the Dragon tutorial, which is basically a guide to making a 2D RPG game. I recently finished the part about making pop-up screens in the menu and changed the game to load in full screen. Whenever I try to load the game it just goes black and my mouse sits there; I cannot get out of it other than with Ctrl+Alt+Del. Once I do that, it says "No suitable graphics card found, unable to create graphics device." I read somewhere about XNA not allowing more than one screen when any one of them is full screen, but it wasn't very informative. Anyone have any ideas what's going on and/or how to fix this? In case it helps, this is the code for the graphics device:

      public Game1()
      {
          graphics = new GraphicsDeviceManager(this);
          graphics.PreferredBackBufferWidth = 900;
          graphics.PreferredBackBufferHeight = 768;
          graphics.IsFullScreen = true;
          this.Window.Title = "Eyes of the Dragon";
          Content.RootDirectory = "Content";
      }

    Read the article

  • 3D RTS pathfinding

    - by xcrypt
    I understand the A* algorithm, but I have some trouble applying it in 3D to suit the needs of my RTS. Basically, in the game I'm making, there will be agents with different sizes of OBB collision boxes. I can use steering behaviours for avoiding other agents, so I don't need fully dynamic pathfinding. However, there is a problem because different agents have different collision geometry, and structures can be placed in almost any location. This means that there might be a gap between two structures that some agents can fit through and some can't. A solution I have found is to sweep the agent's collision geometry from the start node of the edge the pathfinding algorithm is currently testing to the end node of that edge. But this is probably overkill, since every edge the algorithm tests would require its own collision geometry sweep. What are some reasonable approaches to this problem? I should mention that I'd prefer not to use navmeshes; I prefer waypoints because my entire system is based on them at the moment.
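
    One commonly used alternative to per-edge sweeps, sketched below: precompute a clearance value per waypoint edge (the widest agent that can pass through the gap it crosses) and have A* simply skip edges that are too narrow for the querying agent. The types are illustrative, not from any particular engine.

      struct Edge {
          int   to;         // destination waypoint index
          float cost;       // traversal cost for A*
          float clearance;  // max agent radius that fits through this edge
      };

      bool edgeUsable(const Edge& e, float agentRadius) {
          return e.clearance >= agentRadius;   // reject gaps the agent can't fit through
      }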

    Read the article

  • jump pads problem

    - by Pasquale Sada
    I'm trying to make a character jump onto a landing pad that is above him. Here is the formula I've used (everything is pretty much self-explanatory, except maybe character_MaxForce, which is the total force the character can jump with):

      deltaPosition = target - character_position;
      sqrtTerm = Sqrt(2*-gravity.y * deltaPosition.y + MaxYVelocity * character_MaxForce);
      time = (MaxYVelocity - sqrtTerm) / gravity.y;
      speedSq = jumpVelocity.x * jumpVelocity.x + jumpVelocity.z * jumpVelocity.z;

    If speedSq < (character_MaxForce * character_MaxForce) we have the right time, so we can store the value:

      jumpVelocity.x = deltaPosition.x / time;
      jumpVelocity.z = deltaPosition.z / time;

    otherwise we try the other solution,

      time = (MaxYVelocity + sqrtTerm) / gravity.y;

    and then store it:

      jumpVelocity.x = deltaPosition.x / time;
      jumpVelocity.z = deltaPosition.z / time;
      jumpVelocity.y = MaxYVelocity;
      rigidbody_velocity = jumpVelocity;

    The problem is that the character jumps away from the landing pad, or sometimes he jumps too far and never hits it.

    Read the article

  • DirectCompute information

    - by N0xus
    I've been trying to make use of the GPU as part of a project of mine. I've looked into both CUDA and OpenCL, but the lack of information showing how to introduce them into a project is shocking. Even their dedicated forum groups are dead. So now I'm looking into DirectCompute. From what I can tell, it's simply a new type of shader file that makes use of HLSL. My question is this: does my program (aside from being DirectX 10/11) need its structure changed? I mean, is it simply a case of creating the compute shader file, setting it up in the project like I would any other shader, and watching the magic happen? Any information on this would be appreciated.
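
    Broadly, the structure of the app can stay the same, but a compute shader is created, bound and launched through its own path rather than through a draw call. A rough host-side sketch using the D3D11 interfaces (which is also how DirectCompute is exposed on DX10-class hardware via the cs_4_x profiles); error handling is omitted and the buffer/UAV variables are illustrative:

      ID3D11ComputeShader* cs = nullptr;
      device->CreateComputeShader(csBytecode->GetBufferPointer(),     // compiled .hlsl blob (e.g. cs_5_0)
                                  csBytecode->GetBufferSize(), nullptr, &cs);
      context->CSSetShader(cs, nullptr, 0);
      context->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);  // where the CS writes its results
      context->Dispatch(groupCountX, groupCountY, 1);                 // run the thread groups, no draw call involved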

    Read the article

  • Help with calculation to steer ship in 3d space

    - by Aaron Anodide
    I'm a beginner using XNA to try to make a 3D Asteroids game. I'm really close to having my space ship drive around as if it had thrusters for pitch and yaw. The problem is I can't quite figure out how to translate the rotations: for instance, when I pitch forward 45 degrees and then start to turn, in this case there should be rotation applied to all three axes to get the "diagonal yaw", right? I thought I had it right with the calculations below, but they cause a partly pitched-forward ship to wobble instead of turn... :( Here are my current (almost working) calculations for the rotation acceleration:

      float accel = .75f;

      // Thrust +Y / Forward
      if (currentKeyboardState.IsKeyDown(Keys.I))
      {
          this.ship.AccelerationY += (float)Math.Cos(this.ship.RotationZ) * accel;
          this.ship.AccelerationX += (float)Math.Sin(this.ship.RotationZ) * -accel;
          this.ship.AccelerationZ += (float)Math.Sin(this.ship.RotationX) * accel;
      }

      // Rotation +Z / Yaw
      if (currentKeyboardState.IsKeyDown(Keys.J))
      {
          this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * accel;
          this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * accel;
          this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * accel;
      }

      // Rotation -Z / Yaw
      if (currentKeyboardState.IsKeyDown(Keys.K))
      {
          this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * -accel;
          this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * -accel;
          this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * -accel;
      }

      // Rotation +X / Pitch
      if (currentKeyboardState.IsKeyDown(Keys.F))
      {
          this.ship.RotationAccelerationX += accel;
      }

      // Rotation -X / Pitch
      if (currentKeyboardState.IsKeyDown(Keys.D))
      {
          this.ship.RotationAccelerationX -= accel;
      }

    I'm combining that with drawing code that applies a rotation to the model:

      public void Draw(Matrix world, Matrix view, Matrix projection, TimeSpan elsapsedTime)
      {
          float seconds = (float)elsapsedTime.TotalSeconds;

          // update velocity based on acceleration
          this.VelocityX += this.AccelerationX * seconds;
          this.VelocityY += this.AccelerationY * seconds;
          this.VelocityZ += this.AccelerationZ * seconds;

          // update position based on velocity
          this.PositionX += this.VelocityX * seconds;
          this.PositionY += this.VelocityY * seconds;
          this.PositionZ += this.VelocityZ * seconds;

          // update rotational velocity based on rotational acceleration
          this.RotationVelocityX += this.RotationAccelerationX * seconds;
          this.RotationVelocityY += this.RotationAccelerationY * seconds;
          this.RotationVelocityZ += this.RotationAccelerationZ * seconds;

          // update rotation based on rotational velocity
          this.RotationX += this.RotationVelocityX * seconds;
          this.RotationY += this.RotationVelocityY * seconds;
          this.RotationZ += this.RotationVelocityZ * seconds;

          Matrix translation = Matrix.CreateTranslation(PositionX, PositionY, PositionZ);
          Matrix rotation = Matrix.CreateRotationX(RotationX) * Matrix.CreateRotationY(RotationY) * Matrix.CreateRotationZ(RotationZ);

          model.Root.Transform = rotation * translation * world;
          model.CopyAbsoluteBoneTransformsTo(boneTransforms);

          foreach (ModelMesh mesh in model.Meshes)
          {
              foreach (BasicEffect effect in mesh.Effects)
              {
                  effect.World = boneTransforms[mesh.ParentBone.Index];
                  effect.View = view;
                  effect.Projection = projection;
                  effect.EnableDefaultLighting();
              }
              mesh.Draw();
          }
      }

    Read the article

  • correct pattern to handle a lot of entities in a game

    - by lezebulon
    In my game I usually have every NPC, item, etc. derived from a base class "entity". They all basically have a virtual method called "update" that I call for each entity in my game every frame. I am assuming that this pattern has a lot of downsides. What are some other ways to manage the different "game objects" throughout the game? Are there other well-known patterns for this? My game is an RPG, if that changes anything.
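
    One widely used alternative, sketched below: split entities into plain data components and let "systems" iterate over arrays of those components, instead of one virtual update() per object. The types are illustrative, not a full entity-component framework.

      #include <cstddef>
      #include <vector>

      struct Position { float x, y; };
      struct Velocity { float dx, dy; };

      struct MovementSystem {
          void update(std::vector<Position>& pos, const std::vector<Velocity>& vel, float dt) {
              for (std::size_t i = 0; i < pos.size(); ++i) {   // one tight, cache-friendly loop
                  pos[i].x += vel[i].dx * dt;
                  pos[i].y += vel[i].dy * dt;
              }
          }
      };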

    Read the article

  • Adding a short delay between bullets

    - by Sun
    I'm having some trouble simulating bullets in my 2D shooter. I want mechanics similar to Megaman, where the user can hold down the shoot button and a continuous stream of bullets is fired, but with a slight delay between them. Currently, when the user fires a bullet in my game I get an almost laser-like effect. Below is a screenshot of some bullets being fired while running and jumping. In my update method I have the following:

      if (gc.getInput().isKeyDown(Input.KEY_SPACE)) {
          bullets.add(new Bullet(player.getPos().getX() + 30, player.getPos().getY() + 17));
      }

    Then I simply iterate through the array list, increasing each bullet's x value on every update. Moreover, pressing the shoot button (space bar) creates multiple bullets instead of just one, even though I am adding only one new bullet to my array list. What would be the best way to solve this problem?
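
    A sketch of the usual fix, a fire-rate cooldown: accumulate the frame time and only spawn a bullet once the cooldown has expired, no matter how long the key is held. Written here in C++ for illustration (spawnBullet is hypothetical); the same idea ports directly to an update(delta) method where delta is the frame time in milliseconds.

      const int FIRE_DELAY_MS = 200;          // minimum time between shots
      int timeSinceLastShot = FIRE_DELAY_MS;  // start ready to fire

      void update(int delta, bool shootHeld)  // delta = frame time in ms
      {
          timeSinceLastShot += delta;
          if (shootHeld && timeSinceLastShot >= FIRE_DELAY_MS)
          {
              spawnBullet();                  // hypothetical: adds one bullet to the list
              timeSinceLastShot = 0;
          }
      }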

    Read the article

  • Freelance composer seeking work! [closed]

    - by Ben Fowler
    Hey guys! I'm a freelance composer based in Victoria, Australia, trying to break into the game industry to start my career! I've heard it said that having a plan B is planning for failure, so I've decided to go full on for what I want, so here I am! I have composed some music for other games, none of which have made it in yet (still hopeful :P). Any help on how I can break into the game industry as a composer would be MUCH appreciated!

    Read the article

  • How to apply Data Oriented Design with Object Oriented Programming?

    - by Pombal
    Hi. I've read lots of articles about DOD and I understand it, but I can't design an object-oriented system with DOD in mind; I think my OOP education is blocking me. How should I think about mixing the two? The objective is to have a nice OO interface while using DOD behind the scenes. I saw this too, but it didn't help much: http://stackoverflow.com/questions/3872354/how-to-apply-dop-and-keep-a-nice-user-interface

    Read the article

  • Changing coordinate system from Z-up to Y-up

    - by Jari Komppa
    Blender's coordinate system is different from what I'm used to, in that Z points upwards instead of Y. What would be the simplest way of converting all the world data (so that all animations, texture coordinates, etc still work) so that Y points upwards? Clarification: Object positions are defined as matrices, so just switching translation/rotation/scale information in matrices is not a trivial task.
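
    A sketch of the usual reasoning, assuming Blender's right-handed Z-up convention going to a right-handed Y-up one: the change of basis is a -90 degree rotation about X, so a point (x, y, z) maps to (x, z, -y), and a whole transform matrix M has to be conjugated, M' = C * M * C^-1, rather than having single rows or columns swapped. Vec3, Mat4 and the helpers are hypothetical math-library types.

      Vec3 toYUp(const Vec3& p) {
          return Vec3{ p.x, p.z, -p.y };    // Z-up -> Y-up, handedness preserved
      }

      Mat4 convertTransform(const Mat4& m) {
          Mat4 C = rotationX(-halfPi);      // the same -90 degree basis change
          return C * m * inverse(C);        // applied on both sides of the transform
      }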

    Read the article
