Search Results

Search found 31839 results on 1274 pages for 'plugin development'.

Page 464/1274 | < Previous Page | 460 461 462 463 464 465 466 467 468 469 470 471  | Next Page >

  • Flash framerate reliability

    - by Tim Cooper
    I am working in Flash and a few things have been brought to my attention. Below is some code I have some questions on:

        addEventListener(Event.ENTER_FRAME, function(e:Event):void {
            if (KEY_RIGHT) {
                // Move character right
            }
            // Etc.
        });

        stage.addEventListener(KeyboardEvent.KEY_DOWN, function(e:KeyboardEvent):void {
            // Report key which is down
        });

        stage.addEventListener(KeyboardEvent.KEY_UP, function(e:KeyboardEvent):void {
            // Report key which is up
        });

    I have the project configured so that it has a framerate of 60 FPS. The two questions I have on this are: What happens when it is unable to call that function every 1/60 of a second? And is this a way of processing events that need to be limited by time (ex: a ball which needs to travel to the right of the screen from the left in X seconds), or should it be done a different way?
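    A common answer to the second question is to make movement time-based rather than frame-based: measure the real time elapsed since the previous frame and scale the displacement by it, so the ball still crosses the screen in X seconds even if the machine cannot hold 60 FPS. A minimal sketch of the idea, written here in C# (the language used by several other posts on this page) with made-up names:

        using System;

        // Moves a value from startX to endX over a fixed number of seconds,
        // independent of how often Update is called.
        class TimedMover
        {
            private readonly float startX, endX, travelSeconds;
            private float elapsedSeconds;

            public float X { get; private set; }

            public TimedMover(float startX, float endX, float travelSeconds)
            {
                this.startX = startX;
                this.endX = endX;
                this.travelSeconds = travelSeconds;
                X = startX;
            }

            // deltaSeconds is the real time since the previous update.
            public void Update(float deltaSeconds)
            {
                elapsedSeconds = Math.Min(elapsedSeconds + deltaSeconds, travelSeconds);
                float t = elapsedSeconds / travelSeconds;   // progress from 0 to 1
                X = startX + (endX - startX) * t;           // linear interpolation
            }
        }

    In Flash the same idea works inside the ENTER_FRAME handler by reading getTimer() each frame and computing the elapsed milliseconds yourself.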

    Read the article

  • OpenGL: Attempt to allocate a texture to big for the current hardware

    - by AnonymousMan
    I'm getting the following error:

        java.io.IOException: Attempt to allocate a texture to big for the current hardware
            at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:320)
            at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:254)
            at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:200)
            at org.newdawn.slick.opengl.TextureLoader.getTexture(TextureLoader.java:64)
            at org.newdawn.slick.opengl.TextureLoader.getTexture(TextureLoader.java:24)

    The image I'm trying to use is 128x128. When I print

        System.out.println(GL11.glGetInteger(GL11.GL_MAX_TEXTURE_SIZE));

    I get 32. 32?! My graphics card is an AMD Radeon HD 7970M with 2048 MB of GDDR5 RAM, I can run all the latest games in 1080p at 60 FPS with no problem, and those textures sure as hell don't look like they are 32x32 pixels to me! How can I fix this?

    Edit: Here's the (chaotic) code I use to init OpenGL:

        Display.setDisplayMode(new DisplayMode(500, 500));
        Display.create();
        if (!GLContext.getCapabilities().OpenGL11) {
            throw new Exception("OpenGL 1.1 not supported.");
        }
        Display.setTitle("Game");
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        GLU.gluPerspective(45, 1, 0.1f, 5000);
        Mouse.setGrabbed(true);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glEnable(GL_TEXTURE_2D);
        glClearColor(0, 0, 0, 0);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);
        glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_BLEND);
        glEnable(GL_POINT_SMOOTH);
        glEnable(GL_LINE_SMOOTH);
        glEnable(GL_POLYGON_SMOOTH);
        glEnable(GL_POLYGON_OFFSET_FILL);
        glShadeModel(GL_SMOOTH);

    Display is an LWJGL class; it creates the OpenGL context and the window. Anyway, I don't think there's anything in the init code that can help me, but you never know...

    Read the article

  • Sony PSM SDK's 2D game engine

    - by Notbad
    I started with the Sony PSM SDK this week. I'm interested in creating a little 2D game and have been reading around the web about a so-called "2D game engine" integrated into the SDK. Some of the information I read suggested it was added in January 2012, but I have been going through the documentation and haven't been able to find any reference to it. Does anybody know if they finally introduced the 2D game engine for the PSM SDK? Thanks in advance.

    Read the article

  • Level editor event system, how to translate event to game action

    - by Martino Wullems
    Hello, I've been busy trying to create a level editor for a tile-based game I'm working on. It's all going pretty well; the only thing I'm having trouble with is creating a simple event system. Let's say the player steps on a particular tile that had the action "teleport" assigned to it in the editor. The teleport string is saved in the tile object as a variable. When creating the tile grid, an ActionManager class scans the action variable and assigns actions based on it:

        public static class ActionManager {
            public static function ParseTileAction(tile:Tile) {
                switch (tile.action) {
                    case "TELEPORT":
                        // assign action here
                        break;
                }
            }
        }

    Now this is a collision event, so I guess I should also provide an object that collides with the tile. But what if it would have to count for collisions with all objects in the world? Also, checking for collisions in the ActionManager class doesn't seem very efficient. Am I even on the right track here? I'm new to game design, so I could be completely off track. Any tips on how handling and creating events through an editor is usually done would be great. Thanks in advance.
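    One way to keep the editor data as plain strings while keeping collision checks out of the action manager is to register a handler per action name and only call the registry when the collision system reports that something touched the tile. A rough sketch of that idea in C# (all names are illustrative, not from the original project):

        using System;
        using System.Collections.Generic;

        // Maps the action strings stored by the level editor to game-side handlers.
        static class ActionRegistry
        {
            // Each handler receives the tile's payload and the object that hit the tile.
            static readonly Dictionary<string, Action<string, object>> handlers =
                new Dictionary<string, Action<string, object>>
                {
                    { "TELEPORT", (payload, collider) => Console.WriteLine("teleport " + collider + " to " + payload) },
                    { "DAMAGE",   (payload, collider) => Console.WriteLine("hurt " + collider + " by " + payload) },
                };

            public static void Trigger(string action, string payload, object collider)
            {
                Action<string, object> handler;
                if (handlers.TryGetValue(action, out handler))
                    handler(payload, collider);
            }
        }

    The collision detection itself stays in the physics or tile-stepping code; when it sees an object enter a tile it simply calls ActionRegistry.Trigger(tile.action, tile.payload, theObject), so the manager never has to scan the world for collisions.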

    Read the article

  • How to Generate Spritesheet from a 'problematic' animated Symbol in Flash Pro CS6?

    - by Arthur Wulf White
    In the new Flash Pro CS6 there is an option to generate a spritesheet from a symbol. I used these tutorials:

    http://www.adobe.com/devnet/flash/articles/using-sprite-sheet-generator.html
    http://tv.adobe.com/watch/cs6-creative-cloud-feature-tour-for-web/generating-sprite-sheets-using-flash-professional-cs6/

    And it works really well! An artist I'm working with created a bunch of assets for a game. One of them is a walking person as seen from a top-down view. You can find the .fla here: https://docs.google.com/folder/d/0B3L2bumwc4onRGhLcGNId1p2Szg/edit (if this does not work, let me know; it is the first time I have used Google Drive to share files).

    1. When I press Ctrl+Enter I can see it is moving, but when I look for the animation I do not seem to find it. When I select to create a spritesheet, Flash suggests creating a spritesheet with one frame in the base pose and no other (animation) frames. What is causing this and how do I correct it?

    2. I want to convert it to a sprite sheet for 32 angles of movement. Is there any magical, easy way to get this done? Is there a workaround that does the same thing without using Flash CS6?

    Read the article

  • How do people get around the Carmack's Reverse patent?

    - by Rei Miyasaka
    Apparently, Creative has a patent on Carmack's Reverse, and they successfully forced Id to modify their techniques for the source drop, as well as to include EAX in Doom 3. But Carmack's Reverse is discussed quite often and apparently it's a good choice for deferred shading, so it's presumably used in a lot of other high-budget productions too. Even though it's unlikely that Creative would go after smaller companies, I'm wondering how the bigger studios get around this problem. Do they just cross their fingers and hope Creative doesn't troll them, or do they just not use Carmack's Reverse at all?

    Read the article

  • When "W" is held, the character moves forward, but when "W" and "A" are held, movement completely stops

    - by Vlad1k
    I am making a 2D game, and when I hold "W" the player goes forward, but when I hold both "W" and "A" the movement stops completely, when I want it to keep going forward while shifting to the left. Here is my script:

        var speed = 4;

        function Update() {
            // Make the character walk forward if "w" is being held
            if (Input.GetKey("w")) {
                rigidbody.velocity = transform.forward * speed;
            }
            // Stop the movement if "w" is not being held
            if (Input.GetKeyUp("w")) {
                rigidbody.velocity = transform.forward * 0;
            }
            // Make the character walk backward if "s" is being held
            if (Input.GetKey("s")) {
                rigidbody.velocity = transform.forward * -speed;
            }
            // Stop the movement if "s" is not being held
            if (Input.GetKeyUp("s")) {
                rigidbody.velocity = transform.forward * 0;
            }
            // Make the character walk left if "a" is being held
            if (Input.GetKey("a")) {
                rigidbody.velocity = transform.right * -speed;
            }
            // Stop the movement if "a" is not being held
            if (Input.GetKeyUp("a")) {
                rigidbody.velocity = transform.right * 0;
            }
            // Make the character walk right if "d" is being held
            if (Input.GetKey("d")) {
                rigidbody.velocity = transform.right * speed;
            }
            // Stop the movement if "d" is not being held
            if (Input.GetKeyUp("d")) {
                rigidbody.velocity = transform.right * 0;
            }
        }

    Please help me make the code better, I am new!

    Edit: Here is a video showing the problem: http://www.screenr.com/3oxH And here is the newest code:

        var speed = 4f;

        function Update() {
            if (Input.GetKey("w")) {
                rigidbody.velocity = transform.forward * speed;
            } else if (Input.GetKey("s")) {
                rigidbody.velocity = transform.forward * -speed;
            } else if (Input.GetKey("a")) {
                rigidbody.velocity = transform.right * -speed;
            } else if (Input.GetKey("d")) {
                rigidbody.velocity = transform.right * speed;
            }

            if (Input.GetKeyUp("w")) {
                rigidbody.velocity = transform.forward * 0;
            }
            if (Input.GetKeyUp("s")) {
                rigidbody.velocity = transform.forward * 0;
            }
            if (Input.GetKeyUp("a")) {
                rigidbody.velocity = transform.right * 0;
            }
            if (Input.GetKey("d")) {
                rigidbody.velocity = transform.right * speed;
            }
            if (Input.GetKeyUp("d")) {
                rigidbody.velocity = transform.right * 0;
            }
        }
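    The usual fix is to build a single direction vector from all the keys currently held and assign the velocity once per frame, instead of letting each key's branch overwrite (or zero) the velocity set by another key. A minimal sketch in C# (the original is UnityScript, but the logic is identical), assuming the script sits on the same GameObject as the Rigidbody:

        using UnityEngine;

        public class CombinedMovement : MonoBehaviour
        {
            public float speed = 4f;
            private Rigidbody body;

            void Start()
            {
                body = GetComponent<Rigidbody>();
            }

            void Update()
            {
                Vector3 dir = Vector3.zero;
                if (Input.GetKey("w")) dir += transform.forward;
                if (Input.GetKey("s")) dir -= transform.forward;
                if (Input.GetKey("d")) dir += transform.right;
                if (Input.GetKey("a")) dir -= transform.right;

                // One assignment per frame: normalizing keeps diagonal movement from
                // being faster, and a zero vector stops the character when no key is held.
                body.velocity = dir.normalized * speed;
            }
        }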

    Read the article

  • Facilitating XNA game deployments for non-programmers

    - by Sal
    I'm currently working on an RPG, using the RPG starter kit from XNA as a base (http://xbox.create.msdn.com/en-US/education/catalog/sample/roleplaying_game). I'm working with a small team (two designers and one music/sound artist), but I'm the only programmer. Currently, we're working with the following (unsustainable) system: the team creates new pics/sounds to add to the game, or they modify existing sounds/pics, then they commit their work to a repository, where we keep a current build of everything (code, images, sound, etc.). Every day or so, I create a new installer reflecting the new images, code changes, and sound, and everyone installs it. My issue is this: I want to create a system where the rest of the team can replace the combat sounds, for instance, and immediately see the changes, without having to wait for me to build. The way XNA is set up, if I publish, it encodes all of the image and sound files, so the team can't "hot swap". I can set up Microsoft Visual Studio on everyone's machine and show them how to quickly publish, but I wanted to know if there was a simpler way of doing this. Has anyone come up against this when working with teams using XNA?
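    One option that avoids rebuilding for every art change is to load loose files at runtime instead of compiled .xnb content, so artists can overwrite a PNG or WAV in a shared folder and just restart the game. A hedged sketch using XNA 4.0's Texture2D.FromStream (the folder layout and names here are made up):

        using System.IO;
        using Microsoft.Xna.Framework.Graphics;

        static class HotAssets
        {
            // Loads a texture from a loose image file next to the executable
            // instead of going through the compiled content pipeline.
            public static Texture2D LoadTexture(GraphicsDevice device, string relativePath)
            {
                using (var stream = File.OpenRead(relativePath))
                {
                    return Texture2D.FromStream(device, stream);
                }
            }
        }

    For example, Texture2D portrait = HotAssets.LoadTexture(GraphicsDevice, "Assets/portrait.png"); sounds can be handled the same way with SoundEffect.FromStream, although that only accepts PCM .wav files, so anything compressed still needs the pipeline.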

    Read the article

  • How do I create water like in New Super Mario Bros?

    - by user1103457
    I assume the water in New Super Mario Bros works the same as in the first part of this tutorial: http://gamedev.tutsplus.com/tutorials/implementation/make-a-splash-with-2d-water-effects/ But in New Super Mario Bros the water also has constant waves on the surface, and the splashes look very different. Another difference is that in the tutorial, if you create a splash, it first creates a deep "hole" in the water at the origin of the splash; in New Super Mario Bros this hole is absent or much smaller. When I refer to the splashes in New Super Mario Bros, I mean the splashes the player creates when jumping into and out of the water. For reference you could use this video: http://www.ign.com/videos/2012/11/17/new-super-mario-bros-u-3-star-coin-walkthrough-sparkling-waters-1-waterspout-beach Just after 00:50, when the camera isn't moving, you can get a good look at the water and the constant waves; there are also some good examples of the splashes during that time. How do they create the constant waves and the splashes? I am programming in XNA. (I have tried this myself but couldn't really get it all to work well together.) And as bonus questions: how do they create the light spots just under the surface of the waves, and how do they texture the deeper parts of the water? This is the first time I have tried to create water like this.
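    The constant surface waves in that style of game are commonly faked by adding a couple of scrolling sine waves to the height of each surface point, layered on top of the spring simulation from the linked tutorial (which still handles the splash ripples). A rough sketch in C#, with all amplitudes and frequencies made up:

        using System;

        static class WaterSurface
        {
            // Vertical offset of the idle waves at horizontal position x and time t,
            // built from two sine waves scrolling in opposite directions.
            public static float WaveOffset(float x, float t)
            {
                return 3.0f * (float)Math.Sin(x * 0.05f + t * 1.5f)
                     + 1.5f * (float)Math.Sin(x * 0.11f - t * 2.3f);
            }
        }

    Each frame you would add WaveOffset to the spring-simulated height of every surface column before building the water mesh, so the idle waves and the splash waves combine; making the splash springs stiffer and more damped than in the tutorial also reduces the deep "hole" at the splash origin.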

    Read the article

  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points: Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light? Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
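    On the lighting question, one common setup is to keep a standard LDR colour picker for the light's hue plus a separate intensity multiplier, with the multiplication done in linear space after removing the picker's gamma. A small sketch of that conversion in C#, treating the multiplier as a number of stops (this is one possible convention, not a standard):

        using System;

        static class HdrColor
        {
            // Converts an 8-bit picker colour plus a brightness in "stops"
            // (powers of two) into a linear-space HDR RGB value.
            // Gamma 2.2 is used as an approximation of the sRGB curve.
            public static float[] FromPicker(byte r, byte g, byte b, float stops)
            {
                float scale = (float)Math.Pow(2.0, stops);
                return new[]
                {
                    (float)Math.Pow(r / 255.0, 2.2) * scale,
                    (float)Math.Pow(g / 255.0, 2.2) * scale,
                    (float)Math.Pow(b / 255.0, 2.2) * scale,
                };
            }
        }

    Emissive textures can follow the same pattern: paint them as ordinary 8-bit maps with a per-material intensity, and reserve true float textures for cases where the dynamic range inside a single image really matters, such as sky domes.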

    Read the article

  • Logic behind a bejeweled-like game

    - by Joe
    In a prototype I am doing, there is a minigame similar to Bejeweled. Using a grid that is a 2D array (int[,]), how can I go about knowing when the user has formed a match? I only care about horizontal and vertical matches. Off the top of my head I was thinking I would just look in each direction. Something like:

        int item = grid[x, y];
        if (grid[x - 1, y] == item)
        {
            int step = x;
            int matches = 2;
            while (grid[step - 1, y] == item)
            {
                step++;
                matches++;
            }
            if (matches > 2)
            {
                // remove all matching items
            }
        }
        else if (grid[x + 1, y] == item)
        {
            // ....
        }
        else if (grid[x, y - 1] == item)
        {
            // ...
        }
        else if (grid[x, y + 1] == item)
        {
            // ...
        }

    It seems like there should be a better way. Is there?
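    One way that avoids chaining direction checks is to count, for the cell in question, how far the run of identical items extends in both directions along each axis; a run of three or more in either axis is a match. A hedged C# sketch (bounds assume the grid is indexed [width, height]):

        static class MatchFinder
        {
            // True if the item at (x, y) belongs to a horizontal or vertical
            // run of at least three identical items.
            public static bool HasMatchAt(int[,] grid, int x, int y)
            {
                int item = grid[x, y];
                int width = grid.GetLength(0);
                int height = grid.GetLength(1);

                int horizontal = 1;
                for (int i = x - 1; i >= 0 && grid[i, y] == item; i--) horizontal++;
                for (int i = x + 1; i < width && grid[i, y] == item; i++) horizontal++;

                int vertical = 1;
                for (int j = y - 1; j >= 0 && grid[x, j] == item; j--) vertical++;
                for (int j = y + 1; j < height && grid[x, j] == item; j++) vertical++;

                return horizontal >= 3 || vertical >= 3;
            }
        }

    Since only the one or two cells involved in a swap can create a new match, calling this for just those cells after each move keeps the check cheap.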

    Read the article

  • Game engine with a real-time renderer

    - by Maik Klein
    I have been studying computer graphics for three semesters and we just started with OpenGL. I really enjoy it and want to create my own little engine for learning purposes. I have already read tons of different forum posts and have looked at the following engines: Panda3D, Ogre3D, NeoAxis, Irrlicht and Horde3D (graphics only). Now, I don't want to use something like Unity or CryEngine because I want to start more low-level. Which of those engines is suited for real-time rendering, something that CryEngine offers (no baked lightmaps)? Or which at least gives me the option to add a real-time renderer?

    Read the article

  • What are some great papers/publications relating to game programming?

    - by Archagon
    What are some of your favorite papers and publications that closely relate to game programming? I'm particularly looking for examples that are well-written and illustrated, and/or have had a profound influence on the industry. (Here's one example: in this GDC talk, Bungie's David Aldridge mentions that a paper called "The TRIBES Engine Networking Model" was the starting point for Halo's network code.)

    Read the article

  • How do I draw a scene with 2 nested frames?

    - by Guido Granobles
    I have been trying for a long time to figure this out: I have loaded a model from a DirectX file (I am using OpenGL and Java). The model has a hierarchical system of nested reference frames (there are no bones). There are just two frames; one of them is called x3ds_Torso and it has a child frame called x3ds_Arm_01. Each of them has a mesh. The thing is that I can't draw the arm connected to the body. Sometimes the body is in the center of the screen and the arm is at the top; sometimes they are both in the center. I know that I have to multiply the transformation matrix of every frame by its parent frame's matrix, starting from the top and working down, and after that I have to multiply every vertex of every mesh by its final transformation matrix. So I have this:

        public void calculeFinalMatrixPosition(Bone boneParent, Bone bone) {
            System.out.println("-->" + bone.name);
            if (boneParent != null) {
                bone.matrixCombined = bone.matrixTransform.multiply(boneParent.matrixCombined);
            } else {
                bone.matrixCombined = bone.matrixTransform;
            }
            bone.matrixFinal = bone.matrixCombined;
            for (Bone childBone : bone.boneChilds) {
                calculeFinalMatrixPosition(bone, childBone);
            }
        }

    Then I have to multiply every vertex of the mesh:

        public void transformVertex(Bone bone) {
            for (Iterator<Mesh> iterator = meshes.iterator(); iterator.hasNext();) {
                Mesh mesh = iterator.next();
                if (mesh.boneName.equals(bone.name)) {
                    float[] vertex = new float[4];
                    double[] newVertex = new double[3];
                    if (mesh.skinnedVertexBuffer == null) {
                        mesh.skinnedVertexBuffer = new FloatDataBuffer(mesh.numVertices, 3);
                    }
                    mesh.vertexBuffer.buffer.rewind();
                    while (mesh.vertexBuffer.buffer.hasRemaining()) {
                        vertex[0] = mesh.vertexBuffer.buffer.get();
                        vertex[1] = mesh.vertexBuffer.buffer.get();
                        vertex[2] = mesh.vertexBuffer.buffer.get();
                        vertex[3] = 1;
                        newVertex = bone.matrixFinal.transpose().multiply(vertex);
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[0]));
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[1]));
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[2]));
                    }
                    mesh.vertexBuffer = new FloatDataBuffer(mesh.numVertices, 3);
                    mesh.skinnedVertexBuffer.buffer.rewind();
                    mesh.vertexBuffer.buffer.put(mesh.skinnedVertexBuffer.buffer);
                }
            }
            for (Bone childBone : bone.boneChilds) {
                transformVertex(childBone);
            }
        }

    I know this is not the most efficient code, but for now I just want to understand exactly how a hierarchical model is organized and how I can draw it on the screen. Thanks in advance for your help.

    Read the article

  • Multiple AudioListeners in a scene

    - by Kevin Jensen Petersen
    (This is Unity.) I'm trying to make an FPS game over networking, and it works fine. But now that I'm trying to implement sound, it won't work. My guess would be to add an AudioListener to the prefab that gets instantiated whenever a player connects to the server. However, the problem with this is that each player's AudioListener gets switched with the other player(s)', so the AudioSource won't play at the player but at someone else in the game. Any suggestions?
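    Unity only allows one active AudioListener per scene, so the usual approach is to leave the listener on the player prefab but disable it on every instance the local machine does not own, right after instantiation. A sketch for the legacy built-in networking this question appears to use (the component layout on the prefab is an assumption):

        using UnityEngine;

        // Attach to the player prefab: the locally-owned copy keeps its
        // AudioListener (and camera) enabled, remote copies get them turned off.
        public class LocalPlayerSetup : MonoBehaviour
        {
            void Start()
            {
                bool isLocal = networkView.isMine;   // legacy Unity networking

                AudioListener listener = GetComponent<AudioListener>();
                if (listener != null) listener.enabled = isLocal;

                Camera cam = GetComponentInChildren<Camera>();
                if (cam != null) cam.enabled = isLocal;
            }
        }

    The AudioSources on the players stay enabled everywhere; only the single listener belonging to the local player remains active, so sounds are heard from the correct point of view.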

    Read the article

  • Trouble with UV Mapping Blender => Unity 3

    - by Lea Hayes
    For some reason I am getting nasty grey edges around the edges of rendered 3D models that are not present in Blender. I seem to be able to solve the problem by reducing the size of the UV coordinates within the part of the texture that is to be mapped, but this means that I am wasting valuable texture space and losing accuracy in the drawn UV maps. Could I be doing something wrong, perhaps a setting in Unity that needs changing? I have watched countless tutorials which demonstrate Blender's default generated UV coordinates with "Texture Paint", perfectly aligned in Unity. Here is an illustration of the problem. Left: approximately 15 pixels of margin on each side of the UV coordinates. Right: approximately 3 pixels of margin on each side of the UV coordinates. Note: the texture image resolution is 1024x1024.

    Read the article

  • How do I identify mouse clicks versus mouse down in games?

    - by Tristan
    What is the most common way of handling mouse clicks in games, given that all you have in the way of detecting input from the mouse is whether a button is up or down? I currently just rely on mouse down being a click, but this simple approach limits me greatly in what I want to achieve. For example, I have some code that should only be run once on a mouse click, but using mouse down as a click can cause the code to run more than once, depending on how long the button is held down. So I need to do it on a click! But what is the best way to handle a click? Is a click when the mouse goes from up to down, or from down to up? Or is it a click if the button was down for less than X frames/milliseconds, and if so, is it considered a mouse down and a click if it's down for X frames/milliseconds, or a click and then a mouse down? I can see that each of these approaches has its uses, but which is the most common in games? And more specifically, which is the most common in RTS games?
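    The most common pattern is edge detection: keep the previous frame's button state and treat the transition from released to pressed (or pressed to released, if you want click-on-release) as the click, so the action fires exactly once per press. A small XNA-style C# sketch, since that framework appears elsewhere on this page; the same idea applies to any polling API:

        using Microsoft.Xna.Framework.Input;

        class MouseClickDetector
        {
            private ButtonState previous = ButtonState.Released;

            // Call once per update; true only on the frame the left button goes down.
            public bool WasClicked()
            {
                ButtonState current = Mouse.GetState().LeftButton;
                bool clicked = current == ButtonState.Pressed && previous == ButtonState.Released;
                previous = current;
                return clicked;
            }
        }

    RTS-style interfaces typically act on the release rather than the press, because press-then-drag is reserved for box selection and releasing outside the original target lets the player cancel.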

    Read the article

  • Resolution Independence in libGDX

    - by ashes999
    How do I make my libGDX game resolution/density independent? Is there a way to specify image sizes as "absolute" regardless of the underlying density? I'm making a very simple kids game; just a bunch of sprites displayed on-screen, and some text for menus (options menu primarily). What I want to know is: how do I make my sprites/fonts resolution independent? (I have wrapped them in my own classes to make things easier.) Since it's a simple kids game, I don't need to worry about the "playable area" of the game; I want to use as much of the screen space as possible. What I'm doing right now, which seems super incorrect, is to simply create images suitable for large resolutions, and then scale down (or rarely, up) to fit the screen size. This seems to work okay (in the desktop version), even with linear mapping on my textures, but the smaller resolutions look ugly. Also, this seems to fly in the face of Android's "device independent pixels" (DPs). Or maybe I'm missing something and libGDX already takes care of this somehow? What's the best way to tackle this? I found this link; is this a good way of solving the problem?: http://www.dandeliongamestudio.com/2011/09/12/android-fragmentation-density-independent-pixel-dip/ It mentions how to control the images, but it doesn't mention how to specify font/image sizes regardless of density.

    Read the article

  • Loading a new instance of a class through XML not working quite right

    - by Thegluestickman
    I'm having trouble with XML and XNA. I want to be able to load weapon settings through XML, to make my weapons easier to create and to have less code in the actual project file. So I started out making a basic XML document, something to just assign variables with. But no matter what I changed, it gave me a new error every time. The code below gives me an "XML element 'Tag' not found" error; when I added that, it started to say the variables weren't found. What I also wanted to do in the XML file was load a texture. So I created a static class to hold my texture values, then in the Texture tag of my XML document I set it to that instance. I think that's where the problems are occurring, because that's where the "XML element 'Tag' not found" error is pointing me.

    My XML document:

        <XnaContent>
          <Asset Type="ConversationEngine.Weapon">
            <weaponStrength>0</weaponStrength>
            <damageModifiers>0</damageModifiers>
            <speed>0</speed>
            <magicDefense>0</magicDefense>
            <description>0</description>
            <identifier>0</identifier>
            <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture>
          </Asset>
        </XnaContent>

    My class to load the weapon XML:

        public static class LoadWeaponXML
        {
            static Weapon Weapons;

            public static Weapon WeaponLoad(ContentManager content, int id)
            {
                Weapons = content.Load<Weapon>(@"Weapons/" + id);
                return Weapons;
            }
        }

        public static class LoadWeaponTextures
        {
            public static Texture2D ironSword;

            public static void TextureLoad(ContentManager content)
            {
                ironSword = content.Load<Texture2D>("Sword");
            }
        }

    I'm not entirely sure if you can load textures through XML, but any help would be greatly appreciated.
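    One thing worth checking: the XNA content XML can only describe plain data, so a Texture2D (or a reference to a static field like LoadWeaponTextures.ironSword) cannot come straight out of the XML. A common workaround is to store the texture's asset name as a string and resolve it after the weapon has been deserialized. A hedged sketch of such a data class (field names here are illustrative, not the asker's exact ones):

        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        public class WeaponData
        {
            public int WeaponStrength;
            public int Speed;
            public string Description;
            public string TextureAsset;          // e.g. "Sword" in the XML

            [ContentSerializerIgnore]            // never read from or written to XML
            public Texture2D Texture;

            public void LoadTexture(ContentManager content)
            {
                Texture = content.Load<Texture2D>(TextureAsset);
            }
        }

    The XML then lists <TextureAsset>Sword</TextureAsset> alongside the numeric fields, and the game calls LoadTexture once after content.Load<WeaponData>(...) returns.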

    Read the article

  • Creating a bounding box list

    - by Christian Frantz
    I'm trying to create a list of bounding boxes for each cube drawn, so I can use the boxes to intersect with a ray that my mouse position is casting, but I have no idea how. I've created a list that stores the boxes, but how am I getting the values from each box?

        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                cubes.Add(new Vector3(x, map[x, z], z), Matrix.Identity, grass);
                boxList.Add(something here);
            }
        }

        public Cube(GraphicsDevice graphicsDevice)
        {
            device = graphicsDevice;

            var vertices = new List<VertexPositionTexture>();
            BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1));
            BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1));
            BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0));
            BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0));
            BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1));
            BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0));

            cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly);
            cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        }

    There aren't any clearly defined variables for the bounds of each cube created, so where do I create the bounding box from?
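    Since each cube is a unit cube whose minimum corner is the position passed to cubes.Add, the bounding box can be built from that same position: XNA's BoundingBox is just a min corner and a max corner. A small sketch (the helper name is made up):

        using Microsoft.Xna.Framework;

        static class CubeBounds
        {
            // Axis-aligned box for a unit cube whose minimum corner sits at 'corner'.
            public static BoundingBox BoxForCube(Vector3 corner)
            {
                return new BoundingBox(corner, corner + Vector3.One);
            }
        }

    Inside the loop that would be boxList.Add(CubeBounds.BoxForCube(new Vector3(x, map[x, z], z))), and the picking test is then float? hit = ray.Intersects(boxList[i]), which returns the distance along the ray or null when there is no hit.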

    Read the article

  • *DX11, HLSL* - Colour as 4 floats or one UINT

    - by Paul
    With the DX11 pipeline, would it be much quicker for the vertex buffer to pass one single UINT with one byte per channel to the input assembler, as opposed to four floats? The vertex shader would then convert the four bytes to four floats, which I guess is the required colour format for the pipeline. In this instance, colour accuracy isn't an issue. The vertex buffer would need to be updated many times per frame, so using a single UINT and saving 12 bytes for every vertex could well be worth it: quicker uploads to VRAM and also less memory used. But the cost is the extra shader work for every vertex to convert each 8 bits of the input UINT into a float. Does anyone have an idea whether it might be worth doing? Or is it possible for the pipeline to be set to just internally use a four-byte colour format? The swap chain buffer has been initialised as DXGI_FORMAT_R8G8B8A8_UNORM, so ultimately that's how the colour will be written. Thanks!

    Read the article

  • How to fix issue with my 3D first person camera?

    - by dxCUDA
    My camera moves and rotates, but relative to the world's origin instead of the player's. I am having difficulty rotating the camera and then translating it in the direction relative to the camera's facing angle. I have been able to translate the camera and rotate relative to the player's origin, but not then rotate and translate in the direction relative to the camera's facing direction. My goal is to have a standard FPS-style camera.

        float yaw, pitch, roll;
        D3DXMATRIX rotationMatrix;
        D3DXVECTOR3 Direction;
        D3DXMATRIX matRotAxis, matRotZ;
        D3DXVECTOR3 RotAxis;

        // Set the yaw (Y axis), pitch (X axis), and roll (Z axis) rotations in radians.
        pitch = m_rotationX * 0.0174532925f;
        yaw = m_rotationY * 0.0174532925f;
        roll = m_rotationZ * 0.0174532925f;

        up = D3DXVECTOR3(0.0f, 1.0f, 0.0f); // Create the up vector

        // Build eye, lookat and rotation vectors from player input data
        eye = D3DXVECTOR3(m_fCameraX, m_fCameraY, m_fCameraZ);
        lookat = D3DXVECTOR3(m_fLookatX, m_fLookatY, m_fLookatZ);
        rotation = D3DXVECTOR3(m_rotationX, m_rotationY, m_rotationZ);

        D3DXVECTOR3 camera[3] = { eye,      // Eye
                                  lookat,   // LookAt
                                  up };     // Up

        RotAxis.x = pitch;
        RotAxis.y = yaw;
        RotAxis.z = roll;

        D3DXVec3Normalize(&Direction, &(camera[1] - camera[0])); // Direction vector
        D3DXVec3Cross(&RotAxis, &Direction, &camera[2]);         // Strafe vector
        D3DXVec3Normalize(&RotAxis, &RotAxis);

        // Create the rotation matrix from the yaw, pitch, and roll values.
        D3DXMatrixRotationYawPitchRoll(&matRotAxis, pitch, yaw, roll);

        // Rotate direction
        D3DXVec3TransformCoord(&Direction, &Direction, &matRotAxis);

        // Translate up vector
        D3DXVec3TransformCoord(&camera[2], &camera[2], &matRotAxis);

        // Translate in the direction of player rotation
        D3DXVec3TransformCoord(&camera[0], &camera[0], &matRotAxis);

        camera[1] = Direction + camera[0]; // Avoid gimbal lock

        D3DXMatrixLookAtLH(&in_viewMatrix, &camera[0], &camera[1], &camera[2]);

    Read the article

  • Efficient visualization of a large voxelized volume

    - by Alejandro Piad
    Let's consider a large voxelized volume stored in an octree or any other convenient structure. This volume represents, for instance, a landscape, where each block is either empty (air) or has a specific material that will later be used to apply a texture. Voxels that are next to each other represent connected sections of the surface. What I need is an algorithm to generate a mesh from these voxels that represents the volume, with the following characteristics: all the "holes" in the voxelized volume are correct; all the connections are correct, i.e. seamless; and the surface appears smooth. In a broad sense, I want to somehow preserve the surface topology, meaning that connected sections remain connected in the resulting mesh and that the surface has a curvature that responds to the voxel topology. Imagine trying to render the Minecraft world but getting the mountain ladders to be smooth instead of blocky.

    Read the article

  • How do I get the compression on a specific dynamic body

    - by Mike JM
    Sorry, I could not find any tag that would suit my question. Let me first show you the image and then describe what I want to do. I'm using Box2D. As you can see, there are three dynamic bodies connected to each other (think of it as a table seen from the front). LEG1 and LEG2 are connected to the static body (it's the ground body). Another dynamic body is falling onto the table. I need to get the compression in LEG1 and LEG2 separately. Joints have a GetReactionForce() function which returns a b2Vec, which in turn has Length() and LengthSqd() functions. This will give the total sum of the forces in any given joint. But what I need is the forces in the individual bodies that are connected with joints. Once you connect several bodies with a single joint, it will again show the sum of forces, which is not useful. Here's the case I'm talking about:

    Read the article

  • How to create contracts in Python

    - by recluze
    I am just moving to Python from Java and have a question about the way the two do things. My question relates to contracts. An example: an application defines an interface that all plugins must implement, and then the main application can call it. In Java:

        public interface IPlugin {
            public Image modify(Image img);
        }

        public class MainApp {
            public main_app_logic() {
                String pluginName = "com.example.myplugin";
                IPlugin x = (IPlugin) Class.forName(pluginName);
                x.modify(someimg);
            }
        }

    The plugin implements the interface, and we use reflection in the main app to call it. That way, there's a contract between the main app and the plugin that both can refer to. How does one go about doing something similar in Python? And also, which approach is better? P.S. I'm not posting this on SO because I'm much more concerned with the philosophy behind the two approaches.

    Read the article
