Search Results

Search found 17924 results on 717 pages for 'game center'.


  • Frame Interpolation issues for skeletal animation

    - by sebby_man
    I'm trying to animate in-between keyframes for skeletal animation, but I'm having some issues. Each joint is represented by a quaternion, and there is no translation component. When I try to slerp between the orientations at the two keyframes, I get a very wacky animation. I know my skinning equation is right, because the animation is perfectly fine when it sits directly on a keyframe rather than in between two. I'm using glm's built-in mix function to do the slerp, so I don't think there is a problem with the slerp implementation itself. There's really only one thing left that could be wrong here: I must not be in the correct space to do the slerp. Right now the orientations are in joint-local space. Do I have to be in world space? In some other space along the way? I have the bind-pose matrix and the world-space transformation matrix at my disposal if those are needed.
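
    One thing worth ruling out before changing spaces (interpolating the local-space orientation of each joint is the standard approach): in GLM, glm::mix on quaternions is the oriented interpolation, while glm::slerp is the variant that forces the short arc. If two keyframe quaternions sit on opposite hemispheres (dot product < 0), the oriented version swings the long way around, which looks exactly like wacky flailing between keys. A minimal sketch of the fix, assuming GLM's GTC quaternion extension:

        #include <glm/glm.hpp>
        #include <glm/gtc/quaternion.hpp>

        // Interpolate one joint between two keys, staying in joint-local space.
        // q and -q encode the same rotation (quaternion double cover); flipping
        // the sign when the dot product is negative forces the short arc.
        glm::quat interpolateJoint(const glm::quat& a, const glm::quat& b, float t)
        {
            glm::quat end = (glm::dot(a, b) < 0.0f) ? -b : b;
            return glm::normalize(glm::slerp(a, end, t));
        }

    The interpolated local rotations are then composed parent-to-child as usual before being combined with the inverse bind-pose matrices for skinning.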


  • What is the best Broadphase Interface for moving spheres?

    - by Molmasepic
    As of now I'm working on optimizing the physics and collision performance, and I'm seeing slowdowns on my other computers compared to my main one. I have well over 3000 btSphereShape rigid bodies, and 2/3 of them do not move at all, but I'm noticing (from the profile below) that collision is taking a fair amount of time. I have only listed the functions using over 1% of the time, out of the many functions in use. Each sample counts as 0.01 seconds.

      %   cumulative   self              self     total
     time   seconds   seconds    calls  ms/call  ms/call  name
    10.09      0.65      0.65                             SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float)
     7.61      1.14      0.49                             btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*)
     5.59      1.50      0.36                             btConvexTriangleCallback::processTriangle(btVector3*, int, int)
     5.43      1.85      0.35                             btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const
     4.97      2.17      0.32                             btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int)
     4.19      2.44      0.27                             btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&)
     4.04      2.70      0.26                             btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&)
     3.73      2.94      0.24                             Ogre::OctreeSceneManager::walkOctree(Ogre::OctreeCamera*, Ogre::RenderQueue*, Ogre::Octree*, Ogre::VisibleObjectsBoundsInfo*, bool, bool)
     3.42      3.16      0.22                             btTriangleShape::getVertex(int, btVector3&) const
     2.48      3.32      0.16                             Ogre::Frustum::isVisible(Ogre::AxisAlignedBox const&, Ogre::FrustumPlane*) const
     2.33      3.47      0.15  1246357     0.00     0.00  Gorilla::Layer::setVisible(bool)
     2.33      3.62      0.15                             SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool)
     1.86      3.74      0.12                             btCollisionDispatcher::findAlgorithm(btCollisionObject*, btCollisionObject*, btPersistentManifold*)
     1.86      3.86      0.12                             btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&)
     1.71      3.97      0.11                             btTriangleShape::getEdge(int, btVector3&, btVector3&) const
     1.55      4.07      0.10                             _Unwind_SjLj_Register
     1.55      4.17      0.10                             _Unwind_SjLj_Unregister
     1.55      4.27      0.10                             Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*)
     1.40      4.36      0.09                             btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float)
     1.40      4.45      0.09                             btSequentialImpulseConstraintSolver::setupFrictionConstraint(btSolverConstraint&, btVector3 const&, btRigidBody*, btRigidBody*, btManifoldPoint&, btVector3 const&, btVector3 const&, btCollisionObject*, btCollisionObject*, float, float, float)
     1.24      4.53      0.08                             btSequentialImpulseConstraintSolver::convertContact(btPersistentManifold*, btContactSolverInfo const&)
     1.09      4.60      0.07   408760     0.00     0.00  Living::MapHide()
     1.09      4.67      0.07                             btSphereTriangleCollisionAlgorithm::~btSphereTriangleCollisionAlgorithm()
     1.09      4.74      0.07                             inflate_fast

    EDIT: Updated to show the current profile. Another thing: each monster has a certain area that it stays in, and it is only active when a player is in that area.
    I was wondering whether there is a way to deactivate the non-active monsters in Bullet (reactivating them once a player is in the area again), or whether there is a different broadphase interface I should use. The current broadphase is btDbvtBroadphase.

    EDIT: Here is the profile on the other computer (the one above is from my main). Each sample counts as 0.01 seconds; edited the same way as the other profile.

      %   cumulative   self              self     total
     time   seconds   seconds    calls  ms/call  ms/call  name
    12.18      1.19      1.19                             SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float)
     6.76      1.85      0.66                             btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*)
     5.83      2.42      0.57                             btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const
     5.12      2.92      0.50                             btConvexTriangleCallback::processTriangle(btVector3*, int, int)
     4.61      3.37      0.45                             btTriangleShape::getVertex(int, btVector3&) const
     4.09      3.77      0.40                             _Unwind_SjLj_Register
     3.48      4.11      0.34                             btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int)
     2.46      4.35      0.24                             btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&)
     2.15      4.56      0.21                             _Unwind_SjLj_Unregister
     2.15      4.77      0.21                             SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool)
     1.84      4.95      0.18                             btTriangleShape::getEdge(int, btVector3&, btVector3&) const
     1.64      5.11      0.16                             btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&)
     1.54      5.26      0.15                             btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&)
     1.43      5.40      0.14                             Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*)
     1.33      5.53      0.13                             btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float)
     1.13      5.64      0.11                             btRigidBody::predictIntegratedTransform(float, btTransform&)
     1.13      5.75      0.11                             btTriangleIndexVertexArray::getLockedReadOnlyVertexIndexBase(unsigned char const**, int&, PHY_ScalarType&, int&, unsigned char const**, int&, int&, PHY_ScalarType&, int) const
     1.02      5.85      0.10                             btSphereTriangleCollisionAlgorithm::CreateFunc::CreateCollisionAlgorithm(btCollisionAlgorithmConstructionInfo&, btCollisionObject*, btCollisionObject*)
     1.02      5.95      0.10                             btSphereTriangleCollisionAlgorithm::btSphereTriangleCollisionAlgorithm(btPersistentManifold*, btCollisionAlgorithmConstructionInfo const&, btCollisionObject*, btCollisionObject*, bool)
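
    On the deactivation idea: Bullet does support freezing bodies in place without removing them from the world. A minimal sketch, assuming each monster keeps a pointer to its btRigidBody (DISABLE_SIMULATION skips both collision and dynamics work for the body):

        #include <btBulletDynamicsCommon.h>

        // Freeze a monster's rigid body while no player is in its area, and
        // wake it again on re-entry.
        void setMonsterSimulated(btRigidBody* body, bool playerInArea)
        {
            if (playerInArea) {
                body->forceActivationState(ACTIVE_TAG);
                body->activate(true);   // reset the deactivation timer
            } else {
                body->forceActivationState(DISABLE_SIMULATION);
            }
        }

    btDbvtBroadphase is already a good fit for large numbers of mostly idle objects, and both profiles are dominated by sphere-vs-triangle narrowphase work rather than broadphase, so cutting the number of simulated bodies will likely buy more than swapping the broadphase interface.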


  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain with a lot of polygons. To render this terrain I thought of the following techniques:

    - Using a height-map instead of raw meshes: Yes, but I want to create a lot of caves and such, which simply won't work with height-maps.
    - Using voxels: Yes, but I think that would be too much, since I don't even want to support changing terrain.
    - Splitting into multiple chunks and doing some sort of LOD on the mesh: Yes, but how would I do that? Tessellation usually creates more detail, not less.
    - Precomputing the same mesh in lower-poly versions (like Mudbox does) and, depending on the distance, rendering one of those meshes: Graphics memory is limited, and uploading only the visible chunks won't solve that problem, since the traffic would be too high.

    IMO the last one sounds really good, but imagine the following process: upload and render the chunks depending on the current player position (no problem); the player walks straight forward; now we may have to swap one of the low-poly chunks with the high-poly one; so, remove the low-poly chunk and load the high-poly chunk (already too much traffic here, I think). I am not very experienced in graphics programming, and maybe the above process is totally okay, but somehow I think it is too much. And how about the disk space it would require? I think 3 levels would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that matters here.)
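
    For the last option, the distance test itself is cheap; a sketch with placeholder thresholds (the numbers are assumptions, and in practice you would add hysteresis so a chunk doesn't thrash between levels right at a boundary):

        // Pick one of three precomputed LOD meshes per chunk by camera distance.
        int chooseLod(float distanceToCamera)
        {
            if (distanceToCamera <  60.0f) return 0;   // full-resolution mesh
            if (distanceToCamera < 180.0f) return 1;   // medium
            return 2;                                  // coarse
        }

    The traffic stays bounded because uploads happen only on LOD transitions, not per frame: each chunk swap is a one-off buffer upload, and fading or geomorphing between levels hides the pop.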


  • DirectWrite Producing Strange Artifacts?

    - by smoth190
    I've written the basis of my UI system around Direct2D. I like it because it's fast and easy to use (even if I had to do some messy work to get it to cooperate with DirectX 11). However, I notice that with DirectWrite I'm getting strange problems with my text. As you can see, the e is a little screwed up, and the text overall looks a little bumpy. This only happens with certain fonts in certain sizes, and with certain arrangements of letters. This particular example is Verdana at size 16.0. Can I fix this? It's pretty annoying to change all my words and fonts because of this problem.
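
    One plausible culprit, offered as a guess rather than a diagnosis: when Direct2D renders into a surface shared with Direct3D 11, ClearType's subpixel blending against a transparent or non-opaque target is known to produce fringed, lumpy glyph edges at small sizes. Forcing grayscale antialiasing on the render target is a cheap experiment (d2dRenderTarget is a stand-in for your ID2D1RenderTarget):

        // Grayscale AA avoids ClearType's subpixel blending, which misbehaves
        // over transparent render targets; glyphs come out slightly softer but
        // uniformly shaped.
        d2dRenderTarget->SetTextAntialiasMode(D2D1_TEXT_ANTIALIAS_MODE_GRAYSCALE);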


  • How to achieve uniform speed of movement on a bezier curve in cocos 2d?

    - by Andrey Chernukha
    I'm an absolute beginner in cocos2d; actually, I started dealing with it yesterday. What I'm trying to do is move an image along a Bezier curve. This is how I do it:

        - (void)startFly
        {
            [self runAction:[CCSequence actions:
                [CCBezierBy actionWithDuration:timeFlying
                                        bezier:[self getPathWithDirection:currentDirection]],
                [CCCallFuncN actionWithTarget:self selector:@selector(endFly)],
                nil]];
        }

    My issue is that the image does not move uniformly: in the beginning it moves slowly, then it accelerates gradually, and at the end it moves really fast. What should I do to get rid of this acceleration?
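
    For what it's worth, the acceleration is inherent to how CCBezierBy works: it advances the curve parameter t at a constant rate, and equal steps in t do not cover equal distances along the curve. The standard fix is arc-length reparameterization: sample the curve once into a cumulative-length table, then each frame convert "fraction of distance travelled" back into t and position the sprite yourself from an update: method. A sketch (these helpers are hand-rolled, not cocos2d API):

        #define kSamples 64
        static float sLength[kSamples + 1];

        // Cubic Bezier point for cocos2d's ccBezierConfig (start point supplied
        // separately, the same way CCBezierBy treats it).
        static CGPoint bezierPoint(ccBezierConfig c, CGPoint start, float t)
        {
            float u = 1.0f - t;
            return ccp(u*u*u*start.x + 3*u*u*t*c.controlPoint_1.x
                           + 3*u*t*t*c.controlPoint_2.x + t*t*t*c.endPosition.x,
                       u*u*u*start.y + 3*u*u*t*c.controlPoint_1.y
                           + 3*u*t*t*c.controlPoint_2.y + t*t*t*c.endPosition.y);
        }

        static void buildLengthTable(ccBezierConfig c, CGPoint start)
        {
            sLength[0] = 0;
            CGPoint prev = start;
            for (int i = 1; i <= kSamples; i++) {
                CGPoint p = bezierPoint(c, start, (float)i / kSamples);
                sLength[i] = sLength[i - 1] + ccpDistance(prev, p);
                prev = p;
            }
        }

        // Map uniform progress s in [0,1] to the curve parameter t.
        static float tForProgress(float s)
        {
            float target = s * sLength[kSamples];
            int i = 1;
            while (i < kSamples && sLength[i] < target) i++;
            float span = sLength[i] - sLength[i - 1];
            float frac = (span > 0) ? (target - sLength[i - 1]) / span : 0;
            return ((i - 1) + frac) / kSamples;
        }

    With s = elapsedTime / timeFlying, the sprite then covers equal arc length in equal time.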


  • Smooth Camera Rotation around 90 degrees

    - by Nicholas
    I'm developing a third-person 3D platformer in XNA. My problem is rotating the camera around the player: I would like to rotate (and animate) the camera 90 degrees around the player, so the camera should rotate until it has moved 90 degrees from its starting position. I cannot figure out how to keep track of the rotation, and how to know when it has completed the full 90 degrees. Currently my camera's Update is:

        public void Update(Vector3 playerPosition)
        {
            if (rotateCamera)
            {
                position = Vector3.Transform(position - playerPosition,
                    Matrix.CreateRotationY(0.1f)) + playerPosition;
            }
            this.viewMatrix = Matrix.CreateLookAt(position, playerPosition, Vector3.Up);
        }

    The initial position of the camera is set in the constructor, and the rotateCamera bool is set on key press. Thanks for the help in advance. Cheers.
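
    One way to sketch it: accumulate the angle actually applied and stop when the running total reaches 90 degrees. Here rotationRemaining and rotationSpeed are assumed new fields (set rotationRemaining = MathHelper.PiOver2 on key press, and make rotationSpeed a radians-per-second rate); clamping the final step avoids overshooting the target.

        public void Update(Vector3 playerPosition, float elapsedSeconds)
        {
            if (rotationRemaining > 0f)
            {
                // Never rotate past the 90-degree target.
                float step = Math.Min(rotationSpeed * elapsedSeconds, rotationRemaining);
                rotationRemaining -= step;
                position = Vector3.Transform(position - playerPosition,
                    Matrix.CreateRotationY(step)) + playerPosition;
            }
            this.viewMatrix = Matrix.CreateLookAt(position, playerPosition, Vector3.Up);
        }

    Scaling by elapsed time also frees the animation from the frame rate, unlike the fixed 0.1f step.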


  • Only draw visible objects to the camera in 2D

    - by Deukalion
    I have a Map; each Map has an array of Ground, and each Ground consists of an array of VertexPositionTexture plus a texture name reference, so it renders a texture at those points (as a shape, through triangulation). Now, when I render my map, I only want to get a list of the objects that are visible to the camera (so I won't loop through more than I have to). Structs:

        public struct Map
        {
            public Ground[] Ground { get; set; }
        }

        public struct Ground
        {
            public int[] Indexes { get; set; }
            public VertexPositionNormalTexture[] Points { get; set; }
            public Vector3 TopLeft { get; set; }
            public Vector3 TopRight { get; set; }
            public Vector3 BottomLeft { get; set; }
            public Vector3 BottomRight { get; set; }
        }

        public struct RenderBoundaries<T>
        {
            public BoundingBox Box;
            public T Items;
        }

    When I load a map:

        foreach (Ground ground in CurrentMap.Ground)
        {
            Boundaries.Add(new RenderBoundaries<Ground>()
            {
                Box = BoundingBox.CreateFromPoints(new Vector3[] {
                    ground.TopLeft, ground.TopRight, ground.BottomLeft, ground.BottomRight }),
                Items = ground
            });
        }

    TopLeft, TopRight, BottomLeft, and BottomRight are simply the locations of each corner of the shape: a rectangle. When I try to loop through only the visible objects, I do this in my Draw method:

        public int Draw(GraphicsDevice device, ICamera camera)
        {
            BoundingFrustum frustum = new BoundingFrustum(camera.View * camera.Projection);

            // Visible count
            int count = 0;

            EffectTexture.World = camera.World;
            EffectTexture.View = camera.View;
            EffectTexture.Projection = camera.Projection;

            foreach (EffectPass pass in EffectTexture.CurrentTechnique.Passes)
            {
                pass.Apply();
                foreach (RenderBoundaries<Ground> render in
                    Boundaries.Where(m => frustum.Contains(m.Box) != ContainmentType.Disjoint))
                {
                    // Draw ground
                    count++;
                }
            }
            return count;
        }

    When I try adding just one Ground and then moving the camera so the ground is out of frame, it still returns 1, which means it still gets drawn even though it's not within the camera's view. Am I doing something wrong, or could it be my camera? Any ideas why it doesn't work?
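
    A hedged observation rather than a certain diagnosis: the frustum built from View * Projection lives in world space, while the boxes are built from the raw corner positions. If EffectTexture.World = camera.World moves the geometry at draw time, the drawn ground and its bounding box are in different spaces and the test can never fail. Two quick checks, as a sketch:

        // 1. If the geometry is drawn with a non-identity world matrix, test the
        //    box in the space it is drawn in. (Transforming Min/Max like this is
        //    only valid for translation/scale; a rotated box needs all eight
        //    corners transformed and re-boxed with CreateFromPoints.)
        BoundingBox worldBox = new BoundingBox(
            Vector3.Transform(render.Box.Min, camera.World),
            Vector3.Transform(render.Box.Max, camera.World));

        // 2. Intersects is equivalent to Contains != Disjoint and reads clearer.
        bool visible = frustum.Intersects(worldBox);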


  • Auto-tiling with Yoshi's Island style tiles

    - by Boreal
    I'm creating a 2D platformer and I'd like to implement an auto-tiling system. Normally, this wouldn't be particularly difficult. However, I'd like to have tiles like in Yoshi's Island, where the graphics extend past the actual collidable tile's boundaries. Consider this image: Although the eggs and the Piranha Plant are clearly resting on the ground, the flower tiles continue behind them, out of the collidable tile. I know that it would be simple to do by hand, but extremely time consuming. Using an auto-tiling algorithm would save me a lot of time and boredom, but I'm not sure where to start.
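
    One common starting point (a sketch of standard 4-bit autotiling, not taken from the question; bounds checks for edge cells omitted): each collidable cell looks at its four neighbours, packs the result into a bitmask, and the mask indexes one of 16 tile graphics. Overhanging decoration like the flower fringe can then live on a second, purely visual layer that the algorithm stamps around mask edges, extending past the collidable tile exactly as in Yoshi's Island.

        int TileMask(bool[,] solid, int x, int y)
        {
            int mask = 0;
            if (solid[x, y - 1]) mask |= 1;   // north neighbour solid
            if (solid[x + 1, y]) mask |= 2;   // east
            if (solid[x, y + 1]) mask |= 4;   // south
            if (solid[x - 1, y]) mask |= 8;   // west
            return mask;                      // 0..15 -> index into a 16-tile atlas
        }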


  • Low coupling and tight cohesion

    - by hidayat
    Of course it depends on the situation, but when a lower-level object or system communicates with a higher-level system, should callbacks or events be preferred to keeping a pointer to the higher-level object? For example, we have a world class that has a member variable vector<monster> monsters. When the monster class is going to communicate with the world class, should I prefer using a callback function, or should I keep a pointer to the world class inside the monster class?
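
    For concreteness, a minimal sketch of the callback option (the names are invented for illustration): the monster exposes an event and never names the world, so the compile-time dependency points only from the high level to the low level.

        #include <functional>
        #include <vector>

        class Monster {
        public:
            std::function<void(Monster&)> onDied;     // assumed event of interest
            void die() { if (onDied) onDied(*this); } // fire without knowing who listens
        };

        class World {
        public:
            void spawn() {
                monsters.emplace_back();
                // World subscribes at creation time; Monster stays ignorant of World.
                monsters.back().onDied = [this](Monster& m) { handleDeath(m); };
            }
        private:
            void handleDeath(Monster&) { /* remove from vector, drop loot, ... */ }
            std::vector<Monster> monsters;
        };

    The pointer-to-world option is simpler and perfectly workable in a small game; the callback earns its keep when monster should be reusable or testable without dragging the whole world in.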


  • Unity3d vector and matrix operations

    - by brandon
    I have the following three vectors:

        posA:   (1, 2, 3)
        normal: (0, 1, 0)
        offset: (2, 3, 1)

    I want to get the vector representing the position that is offset in the direction of the normal from posA. I know how to do this by cheating (not using matrix operations):

        Vector3 result = new Vector3(posA.x + normal.x * offset.x,
                                     posA.y + normal.y * offset.y,
                                     posA.z + normal.z * offset.z);

    I know how to do this mathematically. Note: [] indicates a column vector, {} indicates a row vector:

        result = [1,2,3] + {2,3,1} * {[0,0,0], [0,1,0], [0,0,0]}

    What I don't know is which is better to use, and if it's the latter, how do I do it in Unity? I only know of 4x4 matrices in Unity. I don't like the first option because you are instantiating a new vector instead of just modifying the original. Suggestions? Note: by asking which is better, I am asking for a quantifiable reason, not just a preference.
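
    Two quantifiable points, sketched below. First, the matrix in the question is just diag(normal), and multiplying a vector by a diagonal matrix is a component-wise product, which Unity exposes directly as Vector3.Scale, so no 3x3 or 4x4 matrix is needed. Second, Vector3 is a struct in Unity: "instantiating a new vector" is a stack copy, not a heap allocation, so the constructor version costs essentially nothing either.

        using UnityEngine;

        public static class OffsetUtil
        {
            // posA + component-wise(normal, offset): Vector3.Scale is Unity's
            // built-in component-wise multiply, equivalent to offset * diag(normal).
            public static Vector3 OffsetAlongNormal(Vector3 posA, Vector3 normal, Vector3 offset)
            {
                return posA + Vector3.Scale(normal, offset);
            }
        }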


  • Restoring projection matrix

    - by brainydexter
    I am learning to use FBOs, and one of the things I need to do when rendering onto a user-defined FBO is to set up the projection, modelview, and viewport for it. Once I am done rendering to the FBO, I need to restore these matrices. I found:

        glPushAttrib(GL_VIEWPORT_BIT);
        glPopAttrib();

    to restore the viewport to its old state. Is there a way to restore the projection and modelview matrices to whatever they were earlier? Tech: C++/OpenGL. Thanks!
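
    In the same fixed-function style the question is using, each matrix mode has its own stack, so the matrices can be saved and restored just like the viewport bit. A sketch (note the projection stack is only guaranteed a depth of 2, so this cannot nest deeply):

        // Save the current matrices before switching to the FBO's camera.
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();

        // ... set the FBO's projection/modelview and render to the FBO ...

        // Restore each matrix in the same mode it was saved in.
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);
        glPopMatrix();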


  • android: How to apply pinch zoom and pan to 2D GLSurfaceView

    - by mak_just4anything
    I want to apply a pinch-zoom and panning effect on a GLSurfaceView. It is an image editor, so there is no 3D object. I tried to implement it using the following links:

    https://groups.google.com/forum/#!topic/android-developers/EVNRDNInVRU
    Want to apply pinch and zoom to GLSurfaceView (3D object)
    http://www.learnopengles.com/android-lesson-one-getting-started/

    These are all links for 3D object rendering. I cannot use ImageView, as I need to work with OpenGL, so I had to implement it on a GLSurfaceView. Any suggestions or reference links for such an implementation? I need it for 2D only.
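
    For 2D, one workable shape (a sketch; zoom, panX, panY, width, and height are assumed fields of the enclosing class, and the ES 1.x matrix calls are used for brevity): feed touch events to a ScaleGestureDetector, accumulate a zoom factor, and express both zoom and pan purely in the orthographic projection, leaving the image geometry untouched.

        ScaleGestureDetector detector = new ScaleGestureDetector(context,
            new ScaleGestureDetector.SimpleOnScaleGestureListener() {
                @Override
                public boolean onScale(ScaleGestureDetector d) {
                    zoom *= d.getScaleFactor();                    // accumulate the pinch
                    zoom = Math.max(0.1f, Math.min(zoom, 10.0f));  // clamp to sane bounds
                    return true;
                }
            });

        // In onDrawFrame: a zoomed, panned orthographic projection.
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glOrthof(panX - width / (2f * zoom), panX + width / (2f * zoom),
                    panY - height / (2f * zoom), panY + height / (2f * zoom),
                    -1f, 1f);

    Route MotionEvents from the view's onTouchEvent into detector.onTouchEvent(event), and drive panX/panY from one-finger drags the same way.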


  • Why does multiplying texture coordinates scale the texture?

    - by manning18
    I'm having trouble visualizing this geometrically: why does multiplying the U,V coordinates of a texture coordinate have the effect of scaling that texture by that factor? E.g. if you scaled the texture coordinates by a factor of 3, doesn't this mean that if you had texture coordinates (0, 1) and (0, 2), you'd be sampling (0, 3) and (0, 6) in the U,V texture space of 0..1? How does that make it bigger? E.g. in HLSL:

        tex2D(textureSampler, TexCoords * 3)

    Integers make it smaller, decimals make it bigger. I mean, I understand intuitively what happens if you add to the U,V coordinates, as that is simply an offset into the sampling range, but what is the case with multiplication? I have a feeling that when someone explains this to me I'm going to feel mighty stupid.
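
    A worked example may help (assuming wrap addressing, the usual default): scaling UVs changes how fast you move through texture space per unit of surface, not where the texture sits.

        // tex2D(s, uv * 3): one step duv across the surface now advances 3*duv
        // through texture space, so the sampler sweeps the whole [0,1] range
        // three times -- the texture repeats 3 times and each copy is 1/3 size.
        //   uv = 0.00 -> samples 0.00
        //   uv = 0.25 -> samples 0.75
        //   uv = 0.50 -> samples 1.50 = 0.50 after wrap (second copy)
        // Multiplying by a fraction does the opposite: uv * 0.5 means the whole
        // surface only spans half the texture, so the texture looks twice as big.
        float4 c = tex2D(textureSampler, TexCoords * 3.0);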


  • Geometry Shader input vertices order

    - by NPS
    MSDN specifies (link) that when using the triangleadj type of input to the GS, it should provide me with 6 vertices in a specific order: the 1st vertex of the triangle being processed, a vertex of an adjacent triangle, the 2nd vertex of the triangle being processed, another vertex of an adjacent triangle, and so on. So if I wanted to create a pass-through shader (i.e. output the same triangle I got as input and nothing else), I should return vertices 0, 2 and 4. Is that correct? Well, apparently it isn't, because I did just that, and when I ran my app the vertices were flickering (changing positions, disappearing, and reappearing, or something like that). But when I instead output vertices 0, 1 and 2, the app rendered the mesh correctly. I could provide some code, but it seems like the problem is in the input vertex order, not the code itself. So in what order do input vertices arrive at the GS?
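
    For reference, a pass-through sketch matching the documented D3D10 adjacency layout (inner triangle at even indices 0, 2, 4; adjacent vertices at odd indices 1, 3, 5):

        struct VSOut { float4 pos : SV_POSITION; };

        [maxvertexcount(3)]
        void GSMain(triangleadj VSOut input[6], inout TriangleStream<VSOut> stream)
        {
            stream.Append(input[0]);   // the processed triangle lives at 0, 2, 4
            stream.Append(input[2]);
            stream.Append(input[4]);
            stream.RestartStrip();
        }

    If this flickers while 0/1/2 renders correctly, suspicion usually shifts to the draw call rather than the shader: the index buffer must actually be generated with adjacency (six indices per primitive) and drawn with the matching ...LIST_ADJ topology, otherwise the GS is handed plain triangles packed two-per-invocation and indices 0, 1, 2 happen to line up.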


  • LWJGL Determining whether or not a polygon is on-screen.

    - by Brandon oubiub
    Not sure whether this is an LWJGL or a math question. I want to check whether a shape is on-screen, so that I don't have to render it if it isn't. First of all, is there any simple way to do this that I am overlooking, like some method that I haven't found? I'm going to assume there isn't. I tried using my trigonometry skills, but it is hard to do this because of how glRotate also distorts the image a little for perspective and realism. Or, is there any way to easily determine whether a ray starting from the camera and going outward in a straight line intersects a shape? (I can probably do it with my math skillz, but is there an easier way?) By the way, I can easily determine the angle at which the camera is facing around the x and y axes. EDIT: Or, possibly, I could get the angles of a vector from the camera to the object and compare those angles to my camera angles. But I have a feeling that the distortions from glRotate and glTranslate would be an issue. I'll try it though.
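
    There is a standard answer here that avoids both rays and per-object trigonometry: frustum culling. Extract the six planes of the view frustum from the combined projection * modelview matrix (the Gribb & Hartmann method) and test a bounding sphere per shape. A sketch of just the test, with the plane extraction left to the cited method; planes are assumed normalized with inward-facing normals:

        // planes[i] = {a, b, c, d}, representing the plane ax + by + cz + d = 0.
        static boolean sphereVisible(float[][] planes, float cx, float cy, float cz, float radius) {
            for (float[] p : planes) {
                // Signed distance of the sphere centre to this plane.
                if (p[0] * cx + p[1] * cy + p[2] * cz + p[3] < -radius) {
                    return false;   // entirely behind one plane: off-screen
                }
            }
            return true;            // inside or straddling all six planes
        }

    This automatically accounts for the perspective "distortion", because the frustum planes are derived from the same matrices glRotate and glTranslate build.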


  • Avoid if statements in DirectX 10 shaders?

    - by PolGraphic
    I have heard that if statements should be avoided in shaders, because both branches of the statement will be executed and the wrong result discarded (which harms performance). Is that still a problem in DirectX 10? Somebody told me that there, only the correct branch is executed. For illustration I have the code:

        float y1 = 5;
        float y2 = 6;
        float b1 = 2;
        float b2 = 3;

        if (x > 0.5)
        {
            x = 10 * y1 + b1;
        }
        else
        {
            x = 10 * y2 + b2;
        }

    Is there another way to make it faster? If so, how do I do it? Both branches look similar; the only difference is the values of the "constants" (y1, y2, b1, b2 are the same for all pixels in the pixel shader).
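
    When the two branches differ only by constants, the branch can be removed entirely with a select; a sketch of the usual trick (D3D10-class hardware does support real dynamic branching, but for two arithmetic instructions a select is cheaper than any branch):

        float s = step(0.5, x);        // 1 when x >= 0.5, else 0
        float y = lerp(y2, y1, s);     // picks y1 in the x > 0.5 case
        float b = lerp(b2, b1, s);
        x = 10 * y + b;

    The step/lerp pair compiles to a handful of ALU ops with no divergence at all; whether an explicit [branch] would win instead depends on how coherent x is across neighbouring pixels.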


  • Issues with touch buttons in XNA (Release state to be precise)

    - by Aditya
    I am trying to make touch buttons in WP8 with all the states (Pressed, Released, Moved), but TouchLocationState.Released is not working. Here's my code. Class variables:

        bool touching = false;
        int touchID;
        Button tempButton;

    Button is a separate class with a method to switch states when touched. The Update method contains the following code:

        TouchCollection touchCollection = TouchPanel.GetState();

        if (!touching && touchCollection.Count > 0)
        {
            touching = true;
            foreach (TouchLocation location in touchCollection)
            {
                for (int i = 0; i < menuButtons.Count; i++)
                {
                    touchID = location.Id;  // store the ID of the current touch
                    Point touchLocation = new Point((int)location.Position.X, (int)location.Position.Y);
                    Button button = menuButtons[i];
                    if (GetMenuEntryHitBounds(button).Contains(touchLocation))  // a method which returns a rectangle
                    {
                        button.SwitchState(true);  // change the button state
                        tempButton = button;       // store the pressed button for accessing later
                    }
                }
            }
        }
        else if (touchCollection.Count == 0)  // clears the state of all buttons if no touch is detected
        {
            touching = false;
            for (int i = 0; i < menuButtons.Count; i++)
            {
                Button button = menuButtons[i];
                button.SwitchState(false);
            }
        }

    menuButtons is a list of the buttons on the menu. A separate loop (within the Update method) runs once the touching variable is true:

        if (touching)
        {
            TouchLocation location;
            TouchLocation prevLocation;
            if (touchCollection.FindById(touchID, out location))
            {
                if (location.TryGetPreviousLocation(out prevLocation))
                {
                    Point point = new Point((int)location.Position.X, (int)location.Position.Y);
                    if (prevLocation.State == TouchLocationState.Pressed
                        && location.State == TouchLocationState.Released)
                    {
                        if (GetMenuEntryHitBounds(tempButton).Contains(point))
                        {
                            // Execute the button action. I removed the excess.
                        }
                    }
                }
            }
        }

    The code for switching the button state works fine, but the code where I want to trigger the action does not: location.State == TouchLocationState.Released mostly ends up being false (even after I release the touch, it has a value of TouchLocationState.Moved). And what is more irritating is that it sometimes works! I am really confused and have been stuck for days now. Is this the right way? If yes, then where am I going wrong? Or is there some other, more effective way to do this? PS: I also posted this question on Stack Overflow, then realized it is more appropriate on gamedev. Sorry if that counts as being redundant.
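
    A hedged note on the likely cause, plus a sketch: a touch only reports the Pressed state on its very first frame, so requiring prevLocation.State == Pressed together with the current state being Released only succeeds when the finger goes down and up within two consecutive frames, which matches the "sometimes works" behaviour. Testing only the current state for Released is usually enough, and TouchPanel.GetState should be called exactly once per Update, since each call advances the touch state and extra calls can swallow the one frame in which Released appears:

        TouchCollection touches = TouchPanel.GetState();  // call once per Update, reuse everywhere
        foreach (TouchLocation t in touches)
        {
            if (t.State == TouchLocationState.Released)
            {
                Point p = new Point((int)t.Position.X, (int)t.Position.Y);
                if (tempButton != null && GetMenuEntryHitBounds(tempButton).Contains(p))
                {
                    // Fire tempButton's action: press began and ended on the same button.
                }
            }
        }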


  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question, I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills, which should explain the "missing cubes" in the image below. If I render my density map in normal "minecraft mode" (1 block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this: I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason, when I run it through my implementation, it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image. Upon further investigation I can't see where marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images in my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values. I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable, containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that, given a very low isovalue, the accuracy of the rendered surface should increase; in short, "fewer 45-degree slopes, more rolling hills" in the mesh output. However, this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of a fix?

    EDIT: A little more detail on what I am seeing when I "marching cube" the data. First, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet in the generated mesh there are sharp falling edges even in the most central areas, where the gradient in the first image looks the smoothest. The data is regenerated each time I run this, so no 2 islands come out the same, and it's purely random rather than noise-based; but still, how can it look so smooth in one image and so un-smooth in the other?
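
    For reference, the "move vertex along the edge" step in the classic Lorensen & Cline formulation is edge interpolation: the vertex is placed where the isosurface actually crosses the edge, estimated from the two density samples, rather than at the edge midpoint. A sketch (XNA types assumed):

        // p0/p1 are the edge's corner positions, v0/v1 the density samples there,
        // iso the threshold. With boolean 0/1 corners and iso = 0.5, t is always
        // 0.5 (the midpoint), which is why bool and float input can render
        // identically if this step is missing or effectively constant.
        static Vector3 VertexOnEdge(Vector3 p0, Vector3 p1, float v0, float v1, float iso)
        {
            float denom = v1 - v0;
            float t = (Math.Abs(denom) < 1e-6f) ? 0.5f : (iso - v0) / denom;
            return p0 + MathHelper.Clamp(t, 0f, 1f) * (p1 - p0);
        }

    If an implementation gives identical output for boolean and float data, this is the first place to look: either the interpolation has been reduced to a midpoint, or the density field is effectively binary by the time it reaches the mesher.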


  • Pitch camera around model

    - by ChocoMan
    Currently, my camera rotates with my model's Y axis (yaw) perfectly. What I'm having trouble with is rotating the X axis (pitch) along with it. I've tried the same method as cameraYaw() in the form of cameraPitch(), adjusting the axis to Vector3.Right, but the camera wouldn't pitch at all in accordance with the Y axis of the controller. Is there a way, similar to this, to get the same effect for pitching the camera around the model?

        // Rotates model on its own Y-axis
        public void modelRotMovement(GamePadState pController)
        {
            Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(speedAngleMAX);
            AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0);
            ModelLoad.MRotation *= AddRotation;
            MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);
        }

        // Orbit (yaw) Camera around model
        public void cameraYaw(Vector3 axis, float yaw, float pitch)
        {
            Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX);
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axis, yaw)) + ModelLoad.camTarget;
        }

        public void updateCamera()
        {
            cameraYaw(Vector3.Up, Yaw);
        }
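
    A hedged sketch of why Vector3.Right tends not to work, and an alternative: world X is only the camera's sideways axis while the camera sits on the world Z axis. After any yaw, the pitch has to happen around the camera's own right axis, recomputed from the current view direction. Here totalPitch is an assumed new field used to clamp the pitch so the camera cannot flip over the top of the model:

        public void cameraPitch(float pitch)
        {
            float newTotal = MathHelper.Clamp(totalPitch + pitch,
                -MathHelper.PiOver4, MathHelper.PiOver4);   // clamp to +/- 45 degrees
            float applied = newTotal - totalPitch;
            totalPitch = newTotal;

            // The camera's current right axis, derived from the view direction.
            Vector3 forward = Vector3.Normalize(ModelLoad.camTarget - ModelLoad.CameraPos);
            Vector3 right   = Vector3.Normalize(Vector3.Cross(forward, Vector3.Up));

            ModelLoad.CameraPos = Vector3.Transform(
                ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(right, applied)) + ModelLoad.camTarget;
        }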


  • What could cause a pixel shader to paint outside the lines of the vertex shader output?

    - by Rei Miyasaka
    From what I understand, the pixels that a pixel shader operates on are specified implicitly by the SV_POSITION output (in DirectX) of the vertex shader. What, then, could cause a pixel shader to render in the middle of nowhere? I used the new Visual Studio 2012 graphics debugger to visualize my vertex and pixel shader output. This is the output from a DrawIndexed() call that draws a cube. The pink part is the rendered output of the pixel shader, which takes the cube on its left as its input. The vertex shader code:

        cbuffer Buf
        {
            float4x4 final;
        };

        struct In
        {
            float4 pos:POSITION;
            float3 norm:NORMAL;
            float2 texuv:TEXCOORD;
        };

        struct Out
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        Out main(In input)
        {
            Out output;
            output.pos = mul(input.pos, final);
            output.col = float4(1.0f, 0.5f, 0.5f, 1.0f);
            output.tex = input.texuv;
            return output;
        }

    And the pixel shader:

        struct In
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        float4 main(In input) : SV_TARGET
        {
            return input.col;
        }

    The raster stage is the only thing between the vertex shader and the pixel shader, so my suspicion is that it's some raster-stage setting. But the raster stage shouldn't change the shape of the vertex shader output so drastically, should it?
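
    The rasterizer only interpolates what SV_POSITION hands it, so when pixels land in the wrong place the usual suspects are upstream; this is a sketch of two checks rather than a diagnosis. First, the input layout must mirror the CPU-side vertex struct byte for byte: wrong offsets or formats make POSITION read garbage, and triangles land anywhere. Assuming the struct is position (float3), normal (float3), texcoord (float2):

        D3D11_INPUT_ELEMENT_DESC layout[] = {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        };

    Second, the final matrix: HLSL defaults to column-major constant packing while DirectXMath/D3DX matrices are row-major, so the quick experiment is transposing the matrix before uploading it to the cbuffer (or flipping the mul order) and seeing whether the geometry snaps into place.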


  • Strange mesh import problem with Assimp and OpenGL

    - by Morgan
    I'm using the assimp library to import 3D data into an OpenGL application, and I get some strange problems regarding the indexing of the vertices. If I use the following code for importing vertex indices:

        for (unsigned int t = 0; t < mesh->mNumFaces; ++t)
        {
            const struct aiFace * face = &mesh->mFaces[t];
            if (face->mNumIndices == 3)
            {
                indices->push_back(face->mIndices[0]);
                indices->push_back(face->mIndices[1]);
                indices->push_back(face->mIndices[2]);
            }
        }

    I get an incorrect result. Instead, if I use the following code:

        for (int k = 0; k < 2; k++)
        {
            for (unsigned int t = 0; t < mesh->mNumFaces; ++t)
            {
                const struct aiFace * face = &mesh->mFaces[t];
                if (face->mNumIndices == 3)
                {
                    indices->push_back(face->mIndices[0]);
                    indices->push_back(face->mIndices[1]);
                    indices->push_back(face->mIndices[2]);
                }
            }
        }

    I get the correct result. So adding the indices twice renders the correct result? The OpenGL buffer is populated like so:

        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices->size() * sizeof(unsigned int),
                     indices->data(), GL_STATIC_DRAW);

    And rendered as follows:

        glDrawElements(GL_TRIANGLES, vertexCount*3, GL_UNSIGNED_INT, indices->data());
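
    Two things in the draw call are worth a hard look, offered as a probable explanation rather than a certain one. First, the count passed to glDrawElements should be the number of indices; vertexCount*3 is very likely larger than indices->size(), so the draw reads past the end of the index data, and duplicating the list happens to fill that overrun with valid indices, which is why it "fixes" the picture. Second, while a GL_ELEMENT_ARRAY_BUFFER is bound, the last argument is a byte offset into that buffer, not a client-memory pointer. A sketch of the corrected call (indexBufferId is assumed to be the buffer glBufferData filled):

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferId);
        glDrawElements(GL_TRIANGLES,
                       static_cast<GLsizei>(indices->size()),   // number of indices, not vertices * 3
                       GL_UNSIGNED_INT,
                       nullptr);                                // offset 0 into the bound index buffer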


  • Extra fire simulation on iPad device

    - by Nezam
    I have an iOS app for iPad which creates a few fire simulations over a PNG. Well, it's working exactly how we wanted it to, but when we test it on a device we get an extra fire simulation. Here's the screen: iPad Simulator: this is how it should display (iPad Simulator). iPad device: this is how it's displaying (iPad device). I'm ready to share whichever portion of my code helps get to a solution once someone has an idea. Thanks in advance.


  • "Accumulate" buffer results in XNA4?

    - by Utkarsh Sinha
    I'm trying to simulate a "heightmap" buffer in XNA 4.0, but the results don't look correct. Here's what I'm hoping to achieve: http://www.youtube.com/watch?feature=player_detailpage&v=-Q6ISVaM5Ww#t=517s (8:38). From what I understand, these are the steps to get there: pass the height buffer plus the current entity's heightmap; generate a stencil and update the height buffer; render sprite + stencil. For now, I'm just trying to get the height buffer part to work. So here's the problem. Inside the draw loop, I do the following: create a new render target and set it; draw the heightmap with a SpriteBatch (no shaders); call graphicsDevice.SetRenderTarget(null); draw the render target with a SpriteBatch. I expected to see all entities' heightmaps, but only the last entity's heightmap is visible. Any hints on what I'm doing wrong? Here's the code inside the draw loop:

        RenderTarget2D tempDepthStencil = new RenderTarget2D(graphicsDevice,
            graphicsDevice.Viewport.Width, graphicsDevice.Viewport.Height,
            false, graphicsDevice.DisplayMode.Format, DepthFormat.None);
        graphicsDevice.SetRenderTarget(tempDepthStencil);

        // Gather depth information
        SpriteBatch depthStencilSpriteBatch = new SpriteBatch(graphicsDevice);
        depthStencilSpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
            SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullCounterClockwise);
        depthStencilSpriteBatch.Draw(texHeightmap, pos, null, Color.White, 0,
            Vector2.Zero, 1, spriteEffects, 1);
        depthStencilSpriteBatch.End();

        graphicsDevice.SetRenderTarget(null);

        SpriteBatch b1 = new SpriteBatch(graphicsDevice);
        b1.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, null);
        b1.Draw((Texture2D)tempDepthStencil, Vector2.Zero, null, Color.White, 0,
            Vector2.Zero, 1, spriteEffects, 1);
        b1.End();
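
    A sketch of the usual structure (spriteBatch and heightBuffer are assumed fields created once in LoadContent): create the render target once, render all entities' heightmaps into it in a single pass, and only then resolve to the back buffer. Allocating a RenderTarget2D and SpriteBatch per entity per frame both churns GPU memory and means each target only ever holds the last thing drawn into it; note also that XNA 4 render targets default to RenderTargetUsage.DiscardContents, so switching away from one wipes it.

        graphicsDevice.SetRenderTarget(heightBuffer);
        graphicsDevice.Clear(Color.Transparent);

        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
        foreach (Entity e in entities)               // hypothetical entity list
            spriteBatch.Draw(e.Heightmap, e.Position, Color.White);
        spriteBatch.End();

        graphicsDevice.SetRenderTarget(null);        // back to the back buffer

        spriteBatch.Begin();
        spriteBatch.Draw(heightBuffer, Vector2.Zero, Color.White);
        spriteBatch.End();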


  • Shader optimization - cg/hlsl pseudo and via multiplication

    - by teodron
    Since HLSL/Cg do not allow texture fetching inside conditional blocks, I first check a variable and perform some computations, then set a float flag to 0.0 or 1.0 depending on the results. I'd like to trigger a texture fetch only if the flag is 1.0 (or non-zero, for that matter). I kind of hoped this would do the trick:

        float4 TU0_atlas_colour = pseudoBool * tex2Dlod(TU0_texture, float4(tileCoord, 0, mipLevel));

    That is, if pseudoBool is 0, will the texture fetch function still be called and produce overhead? I was hoping to prevent it from being executed via this trick, which usually works in plain C/C++.
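
    A note on both halves of this, hedged as general shader-model behaviour: multiplying by a 0/1 float does not short-circuit anything, since both operands of * are evaluated, so the tex2Dlod still costs a memory fetch. And the restriction on fetches inside conditionals applies to gradient-based fetches like tex2D; tex2Dlod supplies an explicit LOD and is legal inside dynamic flow control on SM3-class targets and up, so the fetch can genuinely be skipped:

        float4 TU0_atlas_colour = float4(0, 0, 0, 0);
        [branch]   // ask the compiler for a real dynamic branch, not a flattened one
        if (pseudoBool > 0.5f)
        {
            TU0_atlas_colour = tex2Dlod(TU0_texture, float4(tileCoord, 0, mipLevel));
        }

    Whether the branch actually saves time depends on how coherent pseudoBool is across neighbouring pixels; divergent warps still pay for both paths.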


  • List of bounding boxes?

    - by Christian Frantz
    When I create a bounding box for each object in my chunk, would it be better to store them in a list?

        List<BoundingBox> cubeBoundingBox

    Or can I just use a single variable?

        BoundingBox cubeBoundingBox

    The bounding boxes will be used for all kinds of things, so they will be moving around. In any case, I'd be adding them in a method that gets called 2500+ times per chunk, so either I have one giant list of them or 2500+ individual boxes. Is there any advantage to using one or the other?
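
    One quantifiable angle (a sketch; the chunk API named here is hypothetical): BoundingBox is a struct in XNA, so a List<BoundingBox> stores every box inline in one contiguous array, with no per-box object, no GC pressure, and cache-friendly iteration. A single variable only works if just one box needs to exist at a time; for 2500+ boxes that must all be alive at once, a list preallocated to capacity is the usual choice, since it avoids repeated internal array growth.

        List<BoundingBox> cubeBoundingBoxes = new List<BoundingBox>(2500);
        foreach (Cube cube in chunk.Cubes)
            cubeBoundingBoxes.Add(new BoundingBox(cube.Min, cube.Max));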

