Search Results

Search found 19855 results on 795 pages for 'game console'.


  • Writing to a D3DFMT_R32F render target clamps to 1

    - by Mike
    I'm implementing a picking system. I render some objects into a frame buffer whose render target has the D3DFMT_R32F format. For each mesh, I set an integer shader constant (ObjectIndex) to its material index. My shader is simple: I output the position of each vertex, and for each pixel I cast the material index to float and write that value to the red channel:

        int ObjectIndex;
        float4x4 WvpXf : WorldViewProjection < string UIWidget = "None"; >;

        struct VS_INPUT  { float3 Position : POSITION; };
        struct VS_OUTPUT { float4 Position : POSITION; };
        struct PS_OUTPUT { float4 Color : COLOR0; };

        VS_OUTPUT VSMain( const VS_INPUT input )
        {
            VS_OUTPUT output = (VS_OUTPUT)0;
            output.Position = mul( float4(input.Position, 1), WvpXf );
            return output;
        }

        PS_OUTPUT PSMain( const VS_OUTPUT input, in float2 vpos : VPOS )
        {
            PS_OUTPUT output = (PS_OUTPUT)0;
            output.Color.r = float( ObjectIndex );
            output.Color.gba = 0.0f;
            return output;
        }

        technique Default
        {
            pass P0
            {
                VertexShader = compile vs_3_0 VSMain();
                PixelShader  = compile ps_3_0 PSMain();
            }
        }

    The problem is that, somehow, the values written to the render target are clamped between 0.0f and 1.0f. I've tried changing the render target format, but I always get clamped values, and I don't know what the root of the problem is. For information: a depth render target is attached to the frame buffer, blending is disabled in the render state, and the stencil test is disabled. Any ideas?
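    A minimal sketch of the kind of setup that usually has to be ruled out in this situation, assuming D3D9 and a hypothetical IDirect3DDevice9* named device (the variable names and dimensions below are illustrative, not taken from the question). If the texture creation silently falls back to an 8-bit format, or if blending or sRGB writes are still active on the picking pass, the written values can end up clamped to [0, 1]:

        // Sketch only: assumes an existing IDirect3DDevice9* named `device`.
        UINT width = 1280, height = 720;          // example dimensions
        IDirect3DTexture9* pickTex = NULL;

        // Create a full-precision 32-bit float render target; check the HRESULT,
        // because a failed creation plus a fallback format would clamp the values.
        HRESULT hr = device->CreateTexture(
            width, height, 1,
            D3DUSAGE_RENDERTARGET,                // render-target usage
            D3DFMT_R32F,                          // full-precision red channel
            D3DPOOL_DEFAULT,
            &pickTex, NULL);
        if (FAILED(hr)) { /* handle or log the failure */ }

        // States that commonly clamp or modify the value written by the shader:
        device->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE); // blending on float targets is not universally supported
        device->SetRenderState(D3DRS_SRGBWRITEENABLE, FALSE);  // gamma conversion on write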


  • Stencil buffer appears to not be decrementing values correctly

    - by Alex Ames
    I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running into. This is what I'm doing: a widget can pass a rectangle to the stencil clipper functions, which increment the stencil buffer values that the rectangle covers. The widget then draws its children, which only get drawn in the stencilled area (so that if they extend outside they're clipped). After a widget is done drawing its children, it pops that rectangle from the stack and in the process decrements the values in the stencil buffer that it previously incremented. The slightly simplified code is below:

        static void drawStencil(Rect& rect, unsigned int ref)
        {
            // Save previous values of the color and depth masks
            GLboolean colorMask[4];
            GLboolean depthMask;
            glGetBooleanv(GL_COLOR_WRITEMASK, colorMask);
            glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask);

            // Turn off drawing
            glColorMask(0, 0, 0, 0);
            glDepthMask(0);

            // Draw vertices here
            ...

            // Turn everything back on
            glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]);
            glDepthMask(depthMask);

            // Only render pixels in areas where the stencil buffer value == ref
            glStencilFunc(GL_EQUAL, ref, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }

        void pushScissor(Rect rect)
        {
            // Increment things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_INCR, GL_INCR);

            s_scissorStack.push_back(rect);
            drawStencil(rect, s_scissorStack.size());
        }

        void popScissor()
        {
            // Undo what was done in the previous push:
            // decrement things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_DECR, GL_DECR);

            Rect rect = s_scissorStack.back();
            s_scissorStack.pop_back();
            drawStencil(rect, s_scissorStack.size());
        }

    And this is how it's used by the widgets:

        if (m_clip)
            pushScissor(m_rect);

        drawInternal(target, states);
        for (auto child : m_children)
            target.draw(*child, states);

        if (m_clip)
            popScissor();

    This is the result of the above code: there are two things on the screen, a giant test button and a window with some buttons and text areas on it. The text area's scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be completely on top of it. However, for some reason the text area appears on top of the button. The only explanation I can think of is that the stencil values are not getting decremented in the pop, so when it comes time to render the button, those pixels don't have the right stencil value and it doesn't draw over them. But I can't figure out what's wrong with my code that would cause that to happen.


  • D3D11 how to simulate multiple depth channels

    - by Nock
    Here's what I'd like to achieve:
    - Render a first pass of objects in my scene, using standard depth comparison.
    - Render another pass of objects in the same scene, but with the following rules: a pixel of the second pass always overrides the first pass (no depth compare between them), and depth comparison is used between pixels written by the second pass.

    In English: I want depth comparison inside each pass, but I always want the second-pass pixels to override the first-pass ones. Some things I've thought of:
    - I tried to think about using the stencil buffer to solve this, but I couldn't find a way.
    - I know I could render the second pass into a separate target and then composite the result into the first, but I'd like to avoid that.
    - I could use two separate depth buffers, one dedicated to each pass. (I've never tried it, but I figure it's possible to switch the depth buffer of a render target "on the fly".)

    Any idea of the best solution? Thanks
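    One way to read the requirement is that the second pass only needs depth coherence with itself, so clearing the depth buffer (but not the color buffer) between the two passes gives exactly that behaviour. A minimal sketch, assuming an ID3D11DeviceContext* named context, a render-target view rtv and a depth-stencil view dsv that already exist (all hypothetical names), with DrawFirstPassObjects/DrawSecondPassObjects standing in for your own draw code:

        // Pass 1: normal depth testing against the scene.
        context->OMSetRenderTargets(1, &rtv, dsv);
        DrawFirstPassObjects();   // hypothetical helper

        // Reset depth only; the color written by pass 1 is kept.
        context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);

        // Pass 2: every pixel wins against pass 1 (its depth was cleared),
        // but pixels within pass 2 still depth-test against each other.
        DrawSecondPassObjects();  // hypothetical helper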


  • Can't spot the error: trying to increment a counter every second

    - by Kevin Jensen Petersen
    I really can't spot the error, or the misspelling. This script should increase the variable currentTime by 1 every second, as long as I am holding the Space button down. This is Unity C#.

        using UnityEngine;
        using System.Collections;

        public class GameTimer : MonoBehaviour
        {
            // Timer
            private bool isTimeDone;
            public GUIText counter;
            public int currentTime;
            private bool starting;

            // Each message will be shown random each 20 seconds.
            public string[] messages;
            public GUIText msg;

            // To check if this is the end
            private bool end;

            void Update()
            {
                counter.guiText.text = currentTime.ToString();

                if (Input.GetKey(KeyCode.Space))
                {
                    if (starting == false)
                    {
                        starting = true;
                    }
                    if (end == false)
                    {
                        if (isTimeDone)
                        {
                            StartCoroutine(timer());
                        }
                    }
                    else
                    {
                        msg.guiText.text = "You think you can do better? Press 'R' to Try again!";
                        if (Input.GetKeyDown(KeyCode.R))
                        {
                            Application.LoadLevel(Application.loadedLevel);
                        }
                    }
                }

                if (!Input.GetKey(KeyCode.Space) & starting)
                {
                    end = true;
                }
            }

            IEnumerator timer()
            {
                isTimeDone = false;
                yield return new WaitForSeconds(1);
                currentTime++;
                isTimeDone = true;
            }
        }


  • Unity - Invert Movement Direction

    - by m41n
    I am currently developing a 2.5D sidescroller in Unity (just starting to get to know it). I added a turn script to have my character face the appropriate direction of movement, but something about the movement itself is now behaving oddly. When I press the right arrow key, the character moves and faces towards the right. If I press the left arrow key, the character faces towards the left, but "moon-walks" to the right. I already had enough trouble getting the turning to work, so what I'm looking for is a simple solution, if possible without too much reworking of the rest of my project. I was thinking of just inverting the movement direction for a specific input key / facing direction. If anyone knows how to do something like that, I'd be thankful for the help. If it helps, the following is the current part of my AnimationChooser script that handles the turning:

        Quaternion targetf = Quaternion.Euler(0, 270, 0); // Vector3 direction when facing the front way
        Quaternion targetb = Quaternion.Euler(0, 90, 0);  // Vector3 direction when facing the opposite way

        if (Input.GetAxisRaw("Vertical") < 0.0f) // if input is lower than 0, turn to targetf
        {
            transform.rotation = Quaternion.Lerp(transform.rotation, targetf, Time.deltaTime * smooth);
        }
        if (Input.GetAxisRaw("Vertical") > 0.0f) // if input is higher than 0, turn to targetb
        {
            transform.rotation = Quaternion.Lerp(transform.rotation, targetb, Time.deltaTime * smooth);
        }

    The values (270 and 90) and the axis are there because I had to rotate the model itself in the first place to face towards either of the movement directions.


  • Procedural Mesh: UV mapping

    - by Esa
    I made a procedural mesh and now I want to apply a texture to it. The problem is that I cannot get it to stick the way I want. The idea is to have the texture painted only once over the whole mesh, so that there is no repeating. How should I map the UVs to make that happen? My mesh is a simple plane consisting of 56 triangles. I'd add pictures to clear things up, but I cannot since my reputation is below 10 points. Any help is appreciated. EDIT (kind people gave me upvotes, thank you): images of the mesh, the textured result (where I tried to repeat the texture), and the texture itself were attached here.


  • Adding sub-entities to existing entities. Should it be done in the Entity and Component classes?

    - by Coyote
    I'm in a situation where a player can be given control of small parts of an entity (e.g. the left missile battery). Therefore I started implementing sub-entities as follows. Entities are objects with three arrays:
    - pointers to components
    - pointers to sub-entities
    - communication subscribers (temporary implementation)

    Now when an entity is built it has a few components, as you might expect, and I can also attach sub-entities, which are handled with some dedicated code in the Entity and Component classes. I noticed sub-entities share data with their parent in three ways:
    - position: a sub-entity uses the parent's position plus its own position as an offset
    - scripts: sub-entities drain ammo and energy from the parent
    - physics: sub-entities add weight to the parent

    I made this to move forward quickly, but as I'm slowly fixing the current implementation I wonder if it wasn't a mistake. Is my current implementation something commonly done? Will it put me in a corner? I thought it might be better to create some sort of SubEntityComponent where sub-entities are attached and handled. But before changing anything I wanted to seek the community's wisdom.


  • Box2D how to implement a camera?

    - by Romeo
    For now I have this Camera class:

        package GameObjects;

        import main.Main;
        import org.jbox2d.common.Vec2;

        public class Camera {
            public int x;
            public int y;
            public int sx;
            public int sy;

            public static final float PIXEL_TO_METER = 50f;

            private float yFlip = -1.0f;

            public Camera() {
                x = 0;
                y = 0;
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public Camera(int x, int y) {
                this.x = x;
                this.y = y;
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public void update() {
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public void moveCam(int mx, int my) {
                if (mx >= 0 && mx <= 80) {
                    this.x -= 2;
                } else if (mx <= Main.APPWIDTH && mx >= Main.APPWIDTH - 80) {
                    this.x += 2;
                }
                if (my >= 0 && my <= 80) {
                    this.y += 2;
                } else if (my <= Main.APPHEIGHT && my >= Main.APPHEIGHT - 80) {
                    this.y -= 2;
                }
                this.update();
            }

            public float meterToPixel(float meter) {
                return meter * PIXEL_TO_METER;
            }

            public float pixelToMeter(float pixel) {
                return pixel / PIXEL_TO_METER;
            }

            public Vec2 screenToWorld(Vec2 screenV) {
                return new Vec2(screenV.x + this.x, yFlip * screenV.y + this.y);
            }

            public Vec2 worldToScreen(Vec2 worldV) {
                return new Vec2(worldV.x - this.x, yFlip * worldV.y - this.y);
            }
        }

    I need to know how to modify the screenToWorld and worldToScreen functions to include the PIXEL_TO_METER scaling.


  • When does depth testing happen?

    - by Utkarsh Sinha
    I'm working with 2D sprites - and I want to do 3D style depth testing with them. When writing a pixel shader for them, I get access to the semantic DEPTH0. Would writing to this value help? It seems it doesn't. Maybe it's done before the pixel shader step? Or is depth testing only done when drawing 3D things (I'm using SpriteBatch)? Any links/articles/topics to read/search for would be appreciated.


  • Using Lerp to create a hovering effect for a GameObject

    - by OhMrBigshot
    I want to have a GameObject with a "hovering" effect when the mouse is over it. What I'm having trouble with is actually having a color that gradually goes from one color to the next. I'm assuming Color.Lerp() is the best function for that, but I can't seem to get it working properly. Here's my CubeBehavior.cs's Update() function:

        private bool ReachedTop = false;
        private float t = 0f;
        private float final_t;
        private bool MouseOver = false;

        // Update is called once per frame
        void Update()
        {
            if (MouseOver)
            {
                t = Time.time % 1f;         // using Time.time to get a value between 0 and 1

                if (t >= 1f || t <= 0f)     // if it reaches either 0 or 1...
                    ReachedTop = ReachedTop ? false : true;

                if (ReachedTop)
                    final_t = 1f - t;       // make it count backwards
                else
                    final_t = t;

                print(final_t);             // for debugging purposes
                renderer.material.color = Color.Lerp(Color.red, Color.green, final_t);
            }
        }

        void OnMouseEnter()
        {
            MouseOver = true;
        }

        void OnMouseExit()
        {
            renderer.material.color = Color.white;
            MouseOver = false;
        }

    Now, I've tried several approaches to making it reach 1 and then count backwards to 0, including a multiplier that alternates between 1 and -1, but I just can't seem to get that effect. The value goes to 1 and then resets at 0. Any ideas on how to do this?


  • Calculating 3D camera positions from a video

    - by Geotarget
    I need to calculate the 3D camera position and rotation for each frame in a given video. This is typically used for motion tracking, and to insert 3D objects into a video. I'm currently using VideoTrace to calculate this for me, and I'm getting the data exported as a 3ds Max MAXScript file. However, when I try to use the 3D camera rotations, I'm getting strange errors in my 3D calculations, as if there is an error with the 3x3 rotation matrices. Can you spot any error with the data itself? Or is it my other calculations that are erroneous?

        frame 1
        rotation = (matrix3 [-0.011938,  0.756018, -0.654442]
                            [-0.382040, -0.608284, -0.695727]
                            [-0.924068,  0.241718,  0.296091]
                            [0, 0, 0]).rotationpart
        position = [-0.767177, 0.308723, -0.232722]
        fov = 57.352135

        frame 2
        rotation = (matrix3 [-0.460922, -0.726580, -0.509541]
                            [-0.200163,  0.644491, -0.737947]
                            [ 0.864572, -0.238145, -0.442495]
                            [0, 0, 0]).rotationpart
        position = [-0.856630, 0.198654, -0.243853]
        fov = 57.352135
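    One quick sanity check, sketched below in C++ under the assumption that each exported rotation is meant to be a proper orthonormal 3x3 matrix: the rows should be unit length and mutually perpendicular, and the determinant should be +1 (a determinant near -1 would indicate a reflection or handedness flip, a common source of "strange errors" when moving data between tools). The matrix values are copied from "frame 1" above purely for illustration:

        #include <cmath>
        #include <cstdio>

        // Row-major 3x3 matrix; values taken from "frame 1" above for illustration.
        static double R[3][3] = {
            { -0.011938,  0.756018, -0.654442 },
            { -0.382040, -0.608284, -0.695727 },
            { -0.924068,  0.241718,  0.296091 },
        };

        static double dot(const double* a, const double* b) {
            return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
        }

        int main() {
            // Row lengths should be ~1, pairwise dot products ~0.
            for (int i = 0; i < 3; ++i)
                std::printf("len(row %d)  = %f\n", i, std::sqrt(dot(R[i], R[i])));
            std::printf("row0 . row1 = %f\n", dot(R[0], R[1]));
            std::printf("row0 . row2 = %f\n", dot(R[0], R[2]));
            std::printf("row1 . row2 = %f\n", dot(R[1], R[2]));

            // Determinant should be ~ +1 for a pure rotation, ~ -1 for a reflection.
            double det = R[0][0]*(R[1][1]*R[2][2] - R[1][2]*R[2][1])
                       - R[0][1]*(R[1][0]*R[2][2] - R[1][2]*R[2][0])
                       + R[0][2]*(R[1][0]*R[2][1] - R[1][1]*R[2][0]);
            std::printf("det         = %f\n", det);
            return 0;
        }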


  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store specific information needed to load a few things correctly, such as lists of specified characters, scenes and music. I also use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extra data. This data is called on to add functionality to a character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a master list that contains the names of each file (extensions omitted), zipped into a folder that gets encrypted. At first, using XML was working fine, but as the skill list grows I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything from XML to JSON, but I think MySQL might be the better move. Can anyone inform me of the key differences, and the pros and cons of each? I guess I'm looking for the best way to store the data more conveniently, in a form that is easier to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have I can just concatenate into a single field and parse later.


  • A problem with texture atlasing in Unity

    - by Hamzeh Soboh
    I have the texture below and I need to draw rectangular parts of it. I would like to eventually combine the meshes of different quads to improve performance, but with quads that use different tilings this means different materials, and then combining the meshes fails. Can anybody tell me how to use just a part of that texture in C#, so that all quads share the same material? Only then does combining the meshes work. Thanks in advance.


  • Platform jumping problems with AABB collisions

    - by Vee
    See the diagram first: when my AABB physics engine resolves an intersection, it does so by finding the axis where the penetration is smaller, then "pushing out" the entity on that axis. Considering the "jumping while moving left" example: if velocityX is bigger than velocityY, AABB pushes the entity out on the Y axis, effectively stopping the jump (result: the player stops in mid-air). If velocityX is smaller than velocityY (not shown in the diagram), the program works as intended, because AABB pushes the entity out on the X axis. How can I solve this problem? Source code:

        public void Update()
        {
            Position += Velocity;
            Velocity += World.Gravity;

            List<SSSPBody> toCheck = World.SpatialHash.GetNearbyItems(this);

            for (int i = 0; i < toCheck.Count; i++)
            {
                SSSPBody body = toCheck[i];
                body.Test.Color = Color.White;

                if (body != this && body.Static)
                {
                    float left = (body.CornerMin.X - CornerMax.X);
                    float right = (body.CornerMax.X - CornerMin.X);
                    float top = (body.CornerMin.Y - CornerMax.Y);
                    float bottom = (body.CornerMax.Y - CornerMin.Y);

                    if (SSSPUtils.AABBIsOverlapping(this, body))
                    {
                        body.Test.Color = Color.Yellow;
                        Vector2 overlapVector = SSSPUtils.AABBGetOverlapVector(left, right, top, bottom);
                        Position += overlapVector;
                    }

                    if (SSSPUtils.AABBIsCollidingTop(this, body))
                    {
                        if ((Position.X >= body.CornerMin.X && Position.X <= body.CornerMax.X) &&
                            (Position.Y + Height/2f == body.Position.Y - body.Height/2f))
                        {
                            body.Test.Color = Color.Red;
                            Velocity = new Vector2(Velocity.X, 0);
                        }
                    }
                }
            }
        }

        public static bool AABBIsOverlapping(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X <= mBody2.CornerMin.X || mBody1.CornerMin.X >= mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y <= mBody2.CornerMin.Y || mBody1.CornerMin.Y >= mBody2.CornerMax.Y)
                return false;
            return true;
        }

        public static bool AABBIsColliding(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
                return false;
            return true;
        }

        public static bool AABBIsCollidingTop(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
                return false;
            if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
                return false;
            if (mBody1.CornerMax.Y == mBody2.CornerMin.Y)
                return true;
            return false;
        }

        public static Vector2 AABBGetOverlapVector(float mLeft, float mRight, float mTop, float mBottom)
        {
            Vector2 result = new Vector2(0, 0);

            if ((mLeft > 0 || mRight < 0) || (mTop > 0 || mBottom < 0))
                return result;

            if (Math.Abs(mLeft) < mRight)
                result.X = mLeft;
            else
                result.X = mRight;

            if (Math.Abs(mTop) < mBottom)
                result.Y = mTop;
            else
                result.Y = mBottom;

            if (Math.Abs(result.X) < Math.Abs(result.Y))
                result.Y = 0;
            else
                result.X = 0;

            return result;
        }


  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I woke up with a clear vision, but sadly my laptop's card doesn't do displacement mapping nor glVertexAttribDivisor, so I can't test it out; I'm left sharing it here. With geomipmapping, the grid at any factor is transposable: if you pass in an offset, say as a uniform, you can reuse the same vertex and index arrays again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And, if the scenery is closer, rather than exposing that the world is made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom. So I have some questions (see the sketch after this list for the instancing idea):
    - Does it work, in theory and in practice? Does anyone do it? Does this technique have a name? Are there papers, demos, anything I can look at?
    - Does glVertexAttribDivisor mean that you can use a single glMultiDrawElementsEXT or similar approach to draw all your terrain tiles in one call, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains?
    - Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?)
    - Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang?
    - Is mipmapping the displacement map going to work? On future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?
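    On the glVertexAttribDivisor question, the usual pattern is instanced rendering rather than multi-draw: one shared grid mesh, one buffer of per-tile offsets, and a single draw call. A rough sketch of the relevant fragment, assuming an already-initialized GL 3.3+ context, a bound VAO with the grid's vertex/index buffers, and hypothetical variables tileCount, tileOffsets and indexCount:

        // Per-instance tile offsets (one vec2 per terrain tile).
        GLuint offsetVbo;
        glGenBuffers(1, &offsetVbo);
        glBindBuffer(GL_ARRAY_BUFFER, offsetVbo);
        glBufferData(GL_ARRAY_BUFFER, tileCount * 2 * sizeof(float), tileOffsets, GL_STATIC_DRAW);

        // Attribute 1 = per-instance offset: advance once per instance, not per vertex.
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glVertexAttribDivisor(1, 1);

        // One call draws every tile; the vertex shader adds the offset to the grid
        // position and fetches the height (displacement) from the heightmap texture.
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0, tileCount);

    Note that the vertex-shader texture fetch for the displacement requires SM 3.0 / GL 3.x class hardware, which matches the laptop limitation mentioned above.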


  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout; it's your typical Cartesian system. I would like to have a single class that handles two lookup styles:
    - get me the set of players at position X,Y
    - get me the position of the player with key K

    My current implementation is this:

        class CoordinateMap<V> {
            Map<Long,Set<V>> coords2value;
            Map<V,Long> value2coords;

            // Convert (int x, int y) to a long key - this is tested, works for all
            // values -1 bil to +1 bil. My map will NOT require more than 1 bil tiles
            // from the origin :)
            private Long keyFor(int x, int y) {
                int kx = x + 1000000000;
                int ky = y + 1000000000;
                return (long)kx | (long)ky << 32;
            }

            // Extract the x and y from the key
            private int[] coordsFor(long k) {
                int x = (int)(k & 0xFFFFFFFF) - 1000000000;
                int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000;
                return new int[] { x, y };
            }
        }

    From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is: is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me that if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!


  • Most efficient way to implement delta time

    - by Starkers
    Here's one way to implement delta time:

        /// init ///
        var duration = 5000,
            currentTime = Date.now();
        // and create cube, scene, camera etc.

        function animate() {
            /// determine delta ///
            var now = Date.now(),
                deltat = now - currentTime,
                currentTime = now,
                scalar = deltat / duration,
                angle = (Math.PI * 2) * scalar;

            /// animate ///
            cube.rotation.y += angle;

            /// update ///
            requestAnimationFrame(render);
        }

    Could someone confirm I know how it works? Here is what I think is going on. Firstly, we set duration to 5000, which is how long the loop should take to complete in an ideal world. With a computer that is slow or busy, let's say the animation loop takes twice as long as it should, so 10000. When this happens, the scalar is set to 2.0:

        scalar = deltat / duration = 10000 / 5000 = 2.0

    We now multiply all animation by twice as much:

        angle = (Math.PI * 2) * scalar = (Math.PI * 2) * 2.0 = Math.PI * 4   // which is 2 rotations

    When we do this, the cube's rotation will appear to "jump", but this is good because the animation remains real-time. With a computer that is going too quickly, let's say the animation loop takes half as long as it should, so 2500. When this happens, the scalar is set to 0.5:

        scalar = deltat / duration = 2500 / 5000 = 0.5

    We now multiply all animation by a half:

        angle = (Math.PI * 2) * scalar = (Math.PI * 2) * 0.5 = Math.PI * 1   // which is half a rotation

    When we do this, the cube won't jump at all, the animation remains real-time, and it doesn't speed up. However, would I be right in thinking this doesn't alter how hard the computer is working? I mean, it still goes through the loop as fast as it can, and it still has to render the whole scene, just with smaller angles! So this is a bad way to implement delta time, right? Now let's pretend the computer is taking exactly as long as it should, so 5000. When this happens, the scalar is set to 1.0:

        angle = (Math.PI * 2) * scalar = (Math.PI * 2) * 1 = Math.PI * 2   // which is 1 rotation

    When we do this, everything is multiplied by 1, so nothing is changed. We'd get the same result if we weren't using delta time at all! My questions are as follows:
    - Most importantly, have I got the right end of the stick here?
    - How do we know to set the duration to 5000? Or can it be any number?
    - I'm a bit vague about the "computer going too quickly". Is there a way to loop less often rather than reduce the animation steps? That seems like a better idea.
    - Using this method, do all of our animations need to be multiplied by the scalar? Do we have to hunt down every last one and multiply it?
    - Is this the best way to implement delta time? I think not, because the computer can go nuts and all we do is divide each animation step, and because we need to hunt down every step and multiply it by the scalar. Not a very nice DSL, as it were. So what is the best way to implement delta time?

    Below is one way that I do not really get, but which may be a better way to implement delta time. Could someone explain it please?

        // Globals
        INV_MAX_FPS = 1 / 60;
        frameDelta = 0;
        clock = new THREE.Clock();

        // In the animation loop (the requestAnimationFrame callback)...
        frameDelta += clock.getDelta(); // API: "Get the seconds passed since the last call to this method."
        while (frameDelta >= INV_MAX_FPS) {
            update(INV_MAX_FPS); // calculate physics
            frameDelta -= INV_MAX_FPS;
        }

    How I think this works: firstly we set INV_MAX_FPS to 0.01666666666; how we will use this number does not jump out at me. We then initialise a frameDelta which stores how long the last loop took to run. Come the first loop, frameDelta is not greater than INV_MAX_FPS, so the loop is not run (0 < 0.01666666666), and nothing happens. Now I really don't know what would cause this to happen, but let's pretend that the loop we just went through took 2 seconds to complete. We set frameDelta to 2:

        frameDelta += clock.getDelta();   // frameDelta = 2.00

    Now we run an animation thanks to update(0.01666666666). Again, what is the relevance of 0.01666666666?? And then we take away 0.01666666666 from frameDelta:

        frameDelta -= INV_MAX_FPS;   // frameDelta = 2 - 0.01666666666 = 1.98333333334

    So let's go into the second loop. Let's say it took 2 seconds again (why 2? why not 12? I am a bit confused):

        frameDelta += clock.getDelta();   // frameDelta = 1.98333333334 + 2 = 3.98333333334

    This time we enter the while loop because 3.98333333334 >= 0.01666666666. We run update, and we take away 0.01666666666 from frameDelta again:

        frameDelta -= INV_MAX_FPS;   // frameDelta = 3.98333333334 - 0.01666666666 = 3.96666666668

    Now let's pretend the loop is super quick and runs in just 0.1 seconds and continues to do this (because the computer isn't busy any more). Basically, the update function will be run, and every loop we take away 0.01666666666 from frameDelta until frameDelta is less than 0.01666666666. And then nothing happens until the computer runs slowly again? Could someone shed some light please? Does update() update the scalar or something like that, and do we still have to multiply everything by the scalar like in the first example?
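    The second snippet is the standard fixed-timestep accumulator: rendering happens as often as it can, but update/physics steps always advance by the same fixed amount, with the accumulator carrying over any remainder between frames. The same pattern, sketched here in C++ for concreteness (update and render are placeholders, and the frame limit exists only so the sketch terminates):

        #include <chrono>
        #include <cstdio>

        static void update(double dt) { /* placeholder: advance the simulation by dt */ (void)dt; }
        static void render()          { /* placeholder: draw the current state */ }

        int main() {
            using clock = std::chrono::steady_clock;

            const double kFixedStep = 1.0 / 60.0;   // plays the role of INV_MAX_FPS
            double accumulator = 0.0;               // plays the role of frameDelta
            auto previous = clock::now();

            for (int frame = 0; frame < 600; ++frame) {   // a real loop would run until quit
                auto now = clock::now();
                accumulator += std::chrono::duration<double>(now - previous).count();
                previous = now;

                // A slow frame adds a large amount of time, so several fixed-size
                // updates run to catch up; a very fast frame may add so little that
                // no update runs at all this time around.
                while (accumulator >= kFixedStep) {
                    update(kFixedStep);
                    accumulator -= kFixedStep;
                }

                render();
            }
            std::puts("done");
            return 0;
        }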


  • Scanline filling of polygons that share edges and vertices

    - by Belgin
    In this picture (a perspective projection of an icosahedron), the scanline (red) intersects that vertex at the top. In an icosahedron each edge belongs to two triangles. From edge a, only one triangle is visible; the other one is in the back. The same goes for edge d. Also, in order to determine what color the current pixel should be, each polygon has a flag which can either be 'in' or 'out', depending on where on the scanline we currently are. Flags are flipped according to the intersection of the scanline with the edges.

    Now, as we go from a to d (because all of those edges are intersected by the scanline at that vertex), this happens: the triangle behind triangle 1 and triangle 1 itself are set 'in', then 2 is set 'in' and 1 is set 'out', then 3 is set 'in' and 2 is set 'out', and finally 3 is set 'out' and the one behind it is set 'in'. This is not the desired behavior, because we only want the triangles facing us to be set 'in'; the rest should be 'out'. How do I process the edges in the Active Edge List (a list of edges that are currently intersected by the scanline) so that the right polygons are set 'in'? I should also mention that the edges are unique, which means there exists an array of edges in the icosahedron's data structure, pointed to by edge pointers in each of the triangles.


  • Finding direction of travel in a world with wrapped edges

    - by crazy
    I need to find the direction of the shortest path from one point in my 2D world to another point, where the edges are wrapped (like Asteroids etc.). I know how to find the shortest distance, but am struggling to find which direction it's in. The shortest distance is given by:

        int rows = MapY;
        int cols = MapX;

        int d1 = abs(S.Y - T.Y);
        int d2 = abs(S.X - T.X);
        int dr = min(d1, rows - d1);
        int dc = min(d2, cols - d2);

        double dist = sqrt((double)(dr*dr + dc*dc));

    (An ASCII diagram of the world was included here, showing S inside the map, T near the top edge, and a wrapped repeat of T beyond the top-right corner; its layout has not survived formatting.) In the diagram the edges are shown with ':' and '-', and a wrapped repeat of the world appears at the top right. I want to find the direction in degrees from S to T. The shortest way is towards the top-right repeat of T, but how do I calculate the direction in degrees from S to that repeated T? I know the positions of both S and T, but I suppose I need to find the position of the repeated T, and there is more than one. The world's coordinate system starts at 0,0 at the top left, and 0 degrees for the direction could start at west. It seems like this shouldn't be too hard, but I haven't been able to work out a solution. I hope someone can help? Any websites would be appreciated.
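    A sketch of one common approach, under the assumption that the map is MapX by MapY and angles are measured from west as described above: wrap each coordinate difference into the range [-size/2, +size/2] before feeding it to atan2, so the resulting direction automatically points "through" the nearest edge whenever that way is shorter. The function and the angle convention below are illustrative only:

        #include <cmath>
        #include <cstdio>

        static const double kPi = 3.14159265358979323846;

        // Wrap a coordinate difference so it points along the shorter way around.
        static double wrapDelta(double d, double size) {
            if (d >  size / 2) d -= size;
            if (d < -size / 2) d += size;
            return d;
        }

        // Direction from S to T in degrees on a grid whose origin is the top-left
        // corner (y grows downwards), with 0 degrees taken as west (hypothetical
        // convention - adjust the final shift to whatever the game uses).
        static double directionDegrees(double sx, double sy, double tx, double ty,
                                       double mapX, double mapY) {
            double dx = wrapDelta(tx - sx, mapX);
            double dy = wrapDelta(ty - sy, mapY);
            double deg = std::atan2(dy, dx) * 180.0 / kPi; // 0 = east (+x) here
            deg = 180.0 - deg;                             // shift so 0 = west
            if (deg < 0.0) deg += 360.0;
            return deg;
        }

        int main() {
            // Example: S at (40, 60), T at (90, 5) on a 100 x 100 wrapped map.
            std::printf("%f\n", directionDegrees(40, 60, 90, 5, 100, 100));
            return 0;
        }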


  • How to make an HLSL effect just for lighting, without texture mapping?

    - by naprox
    I'm new to XNA. I created an effect and just want to use lighting, but with the default effect that XNA creates we have to do texture mapping or the model appears red, because of these lines of code in the effect file:

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float4 output = float4(1,0,0,1);
            return output;
        }

    If I want to see my model (appear like it does when I use BasicEffect), I must do texture mapping with UV coordinates, but my model does not have UV coordinates assigned, or its UV coordinates were not exported, and if I do the texture mapping I get an error. (I do the texture mapping with this line of code in the vertex shader function, plus the other necessary code: output.UV = input.UV.) I have many of these models and want to work with them (my models are in .FBX format). When I use BasicEffect I have no problem and the model appears correctly. How can I use just lighting in my custom effects, without texture mapping (because I have no UV coordinates in my models), so that my model looks like it does with BasicEffect? If you need my complete code, here it is: http://www.mediafire.com/?4jexhd4ulm2icm2 and here is my model using BasicEffect: http://i.imgur.com/ygP2h.jpg?1. This is my code for drawing with or without BasicEffect, inside my Draw() method:

        Matrix baseWorld = Matrix.CreateScale(Scale) *
                           Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) *
                           Matrix.CreateTranslation(Position);

        foreach (ModelMesh mesh in Model.Meshes)
        {
            Matrix localWorld = ModelTransforms[mesh.ParentBone.Index] * baseWorld;

            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                Effect effect = part.Effect;

                if (effect is BasicEffect)
                {
                    ((BasicEffect)effect).World = localWorld;
                    ((BasicEffect)effect).View = View;
                    ((BasicEffect)effect).Projection = Projection;
                    ((BasicEffect)effect).EnableDefaultLighting();
                }
                else
                {
                    setEffectParameter(effect, "World", localWorld);
                    setEffectParameter(effect, "View", View);
                    setEffectParameter(effect, "Projection", Projection);
                    setEffectParameter(effect, "CameraPosition", CameraPosition);
                }
            }
            mesh.Draw();
        }

    setEffectParameter is another method that sets an effect parameter when I use my custom effect.


  • Creating models in 3ds max and exporting as .x for XNA

    - by Sweta Dwivedi
    I have created a few models in 3ds Max which contain textures, geometry and animations. However, .fbx doesn't really handle my textures well, so I'm planning to use the .x format instead. I have seen a few converters from Pandasoft, but once I unzip the file and place the .dle file in the plugins folder of 3ds Max I get an error saying it failed to initialize. Is there any way to convert my .max models into .x format? I don't know Blender, so that isn't an option. I'm currently using 3ds Max 2013. After adding the .3DS object content importer I get the following error: (the error message itself did not come through in the post).


  • What's the most efficient way to find barycentric coordinates?

    - by bobobobo
    In my profiler, finding barycentric coordinates is apparently somewhat of a bottleneck, and I am looking to make it more efficient. It follows the method in Shirley, where you compute the areas of the triangles formed by embedding the point P inside the triangle. Code:

        Vector Triangle::getBarycentricCoordinatesAt( const Vector & P ) const
        {
            Vector bary ;

            // The area of a triangle is
            real areaABC = DOT( normal, CROSS( (b - a), (c - a) ) ) ;
            real areaPBC = DOT( normal, CROSS( (b - P), (c - P) ) ) ;
            real areaPCA = DOT( normal, CROSS( (c - P), (a - P) ) ) ;

            bary.x = areaPBC / areaABC ;       // alpha
            bary.y = areaPCA / areaABC ;       // beta
            bary.z = 1.0f - bary.x - bary.y ;  // gamma

            return bary ;
        }

    This method works, but I'm looking for a more efficient one!
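    For comparison, here is a sketch of the dot-product formulation from Ericson's Real-Time Collision Detection, written against a small hypothetical Vec3 type rather than the Vector class above. Its main practical advantage is that everything derived only from a, b and c (d00, d01, d11 and the inverse denominator) can be precomputed and cached per triangle, leaving just two dot products and a few multiplies per query point:

        #include <cstdio>

        struct Vec3 { float x, y, z; };

        static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Barycentric coordinates (u, v, w) of p with respect to triangle (a, b, c),
        // so that p = u*a + v*b + w*c.
        static void barycentric(Vec3 a, Vec3 b, Vec3 c, Vec3 p,
                                float& u, float& v, float& w)
        {
            Vec3 v0 = sub(b, a), v1 = sub(c, a), v2 = sub(p, a);
            float d00 = dot(v0, v0);
            float d01 = dot(v0, v1);
            float d11 = dot(v1, v1);
            float d20 = dot(v2, v0);
            float d21 = dot(v2, v1);
            float invDenom = 1.0f / (d00 * d11 - d01 * d01); // cacheable per triangle
            v = (d11 * d20 - d01 * d21) * invDenom;
            w = (d00 * d21 - d01 * d20) * invDenom;
            u = 1.0f - v - w;
        }

        int main() {
            float u, v, w;
            barycentric({0,0,0}, {1,0,0}, {0,1,0}, {0.25f, 0.25f, 0.0f}, u, v, w);
            std::printf("u=%f v=%f w=%f\n", u, v, w); // expect 0.50, 0.25, 0.25
            return 0;
        }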


  • Having trouble with projection matrix, need help

    - by Mr.UNOwen
    I'm having trouble with what appears to be the projection matrix. Given a wide enough screen, when a cube is at the left- or right-most edge, the left or right wall appears stretched to the point that the front face is 1/10 the width of the side. I do update the screen ratio along with the projection matrix and viewport on screen resize, so am I safe to assume all the trouble comes from the matrix class? Also, the cube follows the mouse, but it's only vertically aligned and is ahead of the mouse when going left or right from the center of the screen. Perspective function:

        /**
         * setPerspective
         *
         * @param fov: angle in radians
         * @param aspect: screen ratio w/h
         * @param near: near distance
         * @param far: far distance
         **/
        void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
        {
            GMFloat_t difZ = near - far;
            GMFloat_t *data;

            mProjection->clear(); // set to identity matrix
            data = mProjection->getData();

            GMFloat_t v = 1.0f / tan(fov / 2.0f);

            data[_AP_MAA] = v / aspect;
            data[_AP_MBB] = v;
            data[_AP_MCC] = (far + near) / difZ;
            data[_AP_MCD] = -1.0f;
            data[_AP_MDD] = 0.0f;
            data[_AP_MDC] = 2.0f * far * near / difZ;

            mRatio = aspect;
            mInvProjOutdated = true;
            mIsPerspective = true;
        }

    and...

        #define _AP_MAA 0
        #define _AP_MAB 1
        #define _AP_MAC 2
        #define _AP_MAD 3
        #define _AP_MBA 4
        #define _AP_MBB 5
        #define _AP_MBC 6
        #define _AP_MBD 7
        #define _AP_MCA 8
        #define _AP_MCB 9
        #define _AP_MCC 10
        #define _AP_MCD 11
        #define _AP_MDA 12
        #define _AP_MDB 13
        #define _AP_MDC 14
        #define _AP_MDD 15


  • C++ OpenGL wireframe cube rendering blank

    - by caleb.breckon
    I'm just trying to draw a bunch of lines that make up a "cube". I can't for the life of me figure out why this is producing a black screen. The debugger does not break at any point. I'm sure it's a problem with my pointers, as I'm only decent with them in regular C++, and in OpenGL it gets even worse.

        const char* vertexSource =
            "#version 150\n"
            "in vec3 position;"
            "void main() {"
            "   gl_Position = vec4(position, 1.0);"
            "}";

        const char* fragmentSource =
            "#version 150\n"
            "out vec4 outColor;"
            "void main() {"
            "   outColor = vec4(1.0, 1.0, 1.0, 1.0);"
            "}";

        int main()
        {
            initializeGLFW();

            // Initialize GLEW
            glewExperimental = GL_TRUE;
            glewInit();

            // Create Vertex Array Object
            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            // Create a Vertex Buffer Object and copy the vertex data to it
            GLuint vbo;
            glGenBuffers( 1, &vbo );

            float vertices[] = {
                 1.0f,  1.0f,  1.0f, // Vertex 0 (X, Y, Z)
                -1.0f,  1.0f,  1.0f, // Vertex 1 (X, Y, Z)
                -1.0f, -1.0f,  1.0f, // Vertex 2 (X, Y, Z)
                 1.0f, -1.0f,  1.0f, // Vertex 3 (X, Y, Z)
                 1.0f,  1.0f, -1.0f, // Vertex 4 (X, Y, Z)
                -1.0f,  1.0f, -1.0f, // Vertex 5 (X, Y, Z)
                -1.0f, -1.0f, -1.0f, // Vertex 6 (X, Y, Z)
                 1.0f, -1.0f, -1.0f  // Vertex 7 (X, Y, Z)
            };

            GLuint indices[] = {
                0, 1, 1, 2, 2, 3, 3, 0,
                4, 5, 5, 6, 6, 7, 7, 4,
                0, 4, 1, 5, 2, 6, 3, 7
            };

            glBindBuffer( GL_ARRAY_BUFFER, vbo );
            glBufferData( GL_ARRAY_BUFFER, sizeof( vertices ), vertices, GL_STATIC_DRAW );
            //glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vbo);
            //glBufferData( GL_ELEMENT_ARRAY_BUFFER, sizeof( indices ), indices, GL_STATIC_DRAW );

            // Create and compile the vertex shader
            GLuint vertexShader = glCreateShader( GL_VERTEX_SHADER );
            glShaderSource( vertexShader, 1, &vertexSource, NULL );
            glCompileShader( vertexShader );

            // Create and compile the fragment shader
            GLuint fragmentShader = glCreateShader( GL_FRAGMENT_SHADER );
            glShaderSource( fragmentShader, 1, &fragmentSource, NULL );
            glCompileShader( fragmentShader );

            // Link the vertex and fragment shader into a shader program
            GLuint shaderProgram = glCreateProgram();
            glAttachShader( shaderProgram, vertexShader );
            glAttachShader( shaderProgram, fragmentShader );
            glBindFragDataLocation( shaderProgram, 0, "outColor" );
            glLinkProgram( shaderProgram );
            glUseProgram( shaderProgram );

            // Specify the layout of the vertex data
            GLint posAttrib = glGetAttribLocation( shaderProgram, "position" );
            glEnableVertexAttribArray( posAttrib );
            glVertexAttribPointer( posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0 );

            // Main loop
            while (glfwGetWindowParam(GLFW_OPENED))
            {
                // Clear the screen to black
                glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
                glClear( GL_COLOR_BUFFER_BIT );

                // Draw lines from 2 vertices
                glDrawElements(GL_LINES, sizeof(indices), GL_UNSIGNED_INT, indices );

                // Swap buffers
                glfwSwapBuffers();
            }

            // Clean up
            glDeleteProgram( shaderProgram );
            glDeleteShader( fragmentShader );
            glDeleteShader( vertexShader );
            //glDeleteBuffers( 1, &ebo );
            glDeleteBuffers( 1, &vbo );
            glDeleteVertexArrays( 1, &vao );

            glfwTerminate();
            exit( EXIT_SUCCESS );
        }
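    Two details in the draw path may be worth a second look, shown as a hedged sketch below rather than a definitive fix: with a core-profile VAO, index data generally has to live in a GL_ELEMENT_ARRAY_BUFFER (as in the commented-out lines, but bound to its own buffer object rather than reusing vbo), and the second argument of glDrawElements is the number of indices, not the number of bytes (sizeof(indices) is 96 here, while there are only 24 indices):

        // Sketch of the intended setup (ebo is a new, separate buffer object).
        GLuint ebo;
        glGenBuffers(1, &ebo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

        // ... later, in the main loop:
        glDrawElements(GL_LINES,
                       sizeof(indices) / sizeof(indices[0]), // 24 indices, not 96 bytes
                       GL_UNSIGNED_INT,
                       0); // offset into the bound element array buffer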


  • Dynamic Dijkstra

    - by Dani
    I need a dynamic Dijkstra algorithm that can update itself when an edge's cost is changed, without a full recalculation. Full recalculation is not an option. I've tried to "brew" my own implementation with no success. I've also tried to find one on the Internet but found nothing. A link to an article explaining such an algorithm, or even just its name, would be good. Edit: Thanks everyone for answering. I managed to make an algorithm of my own that runs in O(V+E) time; if anyone wishes to know the algorithm, just say so and I will post it.

