Search Results

Search found 21563 results on 863 pages for 'game testing'.


  • Queries regarding Geometry Shaders

    - by maverick9888
    I am dealing with geometry shaders using the GL_ARB_geometry_shader4 extension. My code goes like:

        GLfloat vertices[] = {
            0.5, 0.25, 1.0,
            0.5, 0.75, 1.0,
           -0.5, 0.75, 1.0,
           -0.5, 0.25, 1.0,
            0.6, 0.35, 1.0,
            0.6, 0.85, 1.0,
           -0.6, 0.85, 1.0,
           -0.6, 0.35, 1.0
        };
        glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
        glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
        glLinkProgram(psId);
        glBindAttribLocation(psId, 0, "Position");
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, 0, 0, vertices);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    My vertex shader is:

        #version 150
        in vec3 Position;
        void main()
        {
            gl_Position = vec4(Position, 1.0);
        }

    My geometry shader is:

        #version 150
        #extension GL_EXT_geometry_shader4 : enable
        in vec4 pos[3];
        void main()
        {
            int i;
            vec4 vertex;
            gl_Position = pos[0];
            EmitVertex();
            gl_Position = pos[1];
            EmitVertex();
            gl_Position = pos[2];
            EmitVertex();
            gl_Position = pos[0] + vec4(0.3, 0.0, 0.0, 0.0);
            EmitVertex();
            EndPrimitive();
        }

    Nothing is rendered with this code. What exactly should the mode in glDrawArrays() be? How will the GL_GEOMETRY_OUTPUT_TYPE_EXT parameter affect glDrawArrays()? What I expect is that 3 vertices will be passed on to the geometry shader, and using those we construct a primitive of size 4 (assuming GL_TRIANGLE_STRIP requires 4 vertices). Can somebody please throw some light on this?
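
    A likely explanation, sketched below (C# with OpenTK for illustration, an assumption since the question's code is C; the calls map one-to-one, and PrimitiveType may be BeginMode in older OpenTK versions): glDrawArrays must use the geometry shader's declared input type, GL_TRIANGLES, not its output type, and glBindAttribLocation only takes effect at link time, so calling it after glLinkProgram (as above) leaves "Position" unbound.

        using OpenTK.Graphics.OpenGL;

        // Hedged sketch of the two fixes; psId and the vertex pointer setup are assumed as above.
        GL.BindAttribLocation(psId, 0, "Position"); // must precede linking to take effect
        GL.LinkProgram(psId);

        // The draw mode matches GL_GEOMETRY_INPUT_TYPE_EXT (triangles); the output type
        // only declares what the shader emits and does not change this call.
        GL.DrawArrays(PrimitiveType.Triangles, 0, 3); // 3 vertices = one input triangle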

    Read the article

  • Overriding component behavior

    - by deft_code
    I was thinking of how to implement overriding of behaviors in a component-based entity system. As a concrete example, an entity has a health component that can be damaged, healed, killed, etc. The entity also has an armor component that limits the amount of damage a character receives. Has anyone implemented behaviors like this in a component-based system before? How did you do it? If no one has ever done this before, why do you think that is? Is there anything particularly wrongheaded about overriding component behaviors? Below is a rough sketch of how I imagine it would work. Components in an entity are ordered. Those at the front get a chance to service an interface first. I don't detail how that is done; just assume it uses evil dynamic_casts (it doesn't, but the end effect is the same without the need for RTTI).

        class IHealth
        {
        public:
            virtual float get_health( void ) const = 0;
            virtual void do_damage( float amount ) = 0;
        };

        class Health : public Component, public IHealth
        {
        public:
            float get_health( void ) const { return m_health; }
            void do_damage( float amount ) { m_health -= amount; }
        private:
            float m_health;
        };

        class Armor : public Component, public IHealth
        {
        public:
            float get_health( void ) const { return next<IHealth>().get_health(); }
            void do_damage( float amount ) { next<IHealth>().do_damage( amount / 2 ); }
        };

        entity.add( new Health( 100 ) );
        entity.add( new Armor() );
        assert( entity.get<IHealth>().get_health() == 100 );
        entity.get<IHealth>().do_damage( 10 );
        assert( entity.get<IHealth>().get_health() == 95 );

    Is there anything particularly naive about the way I'm proposing to do this?
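
    For comparison, a minimal C# sketch of the same chain-of-responsibility idea (all names are mine, not from any engine): components registered earlier wrap the ones behind them, so a decorator like Armor intercepts IHealth calls before Health sees them.

        using System;

        interface IHealth
        {
            float GetHealth();
            void DoDamage(float amount);
        }

        class Health : IHealth
        {
            float health;
            public Health(float initial) { health = initial; }
            public float GetHealth() { return health; }
            public void DoDamage(float amount) { health -= amount; }
        }

        // Armor wraps the next IHealth in the chain and halves incoming damage.
        class Armor : IHealth
        {
            readonly IHealth next;
            public Armor(IHealth next) { this.next = next; }
            public float GetHealth() { return next.GetHealth(); }
            public void DoDamage(float amount) { next.DoDamage(amount / 2); }
        }

        class Program
        {
            static void Main()
            {
                IHealth entityHealth = new Armor(new Health(100));
                entityHealth.DoDamage(10);
                Console.WriteLine(entityHealth.GetHealth()); // prints 95
            }
        }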

    Read the article

  • Spherical harmonics lighting - what does it accomplish?

    - by TravisG
    From my understanding, spherical harmonics are sometimes used to approximate certain aspects of lighting (depending on the application). For example, it seems like you can approximate the diffuse lighting caused by a directional light source on a surface point, or parts of it, by calculating the SH coefficients for all bands you're using (for whatever accuracy you desire) in the direction of the surface normal and scaling the result with whatever you need to scale it with (e.g. light color/intensity, dot(n,l), etc.). What I don't understand yet is what this is supposed to accomplish. What are the actual advantages of doing it this way as opposed to evaluating the diffuse BRDF the normal way? Do you save calculations somewhere? Is there some additional information contained in the SH representation that you can't get out of the scalar results of the normal evaluation?
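
    For context, the usually cited payoff (after Ramamoorthi and Hanrahan's irradiance environment maps, stated here from memory): project the entire incident lighting once into a handful of SH coefficients L_lm; because convolution with the clamped-cosine lobe is diagonal in SH, the diffuse irradiance for any normal n then collapses to a small weighted sum, about 9 terms for 3 bands:

        E(\mathbf{n}) \;\approx\; \sum_{l=0}^{2}\,\sum_{m=-l}^{l} \hat{A}_l \, L_{lm} \, Y_{lm}(\mathbf{n})

    So the saving shows up not with a single directional light (where dot(n,l) is already cheap) but when many lights, or a whole environment map, are baked into the same few coefficients.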

    Read the article

  • Moving player in direction camera is facing

    - by Samurai Fox
    I have a 3rd person camera which can rotate around the player. My problem is that wherever the camera is facing, the player's forward is always the same direction. For example, when the camera is facing the right side of the player and I press the button to move forward, I want the player to turn to the left and make that the "new forward". My camera script so far:

        using UnityEngine;
        using System.Collections;

        public class PlayerScript : MonoBehaviour
        {
            public float RotateSpeed = 150, MoveSpeed = 50;
            float DeltaTime;

            void Update()
            {
                DeltaTime = Time.deltaTime;
                transform.Rotate(0, Input.GetAxis("LeftX") * RotateSpeed * DeltaTime, 0);
                transform.Translate(0, 0, -Input.GetAxis("LeftY") * MoveSpeed * DeltaTime);
            }
        }

        public class CameraScript : MonoBehaviour
        {
            public GameObject Target;
            public float RotateSpeed = 170, FollowDistance = 20, FollowHeight = 10;
            float RotateSpeedPerTime, DesiredRotationAngle, DesiredHeight,
                  CurrentRotationAngle, CurrentHeight, Yaw, Pitch;
            Quaternion CurrentRotation;

            void LateUpdate()
            {
                RotateSpeedPerTime = RotateSpeed * Time.deltaTime;
                DesiredRotationAngle = Target.transform.eulerAngles.y;
                DesiredHeight = Target.transform.position.y + FollowHeight;
                CurrentRotationAngle = transform.eulerAngles.y;
                CurrentHeight = transform.position.y;
                CurrentRotationAngle = Mathf.LerpAngle(CurrentRotationAngle, DesiredRotationAngle, 0);
                CurrentHeight = Mathf.Lerp(CurrentHeight, DesiredHeight, 0);
                CurrentRotation = Quaternion.Euler(0, CurrentRotationAngle, 0);
                transform.position = Target.transform.position;
                transform.position -= CurrentRotation * Vector3.forward * FollowDistance;
                transform.position = new Vector3(transform.position.x, CurrentHeight, transform.position.z);
                Yaw = Input.GetAxis("Right Horizontal") * RotateSpeedPerTime;
                Pitch = Input.GetAxis("Right Vertical") * RotateSpeedPerTime;
                transform.Translate(new Vector3(Yaw, -Pitch, 0));
                transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z);
                transform.LookAt(Target.transform);
            }
        }
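
    A common approach, sketched below as a hedged alternative to the PlayerScript above (not the asker's code): project the camera's forward and right vectors onto the ground plane, build the movement direction from them, and rotate the player toward that direction so it becomes the new forward.

        using UnityEngine;

        public class CameraRelativeMovement : MonoBehaviour
        {
            public Transform CameraTransform;   // assumed to be assigned in the inspector
            public float MoveSpeed = 50, TurnSpeed = 720;

            void Update()
            {
                // Flatten the camera axes onto the XZ plane so the player stays upright.
                Vector3 forward = CameraTransform.forward; forward.y = 0; forward.Normalize();
                Vector3 right   = CameraTransform.right;   right.y = 0;   right.Normalize();

                Vector3 move = forward * -Input.GetAxis("LeftY") + right * Input.GetAxis("LeftX");
                if (move.sqrMagnitude > 0.001f)
                {
                    // Turn the player toward the camera-relative direction: the "new forward".
                    Quaternion look = Quaternion.LookRotation(move);
                    transform.rotation = Quaternion.RotateTowards(transform.rotation, look,
                                                                  TurnSpeed * Time.deltaTime);
                    transform.position += move.normalized * MoveSpeed * Time.deltaTime;
                }
            }
        }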

    Read the article

  • DrawIndexedPrimitives overdraws data in previous buffer if called in loop

    - by Daniel Excinsky
    I have duplicated this question from Stack Overflow, and will delete whichever copy doesn't end up producing the answer. I have a Draw method in one of my renderers that loops through a dictionary and gets precollected, preinitialized buffers. When the dictionary has only one element, everything is just fine. But with more elements, what I get on the screen is only the data from the last buffer (I suppose; I'm not sure). My Draw method:

        public void Draw(GameTime gameTime)
        {
            if (!_areStaticEffectsSet)
            {
                // blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas);
                blockEffect.Parameters["HorizonColor"].SetValue(World.HORIZONCOLOR);
                blockEffect.Parameters["NightColor"].SetValue(World.NIGHTCOLOR);
                blockEffect.Parameters["MorningTint"].SetValue(World.MORNINGTINT);
                blockEffect.Parameters["EveningTint"].SetValue(World.EVENINGTINT);
                blockEffect.Parameters["SunColor"].SetValue(World.SUNCOLOR);
                _areStaticEffectsSet = true;
            }
            blockEffect.Parameters["World"].SetValue(Matrix.Identity);
            blockEffect.Parameters["View"].SetValue(_player.CameraView);
            blockEffect.Parameters["Projection"].SetValue(_player.CameraProjection);
            blockEffect.Parameters["CameraPosition"].SetValue(_player.CameraPosition);
            blockEffect.Parameters["timeOfDay"].SetValue(_world.TimeOfDay);

            var viewFrustum = new BoundingFrustum(_player.CameraView * _player.CameraProjection);
            _graphicsDevice.BlendState = BlendState.Opaque;
            _graphicsDevice.DepthStencilState = DepthStencilState.Default;

            foreach (KeyValuePair<int, Texture2D> textureAtlas in textureAtlases)
            {
                blockEffect.Parameters["TextureAtlas"].SetValue(textureAtlas.Value);
                foreach (EffectPass pass in blockEffect.CurrentTechnique.Passes)
                {
                    pass.Apply();
                    // TODO: VertexBuffer and IndexBuffer
                    foreach (Chunk chunk in _world.Chunks.Values)
                    {
                        if (chunk == null || chunk.IsDisposed)
                        {
                            continue;
                        }
                        if (chunk.BoundingBox.Intersects(viewFrustum) && chunk.GetBlockIndexBuffer(textureAtlas.Key) != null)
                        {
                            lock (chunk)
                            {
                                if (chunk.GetBlockIndexBuffer(textureAtlas.Key).IndexCount > 0)
                                {
                                    VertexBuffer vertexBuffer = chunk.GetBlockVertexBuffer(textureAtlas.Key);
                                    IndexBuffer indexBuffer = chunk.GetBlockIndexBuffer(textureAtlas.Key);
                                    //if (chunk.DrawIndex == new Vector3i(0, 0, 0))
                                    //{
                                    //    if (textureAtlas.Key == -1)
                                    //    {
                                    //        var varray = new []
                                    //        {
                                    //            new VertexPositionTextureLight(new Vector3(0,68,0), new Vector2(0,1), 1, new Vector3(0,0,0), new Vector3(1,1,1)),
                                    //            new VertexPositionTextureLight(new Vector3(0,68,1), new Vector2(0,1), 1, new Vector3(0,0,0), new Vector3(1,1,1)),
                                    //            new VertexPositionTextureLight(new Vector3(1,68,0), new Vector2(0,1), 1, new Vector3(0,0,0), new Vector3(1,1,1))
                                    //        };
                                    //        var iarray = new short[] { 0, 1, 2 };
                                    //        vertexBuffer = new VertexBuffer(_graphicsDevice, typeof(VertexPositionTextureLight), varray.Length, BufferUsage.WriteOnly);
                                    //        indexBuffer = new IndexBuffer(_graphicsDevice, IndexElementSize.SixteenBits, iarray.Length, BufferUsage.WriteOnly);
                                    //        vertexBuffer.SetData(varray);
                                    //        indexBuffer.SetData(iarray);
                                    //    }
                                    //}
                                    _graphicsDevice.SetVertexBuffer(vertexBuffer);
                                    _graphicsDevice.Indices = indexBuffer;
                                    _graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertexBuffer.VertexCount, 0, indexBuffer.IndexCount / 3);
                                }
                            }
                        }
                    }
                }
            }
        }

    Noteworthy things about the code: the XNA version is 4.0; I've commented out the debugging code in the loop but left it in, as it may bring some insight; and I change not only the vertices/indices in the loop, but the textureAtlas too.

    Code in the shader for the textureAtlas:

        Texture TextureAtlas;
        sampler TextureAtlasSampler = sampler_state
        {
            texture = <TextureAtlas>;
            magfilter = POINT;
            minfilter = POINT;
            mipfilter = POINT;
            AddressU = WRAP;
            AddressV = WRAP;
        };

        struct VSInput
        {
            float4 Position : POSITION0;
            float2 TexCoords1 : TEXCOORD0;
            float SunLight : COLOR0;
            float3 LocalLight : COLOR1;
            float3 Normal : NORMAL0;
        };

    VertexPositionTextureLight is my own implementation of IVertexType. So, does anybody know about this problem, or can you see the wrongness in my code (that's far more likely)?

    Read the article

  • Where to store shaders

    - by Mark Ingram
    I have an OpenGL renderer which has a Scene member variable. The Scene object can contain N SceneObjects. I use these SceneObjects for storing the vertex position and any transforms. My question is, where should shaders be stored in this arrangement? I guess they need to be in a central location because multiple objects can use the same shader. But then each object needs access to the shader because it needs to set attributes into the shader. Does anyone have any advice?

    Read the article

  • Most efficient AABB - Ray intersection algorithm for input/output distance calculation

    - by Tobbey
    Thanks to the following thread, most efficient AABB vs Ray collision algorithms, I have seen some very fast algorithms for ray/AABB intersection point computation. Unfortunately, most of the recent algorithms are accelerated by omitting the "output" intersection point of the box. In my application, I would be interested in getting both the distance from the ray source to the input of the bounding box, t0, and to its output, t1. I have seen, for instance, that Eisemann designed a very fast version compared to Plücker, Smits, ..., but it does not compare the case when both the input and output distances must be computed; see: http://www.cg.cs.tu-bs.de/publications/Eisemann07FRA/. Does someone know where I can find more information on algorithm performance for this specific input/output problem? Thank you in advance.
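
    For reference, a minimal sketch of the classic slab test, which returns both distances at little extra cost (my own sketch, not taken from Eisemann's paper; invDir is the precomputed componentwise reciprocal of the ray direction, with zero components handled by IEEE infinities):

        using System;
        using System.Numerics;

        static class RayAabb
        {
            // Slab test: intersect the ray P = origin + t*dir with the three pairs of
            // axis-aligned planes, keeping the running [t0, t1] overlap interval.
            public static bool Intersect(Vector3 origin, Vector3 invDir,
                                         Vector3 boxMin, Vector3 boxMax,
                                         out float t0, out float t1)
            {
                Vector3 tLo = (boxMin - origin) * invDir;
                Vector3 tHi = (boxMax - origin) * invDir;
                Vector3 tNear = Vector3.Min(tLo, tHi);
                Vector3 tFar  = Vector3.Max(tLo, tHi);
                t0 = MathF.Max(MathF.Max(tNear.X, tNear.Y), tNear.Z); // entry distance
                t1 = MathF.Min(MathF.Min(tFar.X, tFar.Y), tFar.Z);    // exit distance
                return t1 >= MathF.Max(t0, 0f);                       // hit if the interval overlaps the ray
            }
        }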

    Read the article

  • Drawing multiple objects from one Vertex Buffer Object in OpenGL/OpenTK

    - by stoney78us
    I am trying to experiment with a drawing method using VBOs in OpenGL. Many people normally use one VBO to store one object's data array. I was trying to do something quite the opposite: storing multiple objects' data in one VBO and then drawing it. There is a story behind why I want to do this: I sometimes want to group many objects as a single object. However, my code doesn't do it justice. The following is my pseudo code:

        // Data
        double[] vertices = { line strip 1, line strip 2, line strip 3 }; // series of vertices
        int linestrip1offset = index of the first vertex in line strip 1;
        int linestrip2offset = index of the first vertex in line strip 2;
        int linestrip3offset = index of the first vertex in line strip 3;
        int linestrip1VertexNum = number of vertices in linestrip 1;
        int linestrip2VertexNum = number of vertices in linestrip 2;
        int linestrip3VertexNum = number of vertices in linestrip 3;

        // Setting up
        void init()
        {
            int[] vBO = new int[1];
            GL.GenBuffers(1, vBO);
            GL.BindBuffer(BufferTarget.ArrayBuffer, vBO[0]);
            GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(vertices.Length * sizeof(double)),
                          vertices, BufferUsageHint.StaticDraw);
            GL.EnableClientState(ArrayCap.VertexArray);
        }

        // Drawing
        void draw()
        {
            GL.BindBuffer(BufferTarget.ArrayBuffer, vBO[0]);
            GL.EnableClientState(ArrayCap.VertexArray);

            GL.VertexPointer(3, VertexPointerType.Double, 0, linestrip1offset);
            GL.DrawArrays(drawMode, linestrip1offset, linestrip1VertexNum); // drawing first linestrip

            GL.VertexPointer(3, VertexPointerType.Double, 0, linestrip2offset);
            GL.DrawArrays(drawMode, linestrip2offset, linestrip2VertexNum); // drawing second linestrip

            GL.VertexPointer(3, VertexPointerType.Double, 0, linestrip3offset);
            GL.DrawArrays(drawMode, linestrip3offset, linestrip3VertexNum); // drawing third linestrip

            GL.DisableClientState(ArrayCap.VertexArray);
            GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
        }

    I don't know what I did wrong, but I think that technically it should work, where we can tell OpenGL which part of the data in the VBO is to be drawn.
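
    A hedged diagnosis, with a sketch below (OpenTK; PrimitiveType may be BeginMode in older versions): with a VBO bound, the last argument of GL.VertexPointer is a byte offset into the buffer, not a vertex index, while the second argument of GL.DrawArrays is a first-vertex index applied on top of that pointer, so passing the same offset to both skips the data twice. One consistent way is to leave the pointer at the start of the buffer and select each strip with DrawArrays alone:

        using System;
        using OpenTK.Graphics.OpenGL;

        // Hedged sketch: one VBO, several line strips selected purely by first-vertex index.
        void Draw(int vbo, int strip1First, int strip1Count,
                           int strip2First, int strip2Count)
        {
            GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
            GL.EnableClientState(ArrayCap.VertexArray);

            // Byte offset 0: the pointer covers the whole buffer; DrawArrays picks the range.
            GL.VertexPointer(3, VertexPointerType.Double, 0, IntPtr.Zero);

            GL.DrawArrays(PrimitiveType.LineStrip, strip1First, strip1Count);
            GL.DrawArrays(PrimitiveType.LineStrip, strip2First, strip2Count);

            GL.DisableClientState(ArrayCap.VertexArray);
            GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
        }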

    Read the article

  • Turn-based JRPG battle system architecture resources

    - by BenoitRen
    For the past few months I've been busy programming a 2D JRPG (Japanese-style RPG) in C++ using the SDL library. The exploration mode is more or less done. Now I'm tackling the battle mode. I have been unable to find any resources about how a classic turn-based JRPG battle system is structured; all I find are discussions about damage formulas. I've tried googling, searching gamedev.net's message board, and crawling through C++-related questions here on Stack Exchange. I've also tried reading the source code of existing open-source RPGs, but without a guide of some sort it's like trying to find a needle in a haystack. I'm not looking for a set of rules like D&D or anything similar; I'm talking purely about code and object structure design. A battle system asks the player for input using menus. Next, the battle turn is executed as the heroes and the enemies execute their actions. Can anyone point me in the right direction? Thanks in advance.
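
    As a starting point, here is a hedged sketch of one common structure (C# for illustration, though it ports directly to C++; all names are mine, not from any particular engine): the battle is a loop that first collects one command per living combatant (menus for heroes, AI for enemies), then resolves the commands in speed order.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Combatant
        {
            public string Name;
            public int Hp, Speed;
            public bool IsAlive { get { return Hp > 0; } }
        }

        interface ICommand
        {
            Combatant Actor { get; }
            void Execute();
        }

        class AttackCommand : ICommand
        {
            public Combatant Actor { get; set; }
            public Combatant Target;
            public void Execute()
            {
                if (Actor.IsAlive && Target.IsAlive)
                    Target.Hp -= 10; // damage formula goes here
            }
        }

        class Battle
        {
            public void RunTurn(List<Combatant> heroes, List<Combatant> enemies,
                                Func<Combatant, ICommand> chooseCommand)
            {
                // 1. Input phase: one command per living combatant.
                var commands = heroes.Concat(enemies)
                                     .Where(c => c.IsAlive)
                                     .Select(chooseCommand)
                                     .ToList();

                // 2. Execution phase: resolve in speed order; dead actors are skipped
                //    inside Execute(), since they may have died mid-turn.
                foreach (var cmd in commands.OrderByDescending(c => c.Actor.Speed))
                    cmd.Execute();
            }
        }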

    Read the article

  • OpenGL ES 2.0 example for JOGL

    - by fjdutoit
    I've scoured the internet for the last few hours looking for an example of how to run even the most basic OpenGL ES 2 example using JOGL, but "by Jupiter!" it has been a total fail. I tried converting the Android example from the OpenGL ES 2.0 Programming Guide examples (while at the same time looking at the WebGL example, which worked fine), yet without any success. Are there any examples out there? If anyone else wants some extra help regarding this question, see this thread on the official JogAmp forum.

    Read the article

  • Vertex Array Object (OpenGL)

    - by Shin
    I've just started out with OpenGL, and I still haven't really understood what Vertex Array Objects are and how they can be employed. If Vertex Buffer Objects are used to store vertex data (such as positions and texture coordinates) and VAOs only contain status flags, where can they be used? What's their purpose? As far as I understood from the (very incomplete and unclear) GL wiki, VAOs are used to set the flags/status for every vertex, following the order described in the Element Array Buffer, but the wiki was really ambiguous about it and I'm not really sure what VAOs really do and how I could employ them.
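
    Roughly, a VAO records the vertex-attribute setup (which buffers feed which attribute slots, and with what layout) so it can all be rebound with a single call. A hedged OpenTK sketch, where 'vertices' (a float[]) and 'vertexCount' are assumed to exist:

        using System;
        using OpenTK.Graphics.OpenGL4;

        // Setup, once: the VAO captures the attribute bindings made while it is bound.
        int vao = GL.GenVertexArray();
        int vbo = GL.GenBuffer();
        GL.BindVertexArray(vao);
        GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
        GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(vertices.Length * sizeof(float)),
                      vertices, BufferUsageHint.StaticDraw);
        GL.EnableVertexAttribArray(0);
        GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 0, 0);
        GL.BindVertexArray(0);

        // Draw, every frame: a single bind restores all of the state recorded above.
        GL.BindVertexArray(vao);
        GL.DrawArrays(PrimitiveType.Triangles, 0, vertexCount);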

    Read the article

  • OpenGL fovx question

    - by Nick
    To boil my question down to its simplest form, I fear I am oversimplifying how mat4 perspective works. I am using mat4.perspective(45, 2, 0.1, 1000.0) (the binding is WebGL, fwiw). With a fovy of 45 and an aspect ratio of 2, I expect to have a fovx of 90. Thus, if I position my camera at (0, 0, 50), looking towards the origin, I expect to see a cube positioned at (50, 0, 0) (45 degrees) right at the very periphery of my screen, half on, half off. Instead, a cube at (50, 0, 0) is totally off screen, and my actual periphery occurs at about (41.1, 0, 0). What am I missing here? Thanks, Nick
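
    The catch, if the usual gluPerspective-style convention applies here: the aspect ratio scales the tangents of the half-angles, not the angles themselves, so fovx is not simply fovy times the aspect. A worked check:

        \tan(\mathrm{fovx}/2) = \mathrm{aspect}\cdot\tan(\mathrm{fovy}/2) = 2\tan(22.5^\circ) \approx 0.828
        \mathrm{fovx} = 2\arctan(0.828) \approx 79.3^\circ \quad (\text{not } 90^\circ)
        x_{\mathrm{edge}} = 50\cdot\tan(\mathrm{fovx}/2) \approx 41.4

    That reproduces the observed cutoff near x ≈ 41.1, which is why the cube at (50, 0, 0) is entirely off screen.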

    Read the article

  • Absorption 2D image effect

    - by Ed.
    I want to create a specific 2D image effect. It consists of modifying a sprite so it looks like it is being zoomed to a point, or "absorbed" by that point. I'm not really sure what the technical name of this effect is, so I cannot explain it correctly. Here you can see a video of what I'm talking about; it is the effect when the character absorbs the three glyphs: http://www.youtube.com/watch?v=PIo-GddsMcU&t=4m45s. What is the name of this effect? How can I implement it with XNA for 2D textures/sprites?
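
    I've seen this variously called a "vacuum" or "suck-in" effect, though I can't point to a canonical name. It is usually built from a simple tween: each frame, move the sprite toward the absorption point while shrinking (and optionally fading) it. A hedged XNA-style sketch:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Hedged sketch: tween a sprite toward an absorption point while scaling it down.
        class AbsorbedSprite
        {
            public Texture2D Texture;
            public Vector2 StartPosition, Target; // Target = the absorbing point
            public Vector2 Position;
            public float Scale = 1f;
            public float Progress;                // runs from 0 to 1

            public void Update(GameTime gameTime, float duration)
            {
                Progress = MathHelper.Clamp(
                    Progress + (float)gameTime.ElapsedGameTime.TotalSeconds / duration, 0f, 1f);
                float t = Progress * Progress;    // ease-in: accelerates into the point
                Position = Vector2.Lerp(StartPosition, Target, t);
                Scale = 1f - t;                   // shrink to nothing on arrival
            }

            public void Draw(SpriteBatch batch)
            {
                Vector2 origin = new Vector2(Texture.Width / 2f, Texture.Height / 2f);
                batch.Draw(Texture, Position, null, Color.White * (1f - Progress),
                           0f, origin, Scale, SpriteEffects.None, 0f);
            }
        }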

    Read the article

  • Flickering when accessing texture by offset

    - by TravisG
    I have this simple compute shader that basically just takes the input from one image and writes it to another. Both images are 128×128×128 in size, and glDispatchCompute is called with (128/8, 128/8, 128/8). The source images are cleared to 0 before this compute shader is executed, so no undefined values should be floating around in there. (I have the appropriate memory barrier set on the C++ side before the 3D texture is accessed.) This version works fine:

        #version 430
        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;
        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = gl_GlobalInvocationID.xyz;
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord));
        }

    Reading values from pong shows that it's just a copy, as intended. However, when I load data from ping with an offset:

        #version 430
        layout (location = 0, rgba16f) uniform image3D ping;
        layout (location = 1, rgba16f) uniform image3D pong;
        layout (local_size_x = 8, local_size_y = 8, local_size_z = 8) in;

        void main()
        {
            ivec3 sampleCoord = gl_GlobalInvocationID.xyz;
            imageStore(pong, sampleCoord, imageLoad(ping, sampleCoord + ivec3(1, 0, 0)));
        }

    the data that is written to pong seems to depend on the order of execution of the threads within the work groups, which makes no sense to me. When reading from the pong texture, visible flickering occurs in some spots on the texture. What am I doing wrong here?

    Read the article

  • Drag Gestures - fractional delta values

    - by Den
    I have an issue with objects moving roughly twice as far as expected when I drag them. I am comparing my application to the standard TouchGestureSample sample from MSDN. For some reason, in my application, gesture samples have fractional positions and deltas. Both apps use the same Microsoft.Xna.Framework.Input.Touch.dll, v4.0.30319, and I am running both using the standard Windows Phone emulator. I am setting my breakpoint immediately after this line of code in a simple Update method:

        GestureSample gesture = TouchPanel.ReadGesture();

    Typical values in my app:

        Delta = {X:-13.56522 Y:4.166667}
        Position = {X:184.6956 Y:417.7083}

    Typical values in the sample app:

        Delta = {X:7 Y:16}
        Position = {X:497 Y:244}

    Has anyone seen this issue? Does anyone have any suggestions? Thank you.

    Read the article

  • How do I generate a level randomly?

    - by Charlton Santana
    I am currently hard-coding 10 different instances like the code below, but I'd like to create many more. Instead of having the same layout for each new level, I was wondering if there is any way to generate a random X value for each block (this will be how far into the level it is). A level 100,000 pixels wide would be good enough, but if anyone knows a system to make the level go on and on, I'd like to know that too; see the sketch below. This is basically how I define a block now (with irrelevant code removed):

        block = new Block(R.drawable.block, 400, platformheight);
        block2 = new Block(R.drawable.block, 600, platformheight);
        block3 = new Block(R.drawable.block, 750, platformheight);

    The 400 is the X position, which I'd like to place randomly throughout the level; the platformheight variable defines the Y position, which I don't want to change.
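
    A hedged sketch of the idea (C# for illustration; the question's code is Android Java, but the logic ports directly, and Block construction is elided): pick random gaps between consecutive blocks so each level differs, and generate lazily if the level should go on forever.

        using System;
        using System.Collections.Generic;

        class LevelGenerator
        {
            readonly Random rng = new Random();
            readonly List<int> blockXs = new List<int>();
            int nextX = 400; // first block's X position

            // Pre-generate a finite level: random spacing instead of hard-coded positions.
            public List<int> Generate(int levelWidth, int minGap = 150, int maxGap = 400)
            {
                while (nextX < levelWidth)
                {
                    blockXs.Add(nextX);
                    nextX += rng.Next(minGap, maxGap + 1);
                }
                return blockXs;
            }

            // Endless variant: call whenever the player approaches the last block.
            public int GenerateNext(int minGap = 150, int maxGap = 400)
            {
                nextX += rng.Next(minGap, maxGap + 1);
                blockXs.Add(nextX);
                return nextX;
            }
        }

    Seeding the Random (new Random(seed)) makes a level reproducible, which is handy for testing or for sharing levels as a single number.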

    Read the article

  • How can I choose the depth of a quadtree?

    - by Evpok
    In a 2D world, using a quadtree to prune pairs in collision detection, how can I choose the depth of said quadtree? The world I am dealing with is mostly made of moving objects¹, so the cost of dispatching the objects between the quadtree cells matters. What I am interested in is the balance between the gain from less collision checking and the loss from more dispatching.

    1. To be completely explicit: autonomous self-replicating cells competing for food sources, in an attempt to show my pupils predator-prey dynamics and genetic evolution at work.
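
    One rule of thumb (a heuristic of mine, not from a cited source): size the leaves so they hold a small constant number of objects on average, which puts the depth around log4 of N over that target; beyond that, extra levels tend to cost more in re-dispatching moving objects than they save in pair tests.

        using System;

        // Heuristic sketch: depth from object count, clamped to sane bounds.
        static int ChooseQuadtreeDepth(int objectCount, int targetPerLeaf = 8, int maxDepth = 8)
        {
            double leaves = objectCount / (double)targetPerLeaf;              // leaves needed
            int depth = (int)Math.Ceiling(Math.Log(Math.Max(leaves, 1), 4)); // 4 children per node
            return Math.Min(Math.Max(depth, 1), maxDepth);
        }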

    Read the article

  • Isometric Screen View to World View

    - by Sleepy Rhino
    I am having trouble working out the math to transform screen coordinates to grid coordinates. The code below is as far as I have got, but it is totally wrong; any help or resources to fix this issue would be great. I've had a complete mental block with this for some reason.

        private Point ScreenToIso(int mouseX, int mouseY)
        {
            int offsetX = WorldBuilder.STARTX;
            int offsetY = WorldBuilder.STARTY;
            Vector2 startV = new Vector2(offsetX, offsetY);
            int mapX = offsetX - mouseX;
            int mapY = offsetY - mouseY + (WorldBuilder.tileHeight / 2);
            mapY = -1 * (mapY / WorldBuilder.tileHeight);
            mapX = (mapX / WorldBuilder.tileHeight) + mapY;
            return new Point(mapX, mapY);
        }
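
    For reference, the standard diamond-map inversion, as a hedged sketch: it assumes the projection screenX = (mapX - mapY) * halfWidth, screenY = (mapX + mapY) * halfHeight with STARTX/STARTY as the origin tile's screen position, and it assumes a WorldBuilder.tileWidth field (hypothetical here, since the original class only shows tileHeight).

        // Hedged sketch of the usual screen -> isometric grid mapping.
        private Point ScreenToIso(int mouseX, int mouseY)
        {
            // Work in coordinates relative to the map origin.
            float dx = mouseX - WorldBuilder.STARTX;
            float dy = mouseY - WorldBuilder.STARTY;

            float halfW = WorldBuilder.tileWidth / 2f;   // tileWidth is assumed to exist
            float halfH = WorldBuilder.tileHeight / 2f;

            // Invert the diamond projection described above.
            int mapX = (int)Math.Floor((dx / halfW + dy / halfH) / 2f);
            int mapY = (int)Math.Floor((dy / halfH - dx / halfW) / 2f);
            return new Point(mapX, mapY);
        }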

    Read the article

  • Most efficient way to implement delta time

    - by Starkers
    Here's one way to implement delta time:

        /// init ///
        var duration = 5000,
            currentTime = Date.now();
        // and create cube, scene, camera etc.

        function animate() {
            /// determine delta ///
            var now = Date.now(),
                deltat = now - currentTime,
                scalar = deltat / duration,
                angle = (Math.PI * 2) * scalar;
            currentTime = now;

            /// animate ///
            cube.rotation.y += angle;

            /// update ///
            requestAnimationFrame(render);
        }

    Could someone confirm I know how it works? Here's what I think is going on:

    Firstly, we set duration to 5000, which is how long the loop should take to complete in an ideal world.

    With a computer that is slow/busy, let's say the animation loop takes twice as long as it should, so 10000. When this happens, the scalar is set to 2.0:

        scalar = deltat / duration
        scalar = 10000 / 5000
        scalar = 2.0

    We now multiply all animation by twice as much:

        angle = (Math.PI * 2) * scalar;
        angle = (Math.PI * 2) * 2.0;
        angle = (Math.PI * 4) // which is 2 rotations

    When we do this, the cube rotation will appear to 'jump', but this is good because the animation remains real-time.

    With a computer that is going too quickly, let's say the animation loop takes half as long as it should, so 2500. When this happens, the scalar is set to 0.5:

        scalar = deltat / duration
        scalar = 2500 / 5000
        scalar = 0.5

    We now multiply all animation by a half:

        angle = (Math.PI * 2) * scalar;
        angle = (Math.PI * 2) * 0.5;
        angle = (Math.PI * 1) // which is half a rotation

    When we do this, the cube won't jump at all, the animation remains real-time, and it doesn't speed up. However, would I be right in thinking this doesn't alter how hard the computer is working? I mean, it still goes through the loop as fast as it can, and it still has to render the whole scene, just with different, smaller angles! So this is a bad way to implement delta time, right?

    Now let's pretend the computer is taking exactly as long as it should, so 5000. When this happens, the scalar is set to 1.0:

        angle = (Math.PI * 2) * scalar;
        angle = (Math.PI * 2) * 1;
        angle = (Math.PI * 2) // which is 1 rotation

    When we do this, everything is multiplied by 1, so nothing is changed. We'd get the same result if we weren't using delta time at all!

    My questions are as follows:

    Most importantly, have I got the right end of the stick here?
    How do we know to set the duration to 5000? Or can it be any number?
    I'm a bit vague about the "computer going too quickly". Is there a way to loop less often rather than reduce the animation steps? Seems like a better idea.
    Using this method, do all of our animations need to be multiplied by the scalar? Do we have to hunt down every last one and multiply it?
    Is this the best way to implement delta time? I think not, due to the fact that the computer can go nuts and all we do is divide each animation step, and because we need to hunt down every step and multiply it by the scalar. Not a very nice DSL, as it were. So what is the best way to implement delta time?

    Below is one way that I do not really get, but which may be a better way to implement delta time. Could someone explain it please?

        // Globals
        INV_MAX_FPS = 1 / 60;
        frameDelta = 0;
        clock = new THREE.Clock();

        // In the animation loop (the requestAnimationFrame callback)…
        frameDelta += clock.getDelta(); // API: "Get the seconds passed since the last call to this method."
        while (frameDelta >= INV_MAX_FPS) {
            update(INV_MAX_FPS); // calculate physics
            frameDelta -= INV_MAX_FPS;
        }

    How I think this works:

    Firstly, we set INV_MAX_FPS to 0.01666666666. How we will use this number does not jump out at me.
    We then initialize a frameDelta, which stores how long the last loop took to run. Come the first loop, frameDelta is not greater than INV_MAX_FPS, so the loop is not run (0 < 0.01666666666). So nothing happens. Now I really don't know what would cause this to happen, but let's pretend that the loop we just went through took 2 seconds to complete. We set frameDelta to 2:

        frameDelta += clock.getDelta();
        frameDelta += 2.00

    Now we run an animation thanks to update(0.01666666666). Again, what is the relevance of 0.01666666666?? And then we take away 0.01666666666 from frameDelta:

        frameDelta -= INV_MAX_FPS;
        frameDelta = frameDelta - INV_MAX_FPS;
        frameDelta = 2 - 0.01666666666
        frameDelta = 1.98333333334

    So let's go into the second loop. Let's say it took 2 seconds again (why 2? Or 12? I am a bit confused):

        frameDelta += clock.getDelta();
        frameDelta = frameDelta + clock.getDelta();
        frameDelta = 1.98333333334 + 2
        frameDelta = 3.98333333334

    This time we enter the while loop because 3.98333333334 >= 0.01666666666. We run update. We take away 0.01666666666 from frameDelta again:

        frameDelta -= INV_MAX_FPS;
        frameDelta = frameDelta - INV_MAX_FPS;
        frameDelta = 3.98333333334 - 0.01666666666
        frameDelta = 3.96666666668

    Now let's pretend the loop is super quick and runs in just 0.1 seconds and continues to do this (because the computer isn't busy any more). Basically, the update function will be run, and every loop we take away 0.01666666666 from frameDelta until frameDelta is less than 0.01666666666. And then nothing happens until the computer runs slowly again? Could someone shed some light please? Does update() update the scalar or something like that, and do we still have to multiply everything by the scalar like in the first example?
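
    For what it's worth, the second snippet is the standard fixed-timestep accumulator: INV_MAX_FPS = 1/60 of a second is the fixed step the simulation is always advanced by, frameDelta accumulates real elapsed time, and the while loop runs update() once per whole step that has accrued, so a 2-second frame triggers 120 catch-up updates, while very fast frames trigger an update only once enough time has accumulated. A compact C# sketch of the same loop (my own sketch; Update and Render are stand-ins):

        // Fixed-timestep sketch: the simulation always advances in equal steps.
        class FixedStepLoop
        {
            const float FixedStep = 1f / 60f;
            float accumulator = 0f;

            // Call once per frame with the real elapsed seconds since the last frame.
            public void Frame(float realDeltaSeconds)
            {
                accumulator += realDeltaSeconds;

                // Slow frame (e.g. 2 s): Update runs 120 times to catch up.
                // Fast frames (e.g. 0.005 s): Update runs only when a full step has accrued.
                while (accumulator >= FixedStep)
                {
                    Update(FixedStep);     // physics/animation always see the same dt
                    accumulator -= FixedStep;
                }

                Render();                  // draw as often as the machine allows
            }

            void Update(float dt) { /* advance simulation by exactly dt */ }
            void Render() { /* draw the latest state */ }
        }

    Because every update uses the same dt, nothing inside the simulation needs to be multiplied by a per-frame scalar, which addresses the "hunt down every animation step" worry from the first method.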

    Read the article

  • Hardware instancing for voxel engine

    - by Menno Gouw
    I just did the tutorial on hardware instancing from this source: http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/. Somewhere between 900,000 and 1,000,000 draw calls for the cube I get the error "XNA Framework HiDef profile supports a maximum VertexBuffer size of 67108863", while it still runs smoothly at 900k. That is slightly less than 100x100x100, which is exactly a million. Now, I have seen voxel engines with very "tiny" voxels; you easily get to 1,000,000 cubes in view with rough terrain and a decent far plane. Obviously I can optimize a lot in the geometry buffer method, like rendering only the visible faces of a cube, or using larger faces covering multiple cubes if the area is flat. But is a vertex buffer of roughly 67 MB the maximum I can work with, or can I create multiple?
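
    As far as I know the cap is per vertex buffer, not per frame, so the usual workaround is chunking: split the world into regions, give each region its own vertex/index buffer safely below the limit, and issue one draw per visible chunk. A hedged XNA-style sketch:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework.Graphics;

        // Hedged sketch: draw many buffers, each safely below the HiDef cap.
        class ChunkedMesh
        {
            public List<VertexBuffer> VertexBuffers = new List<VertexBuffer>();
            public List<IndexBuffer> IndexBuffers = new List<IndexBuffer>();

            public void Draw(GraphicsDevice device)
            {
                for (int i = 0; i < VertexBuffers.Count; i++)
                {
                    device.SetVertexBuffer(VertexBuffers[i]);
                    device.Indices = IndexBuffers[i];
                    device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
                        VertexBuffers[i].VertexCount, 0, IndexBuffers[i].IndexCount / 3);
                }
            }
        }

    Chunking also pairs naturally with the visible-face optimization mentioned above, since only the chunk that changed needs its buffers rebuilt.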

    Read the article

  • XNA Diffuse Shader Issue. Edge lighting problem. Image Attached

    - by adtither
    As you can see in this image, the diffuse shading is working correctly in some places, but in other places, such as the bottom of the sphere, you can see the squares/triangles of the mesh. Any idea what would be causing this? Let me know if you need any more information related to the code; I can upload my normals calculations and shader effect if required. EDIT: Here's a link to the shader I'm using: http://pastebin.com/gymVc7CP. Link to the normals calculations: http://pastebin.com/KnMGdzHP. It seems to be an issue with edge lighting. I can't seem to see where I'm going wrong with the normals calculations, though.

    Read the article

  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout. It's your typical Cartesian system. I would like to have a single class that handles two lookup styles:

    Get me the set of players at position X,Y
    Get me the position of the player with key K

    My current implementation is this:

        class CoordinateMap<V> {
            Map<Long,Set<V>> coords2value;
            Map<V,Long> value2coords;

            // Convert (int x, int y) to a long key - this is tested and works for all
            // values from -1 billion to +1 billion.
            // My map will NOT require more than 1 billion tiles from the origin :)
            private Long keyFor(int x, int y) {
                int kx = x + 1000000000;
                int ky = y + 1000000000;
                return (long)kx | (long)ky << 32;
            }

            // Extract the x and y from the key.
            private int[] coordsFor(long k) {
                int x = (int)(k & 0xFFFFFFFF) - 1000000000;
                int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000;
                return new int[] { x, y };
            }
        }

    From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me that if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!

    Read the article

  • Understanding how texCUBE works and writing cubemaps properly into a cube rendertarget

    - by cubrman
    My goal is to create accurate reflections, sampled from a dynamic cubemap, for specific 3D objects (mostly lights) in XNA 4.0. To sample the cubemap I compute the 3D reflection vector in the classic way:

        half3 ReflectionVec = reflect(-directionToCamera, Normal.rgb);

    I then use the vector to get the actual reflected color:

        half3 ReflectionCol = texCUBElod(ReflectionSampler, float4(ReflectionVec, 0));

    The cubemap I am sampling from is a RenderTarget with 6 flat faces. So my question is: given the 3D world position of an arbitrary 3D object, how can I make sure that I get accurate reflections of this object when I re-render the cubemap? Should I build the ViewProjection matrix in a specific way? Or is there any other approach?
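
    The usual recipe, sketched from memory (the exact face orientations are not guaranteed to match XNA's cube-face conventions; flip an up vector if a face comes out mirrored): render the scene six times from the reflecting object's world position, once per face, each with a 90-degree square-aspect projection and a LookAt down that face's axis.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Hedged sketch: per-face view matrices for a cubemap rendered at 'center'.
        Matrix GetCubeFaceView(Vector3 center, CubeMapFace face)
        {
            switch (face)
            {
                case CubeMapFace.PositiveX: return Matrix.CreateLookAt(center, center + Vector3.Right,    Vector3.Up);
                case CubeMapFace.NegativeX: return Matrix.CreateLookAt(center, center + Vector3.Left,     Vector3.Up);
                case CubeMapFace.PositiveY: return Matrix.CreateLookAt(center, center + Vector3.Up,       Vector3.Backward);
                case CubeMapFace.NegativeY: return Matrix.CreateLookAt(center, center + Vector3.Down,     Vector3.Forward);
                case CubeMapFace.PositiveZ: return Matrix.CreateLookAt(center, center + Vector3.Backward, Vector3.Up);
                default:                    return Matrix.CreateLookAt(center, center + Vector3.Forward,  Vector3.Up);
            }
        }

        // 90-degree FOV with aspect 1 so the six frusta exactly tile all directions.
        Matrix cubeProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 0.1f, 1000f);

        // Per face: device.SetRenderTarget(renderTargetCube, face); draw the scene with
        // GetCubeFaceView(center, face) and cubeProjection; then SetRenderTarget(null).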

    Read the article

  • blender: 3D model from guide images

    - by Stefan
    In an effort to learn the Blender interface, which is confusing to say the least, I've chosen to model from reference pictures easily found on the web. The problem is that I can't (and won't) get perfect "right", "front" and "top" pictures. Blender only allows you to see the background pictures when in orthographic mode, and only from right|front|top, which doesn't help me. How do I proceed to model from non-perfect guide images?

    Read the article

  • How do I adjust the origin of rotation for a group of sprites?

    - by Jon
    I am currently grouping sprites together, then applying a rotation transformation on draw:

        private void UpdateMatrix(ref Vector2 origin, float radians)
        {
            Vector3 matrixorigin = new Vector3(origin, 0);
            _rotationMatrix = Matrix.CreateTranslation(-matrixorigin) *
                              Matrix.CreateRotationZ(radians) *
                              Matrix.CreateTranslation(matrixorigin);
        }

    where the origin is the centermost point of my group of sprites. I apply this transformation to each sprite in the group. My problem is that when I adjust the point of origin, my entire sprite group re-positions itself on screen. How could I differentiate the point of rotation used in the transformation from the position of the sprite group? Is there a better way of creating this transformation matrix?

    EDIT: Here is the relevant part of the Draw() function:

        Matrix allTransforms = _rotationMatrix * camera.GetTransformation();
        spriteBatch.Begin(SpriteSortMode.BackToFront, null, null, null, null, null, allTransforms);
        for (int i = 0; i < _map.AllParts.Count; i++)
        {
            for (int j = 0; j < _map.AllParts[0].Count; j++)
            {
                spriteBatch.Draw(_map.AllParts[i][j].Texture, _map.AllParts[i][j].Position, null,
                                 Color.White, 0, _map.AllParts[i][j].Origin, 1.0f, SpriteEffects.None, 0f);
            }
        }

    This all works fine. Again, the problem is that when a rotation is set and the point of origin is changed, the sprite group's position is offset on screen. I am trying to figure out a way to adjust the point of origin without causing a shift in position.

    EDIT 2: At this point, I'm looking for workarounds, as this is not working. Does anyone know of a better way to rotate a group of sprites in XNA? I need a method that will allow me to modify the point of rotation (origin) without affecting the position of the sprite group on screen.
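
    One workaround, as a hedged sketch (my own construction, not a standard XNA API): two rotations about different pivots at the same angle differ only by a pure translation, so when the pivot changes you can fold that translation into a stored world offset and nothing moves on screen.

        using Microsoft.Xna.Framework;

        // Hedged sketch: change the rotation pivot without the group jumping on screen.
        class SpriteGroupTransform
        {
            Vector2 _pivot;
            Vector2 _worldOffset = Vector2.Zero;
            float _radians;

            public Matrix GetMatrix()
            {
                return Matrix.CreateTranslation(new Vector3(-_pivot, 0)) *
                       Matrix.CreateRotationZ(_radians) *
                       Matrix.CreateTranslation(new Vector3(_pivot, 0)) *
                       Matrix.CreateTranslation(new Vector3(_worldOffset, 0));
            }

            public void SetRotation(float radians) { _radians = radians; }

            public void SetPivot(Vector2 newPivot)
            {
                // Compare where the old and new transforms send a reference point, then
                // compensate; since the difference is a pure translation, fixing one
                // point fixes every sprite in the group at the current angle.
                Vector2 before = Vector2.Transform(newPivot, GetMatrix());
                _pivot = newPivot;
                Vector2 after = Vector2.Transform(newPivot, GetMatrix());
                _worldOffset += before - after;
            }
        }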

    Read the article
