Search Results

Search found 29201 results on 1169 pages for 'game development'.

  • Texturing a mesh generated from voxel data

    - by Minja
    I have implemented the Marching Cubes algorithm to display an isosurface based on voxel data. Currently, it is displayed with triplanar texturing. I'm working with Unity, so I have a material with the triplanar shader attached. Now, the whole isosurface is rendered using this material. And that's my problem: I want the texture to represent the voxel data. I'm storing a material value for every point in the grid, and based on this value, I want the texture of the isosurface to change. Sadly, I have no clue how to do this. So if the voxel is sand, I want sand to be displayed; if it's stone, then there should be stone. Right now, everything is displayed as sand. Thanks in advance!
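
    One common way to approach this (a sketch, not tied to any particular Marching Cubes implementation) is to bake the per-voxel material value into the generated mesh, for example as vertex colors, and let the triplanar shader blend between the sand and stone textures based on that value. In the Unity C# sketch below, materialGrid, voxelSize and the shader's use of vertex color are all assumptions:

        using UnityEngine;

        // Hypothetical helper: after the Marching Cubes mesh is built, tag every vertex
        // with the material value of the nearest voxel by writing it into the vertex colors.
        // A triplanar shader can then read the red channel and lerp between its textures.
        public static class VoxelMaterialPainter
        {
            // materialGrid is an assumed float[,,] with one value per grid point
            // (e.g. 0 = sand, 1 = stone); voxelSize maps mesh space to grid indices.
            public static void ApplyMaterials(Mesh mesh, float[,,] materialGrid, float voxelSize)
            {
                Vector3[] vertices = mesh.vertices;
                Color[] colors = new Color[vertices.Length];

                for (int i = 0; i < vertices.Length; i++)
                {
                    int x = Mathf.Clamp(Mathf.RoundToInt(vertices[i].x / voxelSize), 0, materialGrid.GetLength(0) - 1);
                    int y = Mathf.Clamp(Mathf.RoundToInt(vertices[i].y / voxelSize), 0, materialGrid.GetLength(1) - 1);
                    int z = Mathf.Clamp(Mathf.RoundToInt(vertices[i].z / voxelSize), 0, materialGrid.GetLength(2) - 1);

                    // Store the material value in the red channel for the shader to blend on.
                    colors[i] = new Color(materialGrid[x, y, z], 0f, 0f, 1f);
                }

                mesh.colors = colors;
            }
        }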

  • Applying effects to an existing program that uses BasicEffect

    - by Fibericon
    Using the finished product from the tutorial here. Is it possible to apply the grayscale effect from here: Making entire scene fade to grayscale. Or would you basically have to rewrite everything?

    EDIT: It's doing something now, but the whole grayscale seems extremely blue. It's like I'm looking at it through dark blue sunglasses. Here's my draw function:

        protected override void Draw(GameTime gameTime)
        {
            device.SetRenderTarget(renderTarget);
            graphics.GraphicsDevice.Clear(Color.CornflowerBlue);
            //Drawing models, bullets, etc.
            device.SetRenderTarget(null);

            spriteBatch.Begin(0, BlendState.Additive, SamplerState.PointWrap, DepthStencilState.Default, RasterizerState.CullNone, grayScale);
            Texture2D temp = (Texture2D)renderTarget;
            grayScale.Parameters["coloredTexture"].SetValue(temp);
            grayScale.CurrentTechnique = grayScale.Techniques["Grayscale"];
            foreach (EffectPass pass in grayScale.CurrentTechnique.Passes)
            {
                pass.Apply();
            }
            spriteBatch.Draw(temp, new Vector2(GraphicsDevice.PresentationParameters.BackBufferWidth/2, GraphicsDevice.PresentationParameters.BackBufferHeight/2), null, Color.White, 0f, new Vector2(renderTarget.Width/2, renderTarget.Height/2), 1.0f, SpriteEffects.None, 0f);
            spriteBatch.End();

            base.Draw(gameTime);
        }

    Another edit: figured out what I was doing wrong. I have BlendState.Additive in the spriteBatch.Draw() call. It should be BlendState.Opaque, or it literally tries to add the blank blue image to the grayscale image.
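
    For reference, a minimal sketch of the corrected post-process draw (XNA 4.0), based on the fix described above - Opaque blending so the grayscale result replaces the back buffer instead of being added to it, with the effect handed to SpriteBatch.Begin; device, spriteBatch, renderTarget and grayScale are the same fields as in the code above:

        // Assumes the fields above: device, spriteBatch, renderTarget (RenderTarget2D), grayScale (Effect).
        device.SetRenderTarget(null);
        grayScale.Parameters["coloredTexture"].SetValue(renderTarget);
        grayScale.CurrentTechnique = grayScale.Techniques["Grayscale"];

        // Opaque, not Additive: the grayscale image should replace what is on screen.
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                          SamplerState.PointClamp, DepthStencilState.Default,
                          RasterizerState.CullNone, grayScale);
        spriteBatch.Draw(renderTarget, Vector2.Zero, Color.White);
        spriteBatch.End();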

  • List of bounding boxes?

    - by Christian Frantz
    When I create a bounding box for each object in my chunk, would it be better to store them in a list?

        List<BoundingBox> cubeBoundingBox

    Or can I just use a single variable?

        BoundingBox cubeBoundingBox

    The bounding boxes will be used for all types of things so they will be moving around. In any case, I'd be adding it to a method that gets called 2500+ times for each chunk, so either I have a giant list of them or 2500+ individual boxes. Is there any advantage to using one or the other?
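
    As a sketch of the list option (Cube and its Position property are placeholder names), build the collection once per chunk and only update entries when a cube actually moves, rather than constructing 2500+ temporary boxes on every call:

        // One persistent box per cube in the chunk; reuse the list instead of
        // re-creating boxes inside the per-cube method every frame.
        List<BoundingBox> cubeBoundingBoxes = new List<BoundingBox>(cubes.Count);

        foreach (Cube cube in cubes)
        {
            Vector3 min = cube.Position;                            // assumed cube origin
            Vector3 max = cube.Position + new Vector3(1f, 1f, 1f);  // assumed unit-sized cube
            cubeBoundingBoxes.Add(new BoundingBox(min, max));
        }

    A single BoundingBox variable only really works if each box is built, tested and thrown away immediately; once the boxes persist and move with their cubes, a list (or array) indexed the same way as the cubes is the natural fit.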

  • adapting a Unity gravitational script to allow moons

    - by PartyMix
    I'm using this script: http://wiki.unity3d.com/index.php/Simple_planetary_orbits to get a solar system going in Unity, but it doesn't seem to support creating bodies that orbit other moving bodies (or I am using it incorrectly). Any idea about how to modify it so that it does (or just use it correctly)? I've been beating my head against this problem for a couple hours, and I really don't feel like I have any idea what I'm doing. Thanks in advance.
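
    One way to get moons without digging into the wiki script's internals is to compute every orbit relative to the orbited body's current position each frame, so a moon follows a planet that is itself moving. A minimal Unity C# sketch (not the wiki script; all names are placeholders):

        using UnityEngine;

        // Each body orbits its parentBody: planets get the sun as parent,
        // moons get their planet. The orbit centre moves with the parent.
        public class SimpleOrbit : MonoBehaviour
        {
            public Transform parentBody;    // body being orbited
            public float orbitRadius = 5f;
            public float orbitSpeed = 30f;  // degrees per second
            private float angle;

            void Update()
            {
                angle += orbitSpeed * Time.deltaTime;
                Vector3 offset = new Vector3(Mathf.Cos(angle * Mathf.Deg2Rad), 0f,
                                             Mathf.Sin(angle * Mathf.Deg2Rad)) * orbitRadius;
                transform.position = parentBody.position + offset;
            }
        }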

  • How to bind std::map to Lua with LuaBind

    - by MahanGM
    Is this possible to achieve in Lua?

        player.scripts["movement"].properties["stat"] = "stand"
        print (player.scripts["movement"].properties["stat"])

    I've done the getter method in C++ with this approach:

        luabind::object FakeScript::getProp()
        {
            luabind::object obj = luabind::newtable(L);

            for(auto i = this->properties.begin(); i != this->properties.end(); i++)
            {
                obj[i->first] = i->second;
            }

            return obj;
        }

    But I'm stuck with the setter. The first line of the Lua code, where I'm trying to set the value "stand" for the key "stat", is not going to work; it keeps redirecting me to the getter method. The setter method only works when I drop ["stat"] from properties. I can do something like this for the setter in my script:

        player.scripts["movement"].properties = {stat = "stand"}

    But this isn't what I want, because I have to go through my real keys in C++ to determine which key is placed in the setter argument's table value. This is my map in the class:

        std::map<std::string, std::string> properties;

  • Xna Equivalent of Viewport.Unproject in a draw call as a matrix transformation

    - by Nick Crowther
    I am making a 2D sidescroller and I would like to draw my sprite in world space instead of client space, so I do not have to lock it to the center of the screen; when the camera stops, the sprite should walk off screen instead of being stuck at the center. In order to do this I wanted to make a transformation matrix that goes in my draw call. I have seen something like this: http://stackoverflow.com/questions/3570192/xna-viewport-projection-and-spritebatch

    I have seen Matrix.CreateOrthographic() used to go from world space to client space, but how would I go about using it to go from client space to world space? I was going to try putting the returns from the Viewport.Unproject method I have into a scale matrix such as:

        blah = Matrix.CreateScale(unproject.X,unproject.Y,0);

    however, that doesn't seem to work correctly. Here is what I'm calling in my draw method (where X is the coordinate my camera should follow):

        Vector3 test = screentoworld(X, graphics);
        var clienttoworld = Matrix.CreateScale(test.X,test.Y, 0);
        animationPlayer.Draw(theSpriteBatch, new Vector2(X.X,X.Y),false,false,0,Color.White,new Vector2(1,1),clienttoworld);

    Here is the code in my unproject method:

        Vector3 screentoworld(Vector2 some, GraphicsDevice graphics):

        Vector2 Position =(some.X,some.Y);
        var project = Matrix.CreateOrthographic(5*graphicsdevice.Viewport.Width, graphicsdevice.Viewport.Height, 0, 1);
        var viewMatrix = Matrix.CreateLookAt(
            new Vector3(0, 0, -4.3f),
            new Vector3(X.X,X.Y,0), Vector3.Up);
        //I have also tried substituting (cam.Position.X,cam.Position.Y,0) in for the (0,0,-4.3f)

        Vector3 nearSource = new Vector3(Position, 0f);
        Vector3 nearPoint = graphicsdevice.Viewport.Unproject(nearSource, project, viewMatrix, Matrix.Identity);
        return nearPoint;
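
    A common alternative to unprojecting (a sketch only, with cameraPosition, playerTexture and playerWorldPosition as assumed names) is to build a 2D view matrix from the camera's world position and pass it to SpriteBatch.Begin, then draw sprites at their world coordinates; they scroll off screen naturally once the camera stops following them:

        // Translate the world so the camera position lands at the centre of the viewport.
        Matrix viewMatrix =
            Matrix.CreateTranslation(new Vector3(-cameraPosition, 0f)) *
            Matrix.CreateTranslation(new Vector3(GraphicsDevice.Viewport.Width * 0.5f,
                                                 GraphicsDevice.Viewport.Height * 0.5f, 0f));

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          null, null, null, null, viewMatrix);
        spriteBatch.Draw(playerTexture, playerWorldPosition, Color.White); // world coordinates
        spriteBatch.End();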

  • Randomly placing items script not working - sometimes items spawn in walls, sometimes items spawn in weird locations

    - by Timothy Williams
    I'm trying to figure out a way to randomly spawn items throughout my level, however I need to make sure they won't spawn inside another object (walls, etc.) Here's the code I'm currently using, it's based on the Physics.CheckSphere(); function. This runs OnLevelWasLoaded(); It spawns the items perfectly fine, but sometimes items spawn partway in walls. And sometimes items will spawn outside of the SpawnBox range (no clue why it does that.) //This is what randomly generates all the items. void SpawnItems () { if (Application.loadedLevelName == "Menu" || Application.loadedLevelName == "End Demo") return; //The bottom corner of the box we want to spawn items in. Vector3 spawnBoxBot = Vector3.zero; //Top corner. Vector3 spawnBoxTop = Vector3.zero; //If we're in the dungeon, set the box to the dungeon box and tell the items we want to spawn. if (Application.loadedLevelName == "dungeonScene") { spawnBoxBot = new Vector3 (8.857f, 0, 9.06f); spawnBoxTop = new Vector3 (-27.98f, 2.4f, -15); itemSpawn = dungeonSpawn; } //Spawn all the items. for (i = 0; i != itemSpawn.Length; i ++) { spawnedItem = null; //Zeroes out our random location Vector3 randomLocation = Vector3.zero; //Gets the meshfilter of the item we'll be spawning MeshFilter mf = itemSpawn[i].GetComponent<MeshFilter>(); //Gets it's bounds (see how big it is) Bounds bounds = mf.sharedMesh.bounds; //Get it's radius float maxRadius = new Vector3 (bounds.extents.x + 10f, bounds.extents.y + 10f, bounds.extents.z + 10f).magnitude * 5f; //Set which layer is the no walls layer var NoWallsLayer = 1 << LayerMask.NameToLayer("NoWallsLayer"); //Use that layer as your layermask. LayerMask layerMask = ~(1 << NoWallsLayer); //If we're in the dungeon, certain items need to spawn on certain halves. if (Application.loadedLevelName == "dungeonScene") { if (itemSpawn[i].name == "key2" || itemSpawn[i].name == "teddyBearLW" || itemSpawn[i].name == "teddyBearLW_Admiration" || itemSpawn[i].name == "radio") randomLocation = new Vector3(Random.Range(spawnBoxBot.x, -26.96f), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, -2.141f)); else randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(-2.374f, spawnBoxTop.z)); } //Otherwise just spawn them in the box. else randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, spawnBoxTop.z)); //This is what actually spawns the item. It checks to see if the spot where we want to instantiate it is clear, and if so it instatiates it. Otherwise we have to repeat the whole process again. if (Physics.CheckSphere(randomLocation, maxRadius, layerMask)) spawnedItem = Instantiate(itemSpawn[i], randomLocation, Random.rotation); else i --; //If we spawned something, set it's name to what it's supposed to be. Removes the (clone) addon. if (spawnedItem != null) spawnedItem.name = itemSpawn[i].name; } } What I'm asking for is if you know what's going wrong with this code that it would spawn stuff in walls. Or, if you could provide me with links/code/ideas of a better way to check if an item will spawn in a wall (some other function than Physics.CheckSphere). I've been working on this for a long time, and nothing I try seems to work. Any help is appreciated.
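
    Independent of the script above, one way to structure the overlap test is a bounded retry loop that only accepts a candidate position when Physics.CheckSphere reports no colliders inside the sphere (CheckSphere returns true when something overlaps). A Unity C# sketch; the parameter names are stand-ins for the fields above:

        using UnityEngine;

        public static class SpawnHelper
        {
            // Returns a clear position inside the box, or null if none was found in maxTries attempts.
            public static Vector3? FindClearSpot(Vector3 spawnBoxBot, Vector3 spawnBoxTop,
                                                 float radius, LayerMask wallMask, int maxTries = 50)
            {
                for (int attempt = 0; attempt < maxTries; attempt++)
                {
                    Vector3 candidate = new Vector3(
                        Random.Range(spawnBoxBot.x, spawnBoxTop.x),
                        Random.Range(spawnBoxBot.y, spawnBoxTop.y),
                        Random.Range(spawnBoxBot.z, spawnBoxTop.z));

                    // CheckSphere is true when a collider overlaps the sphere, so a clear
                    // spot is one where it returns false for the wall layers.
                    if (!Physics.CheckSphere(candidate, radius, wallMask))
                        return candidate;
                }

                return null; // give up instead of retrying forever
            }
        }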

  • SpriteBatch.end() generating null pointer exception

    - by odaymichael
    I am getting a null pointer exception using libGDX that the debugger points to as the SpriteBatch.end() line. I was wondering what would cause this. Here is the offending code block, specifically the batch.end() line:

        batch.begin();
        for (int j = 0; j < 3; j++)
            for (int i = 0; i < 3; i++)
                if (zoomgrid[i][j].getPiece().getImage() != null)
                    zoomgrid[i][j].getPiece().getImage().draw(batch);
        batch.end();

    The top of the stack is actually a line that calls lastTexture.bind(); in the flush() method of com.badlogic.gdx.graphics.g2d.SpriteBatch. I appreciate any input; let me know if I haven't included enough information.

  • Rotate around the centre of the screen

    - by Dan Scott
    I want my camera to rotate around the centre of the screen and I'm not sure how to achieve that. I have a rotation in the camera but I'm not sure what it's rotating around. (I think it might be rotating around the position.X of the camera, not sure.) If you look at these two images:

    http://imgur.com/E9qoAM7,5qzyhGD#0
    http://imgur.com/E9qoAM7,5qzyhGD#1

    The first one shows how the camera is normally, and the second shows how I want the level to look when I rotate the camera 90 degrees left or right. My camera:

        public class Camera
        {
            private Matrix transform;
            public Matrix Transform { get { return transform; } }

            private Vector2 position;
            public Vector2 Position { get { return position; } set { position = value; } }

            private float rotation;
            public float Rotation { get { return rotation; } set { rotation = value; } }

            private Viewport viewPort;

            public Camera(Viewport newView)
            {
                viewPort = newView;
            }

            public void Update(Player player)
            {
                position.X = player.PlayerPos.X + (player.PlayerRect.Width / 2) - viewPort.Width / 4;
                if (position.X < 0)
                    position.X = 0;

                transform = Matrix.CreateTranslation(new Vector3(-position, 0)) * Matrix.CreateRotationZ(Rotation);

                if (Keyboard.GetState().IsKeyDown(Keys.D))
                {
                    rotation += 0.01f;
                }
                if (Keyboard.GetState().IsKeyDown(Keys.A))
                {
                    rotation -= 0.01f;
                }
            }
        }

    (I'm assuming you would need to rotate around the centre of the screen to achieve this.)
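
    For what it's worth, a sketch of an Update transform that spins the view around the middle of the viewport instead of the world origin, using the same fields as the class above: translate the camera position to the origin, rotate, then translate out to the screen centre.

        // Inside Camera.Update, in place of the current transform line:
        transform = Matrix.CreateTranslation(new Vector3(-position, 0)) *
                    Matrix.CreateRotationZ(rotation) *
                    Matrix.CreateTranslation(new Vector3(viewPort.Width * 0.5f,
                                                         viewPort.Height * 0.5f, 0));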

  • Missing z-axis rotation for transforming between two vectors

    - by Steve Baughman
    I'm trying to rotate a cube so that it's facing up, but I am getting hung up on the final implementation details. It now reliably rotates the x and y axes to the correct side, but the z-axis is never rotating (see photos of before and after rotation). When I'm using the code below, I always get '0' for my rotationVector.z. What am I missing here?

        // Define lookAt vector
        lookAtVector = GLKVector3Make(0,0,1);

        // Define axes vectors
        axes[0] = GLKVector3Make(0,0,1);
        axes[1] = GLKVector3Make(-1,0,0);
        axes[2] = GLKVector3Make(0,1,0);
        axes[3] = GLKVector3Make(1,0,0);
        axes[4] = GLKVector3Make(0,-1,0);
        axes[5] = GLKVector3Make(0,0,-1);

        CGFloat highest_dot = -1.0;
        GLKVector3 closest_axis;

        for(int i = 0; i < 6; i++)
        {
            // multiply cube's axes by existing matrix
            GLKVector3 axis = GLKMatrix4MultiplyVector3(matrix, axes[i]);
            CGFloat dot = GLKVector3DotProduct(axis, lookAtVector);
            if(dot > highest_dot)
            {
                closest_axis = axis;
                highest_dot = dot;
            }
        }

        GLKVector3 rotationVector = GLKVector3CrossProduct(closest_axis, lookAtVector);

        // Get angle between vectors
        CGFloat angle = atan2(GLKVector3Length(rotationVector), GLKVector3DotProduct(closest_axis, lookAtVector));

        // normalize the rotation vector
        rotationVector = GLKVector3Normalize(rotationVector);

        // Create transform
        CATransform3D rotationTransform = CATransform3DMakeRotation(angle, rotationVector.x, rotationVector.y, rotationVector.z);

        // add rotation transform to existing transformation
        baseTransform = CATransform3DConcat(baseTransform, rotationTransform);

        return baseTransform;

    Before 3D rotation / after 3D rotation (screenshots). Implementation based on this post.

  • Early Z culling - Ogre

    - by teodron
    This question is concerned with how one can enable this "pixel filter" to work within an Ogre-based app. Simply put, one can write two passes, the first without writing any colour values to the frame buffer:

        lighting off
        colour_write off
        shading flat

    The second pass is the one that employs heavy pixel shader computations, hence it would be really nice to get rid of those hidden surface patches and not process them pixel-wise. This approach works, except for one thing: objects with alpha, such as billboard trees, suffer in a peculiar way - from one side, they seem to capture the sky/background within their alpha region and ignore other trees/houses behind them, while viewed from the other side they exhibit the desired behavior.

    To tackle the issue, I thought I could write a custom vertex shader in the first pass and offset the projected Z component of the vertex a little further away from its actual position, so that in the second pass the pixels of the objects closest to the camera would be recomputed correctly. This doesn't work at all: all surfaces are processed in the pixel shader and there is no performance gain. So, if anyone has done a similar trick with Ogre and alpha objects, kindly please help.

  • AI control for a ship with physics model

    - by Petteri Hietavirta
    I am looking for ideas on how to implement the following in 2D space. Unfortunately I don't know much about AI/pathfinding/autonomous control yet. Let's say this ship can move freely, but it has mass and momentum. Also, external forces might affect it (explosions etc.). The player can set a target for the ship at any time and it should reach that spot and stop. Without physics this would be simple: just point in the direction and go. But how to deal with existing momentum and then stopping on the spot? I don't want to modify the ship's placement directly. Edit: just to make clear, the physics-related math of the ship itself is not the problem.
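
    For illustration, a sketch (plain C#, not tied to any engine) of a basic "arrive" steering step: each update it works out the velocity it would like to have - slower inside a braking radius - and applies a clamped corrective force, so existing momentum and outside pushes like explosions get cancelled out over the following frames:

        using System;
        using System.Numerics;

        public static class Steering
        {
            // position/velocity are the ship's current state; target is where the player clicked.
            // maxSpeed, maxForce and slowRadius are tuning values (assumed, not from the question).
            public static Vector2 ArriveForce(Vector2 position, Vector2 velocity, Vector2 target,
                                              float maxSpeed, float maxForce, float slowRadius)
            {
                Vector2 toTarget = target - position;
                float distance = toTarget.Length();

                // Desired velocity: full speed far away, proportionally slower inside slowRadius,
                // zero when standing on the target.
                float desiredSpeed = distance > slowRadius ? maxSpeed : maxSpeed * distance / slowRadius;
                Vector2 desiredVelocity = distance > 1e-4f ? toTarget * (desiredSpeed / distance) : Vector2.Zero;

                // Steering force is the difference between what we want and what we have,
                // clamped so the ship's engines stay within their limits.
                Vector2 steering = desiredVelocity - velocity;
                if (steering.Length() > maxForce)
                    steering = Vector2.Normalize(steering) * maxForce;

                return steering; // apply this as a force/impulse in the physics step
            }
        }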

  • How can I implement 2D cel shading in XNA?

    - by Artii
    I was just wondering how to give a scene I am rendering a hand-drawn look (like, say, Crayon Physics). I don't really want to preprocess the sprites and was thinking of using a shader. Cel shading supplies the effect I want to achieve, but I am only aware of 3D instances of it. So I wanted to ask if anyone knows a way to get this effect in 2D, or if cel shading would work just as well on 2D scenes?

  • LWJGL glRotatef() without rotating axes?

    - by Brandon oubiub
    Okay so, I noticed when you rotate around an axis, say you do this:

        glRotatef(90.0f, 1.0f, 0.0f, 0.0f);

    That will rotate things 90 degrees around the x-axis. However, it also sort of rotates the y and z axes as well. So now the y-axis is pointing in and out of the screen, instead of up and down. So when I try to do stuff like this:

        glRotatef(90.0f, 1.0f, 0.0f, 0.0f);
        glRotatef(whatever, 0.0f, 1.0f, 0.0f);
        glRotatef(whatever2, 0.0f, 0.0f, 1.0f);

    The rotations around the y and z axes end up not how I want them. I was wondering if there is any way I can sort of rotate just the axes back to their initial position after using glRotatef(), without rotating the object back. Or something like that, just so that when I rotate around the y-axis, it rotates around a vertical axis.

  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question, I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills; which should explain the "missing cubes" in the image below. If I render my density map in normal "Minecraft mode" (1 block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this:

    I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image. Upon further investigation I can't see how marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images on my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what I think it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values.

    I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable, containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that given a very low isovalue the accuracy level of the surface being rendered should increase - in short, "fewer 45-degree slopes, more rolling hills" type mesh output. However, this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of a fix?

    EDIT: A little more detail on what I am seeing when I "marching cube" the data. First, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too... square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet from the image there are sharp falling edges even in the most central areas, where the gradient in the first image looks the most smooth. The data is "regenerated" each time I run this, so no 2 islands come out the same, and it's purely random so not based on noise - but still, how can it look so smooth in one image and so not smooth in the other?
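
    For reference, the "move vertex along the edge" step being described usually looks like the sketch below (plain C#, not taken from the implementation in question): the vertex is slid toward whichever corner the isosurface actually crosses, which is also why boolean 0/1 data degenerates to edge midpoints and looks blocky.

        using System;
        using System.Numerics;

        public static class MarchingCubesHelper
        {
            // cornerA/cornerB are the two ends of a cube edge, densityA/densityB the field
            // values sampled there, and isoValue the surface threshold.
            public static Vector3 InterpolateEdge(Vector3 cornerA, Vector3 cornerB,
                                                  float densityA, float densityB, float isoValue)
            {
                float range = densityB - densityA;
                if (Math.Abs(range) < 1e-6f)
                    return (cornerA + cornerB) * 0.5f;      // flat edge: fall back to the midpoint

                float t = (isoValue - densityA) / range;    // 0 at cornerA, 1 at cornerB
                t = Math.Min(1f, Math.Max(0f, t));
                return cornerA + (cornerB - cornerA) * t;   // vertex placed where the surface crosses
            }
        }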

  • Is it important for reflection-based serialization maintain consistent field ordering?

    - by Matchlighter
    I just finished writing a packet builder that dynamically loads data into a data stream for eventual network transmission. Each builder operates by finding fields in a given class (and its superclasses) that are marked with a @data annotation. When I was finishing my implementation, I remembered that getFields() does not return results in any specific order. Should reflection-based methods for serializing arbitrary data (like my packets) attempt to preserve a specific field ordering (such as alphabetical), and if so, how?
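
    As an illustration of the "how" (a C# sketch of the same idea, since the question's getFields() is Java; the [Data] attribute is a stand-in for the @data annotation): reflection gives no ordering guarantee, so sort the discovered fields deterministically before serializing.

        using System;
        using System.Linq;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Field)]
        public sealed class DataAttribute : Attribute { }   // stand-in for the @data annotation

        public static class PacketFields
        {
            // Returns the annotated instance fields of a type in a stable, alphabetical
            // order so every build and runtime serializes them identically.
            public static FieldInfo[] GetSerializableFields(Type type)
            {
                return type.GetFields(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance)
                           .Where(f => Attribute.IsDefined(f, typeof(DataAttribute)))
                           .OrderBy(f => f.Name, StringComparer.Ordinal)
                           .ToArray();
            }
        }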

  • How can I import models from Blender into jMonkeyEngine?

    - by Nathan Sabruka
    I have some blender model files (Blender version 2.6) which I would like to use with the jMonkeyEngine SDK. However, when I use Blender's native .obj exporter, I can't import it in jMonkeyEngine (the model simply fails to import or looks messed up). I've tried importing .obj files or .blend files directly into the jMonkeyEngine SDK to no avail. I've also tried to use various OGRE exporters to export .scene and .material files, but only the .scene file is created. Is there a simple way to simply export files from Blender into the jMonkeyEngine SDK? EDIT: I seem to have found something in Blender. When I go under addons, there's a warning in the OGRE exporter; "'.mesh' output requires OgreCommandLineTools". However, I have already installed those tools under the C drive. Has anyone else encountered this issue?

  • Image with FadeIn effect blinks when added to scene

    - by Ef Es
    I am trying to add an image to the scene, but it should be added to the scene invisible, fade in, and then be deleted when the effect finishes. My problem is that the images blink once when they are added to the scene, and then they do the intended effect. My best guess is that when they are added they show on the scene for a split second before starting the animation. I thought of making them invisible for a split second before activating them, but I am not sure how to code it.

        const bool Sunbeams::add()
        {
            const CCSize kSceenSize = CCDirector::sharedDirector()->getWinSize();

            const int nRayType = random( m_kRays.size());
            const CCPoint kPosition( random( static_cast < int >( kSceenSize.width)), 0.0f);
            const float fDuration = random( m_fDurationVariance) + m_fDurationMin;

            CCSprite* pkLightBeam = CCSprite::spriteWithTexture( m_kRays[nRayType]);
            if ( !pkLightBeam)
            {
                msg::debug( "Sunbeams::add", "Failed to create sprite from ray '%d'!\n", m_kRays[nRayType]);
                return false;
            }

            pkLightBeam->setAnchorPoint( CCPointZero);
            pkLightBeam->setPosition( kPosition);

            m_kActiveBeams.push_back( pkLightBeam);
            CCDirector::sharedDirector()->getRunningScene()->addChild( pkLightBeam);

            CCActionInterval* pkAction = CCFadeIn::actionWithDuration( fDuration);
            CCActionInterval* pkActionBack = pkAction->reverse();
            pkLightBeam->runAction( CCSequence::actions( pkAction, pkActionBack, 0));

            return true;
        }

  • HLSL Pixel Shader that does palette swap

    - by derrace
    I have implemented a simple pixel shader which can replace a particular colour in a sprite with another colour. It looks something like this:

        sampler input : register(s0);

        float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0
        {
            float4 colour = tex2D(input, coords);

            if(colour.r == sourceColours[0].r &&
               colour.g == sourceColours[0].g &&
               colour.b == sourceColours[0].b)
                return targetColours[0];

            return colour;
        }

    What I would like to do is have the function take in two textures: a default table and a lookup table (both the same dimensions). Grab the current pixel, find the location XY (coords) of the matching RGB in the default table, and then substitute it with the colour found in the lookup table at XY. I have figured out how to pass the textures from C# into the function, but I am not sure how to find the coords in the default table by matching the colour. Could someone kindly assist? Thanks in advance.

  • Quaternion difference + time --> angular velocity (gyroscope in physics library)

    - by AndrewK
    I am using Bullet Physic library to program some function, where I have difference between orientation from gyroscope given in quaternion and orientation of my object, and time between each frame in milisecond. All I want is set the orientation from my gyroscope to orientation of my object in 3D space. But all I can do is set angular velocity to my object. I have orientation difference and time, and from that I calculate vector of angular velocity [Wx,Wy,Wz] from that formula: W(t) = 2 * dq(t)/dt * conj(q(t)) My code is: btQuaternion diffQuater = gyroQuater - boxQuater; btQuaternion conjBoxQuater = gyroQuater.inverse(); btQuaternion velQuater = ((diffQuater * 2.0f) / d_time) * conjBoxQuater; And everything works well, till I get: 1 rotating around Y axis, angle about 60 degrees, then I have these values in 2 critical frames: x: -0.013220 y: -0.038050 z: -0.021979 w: -0.074250 - diffQuater x: 0.120094 y: 0.818967 z: 0.156797 w: -0.538782 - gyroQuater x: 0.133313 y: 0.857016 z: 0.178776 w: -0.464531 - boxQuater x: 0.207781 y: 0.290452 z: 0.245594 - diffQuater -> euler angles x: 3.153619 y: -66.947929 z: 175.936615 - gyroQuater -> euler angles x: 4.290697 y: -57.553043 z: 173.320053 - boxQuater -> euler angles x: 0.138128 y: 2.823307 z: 1.025552 w: 0.131360 - velQuater d_time: 0.058000 x: 0.211020 y: 1.595124 z: 0.303650 w: -1.143846 - diffQuater x: 0.089518 y: 0.771939 z: 0.144527 w: -0.612543 - gyroQuater x: -0.121502 y: -0.823185 z: -0.159123 w: 0.531303 - boxQuater x: nan y: nan z: nan - diffQuater -> euler angles x: 2.985240 y: -76.304405 z: -170.555054 - gyroQuater -> euler angles x: 3.269681 y: -65.977966 z: 175.639420 - boxQuater -> euler angles x: -0.730262 y: -2.882153 z: -1.294721 w: 63.325996 - velQuater d_time: 0.063000 2 rotating around X axis, angle about 120 degrees, then I have these values in 2 critical frames: x: -0.013045 y: -0.004186 z: -0.005667 w: -0.022482 - diffQuater x: -0.848030 y: -0.187985 z: 0.114400 w: 0.482099 - gyroQuater x: -0.834985 y: -0.183799 z: 0.120067 w: 0.504580 - boxQuater x: 0.036336 y: 0.002312 z: 0.020859 - diffQuater -> euler angles x: -113.129463 y: 0.731925 z: 25.415056 - gyroQuater -> euler angles x: -110.232368 y: 0.860897 z: 25.350458 - boxQuater -> euler angles x: -0.865820 y: -0.456086 z: 0.034084 w: 0.013184 - velQuater d_time: 0.055000 x: -1.721662 y: -0.387898 z: 0.229844 w: 0.910235 - diffQuater x: -0.874310 y: -0.200132 z: 0.115142 w: 0.426933 - gyroQuater x: 0.847352 y: 0.187766 z: -0.114703 w: -0.483302 - boxQuater x: -144.402298 y: 4.891629 z: 71.309158 - diffQuater -> euler angles x: -119.515343 y: 1.745076 z: 26.646086 - gyroQuater -> euler angles x: -112.974533 y: 0.738675 z: 25.411509 - boxQuater -> euler angles x: 2.086195 y: 0.676526 z: -0.424351 w: 70.104248 - velQuater d_time: 0.057000 2 rotating around Z axis, angle about 120 degrees, then I have these values in 2 critical frames: x: -0.000736 y: 0.002812 z: -0.004692 w: -0.008181 - diffQuater x: -0.003829 y: 0.012045 z: -0.868035 w: 0.496343 - gyroQuater x: -0.003093 y: 0.009232 z: -0.863343 w: 0.504524 - boxQuater x: -0.000822 y: -0.003032 z: 0.004162 - diffQuater -> euler angles x: -1.415189 y: 0.304210 z: -120.481873 - gyroQuater -> euler angles x: -1.091881 y: 0.227784 z: -119.399445 - boxQuater -> euler angles x: 0.159042 y: 0.169228 z: -0.754599 w: 0.003900 - velQuater d_time: 0.025000 x: -0.007598 y: 0.024074 z: -1.749412 w: 0.968588 - diffQuater x: -0.003769 y: 0.012030 z: -0.881377 w: 0.472245 - gyroQuater x: 0.003829 y: -0.012045 z: 0.868035 w: -0.496343 - 
boxQuater x: -5.645197 y: 1.148993 z: -146.507187 - diffQuater -> euler angles x: -1.418294 y: 0.270319 z: -123.638245 - gyroQuater -> euler angles x: -1.415183 y: 0.304208 z: -120.481873 - boxQuater -> euler angles x: 0.017498 y: -0.013332 z: 2.040073 w: 148.120056 - velQuater d_time: 0.027000 The problem is the most visible in diffQuater - euler angles vector. Can someone tell me why it is like that? and how to solve that problem? All suggestions are welcome.

  • Normal map applied as diffuse textures looks wrong

    - by KaiserJohaan
    Diffuse textures works fine, but I am having problem with normal maps, so I thought I'd tried to apply the normal maps as the diffuse map in my fragment shader so I could see everything is OK. I comment-out my normal map code and just set the diffuse map to the normal map and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader, notice I commented out normal map code so I could debug the normal map as a diffuse texture "#version 330 \n \ \n \ layout(std140) uniform; \n \ \n \ const int MAX_LIGHTS = 8; \n \ \n \ struct Light \n \ { \n \ vec4 mLightColor; \n \ vec4 mLightPosition; \n \ vec4 mLightDirection; \n \ \n \ int mLightType; \n \ float mLightIntensity; \n \ float mLightRadius; \n \ float mMaxDistance; \n \ }; \n \ \n \ uniform UnifLighting \n \ { \n \ vec4 mGamma; \n \ vec3 mViewDirection; \n \ int mNumLights; \n \ \n \ Light mLights[MAX_LIGHTS]; \n \ } Lighting; \n \ \n \ uniform UnifMaterial \n \ { \n \ vec4 mDiffuseColor; \n \ vec4 mAmbientColor; \n \ vec4 mSpecularColor; \n \ vec4 mEmissiveColor; \n \ \n \ bool mHasDiffuseTexture; \n \ bool mHasNormalTexture; \n \ bool mLightingEnabled; \n \ float mSpecularShininess; \n \ } Material; \n \ \n \ uniform sampler2D unifDiffuseTexture; \n \ uniform sampler2D unifNormalTexture; \n \ \n \ in vec3 frag_position; \n \ in vec3 frag_normal; \n \ in vec2 frag_texcoord; \n \ in vec3 frag_tangent; \n \ in vec3 frag_bitangent; \n \ \n \ out vec4 finalColor; " " \n \ \n \ void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \ { \n \ vec3 viewDirection = normalize(Lighting.mViewDirection); \n \ vec3 halfAngle = normalize(dirToLight + viewDirection); \n \ \n \ float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \ float exponent = angleNormalHalf / Material.mSpecularShininess; \n \ exponent = -(exponent * exponent); \n \ \n \ gaussianTerm = exp(exponent); \n \ } \n \ \n \ vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \ { \n \ if (light.mLightType == 1) // point light \n \ { \n \ vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \ float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \ \n \ float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \ attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \ \n \ vec3 dirToLight = normalize(positionDiff); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 2) // directional light \n \ { \n \ vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if 
(light.mLightType == 4) // ambient light \n \ return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \ else \n \ return vec4(0.0); \n \ } \n \ \n \ void main() \n \ { \n \ vec4 diffuseTexture = vec4(1.0); \n \ if (Material.mHasDiffuseTexture) \n \ diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \ \n \ vec3 normal = frag_normal; \n \ if (Material.mHasNormalTexture) \n \ { \n \ diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \ // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \ //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \ \n \ // normal = tangentToWorldSpace * normalTangentSpace; \n \ } \n \ \n \ if (Material.mLightingEnabled) \n \ { \n \ vec4 accumLighting = vec4(0.0); \n \ \n \ for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \ accumLighting += Material.mEmissiveColor * diffuseTexture + \n \ CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \ \n \ finalColor = pow(accumLighting, Lighting.mGamma); \n \ } \n \ else { \n \ finalColor = pow(diffuseTexture, Lighting.mGamma); \n \ } \n \ } \n"; Here is my wrapper around a texture OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight, TextureFormat textureFormat, TextureType textureType, Logger& logger) : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType) { glGenTextures(1, &mTexture); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, mTexture); CHECK_GL_ERROR(mLogger); GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB : textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED); glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]); CHECK_GL_ERROR(mLogger); glGenerateMipmap(GL_TEXTURE_2D); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, 0); CHECK_GL_ERROR(mLogger); } OpenGLTexture::~OpenGLTexture() { glDeleteBuffers(1, &mTexture); CHECK_GL_ERROR(mLogger); } And here is the sampler I create which is shared between Diffuse and normal textures // texture sampler setup glGenSamplers(1, &mTextureSampler); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT); CHECK_GL_ERROR(mLogger); glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE); CHECK_GL_ERROR(mLogger); glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler); CHECK_GL_ERROR(mLogger); glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler); CHECK_GL_ERROR(mLogger); SetAnisotropicFiltering(mCurrentAnisotropy); The diffuse textures looks like they should, but the normal looks so wierd. Why is this?

  • How to make unit selection circles merge?

    - by MaT
    I would like to know how to make this effect of merged circle selection. Here are images to illustrate. Basically I'm looking for this effect: how can the merge effect of the circles be achieved? I didn't find any explanation concerning this effect. I know that to project those textures I can develop a decal system, but I don't know how to create the merging effect. If possible, I'm looking for a purely shader-based solution.

  • running GL ES 2.0 code under Linux ( no Android no iOS )

    - by user827992
    I need to code OpenGL ES 2.0 bits and I would like to write and run the programs on my desktop for practical reasons. Now, I have already tried the official GLES SDK from ATI for my video card, but it doesn't even run the examples that come with the SDK itself. I'm not looking for performance here; even a software-based rendering pipeline would be enough. I just need full support for GLES 2.0 and GLSL to code and run GL stuff. Is there a reliable solution for this under Ubuntu Linux?

  • ray collision with rectangle and floating point accuracy

    - by phq
    I'm trying to solve a problem with a ray bouncing on a box. Actually it is a sphere but for simplicity the box dimensions are expanded by the sphere radius when doing the collision test making the sphere a single ray. It is done by projecting the ray onto all faces of the box and pick the one that is closest. However because I'm using floating point variables I fear that the projected point onto the surface might be interpreted as being below in the next iteration, also I will later allow the sphere to move which might make that scenario more likely. Also the bounce coefficient might be as low as zero, making the sphere continue along the surface. So my naive solution is to project not only forwards but backwards to catch those cases. That is where I got into problems shown in the figure: In the first iteration the first black arrow is calculated and we end up at a point on the surface of the box. In the second iteration the "back projection" hits the other surface making the second black arrow bounce on the wrong surface. If there are several boxes close to each other this has further consequences making the sphere fall through them all. So my main question is how to handle possible floating point accuracy when placing the sphere on the box surface so it does not fall through. In writing this question I got the idea to have a threshold to only accept back projections a certain amount much smaller than the box but larger than the possible accuracy limitation, this would only cause the "false" back projection when the sphere hit the box on an edge which would appear naturally. To clarify my original approach, the arrows shown in the image is not only the path the sphere travels but is also representing a single time step in the simulation. In reality the time step is much smaller about 0.05 of the box size. The path traveled is projected onto possible sides to avoid traveling past a thinner object at higher speeds. In normal situations the floating point accuracy is not an issue but there are two situations where I have the concern. When the new position at the end of the time step is located very close to the surface, very unlikely though. When using a bounce factor of 0, here it happens every time the sphere hit a box. To add some loss of accuracy, the motivation for my concern, is that the sphere and box are in different coordinate systems and thus the sphere location is transformed for every test. This last one is why I'm not willing to stand on luck that one floating point value lying on top of the box always will be interpreted the same. I did not know voronoi regions by name, but looking at it I'm not sure how it would be used in a projection scenario that I'm using here.

  • Restoring projection matrix

    - by brainydexter
    I am learning to use FBOs, and one of the things I need to do when rendering onto a user-defined FBO is set up the projection, modelview and viewport for it. Once I am done rendering to the FBO, I need to restore these matrices. I found:

        glPushAttrib(GL_VIEWPORT_BIT);
        glPopAttrib();

    to restore the viewport to its old state. Is there a way to restore the projection and modelview matrices to whatever they were earlier?

    Tech: C++/OpenGL. Thanks!
