Search Results

Search found 21563 results on 863 pages for 'game testing'.


  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything really, but the main idea is to perform "ping-ponging": take the output of the first pass and then send it in for the second pass. In my example I want to draw to the R channel and then draw to the G channel to produce a simple Venn diagram in the shader, but I need to detect overlap. I can currently detect one or the other, but not the overlap. There are red and green circles overlapping, and I want to put a dynamic texture map in the overlap region. I can currently put it in either one, but not both. Below is how it looks in the shader.

        Texture2D shaderTexture;
        SamplerState SampleType;

        //////////////
        // TYPEDEFS //
        //////////////
        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex0 : TEXCOORD0;
            float2 tex1 : TEXCOORD1;
            float4 color : COLOR;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Pixel Shader
        ////////////////////////////////////////////////////////////////////////////////
        float4 main(PixelInputType input) : SV_TARGET
        {
            float4 textureColor0;
            float4 textureColor1;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
            textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

            if (input.color[0] == 1.0f && input.color[1] == 1.0f) // Requires multi-pass
                textureColor0 = textureColor1;

            return textureColor0;
        }

    Here is the calling code (that needs to be modified):

        m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
        m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_d3dContext->IASetInputLayout(m_inputLayout.Get());
        m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());
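    A possible host-side wiring for the two passes (a hedged sketch, not drop-in code: rtvA/srvA are views onto an extra offscreen texture, and firstPassPS/secondPassPS are the two compiled pixel shaders; all of these names are assumptions, not from the code above):

        // Pass 1: draw the first circle into offscreen texture A.
        ID3D11ShaderResourceView* nullSRV = nullptr;
        m_d3dContext->PSSetShaderResources(0, 1, &nullSRV);   // make sure A is not still bound as input
        m_d3dContext->OMSetRenderTargets(1, &rtvA, nullptr);
        m_d3dContext->PSSetShader(firstPassPS, nullptr, 0);
        m_d3dContext->DrawIndexed(indexCount, 0, 0);

        // Pass 2: switch outputs first (a texture may not be both input and
        // output at once), then sample A while drawing into the back buffer.
        m_d3dContext->OMSetRenderTargets(1, &backBufferRTV, nullptr);
        m_d3dContext->PSSetShaderResources(0, 1, &srvA);
        m_d3dContext->PSSetShader(secondPassPS, nullptr, 0);
        m_d3dContext->DrawIndexed(indexCount, 0, 0);

    In the second-pass shader the overlap test then becomes a texture read: sample A, and where its R channel is already set while the current fragment writes G, output the dynamic texture instead.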

    Read the article

  • Having trouble with projection matrix, need help

    - by Mr.UNOwen
    I'm having trouble with what appears to be the projection matrix. Given a wide enough screen, when a cube is on the left- or right-most edge, the left or right wall appears stretched to the point that the front face is 1/10 the width of the side. I do update the screen ratio along with the projection matrix and viewport on screen resize, so am I safe to assume all the trouble is from the matrix class? Also, the cube follows the mouse, but it's only vertically aligned and is ahead of the mouse when going left or right from the center of the screen. Perspective function:

        /**
         * setPerspective
         *
         * @param fov: angle in radians
         * @param aspect: screen ratio w/h
         * @param near: near distance
         * @param far: far distance
         **/
        void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
        {
            GMFloat_t difZ = near - far;
            GMFloat_t *data;

            mProjection->clear(); // set to identity matrix
            data = mProjection->getData();

            GMFloat_t v = 1.0f / tan(fov / 2.0f);

            data[_AP_MAA] = v / aspect;
            data[_AP_MBB] = v;
            data[_AP_MCC] = (far + near) / difZ;
            data[_AP_MCD] = -1.0f;
            data[_AP_MDD] = 0.0f;
            data[_AP_MDC] = 2.0f * far * near / difZ;

            mRatio = aspect;
            mInvProjOutdated = true;
            mIsPerspective = true;
        }

    and...

        #define _AP_MAA 0
        #define _AP_MAB 1
        #define _AP_MAC 2
        #define _AP_MAD 3
        #define _AP_MBA 4
        #define _AP_MBB 5
        #define _AP_MBC 6
        #define _AP_MBD 7
        #define _AP_MCA 8
        #define _AP_MCB 9
        #define _AP_MCC 10
        #define _AP_MCD 11
        #define _AP_MDA 12
        #define _AP_MDB 13
        #define _AP_MDC 14
        #define _AP_MDD 15
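    For what it's worth, the values above match the standard perspective layout, so one thing to rule out (an assumption about the symptom, not a confirmed diagnosis) is plain wide-angle distortion: fov here is the vertical angle, and on a wide window the horizontal angle grows with the aspect ratio until objects at the edges smear. A common mitigation is to fix the horizontal field of view and derive the vertical one; a minimal sketch (the helper name is mine):

        #include <cmath>

        // Both angles in radians; aspect = width / height.
        // From tan(fovX/2) = aspect * tan(fovY/2), solved for fovY.
        double fovYFromFovX(double fovX, double aspect)
        {
            return 2.0 * std::atan(std::tan(fovX / 2.0) / aspect);
        }

        // camera->setPerspective(fovYFromFovX(horizontalFov, w / (double)h),
        //                        w / (double)h, nearDist, farDist);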

    Read the article

  • Adding sub-entities to existing entities. Should it be done in the Entity and Component classes?

    - by Coyote
    I'm in a situation where a player can be given control of small parts of an entity (e.g. the left missile battery). Therefore I started implementing sub-entities as follows. Entities are objects with three arrays: pointers to components, pointers to sub-entities, and communication subscribers (a temporary implementation). Now when an entity is built it has a few components, as you might expect, and I can also attach sub-entities, which are handled with some dedicated code in the Entity and Component classes. I noticed sub-entities share data in three areas: position (the sub-entities use the parent's position, plus their own as an offset), scripts (sub-entities drain ammo and energy from the parent), and physics (sub-entities add weight to the parent). I made this to move forward quickly, but as I slowly fix the current implementations I wonder if this wasn't a mistake. Is my current implementation something commonly done? Will it paint me into a corner? I thought it might be better to create some sort of SubEntityComponent where sub-entities are attached and handled. But before changing anything I wanted to seek the community's wisdom.

    Read the article

  • Platform jumping problems with AABB collisions

    - by Vee
    See the diagram first: When my AABB physics engine resolves an intersection, it does so by finding the axis where the penetration is smaller, then "pushing out" the entity on that axis. Considering the "jumping moving left" example: if velocityX is bigger than velocityY, AABB pushes the entity out on the Y axis, effectively stopping the jump (result: the player stops in mid-air). If velocityX is smaller than velocityY (not shown in the diagram), the program works as intended, because AABB pushes the entity out on the X axis. How can I solve this problem? Source code:

        public void Update()
        {
            Position += Velocity;
            Velocity += World.Gravity;

            List<SSSPBody> toCheck = World.SpatialHash.GetNearbyItems(this);

            for (int i = 0; i < toCheck.Count; i++)
            {
                SSSPBody body = toCheck[i];
                body.Test.Color = Color.White;

                if (body != this && body.Static)
                {
                    float left = (body.CornerMin.X - CornerMax.X);
                    float right = (body.CornerMax.X - CornerMin.X);
                    float top = (body.CornerMin.Y - CornerMax.Y);
                    float bottom = (body.CornerMax.Y - CornerMin.Y);

                    if (SSSPUtils.AABBIsOverlapping(this, body))
                    {
                        body.Test.Color = Color.Yellow;
                        Vector2 overlapVector = SSSPUtils.AABBGetOverlapVector(left, right, top, bottom);
                        Position += overlapVector;
                    }

                    if (SSSPUtils.AABBIsCollidingTop(this, body))
                    {
                        if ((Position.X >= body.CornerMin.X && Position.X <= body.CornerMax.X) &&
                            (Position.Y + Height/2f == body.Position.Y - body.Height/2f))
                        {
                            body.Test.Color = Color.Red;
                            Velocity = new Vector2(Velocity.X, 0);
                        }
                    }
                }
            }
        }

        public static bool AABBIsOverlapping(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X <= mBody2.CornerMin.X || mBody1.CornerMin.X >= mBody2.CornerMax.X) return false;
            if (mBody1.CornerMax.Y <= mBody2.CornerMin.Y || mBody1.CornerMin.Y >= mBody2.CornerMax.Y) return false;
            return true;
        }

        public static bool AABBIsColliding(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X) return false;
            if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y) return false;
            return true;
        }

        public static bool AABBIsCollidingTop(SSSPBody mBody1, SSSPBody mBody2)
        {
            if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X) return false;
            if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y) return false;
            if (mBody1.CornerMax.Y == mBody2.CornerMin.Y) return true;
            return false;
        }

        public static Vector2 AABBGetOverlapVector(float mLeft, float mRight, float mTop, float mBottom)
        {
            Vector2 result = new Vector2(0, 0);

            if ((mLeft > 0 || mRight < 0) || (mTop > 0 || mBottom < 0)) return result;

            if (Math.Abs(mLeft) < mRight) result.X = mLeft;
            else result.X = mRight;

            if (Math.Abs(mTop) < mBottom) result.Y = mTop;
            else result.Y = mBottom;

            if (Math.Abs(result.X) < Math.Abs(result.Y)) result.Y = 0;
            else result.X = 0;

            return result;
        }
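    For comparison, the usual fix is to integrate and resolve one axis at a time, so the push-out axis is decided by which movement actually caused the overlap rather than by comparing penetration depths once per frame. A minimal sketch in C++ (types and names are mine, not from the engine above):

        #include <cmath>
        #include <vector>

        struct AabbBody { float x, y, hw, hh, vx, vy; }; // centre, half extents, velocity

        static bool overlaps(const AabbBody& a, const AabbBody& b)
        {
            return std::fabs(a.x - b.x) < a.hw + b.hw &&
                   std::fabs(a.y - b.y) < a.hh + b.hh;
        }

        void step(AabbBody& p, const std::vector<AabbBody>& solids, float dt)
        {
            p.x += p.vx * dt;                                  // move on X first
            for (const AabbBody& s : solids)
                if (overlaps(p, s)) {
                    p.x = (p.x < s.x) ? s.x - s.hw - p.hw      // push out left
                                      : s.x + s.hw + p.hw;     // push out right
                    p.vx = 0.0f;
                }
            p.y += p.vy * dt;                                  // then move on Y
            for (const AabbBody& s : solids)
                if (overlaps(p, s)) {
                    p.y = (p.y < s.y) ? s.y - s.hh - p.hh      // land on top
                                      : s.y + s.hh + p.hh;     // bump the head
                    p.vy = 0.0f;
                }
        }

    Because the X move is fully resolved before the Y move begins, a jump that clips a platform corner is corrected on the axis that caused the overlap, and the player never stops in mid-air.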

    Read the article

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I woke up with a clear vision, but sadly my laptop's card doesn't do displacement mapping nor glVertexAttribDivisor, so I can't test it out; I'm left sharing it here. With geomipmapping, the grid at any factor is transposable - if you pass in an offset, say as a uniform, you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And, if the scenery is closer, rather than exposing that you have a world made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom? So I have some questions: Does it work - in theory, in practice? Does anyone do it? Does this technique have a name? Papers, demos, anything I can look at? Does glVertexAttribDivisor mean that you can have a single glMultiDrawElementsEXT or similar approach to draw all your terrain tiles in one call, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains? Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?) Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang? Is mipmapping the displacement map going to work - on future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?
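    On the draw-call question, a sketch of what the setup could look like (GL 3.3-style; the buffer names and attribute slot are assumptions): with one per-instance attribute whose divisor is 1, a single instanced call replaces the per-tile uniform updates.

        // Assumed: tileInstanceVBO holds per-tile (offsetX, offsetZ, heightmapU, heightmapV).
        glBindBuffer(GL_ARRAY_BUFFER, tileInstanceVBO);
        glEnableVertexAttribArray(3);
        glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4, nullptr);
        glVertexAttribDivisor(3, 1);   // advance once per instance, not per vertex

        // One call draws tileCount copies of the shared grid; the vertex shader
        // adds the per-instance offset and fetches displacement from the heightmap.
        glDrawElementsInstanced(GL_TRIANGLES, gridIndexCount, GL_UNSIGNED_INT,
                                nullptr, tileCount);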

    Read the article

  • how to add water effect to an image

    - by brainydexter
    This is what I am trying to achieve: a given image would occupy, say, the top 3/4 of the screen. The remaining 1/4 would be a reflection of it with some waves (a water effect) on it. I'm not sure how to do this, but here's my approach: (1) render the given texture to another texture, called the mirror texture (maybe FBOs can help me?); (2) invert the mirror texture (scale it by -1 along Y); (3) render the mirror texture at height = 3/4 of the screen; (4) add some sense of noise to it, OR, using a pixel shader and time, put pixel.z = sin(time) to make it wavy. (Tech: C++/OpenGL/GLSL.) Is my approach correct? Is there a better way to do this? Also, can someone please tell me if using framebuffer objects would be the right thing here? Thanks
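    The approach is broadly sound, and an FBO is the usual tool for step 1. A hedged sketch of the setup (names and sizes are assumptions; error checking and completeness checks elided):

        GLuint fbo, mirrorTex;
        glGenTextures(1, &mirrorTex);
        glBindTexture(GL_TEXTURE_2D, mirrorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, mirrorTex, 0);

        // Per frame: render the image into fbo, then bind the default
        // framebuffer, draw the image in the top 3/4, and draw a quad in the
        // bottom 1/4 sampling mirrorTex with flipped V plus a GLSL wobble
        // such as u += 0.01 * sin(v * 40.0 + time).
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

    Note the wave belongs on the texture coordinates (or the sampled colour) rather than on pixel.z; perturbing the lookup is what produces the rippling reflection.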

    Read the article

  • Preferred way to render text in OpenGL

    - by dukeofgaming
    Hi, I'm about to pick up computer graphics once again for a university project. For a previous project I used a library called FTGL that didn't leave me quite satisfied, as it felt kind of heavy (I tried all its rendering techniques; text rendering didn't scale very well). My question is: is there a good and efficient library for this? If not, what would be the way to implement fast but nice-looking text? Some intended uses are: floating object/character labels, dialogues, menus, HUD. Regards and thanks in advance. EDIT: Preferably one that can load fonts.
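    One widely used route, offered as a sketch rather than an endorsement of a specific library: bake glyphs into a texture atlas with FreeType once, then draw one textured quad per character, which scales well for labels and HUDs and loads arbitrary fonts. The font path and helper are illustrative assumptions:

        #include <ft2build.h>
        #include FT_FREETYPE_H

        // Bake a single glyph's coverage bitmap; atlas packing is elided.
        bool bakeGlyph(FT_Face face, char c)
        {
            if (FT_Load_Char(face, c, FT_LOAD_RENDER) != 0)
                return false;
            FT_Bitmap& bmp = face->glyph->bitmap;
            // Copy bmp.buffer (bmp.width x bmp.rows, 8-bit coverage) into your
            // atlas texture, and record bitmap_left / bitmap_top / advance
            // (advance.x is in 1/64 pixel units) for layout at draw time.
            return true;
        }

        // FT_Library lib; FT_Init_FreeType(&lib);
        // FT_Face face;  FT_New_Face(lib, "DejaVuSans.ttf", 0, &face);
        // FT_Set_Pixel_Sizes(face, 0, 32);   // bake at 32 px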

    Read the article

  • how to make an HLSL effect just for lighting, without texture mapping?

    - by naprox
    I'm new to XNA. I created an effect and just want to use lighting, but in the default effect that XNA creates you have to do texture mapping or the model appears red, because of these lines of code in the effect file:

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float4 output = float4(1, 0, 0, 1);
            return output;
        }

    If I want to see my model (looking like it does when I use BasicEffect), I must do texture mapping with UV coordinates. But my model does not have UV coordinates assigned, or its UV coordinates were not exported, and if I do texture mapping I get an error. (I do texture mapping with this line in the vertex shader function, plus the other necessary code: output.UV = input.UV.) I have many models like this and want to work with them (my models are in .FBX format). When I use BasicEffect I have no problem and the model appears correctly. How can I use just lighting in my custom effects, without texture mapping (because I have no UV coordinates in my models), and have my model look like it does with BasicEffect? If you need my complete code, here it is: http://www.mediafire.com/?4jexhd4ulm2icm2 Here is my model using BasicEffect: http://i.imgur.com/ygP2h.jpg?1 And this is my code for drawing with or without BasicEffect, inside my Draw() method:

        Matrix baseWorld = Matrix.CreateScale(Scale) *
                           Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) *
                           Matrix.CreateTranslation(Position);

        foreach (ModelMesh mesh in Model.Meshes)
        {
            Matrix localWorld = ModelTransforms[mesh.ParentBone.Index] * baseWorld;

            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                Effect effect = part.Effect;

                if (effect is BasicEffect)
                {
                    ((BasicEffect)effect).World = localWorld;
                    ((BasicEffect)effect).View = View;
                    ((BasicEffect)effect).Projection = Projection;
                    ((BasicEffect)effect).EnableDefaultLighting();
                }
                else
                {
                    setEffectParameter(effect, "World", localWorld);
                    setEffectParameter(effect, "View", View);
                    setEffectParameter(effect, "Projection", Projection);
                    setEffectParameter(effect, "CameraPosition", CameraPosition);
                }
            }

            mesh.Draw();
        }

    setEffectParameter is another method that sets an effect parameter when I use my custom effect.

    Read the article

  • XNA Diffuse Shader Issue. Edge lighting problem. Image Attached

    - by adtither
    As you can see in this image, the diffuse shading is working correctly in some places, but in other places, such as the bottom of the sphere, you can see the squares/triangles of the mesh. Any idea what would be causing this? Let me know if you need any more information related to the code; I can upload my normals calculations and shader effect if required. EDIT: Here's a link to the shader I'm using: http://pastebin.com/gymVc7CP Link to the normals calculations: http://pastebin.com/KnMGdzHP It seems to be an issue with edge lighting. I can't see where I'm going wrong with the normals calculations, though.
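    Faceting that shows up only in places like the underside often means the vertex normals are face normals (or are left unnormalized before interpolation). A sketch of smooth, area-weighted vertex normals, written in C++ with glm for brevity (the question's code is C#/XNA, so this is illustrative only; also re-normalize the interpolated normal in the pixel shader):

        #include <vector>
        #include <glm/glm.hpp>

        // Assumes an indexed triangle list: idx.size() is a multiple of 3.
        void smoothNormals(const std::vector<glm::vec3>& pos,
                           const std::vector<unsigned>& idx,
                           std::vector<glm::vec3>& out)
        {
            out.assign(pos.size(), glm::vec3(0.0f));
            for (size_t i = 0; i + 2 < idx.size(); i += 3) {
                // Unnormalized cross product: magnitude weights by triangle area.
                glm::vec3 n = glm::cross(pos[idx[i + 1]] - pos[idx[i]],
                                         pos[idx[i + 2]] - pos[idx[i]]);
                out[idx[i]] += n;
                out[idx[i + 1]] += n;
                out[idx[i + 2]] += n;
            }
            for (glm::vec3& n : out)
                n = glm::normalize(n);   // average of all adjacent face normals
        }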

    Read the article

  • Camera Collision inside the room model

    - by sanddy
    I am having a problem calculating the camera collision for my room model, which consists of sofas, tables and other models. The user will be moving the camera forward, back, and rotating, so I need to make sure the camera does not collide with any of the models in the room. I have wrapped all the models inside the room in BoundingBox[] and the camera in a BoundingSphere. So far I have implemented collision by following the tutorial at http://www.toymaker.info/Games/XNA/html/xna_model_collisions.html which was great. But I guess the problem lies in the transformation part. I debugged and found some points at Vector(-XXX,-XXX,-XXX), where X is a digit. I also found the radius of some models to be too large (in the thousands; I just looked at the radius value before converting to a BoundingBox). Do I need to scale the model for collision? Below is my code. In my LoadContent():

        Matrix[] transforms = new Matrix[myModel.Bones.Count];
        myModel.CopyAbsoluteBoneTransformsTo(transforms);

        int index = 0;
        box = new List<BoundingBox>();
        BoundingBox worldModel = Utility.CalculateBoundingBox(myModel);

        foreach (ModelMesh mesh in myModel.Meshes)
        {
            Vector3[] obb = new Vector3[8];
            worldModel.GetCorners(obb);
            Vector3[] asdf = (Vector3[])obb.Clone();

            Vector3.Transform(obb, ref transforms[mesh.ParentBone.Index], obb);
            BoundingBox worldBox = BoundingBox.CreateFromPoints(obb);
            box.Add(worldBox);
            index++;
        }

    In the camera-position update:

        BoundingSphere bs = new BoundingSphere(this.cameraPos, 5.0f);
        if (RoomWalkthrough.Utility.CheckCollision(bs, bb))
        {
            // Do Something
        }

    Please help.

    Read the article

  • Why does calling CreateDXGIFactory prevent my program from exiting?

    - by smoth190
    I'm using CreateDXGIFactory to get the graphics adapters and display modes. When I call it, it works fine and I get all the data. However, when I exit my program, the main Win32 thread exits, but something stays open because the debugger keeps running. Does CreateDXGIFactory create an extra thread that I'm not closing? I don't understand. The only thing I would suspect is that the documentation says it doesn't work if it's called from DllMain. It is in a DLL, but it's not called from DllMain. And it doesn't fail, either. I'm using DirectX 11. Here is the function that initializes DirectX. I haven't gotten past retrieving the refresh rate because of this problem. I commented everything out to pinpoint the problem.

        bool CGraphicsManager::InitDirectX(HWND hWnd, int width, int height)
        {
            HRESULT result;
            IDXGIFactory* factory;
            IDXGIOutput* output;
            IDXGIAdapter* adapter;
            DXGI_MODE_DESC* displayModes;
            DXGI_ADAPTER_DESC adapterDesc;
            unsigned int modeCount = 0;
            unsigned int refreshNum = 0;
            unsigned int refreshDen = 0;

            // First, we need to get the monitor's refresh rate
            result = CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
            //if(FAILED(result))
            //{
            //    MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to create DXGI factory\nError:\n%s"), DXGetErrorDescription(result));
            //    return false;
            //}

            /*
            //Create a graphics card adapter
            result = factory->EnumAdapters(0, &adapter);
            if(FAILED(result))
            {
                MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get graphics adapters\nError:\n%s"), DXGetErrorDescription(result));
                return false;
            }

            //Get the output
            result = adapter->EnumOutputs(0, &output);
            if(FAILED(result))
            {
                MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get adapter output\nError:\n%s"), DXGetErrorDescription(result));
                return false;
            }

            //Get the modes
            result = output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ENUM_MODES_INTERLACED, &modeCount, 0);
            if(FAILED(result))
            {
                MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get mode count\nError:\n%s"), DXGetErrorDescription(result));
                return false;
            }

            displayModes = new DXGI_MODE_DESC[modeCount];
            result = output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ENUM_MODES_INTERLACED, &modeCount, displayModes);
            if(FAILED(result))
            {
                MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get display modes\nError:\n%s"), DXGetErrorDescription(result));
                return false;
            }

            //Now we need to find one for our screen size
            for(unsigned int i = 0; i < modeCount; i++)
            {
                if(displayModes[i].Width == (unsigned int)width)
                {
                    if(displayModes[i].Height == (unsigned int)height)
                    {
                        refreshNum = displayModes[i].RefreshRate.Numerator;
                        refreshDen = displayModes[i].RefreshRate.Denominator;
                        break;
                    }
                }
            }

            //Store the video card data
            result = adapter->GetDesc(&adapterDesc);
            if(FAILED(result))
            {
                MemoryUtil::MessageBoxError(TEXT("InitDirectX"), 0, 0, TEXT("Failed to get adapter description\nError:\n%s"), DXGetErrorDescription(result));
                return false;
            }

            m_videoCard = new CVideoCard();
            MemoryUtil::CreateGameObject(m_videoCard);
            m_videoCard->VideoCardMemory = (unsigned int)(adapterDesc.DedicatedVideoMemory);
            wcstombs_s(0, m_videoCard->VideoCardDescription, 128, adapterDesc.Description, 128);
            */

            //ReleaseCOM(output);
            //ReleaseCOM(adapter);
            ReleaseCOM(factory);
            //DeletePointerArray(displayModes);

            return true;
        }

    Also, I don't know if this means anything, but this is some of the output log when the function is commented out:
        //...
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msvcr100d.dll', Symbols loaded.
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\imm32.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msctf.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\uxtheme.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Program Files (x86)\Common Files\microsoft shared\ink\tiptsf.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleaut32.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\clbcatq.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Cannot find or open the PDB file
        The program '[6560] LostRock.exe: Native' has exited with code 0 (0x0).

    And when it isn't commented out...

        //...
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\cfgmgr32.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\devobj.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\wintrust.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\crypt32.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\msasn1.dll', Cannot find or open the PDB file
        'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\setupapi.dll'
        'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\devobj.dll'
        'LostRock.exe': Unloaded 'C:\Windows\SysWOW64\cfgmgr32.dll'
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\clbcatq.dll', Cannot find or open the PDB file
        'LostRock.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Cannot find or open the PDB file
        The thread 'Win32 Thread' (0xb94) has exited with code 0 (0x0).
        The program '[8096] LostRock.exe: Native' has exited with code 0 (0x0). // This is printed when I click "Stop Debugging"

    P.S. I know it is CreateDXGIFactory because if I comment it out, the program exits correctly.
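    One thing worth trying (an assumption, not a confirmed diagnosis): make sure every DXGI interface is released on every path, since a live COM reference can keep module state alive after the main thread returns. A sketch of the same enumeration with ComPtr, which releases automatically even on early return (link against dxgi.lib):

        #include <wrl/client.h>
        #include <dxgi.h>
        using Microsoft::WRL::ComPtr;

        bool listAdapters()
        {
            ComPtr<IDXGIFactory> factory;
            if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), &factory)))
                return false;

            ComPtr<IDXGIAdapter> adapter;
            for (UINT i = 0;
                 factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
            {
                DXGI_ADAPTER_DESC desc;
                if (SUCCEEDED(adapter->GetDesc(&desc)))
                { /* record desc.Description, desc.DedicatedVideoMemory, ... */ }
                adapter.Reset();   // release before requesting the next one
            }
            return true;           // factory releases itself on scope exit
        }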

    Read the article

  • How to remove seams from a tile map in 3D?

    - by Grimshaw
    I am using my custom OpenGL engine to render a tilemap made with Tiled, using a widely used tileset from the web. There is nothing fancy going on: I load the TMX file from Tiled and generate vertex arrays and index arrays to render the tilemap. I am rendering this tilemap as a wall in my 3D world, meaning that I move around with a fly camera and at Z=0 there is a plane showing me my tiles. Everything is working correctly, but I get ugly seams between the tiles. I've tried orthographic and perspective cameras, and with either I found particular sets of parameters for the projection and view matrices where the artifacts did not show; otherwise they are there 99% of the time, in multiple patterns, depending on the zoom and on camera parameters like the field of view. Here's a screenshot of the artifact: http://i.imgur.com/HNV1g4M.png Here's the tileset I am using (which Tiled also uses and renders correctly): http://i.imgur.com/SjjHK4q.png My tileset has no mipmaps and is set to GL_NEAREST and GL_CLAMP_TO_EDGE. I've looked at many articles on the internet and nothing helped. I tried UV correction so the UVs fall half a texel in, rather than at the edge of the texel, to prevent interpolating with the neighbouring value (which is transparency). I tried debugging my geometry, and I verified that with no texture and a random color in each tile, I don't see any seams. All vertices have integer coordinates, i.e. the first tile is a quad from (0,0) to (1,1), and so on. I tried adding a little offset both to the UVs and to the vertices to see if the gaps cease to exist. Disabled multisampling too. Nothing fixed it so far. Thanks.
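    For reference, a sketch of the half-texel inset for an atlas of tileSize-pixel tiles (names are mine). If an inset alone doesn't hold at all zoom levels, the two durable fixes are a 1-2 px duplicated border around each tile, or a GL_TEXTURE_2D_ARRAY with one layer per tile, so a tile has no neighbour to bleed from:

        // Tile (tx, ty) in an atlasW x atlasH texture; outputs the UV rectangle.
        void tileUV(int tx, int ty, int tileSize, int atlasW, int atlasH,
                    float& u0, float& v0, float& u1, float& v1)
        {
            const float hx = 0.5f / atlasW, hy = 0.5f / atlasH; // half a texel
            u0 = (tx * tileSize) / (float)atlasW + hx;
            v0 = (ty * tileSize) / (float)atlasH + hy;
            u1 = ((tx + 1) * tileSize) / (float)atlasW - hx;
            v1 = ((ty + 1) * tileSize) / (float)atlasH - hy;
        }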

    Read the article

  • Light shaped like a line

    - by Michael
    I am trying to figure out how line-shaped lights fit into the standard point light/spotlight/directional light scheme. The way I see it, there are two options: Seed the line with regular point lights and just deal with the artifacts. Easy, but seems wasteful. Make the line some kind of emissive material and apply a bloom effect. Sounds like it could work, but I can't test it in my engine yet. Is there a standard way to do this? Or for non-point lights in general?

    Read the article

  • Simulating smooth movement along a line after calculating a collision containing a restitution of zero in 2D

    - by Casey
    [for tl;dr see after listing]

        //...Code to determine shape types involved in collision here...
        //...Rectangle-Line collision detected.
        if (_rbTest->GetCollisionShape()->Intersects(*_ground->GetCollisionShape()))
        {
            //Convert incoming shape to a line.
            a2de::Line l(*dynamic_cast<a2de::Line*>(_ground->GetCollisionShape()));

            //Get line's normal.
            a2de::Vector2D normal_vector(l.GetSlope().GetY(), -l.GetSlope().GetX());
            a2de::Vector2D::Normalize(normal_vector);

            //Accumulate forces involved.
            a2de::Vector2D intermediate_forces;
            a2de::Vector2D normal_force = normal_vector * _rbTest->GetMass() * _world->GetGravityHandler()->GetGravityValue();
            intermediate_forces += normal_force;

            //Calculate final velocity: See [1]
            double Ma = _rbTest->GetMass();
            a2de::Vector2D Ua = _rbTest->GetVelocity();
            double Mb = _ground->GetMass();
            a2de::Vector2D Ub = _ground->GetVelocity();
            double mCr = Mb * _ground->GetRestitution();
            a2de::Vector2D collision_velocity( ((Ma * Ua) + (Mb * Ub) + ((mCr * Ub) - (mCr * Ua))) / (Ma + Mb));

            //Calculate reflection vector: See [2]
            a2de::Vector2D reflect_velocity( -collision_velocity + 2 * (a2de::Vector2D::DotProduct(collision_velocity, normal_vector)) * normal_vector );

            //Affect velocity to account for restitution of colliding bodies.
            reflect_velocity *= (_ground->GetRestitution() * _rbTest->GetRestitution());
            _rbTest->SetVelocity(reflect_velocity);

            //THE ULTIMATE ISSUE STEMS FROM THE FOLLOWING LINE:
            //Move object away from collision one pixel to prevent constant collision.
            _rbTest->SetPosition(_rbTest->GetPosition() + normal_vector);

            _rbTest->ApplyImpulse(intermediate_forces);
        }

    Sources: (1) Wikipedia: Coefficient of Restitution: Speeds after impact (2) Wikipedia: Specular Reflection: Direction of reflection. First, I have a system in place to account for friction (that is, a coefficient of friction), but it is not used right now (in addition, it is zero, which should not affect the math anyway); I'll deal with that after I get this working. Anyway, when the restitution of either object involved in the collision is zero, the object stops as required, but if movement along the same direction as the line is attempted (again, irrespective of the unused friction value), the object moves as if slogging through knee-deep snow. If I remove the line of code in question, so the object is not pushed away one pixel, the object barely moves at all, all because the object collides, is stopped, is pushed up, collides, is stopped... etc. OR collides, is stopped, collides, is stopped, etc. TL;DR: How do I account for a collision only ONCE for restitution purposes (BONUS: but CONTINUALLY for friction purposes, to be implemented later)?
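    The standard answer to "account for the collision once" is to classify low-normal-speed contacts as resting contacts: split the velocity at the contact into normal and tangential parts, apply restitution only to the normal part (and only above a small threshold), and leave the tangential part untouched so sliding along the line stays smooth. A minimal sketch under those assumptions (the 1.0 threshold is an arbitrary placeholder to tune):

        #include <cmath>

        struct Vec2 { float x, y; };
        static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // v: incoming velocity, n: unit surface normal, e: combined restitution.
        Vec2 resolveContact(Vec2 v, Vec2 n, float e)
        {
            float vn = dot(v, n);                    // speed into the surface
            if (vn >= 0.0f) return v;                // already separating: no-op
            Vec2 tangential = { v.x - vn * n.x, v.y - vn * n.y };
            float bounce = (std::fabs(vn) < 1.0f)    // resting-contact threshold
                         ? 0.0f
                         : -e * vn;
            return { tangential.x + bounce * n.x, tangential.y + bounce * n.y };
        }

    Because the tangential component is preserved, the one-pixel push-out (better replaced by moving out along the normal by the measured penetration depth) no longer fights the sliding motion, and friction can later be applied to the tangential part continually.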

    Read the article

  • How to transform gameObjects in array?

    - by user1764781
    I have an array of available gameObjects in the scene. The array of GameObjects should be transformed according to floats received through a UDP connection. I know it should be simple, but I can't figure out how to transform the array of GameObjects according to the unique floats received for each one; any help will be appreciated. Here is how the transformation floats are read, in case it is helpful:

        x = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        y = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        z = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        alpha = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        theta = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        phi = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;

    Read the article

  • Unity 5.1 audio issues (no sound in back channels)

    - by N0xus
    I've been trying to bring surround-sound audio into my project. I've set my computer up to run in 5.1, and when I play a 6-channel audio file through Windows Media Player (it's a test file that plays left speaker, right speaker, etc.) it works fine. However, when I run it through Unity, all I get is the front 3 channels. I've set it to 5.1 in Edit > Project Settings > Audio. I even set it in code with the following:

        void Start()
        {
            AudioSettings.speakerMode = AudioSpeakerMode.Mode5point1;
        }

    However, when I run a debug line of:

        print(AudioSettings.driverCaps);

    it tells me that Unity is only playing in stereo. Is there something I'm still not doing? I should also add that I've run 10 different tests using the 3D audio pan and spread options; I've set both to fully off, halfway on, and full. Still the same results.

    Read the article

  • Can't read .cso files but I can read their .hlsl versions?

    - by Jader J Rivera
    Well, I've been trying to read a .cso file to use as a shader for a DirectX program I'm currently making. The problem is that no matter how I implemented a way to read the file, it never worked. After fidgeting around I discovered that it's only the .cso files I can't read; I can read anything else (which means the code works), even their .hlsl files. Which is strange, because the .hlsl (high-level shader language) files are supposed to turn into .cso (compiled shader object) files. What I'm currently doing is:

        vector<byte> Read(string File)
        {
            vector<byte> Text;
            fstream file(File, ios::in | ios::ate | ios::binary);

            if (file.is_open())
            {
                Text.resize(file.tellg());
                file.seekg(0, ios::beg);
                file.read(reinterpret_cast<char*>(&Text[0]), Text.size());
                file.close();
            }

            return Text;
        };

    If I then call it:

        Read("VertexShader.hlsl"); // Works
        Read("VertexShader.cso");  // Doesn't work?!?!

    And I need the .cso version of the shader to draw my sexy triangles. Without it, my life and application will never continue, and I have no idea what could be wrong. (I've also asked this on Stack Overflow, but still no answers.)

    Read the article

  • How can I prevent seams from showing up on objects using lower mipmap levels?

    - by Shivan Dragon
    Disclaimer: kindly right-click on the images and open them separately so that they're at full size, as there are fine details which don't show up otherwise. Thank you. I made a simple Blender model: it's a cylinder with the top cap removed. I exported the UVs, imported them into Photoshop, and painted the inner area yellow and the outer area red, making sure I covered the UV lines well. I then saved the image and loaded it as a texture on the model in Blender. (Actually, I just reload it as the image the UVs were exported to, and change the viewport view mode to textured.) When I look at the mesh up close, there's yellow everywhere and everything seems fine. However, if I start zooming out, I start seeing red (literally and metaphorically) where the texture edges are, and the more I zoom out, the more I see it. The same thing happens in Unity, though the effect seems less pronounced: up close it is fine and yellow; zoom out and you see red at the seams. Now, obviously, for this simple example a workaround is to spread the yellow well outside the UV margins, and then it's fine from all distances. However, this is an issue when you try making a complex texture that should tile seamlessly at the edges. In that situation I either make a few lines of pixels overlap (in which case it looks bad from up close and OK from far away), or I leave them seamless and then I have those seams when viewing from far away. So my question is: is there something I'm missing, or some extra thing I must do to have my texture look seamless from all distances?

    Read the article

  • Level of detail algorithm not functioning correctly

    - by Darestium
    I have been working on this problem for months; I have been creating a planet generator of sorts, and after more than six months of work I am no closer to finishing it than I was four months ago. My problem: the terrain does not subdivide in the correct locations; it almost seems as if there is a ghost camera next to me, and the quads subdivide based on the position of this "ghost camera". Here is a video of the broken program: http://www.youtube.com/watch?v=NF_pHeMOju8 The best example of the problem occurs around 0:36. For detail limiting, I am going for a chunked LOD approach, which subdivides the terrain based on how far away you are from it. I use a "depth table" to determine how many subdivisions should take place.

        void PQuad::construct_depth_table(float distance)
        {
            tree[0] = -1;
            for (int i = 1; i < MAX_DEPTH; i++)
            {
                tree[i] = distance;
                distance /= 2.0f;
            }
        }

    The chunked LOD relies on the child/parent structure of quads; the depth is determined by a constant, e.g. if the constant is 6, there are six levels of detail. The quads that should be drawn go through a distance test from the player to the centre of the quad.

        void PQuad::get_recursive(glm::vec3 player_pos, std::vector<PQuad*>& out_children)
        {
            for (size_t i = 0; i < children.size(); i++)
            {
                children[i].get_recursive(player_pos, out_children);
            }

            if (this->should_draw(player_pos) || this->depth == 0)
            {
                out_children.emplace_back(this);
            }
        }

        bool PQuad::should_draw(glm::vec3 player_position)
        {
            float distance = distance3(player_position, centre);

            if (distance < tree[depth])
            {
                return true;
            }

            return false;
        }

    The root quad has four children, which could be visualized like the following, where each [] is a child:

        [] []
        [] []

    Each child has the same number of children, up until the detail limit; the quads that are six iterations deep are leaf nodes, and these nodes have no children. Each node has a corresponding Mesh; each Mesh structure has 16x16 quad shapes, and each Mesh's quad shapes halve in size at each deeper detail level, creating more detail.

        void PQuad::construct_children()
        {
            // Calculate the position of the Quad based on the parent's location
            calculate_position();

            if (depth < (int)MAX_DEPTH)
            {
                children.reserve((int)NUM_OF_CHILDREN);
                for (int i = 0; i < (int)NUM_OF_CHILDREN; i++)
                {
                    children.emplace_back(PQuad(this->face_direction, this->radius));
                    PQuad *child = &children.back();
                    child->set_depth(depth + 1);
                    child->set_child_index(i);
                    child->set_parent(this);
                    child->construct_children();
                }
            }
            else
            {
                leaf = true;
            }
        }

    The following function creates the vertices for each quad; I feel it may play a role in the problem, I just can't determine what is causing it.
        void PQuad::construct_vertices(std::vector<glm::vec3> *vertices, std::vector<Color3> *colors)
        {
            vertices->reserve(quad_width * quad_height);
            for (int y = 0; y < quad_height; y++)
            {
                for (int x = 0; x < quad_width; x++)
                {
                    switch (face_direction)
                    {
                    case YIncreasing:
                        vertices->emplace_back(glm::vec3(position.x + x * element_width, quad_height - 1.0f, -(position.y + y * element_width)));
                        break;
                    case YDecreasing:
                        vertices->emplace_back(glm::vec3(position.x + x * element_width, 0.0f, -(position.y + y * element_width)));
                        break;
                    case XIncreasing:
                        vertices->emplace_back(glm::vec3(quad_width - 1.0f, position.y + y * element_width, -(position.x + x * element_width)));
                        break;
                    case XDecreasing:
                        vertices->emplace_back(glm::vec3(0.0f, position.y + y * element_width, -(position.x + x * element_width)));
                        break;
                    case ZIncreasing:
                        vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, 0.0f));
                        break;
                    case ZDecreasing:
                        vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, -(quad_width - 1.0f)));
                        break;
                    }

                    // Position the bottom, right, front vertex of the cube from being (0,0,0) to (-16, -16, 16)
                    (*vertices)[vertices->size() - 1] -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));

                    colors->emplace_back(Color3(255.0f, 255.0f, 255.0f, false));
                }
            }

            switch (face_direction)
            {
            case YIncreasing:
                this->centre = glm::vec3(position.x + quad_width / 2.0f, quad_height - 1.0f, -(position.y + quad_height / 2.0f));
                break;
            case YDecreasing:
                this->centre = glm::vec3(position.x + quad_width / 2.0f, 0.0f, -(position.y + quad_height / 2.0f));
                break;
            case XIncreasing:
                this->centre = glm::vec3(quad_width - 1.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                break;
            case XDecreasing:
                this->centre = glm::vec3(0.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                break;
            case ZIncreasing:
                this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, 0.0f);
                break;
            case ZDecreasing:
                this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, -(quad_height - 1.0f));
                break;
            }

            this->centre -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));
        }

    Any help in discovering what is causing this "subdividing in the wrong place" would be greatly appreciated.

    Read the article

  • How can I choose the depth of a quadtree?

    - by Evpok
    In a 2D world, using a quadtree to prune pairs in collision detection, how can I choose the depth of said quadtree? The world I am dealing with is mostly made of moving objects¹, so the cost of dispatching the objects between the quadtree cells matters. What I am interested in is the balance between the gain from less collision checking and the loss from more dispatching. 1. To be completely explicit: autonomous self-replicating cells competing for food sources, in an attempt to show my pupils predator-prey dynamics and genetic evolution at work.
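    One rough starting point (a heuristic sketch, not a formula from the literature): cap the depth both by object size (cells much smaller than an object force it into many nodes) and by expected leaf population, take the tighter of the two, then tune empirically, since dispatch cost dominates when almost everything moves every frame:

        #include <algorithm>
        #include <cmath>

        int quadtreeDepth(float worldSize, float avgObjectSize,
                          int objectCount, int targetPerLeaf = 8)
        {
            // Stop when a cell is about twice a typical object:
            // cellSize(d) = worldSize / 2^d  >=  2 * avgObjectSize.
            int bySize = (int)std::log2(worldSize / (2.0f * avgObjectSize));

            // Stop when leaves hold ~targetPerLeaf objects:
            // leaves(d) = 4^d, so d = log4(objectCount / targetPerLeaf).
            int byCount = (int)(std::log2((float)objectCount / targetPerLeaf) / 2.0f);

            return std::max(1, std::min(bySize, byCount));
        }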

    Read the article

  • How does Minecraft render its sunset and sky?

    - by Nick
    In Minecraft, the sunset looks really beautiful and I've always wanted to know how they do it. Do they use several skyboxes rendered over each other? That is, one for the sky (which can turn dark and light depending on the time of day), one for the sun and moon, and one for the orange horizon effect? I was hoping someone could enlighten me... I wish I could enter wireframe or something like that, but as far as I know that is not possible.

    Read the article

  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    I'm sure you all know of games like Dwarf Fortress - massive, procedurally generated wilderness and land. Something like this, taken from this very useful article. However, I was wondering how I could apply this on a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the size of the Earth's surface?). Pseudo-infinite, I think, would be the best term. The article talks about fractal Perlin noise. I am in no way an expert on it, but I get the general idea (it's some kind of randomly generated noise which is semi-coherent, so not just random pixel values). I could just define regions X by X in size, add some region-loading machinery, and have one patch of noise generate each region. But this would result in just huge numbers of islands. At the other extreme, I don't think I can really generate one supermassive sheet of Perlin noise; and it would just be one big island, I think. I am pretty sure Perlin noise, or some noise, would be the answer in some way. I mean, the map is really nice looking, and you could replace the ASCII with tiles and get something very nice looking. Anyone have any ideas? Thanks. :D
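    The property that makes "pseudo-infinite" work is that the noise is a pure function of world coordinates, so independently generated regions agree exactly at their borders; there is no supermassive sheet, just sampling on demand. A sketch with a tiny hash-based value noise standing in for Perlin (the constants are arbitrary mixing primes, not special):

        #include <cmath>

        static float hash2(int x, int y)
        {
            unsigned h = (unsigned)x * 374761393u + (unsigned)y * 668265263u;
            h = (h ^ (h >> 13)) * 1274126177u;
            h ^= h >> 16;
            return (h & 0xffffffu) / (float)0xffffff;   // deterministic [0,1)
        }

        static float mixf(float a, float b, float t) { return a + (b - a) * t; }

        float valueNoise(float x, float y)
        {
            int xi = (int)std::floor(x), yi = (int)std::floor(y);
            float tx = x - xi, ty = y - yi;
            return mixf(mixf(hash2(xi, yi),     hash2(xi + 1, yi),     tx),
                        mixf(hash2(xi, yi + 1), hash2(xi + 1, yi + 1), tx), ty);
        }

        // Chunk (cx, cy), local tile (lx, ly): always sample at *global* coords,
        // so neighbouring chunks line up without any stitching.
        float height(int cx, int cy, int lx, int ly, int chunkSize)
        {
            float gx = (float)(cx * chunkSize + lx);
            float gy = (float)(cy * chunkSize + ly);
            float h = 0.0f, amp = 1.0f;
            for (int o = 0; o < 4; ++o, amp *= 0.5f)      // 4 fBm octaves
                h += amp * valueNoise(gx / (64 >> o), gy / (64 >> o));
            return h;
        }

    To avoid the all-islands or one-big-island extremes, it is common to combine several such fields at different frequencies, e.g. a very low-frequency "continent" field gating a higher-frequency terrain field.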

    Read the article

  • Understanding how texCUBE works and writing cubemaps properly into a cube rendertarget

    - by cubrman
    My goal is to create accurate reflections, sampled from a dynamic cubemap, for specific 3d objects (mostly lights) in XNA 4.0. To sample the cubemap I compute the 3d reflection vector in a classic way: half3 ReflectionVec = reflect(-directionToCamera, Normal.rgb); I then use the vector to get the actual reflected color: half3 ReflectionCol = texCUBElod(ReflectionSampler, float4(ReflectionVec, 0)); The cubemap I am sampling from is a RenderTarget with 6 flat faces. So my question is, given the 3d world position of an arbitrary 3d object, how can I make sure that I get accurate reflections of this object, when I re-render the cubemap. Should I build the ViewProjection matrix in a specific way? Or is there any other approach?
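    For the re-rendering side, the usual setup (stated as a sketch; the face order below follows the common +X, -X, +Y, -Y, +Z, -Z cubemap convention) is six views from the reflective object's own position, each with a square 90-degree projection so the faces meet seamlessly:

        struct Vec3 { float x, y, z; };
        struct CubeFace { Vec3 look, up; };

        static const CubeFace kFaces[6] = {
            { { 1, 0, 0 }, { 0, 1, 0 } },   // +X
            { {-1, 0, 0 }, { 0, 1, 0 } },   // -X
            { { 0, 1, 0 }, { 0, 0,-1 } },   // +Y
            { { 0,-1, 0 }, { 0, 0, 1 } },   // -Y
            { { 0, 0, 1 }, { 0, 1, 0 } },   // +Z
            { { 0, 0,-1 }, { 0, 1, 0 } }    // -Z
        };

        // For face i:
        //   view       = LookAt(objectPos, objectPos + kFaces[i].look, kFaces[i].up);
        //   projection = Perspective(fov = 90 degrees, aspect = 1, near, far);
        // Render the scene (minus the reflective object itself) into face i.

    Rendering from the object's world position, rather than from the player camera, is what keeps reflections of nearby 3D objects accurate when you sample with the reflected vector.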

    Read the article

  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout. It's your typical Cartesian system. I would like to have a single class that handles two lookup styles Get me the set of players at position X,Y Get me the position of player with key K My current implementation is this: class CoordinateMap<V> { Map<Long,Set<V>> coords2value; Map<V,Long> value2coords; // convert (int x, int y) to long key - this is tested, works for all values -1bil to +1bil // My map will NOT require more than 1 bil tiles from the origin :) private Long keyFor(int x, int y) { int kx = x + 1000000000; int ky = y + 1000000000; return (long)kx | (long)ky << 32; } // extract the x and y from the keys private int[] coordsFor(long k) { int x = (int)(k & 0xFFFFFFFF) - 1000000000; int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000; return new int[] { x,y }; } } From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!

    Read the article

  • Translate extrinsic rotations to intrinsic rotations (Euler angles)

    - by MineMan287
    The problem I have is very frustrating. I am using the Jitter physics library, which gives quaternion rotations. You can extract the extrinsic rotations, but I need intrinsic rotations to rotate in OpenTK (there are other reasons as well, so I don't want to make OpenTK use a matrix).

        GL.Rotate(xr, 1, 0, 0)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(zr, 0, 0, 1)

    EDIT: Response to the first answer. Like this?

        GL.Rotate(zr, 0, 0, 1)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(xr, 1, 0, 0)

    Or this?

        GL.Rotate(xr, 1, 0, 0)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(zr, 0, 0, 1)

        GL.Rotate(zr, 0, 0, 1)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(xr, 1, 0, 0)

        GL.Rotate(xr, 1, 0, 0)
        GL.Rotate(yr, 0, 1, 0)
        GL.Rotate(zr, 0, 0, 1)

    I'm confused; please give an example.
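    For reference, the identity that connects the two conventions (offered as a hint: GL.Rotate post-multiplies the current matrix, so consecutive calls compose intrinsically):

        R = Rz(zr) * Ry(yr) * Rx(xr)
          = extrinsic rotations x -> y -> z about the fixed world axes
          = intrinsic rotations z -> y' -> x'' about the moving body axes

    So to reproduce extrinsic x -> y -> z angles, issue the calls in reverse order: GL.Rotate(zr, 0, 0, 1), then GL.Rotate(yr, 0, 1, 0), then GL.Rotate(xr, 1, 0, 0) - the first of the orderings above.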

    Read the article
