Search Results

Search found 839 results on 34 pages for 'vertex'.

Page 23/34 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • My PC suddenly doesn't detect the primary drive (SSD)

    - by smoth190
    My computer has been working fine for months, and it worked today, but tonight I went to start it up to find that my OCZ Vertex 2 isn't being found. When I turn on my computer, the loading screen gets stuck at "Detecting IDE drives...". After a while, it keeps going and lists the drives it finds. The first one in the list should be my Vertex 2, but it just says "None". The computer proceeds to get stuck on "Loading operating system...", which is understandable because the drive with the OS is "gone". My first thought was drive failure, but every time drives have crashed on me, they're still detected--they just don't work. This drive is an SSD, it's pretty new, and I had no problems beforehand. I find it hard to believe it failed. I'm sure it's possible, but I hope this isn't the case. There has been nothing strange going on at all with my PC; it's been running perfectly until now. I was just about to do my monthly chkdsk and defrag today. I popped in my Windows 7 Home Premium disk and booted from it. When I launched the repair tool, it didn't list any operating systems (because the drive is 100% missing...). When I've had disks crash before, it still listed the OS, you just couldn't do anything with it. I tried to restore from an image, but I don't have any of those, either. I opened the command console and listed the drives with wmic logicaldisk get name. Only C: and D: came up. C: was my 1TB storage drive (luckily, all my stuff is here--only the OS is on the SSD!) and D: was the disc drive. So I still had an MIA drive... The SSD didn't come with any driver disks, so I can't install drivers. If there's a way to do this from a CD I can burn with my other PC, please let me know. What the heck do I do? Although only the OS is on my SSD, a new SSD is expensive. I'll probably also have to buy a new copy of Windows (an upgrade would be nice, though...) because I've found it eats my registration key when my PC crashes (and my thousands of dollars of Adobe programs--I'll be on the phone with tech support for a week to get those keys back). And I'll lose my registry, all my settings, and all sorts of other stuff that I'll spend weeks restoring. My computer is a pain in the butt to take out and open up, so if I can't fix it, I'll try fiddling with the plug or putting it into a new computer, but not right now. Any help is greatly appreciated! The day they make crash-less drives will be the day I live without worry.

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?
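    A note on the half-pixel question: in D3D9 the centre of texel (x, y) sits at ((x + 0.5)/width, (y + 0.5)/height), so as long as both the fill loop above and the mesh UVs agree on that convention, no extra renderer-side offset is needed. The seams that remain are usually hidden with a gutter/dilation pass that bleeds each chart's border colour into the empty black texels, so bilinear filtering near a chart edge never picks up the background. Below is a minimal sketch of such a pass (not from the article; the function name and the 4-neighbour bleed are illustrative) that can run on the locked surface before it is saved:

        #include <cstring>
        #include <vector>

        // Bleed filled texels into the empty (black) gutter around each chart so that
        // bilinear filtering near chart edges does not blend in the background colour.
        void DilateLightmap( unsigned char *pBits, unsigned int pitch,
                             unsigned int width, unsigned int height, unsigned int passes )
        {
            std::vector<unsigned char> copy( pitch * height );
            for( unsigned int pass = 0; pass < passes; ++pass )
            {
                memcpy( &copy[ 0 ], pBits, pitch * height );
                for( unsigned int y = 0; y < height; ++y )
                {
                    for( unsigned int x = 0; x < width; ++x )
                    {
                        unsigned char *dst = pBits + y * pitch + x * 4;
                        if( dst[ 0 ] | dst[ 1 ] | dst[ 2 ] ) continue;   // already written by BuildLightmap
                        static const int off[ 4 ][ 2 ] = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
                        for( int n = 0; n < 4; ++n )
                        {
                            const int nx = ( int )x + off[ n ][ 0 ], ny = ( int )y + off[ n ][ 1 ];
                            if( nx < 0 || ny < 0 || nx >= ( int )width || ny >= ( int )height ) continue;
                            const unsigned char *src = &copy[ ny * pitch + nx * 4 ];
                            if( src[ 0 ] | src[ 1 ] | src[ 2 ] ) { memcpy( dst, src, 4 ); break; }
                        }
                    }
                }
            }
        }

    Calling it on the locked rect, e.g. DilateLightmap( ( BYTE* )rct.pBits, rct.Pitch, desc.Width, desc.Height, 2 ), before UnlockRect() typically makes the chart borders disappear at this texture size.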

    Read the article

  • Access violation writing location, in my loop

    - by numerical25
    The exact error I am getting is First-chance exception at 0x0096234a in chp2.exe: 0xC0000005: Access violation writing location 0x002b0000. Windows has triggered a breakpoint in chp2.exe. And the breakpoint stops here for(DWORD i = 0; i < m; ++i) { //we are start at the top of z float z = halfDepth - i*dx; for(DWORD j = 0; j < n; ++j) { //to the left of us float x = -halfWidth + j*dx; float y = 0.0f; vertices[i*n+j].pos = D3DXVECTOR3(x, y, z); //<----- Right here vertices[i*n+j].color = D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f); } } I am not sure what I am doing wrong. below is the code in its entirety #include "MyGame.h" //#include "CubeVector.h" /* This code sets a projection and shows a turning cube. What has been added is the project, rotation and a rasterizer to change the rasterization of the cube. The issue that was going on was something with the effect file which was causing the vertices not to be rendered correctly.*/ typedef struct { ID3D10Effect* pEffect; ID3D10EffectTechnique* pTechnique; //vertex information ID3D10Buffer* pVertexBuffer; ID3D10Buffer* pIndicesBuffer; ID3D10InputLayout* pVertexLayout; UINT numVertices; UINT numIndices; }ModelObject; ModelObject modelObject; // World Matrix D3DXMATRIX WorldMatrix; // View Matrix D3DXMATRIX ViewMatrix; // Projection Matrix D3DXMATRIX ProjectionMatrix; ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL; //grid information #define NUM_COLS 16 #define NUM_ROWS 16 #define CELL_WIDTH 32 #define CELL_HEIGHT 32 #define NUM_VERTSX (NUM_COLS + 1) #define NUM_VERTSY (NUM_ROWS + 1) bool MyGame::InitDirect3D() { if(!DX3dApp::InitDirect3D()) { return false; } D3D10_RASTERIZER_DESC rastDesc; rastDesc.FillMode = D3D10_FILL_WIREFRAME; rastDesc.CullMode = D3D10_CULL_FRONT; rastDesc.FrontCounterClockwise = true; rastDesc.DepthBias = false; rastDesc.DepthBiasClamp = 0; rastDesc.SlopeScaledDepthBias = 0; rastDesc.DepthClipEnable = false; rastDesc.ScissorEnable = false; rastDesc.MultisampleEnable = false; rastDesc.AntialiasedLineEnable = false; ID3D10RasterizerState *g_pRasterizerState; mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState); mpD3DDevice->RSSetState(g_pRasterizerState); // Set up the World Matrix //The first line of code creates your identity matrix. 
Second line of code //second combines your camera position, target location, and which way is up respectively D3DXMatrixIdentity(&WorldMatrix); D3DXMatrixLookAtLH(&ViewMatrix, new D3DXVECTOR3(200.0f, 60.0f, -20.0f), new D3DXVECTOR3(200.0f, 50.0f, 0.0f), new D3DXVECTOR3(0.0f, 1.0f, 0.0f)); // Set up the projection matrix D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f); if(!CreateObject()) { return false; } return true; } //These are actions that take place after the clearing of the buffer and before the present void MyGame::GameDraw() { static float rotationAngle = 0.0f; // create the rotation matrix using the rotation angle D3DXMatrixRotationY(&WorldMatrix, rotationAngle); rotationAngle += (float)D3DX_PI * 0.0f; // Set the input layout mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout); // Set vertex buffer UINT stride = sizeof(VertexPos); UINT offset = 0; mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset); mpD3DDevice->IASetIndexBuffer(modelObject.pIndicesBuffer, DXGI_FORMAT_R32_UINT, 0); // Set primitive topology mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST); // Combine and send the final matrix to the shader D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix); pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix); // make sure modelObject is valid // Render a model object D3D10_TECHNIQUE_DESC techniqueDescription; modelObject.pTechnique->GetDesc(&techniqueDescription); // Loop through the technique passes for(UINT p=0; p < techniqueDescription.Passes; ++p) { modelObject.pTechnique->GetPassByIndex(p)->Apply(0); // draw the cube using all 36 vertices and 12 triangles mpD3DDevice->DrawIndexed(modelObject.numIndices,0,0); } } //Render actually incapsulates Gamedraw, so you can call data before you actually clear the buffer or after you //present data void MyGame::Render() { DX3dApp::Render(); } bool MyGame::CreateObject() { //dx will represent the width and the height of the spacing of each vector float dx = 1; //Below are the number of vertices //m is the vertices of each row. 
n is the columns DWORD m = 30; DWORD n = 30; //This get the width of the entire land //30 - 1 = 29 rows * 1 = 29 * 0.5 = 14.5 float halfWidth = (n-1)*dx*0.5f; float halfDepth = (m-1)*dx*0.5f; float vertexsize = m * n; VertexPos vertices[80]; for(DWORD i = 0; i < m; ++i) { //we are start at the top of z float z = halfDepth - i*dx; for(DWORD j = 0; j < n; ++j) { //to the left of us float x = -halfWidth + j*dx; float y = 0.0f; vertices[i*n+j].pos = D3DXVECTOR3(x, y, z); vertices[i*n+j].color = D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f); } } int k = 0; DWORD indices[540]; for(DWORD i = 0; i < n-1; ++i) { for(DWORD j = 0; j < n-1; ++j) { indices[k] = (i * n) + j; indices[k + 1] = (i * n) + j + 1; indices[k + 2] = (i + 1) * n + j; indices[k + 3] = (i + 1) * n + j; indices[k + 4] = (i * n) + j + 1; indices[k + 5] = (i + 1) * n + j+ 1; k += 6; } } //Create Layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = (sizeof(layout)/sizeof(layout[0])); modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos); //Create buffer desc D3D10_BUFFER_DESC bufferDesc; bufferDesc.Usage = D3D10_USAGE_DEFAULT; bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices; bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER; bufferDesc.CPUAccessFlags = 0; bufferDesc.MiscFlags = 0; D3D10_SUBRESOURCE_DATA initData; initData.pSysMem = vertices; //Create the buffer HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer); if(FAILED(hr)) return false; modelObject.numIndices = sizeof(indices)/sizeof(DWORD); bufferDesc.ByteWidth = sizeof(DWORD) * modelObject.numIndices; bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER; initData.pSysMem = indices; hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pIndicesBuffer); if(FAILED(hr)) return false; ///////////////////////////////////////////////////////////////////////////// //Set up fx files LPCWSTR effectFilename = L"effect.fx"; modelObject.pEffect = NULL; hr = D3DX10CreateEffectFromFile(effectFilename, NULL, NULL, "fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, mpD3DDevice, NULL, NULL, &modelObject.pEffect, NULL, NULL); if(FAILED(hr)) return false; pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix(); //Dont sweat the technique. Get it! LPCSTR effectTechniqueName = "Render"; modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName); if(modelObject.pTechnique == NULL) return false; //Create Vertex layout D3D10_PASS_DESC passDesc; modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc); hr = mpD3DDevice->CreateInputLayout(layout, numElements, passDesc.pIAInputSignature, passDesc.IAInputSignatureSize, &modelObject.pVertexLayout); if(FAILED(hr)) return false; return true; }
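    A likely cause, judging only from the code shown: with m = n = 30 the grid needs m * n = 900 vertices and (m - 1) * (n - 1) * 6 = 5046 indices, but the stack arrays hold only 80 and 540, so the write at vertices[i*n+j] runs off the end of the array almost immediately. A sketch of the fix, sizing the containers from m and n with std::vector (VertexPos and the D3DX types as in the post; requires <vector>):

        std::vector<VertexPos> vertices( m * n );
        std::vector<DWORD>     indices( ( m - 1 ) * ( n - 1 ) * 6 );

        for(DWORD i = 0; i < m; ++i)
        {
            float z = halfDepth - i*dx;
            for(DWORD j = 0; j < n; ++j)
            {
                float x = -halfWidth + j*dx;
                vertices[i*n+j].pos   = D3DXVECTOR3(x, 0.0f, z);
                vertices[i*n+j].color = D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f);
            }
        }
        // fill indices with the same double loop as before; note the original row loop
        // also uses n-1 as its bound -- with m == n it happens to work, but m-1 is intended.

        // sizeof(vertices)/sizeof(VertexPos) no longer applies to a vector:
        modelObject.numVertices = (UINT)vertices.size();
        bufferDesc.ByteWidth    = sizeof(VertexPos) * modelObject.numVertices;
        initData.pSysMem        = &vertices[0];

        modelObject.numIndices  = (UINT)indices.size();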

    Read the article

  • OpenGL extension vs OpenGL core

    - by user209347
    I have a question: I'm writing a cross-platform OpenGL engine in C++, and I've figured out that Windows forces developers to access OpenGL features above 1.1 through extensions. Now the thing is, on Linux I know I can access functions directly through glext.h if the OpenGL version supports them. The problem is: if the core on Linux doesn't support a feature, is it possible that an extension supports the same functionality, in my case vertex buffer objects? I'm doing something like this: Windows: #define glFunction functionpointer_to_the_extension. Linux: since glext.h already declares glFunction, I can write glFunction in client code and compile it on both Windows AND Linux without changing a single line of the client code using the engine (my goal). Now the thing is, I saw a tutorial use only the extension on Linux, without checking the OpenGL implementation version. If the functionality is available in the core, is it also available as an extension (VBOs, e.g.)? Or is an extension something you can never know is available? I want to write an engine that uses all the possibilities of the hardware, so I need to check (on Linux) for extensions as well as the core version for possible functionality.
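    For the VBO case specifically, GL_ARB_vertex_buffer_object exposes the same entry points as core 1.5 under *ARB names, so the usual pattern is to check the reported version first and fall back to the extension string. A rough sketch of that check on the GLX side (not a complete loader; the function-pointer handling and the substring-based extension check are simplifications):

        #include <GL/gl.h>
        #include <GL/glx.h>
        #include <cstdio>
        #include <cstring>

        typedef void ( *GenBuffersFn )( GLsizei, GLuint* );
        static GenBuffersFn myGenBuffers = 0;

        static bool HasExtension( const char *name )
        {
            const char *ext = ( const char* )glGetString( GL_EXTENSIONS );
            return ext && strstr( ext, name ) != 0;
        }

        bool InitVBOEntryPoints()   // requires a current GL context
        {
            int major = 1, minor = 0;
            sscanf( ( const char* )glGetString( GL_VERSION ), "%d.%d", &major, &minor );

            if( major > 1 || ( major == 1 && minor >= 5 ) )           // VBOs are core since 1.5
                myGenBuffers = ( GenBuffersFn )glXGetProcAddress( ( const GLubyte* )"glGenBuffers" );
            else if( HasExtension( "GL_ARB_vertex_buffer_object" ) )  // same functionality, ARB suffix
                myGenBuffers = ( GenBuffersFn )glXGetProcAddress( ( const GLubyte* )"glGenBuffersARB" );

            return myGenBuffers != 0;   // repeat for glBindBuffer, glBufferData, ... as needed
        }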

    Read the article

  • Need help transforming DirectX 9 skybox hlsl shader to DirectX 11

    - by J2V
    I am in the middle of implementing a skybox to my game. I have been following this tutorial http://rbwhitaker.wikidot.com/skyboxes-2. I am using MonoGame as a framework and in order to support both Windows and Windows 8 metro I need to compile the shader with pixel and vertex shader 4. compile vs_4_0_level_9_1 compile ps_4_0_level_9_1 However some of the hlsl syntax has been updated with DX10 and DX11. I need to update this hlsl code: float4x4 World; float4x4 View; float4x4 Projection; float3 CameraPosition; Texture SkyBoxTexture; samplerCUBE SkyBoxSampler = sampler_state { texture = <SkyBoxTexture>; magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = Mirror; AddressV = Mirror; }; struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float3 TextureCoordinate : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4 viewPosition = mul(worldPosition, View); output.Position = mul(viewPosition, Projection); float4 VertexPosition = mul(input.Position, World); output.TextureCoordinate = VertexPosition - CameraPosition; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { return texCUBE(SkyBoxSampler, normalize(input.TextureCoordinate)); } technique Skybox { pass Pass1 { VertexShader = compile vs_2_0 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I quess I need to change Texture into TextureCube, change sampler, swap texCUBE() with TextureCube.Sample() and change PixelShader return semantic to SV_Target0. I'm very new in shader languages and any help is appreciated!

    Read the article

  • "const char *" is incompatible with parameter of type "LPCWSTR" error

    - by N0xus
    I'm trying to incorporate some code from Programming an RTS Game With Direct3D into my game. Before anyone says it, I know the book is kinda old, but it's the particle effects system he creates that I'm trying to use. He initialises his shader class like this: void SHADER::Init(IDirect3DDevice9 *Dev, const char fName[], int typ) { m_pDevice = Dev; m_type = typ; if(m_pDevice == NULL)return; // Assemble and set the pixel or vertex shader HRESULT hRes; LPD3DXBUFFER Code = NULL; LPD3DXBUFFER ErrorMsgs = NULL; if(m_type == PIXEL_SHADER) hRes = D3DXCompileShaderFromFile(fName, NULL, NULL, "Main", "ps_2_0", D3DXSHADER_DEBUG, &Code, &ErrorMsgs, &m_pConstantTable); else hRes = D3DXCompileShaderFromFile(fName, NULL, NULL, "Main", "vs_2_0", D3DXSHADER_DEBUG, &Code, &ErrorMsgs, &m_pConstantTable); } However, this generates the following error: Error 1 error C2664: 'D3DXCompileShaderFromFileW' : cannot convert parameter 1 from 'const char []' to 'LPCWSTR' The compiler states the issue is with fName in the D3DXCompileShaderFromFile line. I know this has something to do with the character set, and my program was already running with the Unicode character set. I read that to solve the above problem I need to switch to a multi-byte character set. But if I do that, I get other errors in my code, like so: Error 2 error C2664: 'D3DXCreateEffectFromFileA' : cannot convert parameter 2 from 'const wchar_t *' to 'LPCSTR' attributed to the following line of code: if(FAILED(D3DXCreateEffectFromFile(m_pD3DDevice9,effectFileName.c_str(),NULL,NULL,0,NULL,&m_pCurrentEffect,&pErrorBuffer))) This if is nested within another if statement checking my effect map list, though it is the FAILED macro that gets the red underline. Likewise I get another error with the following line of code: wstring effectFileName = TEXT("Sky.fx"); With the error message being: Error 1 error C2440: 'initializing' : cannot convert from 'const char [7]' to 'std::basic_string<_Elem,_Traits,_Ax>' If I change it back to the Unicode character set, I get the original (fewer) errors. Leaving it as multi-byte, I get more errors. Does anyone know of a way I can fix this issue?
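    One way out, independent of the project's character set, is to keep every literal that feeds a "T"-style Win32/D3DX API behind TEXT()/_T() and use TCHAR-based strings, or to call the explicit -A/-W variants directly. A small sketch of both options (names taken from the post where possible; the D3DX calls are shown as comments because they need the DirectX SDK headers and libs to compile):

        #include <windows.h>
        #include <tchar.h>
        #include <string>

        // 1) Let TEXT() pick "..." or L"..." to match the build's character set:
        std::basic_string<TCHAR> effectFileName = TEXT("Sky.fx");
        const TCHAR *shaderFile = TEXT("particle.vs");   // file name is an assumption

        // 2) Or bypass the macro layer and call the explicit ANSI entry points,
        //    keeping the book's const char* interface untouched:
        //      D3DXCompileShaderFromFileA(fName, NULL, NULL, "Main", "vs_2_0",
        //                                 D3DXSHADER_DEBUG, &Code, &ErrorMsgs, &m_pConstantTable);
        //      D3DXCreateEffectFromFileA(m_pD3DDevice9, "Sky.fx", NULL, NULL, 0, NULL,
        //                                &m_pCurrentEffect, &pErrorBuffer);

    Mixing std::wstring with TEXT() only compiles under Unicode, which is why the Sky.fx line broke after switching to multi-byte; std::basic_string<TCHAR> (or staying on Unicode and converting fName instead) sidesteps that.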

    Read the article

  • Box 2d basic questions

    - by philipp
    I am a bit new to Box2D and I am developing a game with type and letters. I am using an SVG font and generate the Box2D bodies directly from the glyphs' path definitions, using their convex hulls. I also have a decomposition routine that decomposes a hull if necessary. All of this is more or less working, except that I get some strange errors which are definitely caused by the scale factors. The problem comes from two things: first, the world scale of Box2D; second, the precision of the curve approximation of the glyph vectors. When the input vertices are scaled down for Box2D, some of them become equal due to numerical precision, which causes errors in Box2D. If I scale my glyphs up a bit, this goes away. It also goes away if I choose a different world scale factor, but that slows down the whole animation quite a lot! So if my viewport is about 990px * 600px and I want to animate glyphs in Box2D which should have a size from about 50px * 50px up to 300px * 300px, which scale factor should I choose for the b2World? How small should the smallest distance from one vertex to another be while approximating the glyph vectors? Thanks for the help, greetings philipp EDIT: I continued reading the Box2D docs and, after rethinking the unit system, which is designed to handle objects from 0.1 up to 10 meters, I calculated a scale factor of 75. So objects 600px wide will be 8 meters wide in Box2D, and even small objects of about 20px width will become 0.26 meters wide in Box2D. I will keep trying with these values, but if there is somebody out there who could give me clever advice I would be happy!
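    A small sketch of the conversion helpers this usually boils down to (the 75 px/m factor is the one from the edit above; b2_linearSlop is Box2D's own length tolerance from b2Settings.h, about 0.005 m in the classic settings), including a check for hull points that collapse together once scaled:

        #include <Box2D/Box2D.h>

        static const float PIXELS_PER_METER = 75.0f;

        inline b2Vec2 pxToWorld( float xPx, float yPx )
        {
            return b2Vec2( xPx / PIXELS_PER_METER, yPx / PIXELS_PER_METER );
        }

        inline float worldToPx( float meters ) { return meters * PIXELS_PER_METER; }

        // Drop glyph-hull points that end up closer than a few multiples of Box2D's
        // tolerance after scaling; vertices this close are what trigger the errors.
        inline bool farEnoughApart( const b2Vec2 &a, const b2Vec2 &b )
        {
            const float minDist = 4.0f * b2_linearSlop;
            return ( a - b ).LengthSquared() > minDist * minDist;
        }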

    Read the article

  • How do I draw an OpenGL point sprite using libgdx for Android?

    - by nbolton
    Here are a few snippets of what I have so far... void create() { renderer = new ImmediateModeRenderer(); tiles = Gdx.graphics.newTexture( Gdx.files.getFileHandle("res/tiles2.png", FileType.Internal), TextureFilter.MipMap, TextureFilter.Linear, TextureWrap.ClampToEdge, TextureWrap.ClampToEdge); } void render() { Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1); } void renderSprite() { int handle = tiles.getTextureObjectHandle(); Gdx.gl.glBindTexture(GL.GL_TEXTURE_2D, handle); Gdx.gl.glEnable(GL.GL_POINT_SPRITE); Gdx.gl11.glTexEnvi(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE); renderer.begin(GL.GL_POINTS); renderer.vertex(pos.x, pos.y, pos.z); renderer.end(); } create() is called once when the program starts, and renderSprite() is called for each sprite (so pos is unique to each sprite), where the sprites are arranged in a sort-of 3D cube. Unfortunately though, this just renders a few white dots... I suppose the texture isn't being bound, which is why I'm getting white dots. Also, when I draw my sprites at anything other than 0 on the z-axis, they do not appear -- I read that I need to increase my zFar and zNear, but I have no idea how to do this using libgdx (perhaps it's because I'm using an ortho projection? What do I use instead?). I know that the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL.

    Read the article

  • Doubt about texture waves in CG Ocean Shader

    - by Alexandre
    I'm new to graphics programming, and I'm having some trouble understanding the ocean shader described in "Effective Water Simulation from Physical Models" from GPU Gems. The source code associated with this article is here. My problem has been understanding the concept of texture waves. First of all, what is achieved by texture waves? I'm having a hard time figuring out their usefulness. In section 1.2.4 of the article, it says that the waves summed into the texture have the same parametrization as the waves used for vertex positioning. Does that mean I can't use the texture provided with the source code if I change the parameters of the waves, or add more waves to the sum? And in section 1.4.1, it is said that we can assume there is no rotation between texture space and world space if the texture coordinates for our normal map are implicit. What does it mean that the normal map coordinates are "implicit"? And why do I need a rotation between texture and world space if the normal map coordinates are not implicit? I would be very grateful for any help on this.

    Read the article

  • Manually writing a dx11 tessellation shader

    - by Tudor
    I am looking for resources on the steps involved in manually implementing tessellation (I'm using Unity Cg). Today it seems to be all the rage to hide most of the GPU code far away and use rather rigid simplifications such as Unity's surface shaders, which seem useless unless you're doing superficial stuff. A little background: I have procedurally generated meshes (using marching cubes) which have quality normals but no UVs and no tangents. I have successfully written a custom vertex and fragment shader to do triplanar texture and bumpmap projection as well as some custom stuff (custom lighting, procedurally warping the texture for variation, etc.). I am using the GPU Gems book as a reference. Now I need to implement tessellation, but it seems I must calculate the tangents at runtime by swizzling normals (ctrl+F this in Gems: <normal.z, normal.y, -normal.x>) before the tessellator gets them. And I also need to keep my custom vert+frag setup (with my custom parameters/textures being passed between them), so apparently I cannot use surface shaders. Can anyone provide some guidance?

    Read the article

  • GLSL compile error when accessing an array with compile-time constant index

    - by Benlitz
    I have this shader that works well on my computer (using an ATI HD 5700). I have a loop iterating between two constant values, which is, afaik, acceptable in a glsl shader. I write stuff in two arrays in this loop. #define NB_POINT_LIGHT 2 ... varying vec3 vVertToLight[NB_POINT_LIGHT]; varying vec3 vVertToLightWS[NB_POINT_LIGHT]; ... void main() { ... for (int i = 0; i < NB_POINT_LIGHT; ++i) { if (bPointLightUse[i]) { vVertToLight[i] = ConvertToTangentSpace(ShPointLightData[i].Position - WorldPos.xyz); vVertToLightWS[i] = ShPointLightData[i].Position - WorldPos.xyz; } } ... } I tried my program on another computer equipped with an nVidia GTX 560 Ti, and it fails to compile my shader. I get the following errors (94 and 95 are the lines of the two affectations) when calling glLinkProgram: Vertex info ----------- 0(94) : error C5025: lvalue in assignment too complex 0(95) : error C5025: lvalue in assignment too complex I think my code is valid, I don't know if this comes from a compiler bug, a conversion of my shader to another format from the compiler (nvidia looks to convert it to CG), or if I just missed something. I already tried to remove the if (bPointLightUse[i]) statement and I still have the same error. However, if I just write this: vVertToLight[0] = ConvertToTangentSpace(ShPointLightData[0].Position - WorldPos.xyz); vVertToLightWS[0] = ShPointLightData[0].Position - WorldPos.xyz; vVertToLight[1] = ConvertToTangentSpace(ShPointLightData[1].Position - WorldPos.xyz); vVertToLightWS[1] = ShPointLightData[1].Position - WorldPos.xyz; Then I don't have the error anymore, but it's really unconvenient so I would prefer to keep something loop-based. Here is the more detailled config that works: Vendor: ATI Technologies Inc. Renderer: ATI Radeon HD 5700 Series Version: 4.1.10750 Compatibility Profile Context Shading Language version: 4.10 And here is the more detailed config that doesn't work (should also be compatibility profile, although not indicated): Vendor: NVIDIA Corporation Renderer: GeForce GTX 560 Ti/PCI/SSE2 Version: 4.1.0 Shading Language version: 4.10 NVIDIA via Cg compiler

    Read the article

  • gl_PointCoord always zero

    - by Jonathan
    I am trying to draw point sprites in OpenGL with a shader but gl_PointCoord is always zero. Here is my code Setup: //Shader creation..(includes glBindAttribLocation(program, ATTRIB_P, "p");) glEnableVertexAttribArray(ATTRIB_P); In the rendering loop: glUseProgram(shader_particles); float vertices[]={0.0f,0.0f,0.0f}; glEnable(GL_TEXTURE_2D); glEnable(GL_POINT_SPRITE); glEnable(GL_VERTEX_PROGRAM_POINT_SIZE); //glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);(tried with this on/off, doesn't work) glVertexAttribPointer(ATTRIB_P, 3, GL_FLOAT, GL_FALSE, 0, vertices); glDrawArrays(GL_POINTS, 0, 1); Vertex Shader: attribute highp vec4 p; void main() { gl_PointSize = 40.0f; gl_Position = p; } Fragment Shader: void main() { gl_FragColor = vec4(gl_PointCoord.st, 0, 1);//if the coords range from 0-1, this should draw a square with black,red,green,yellow corners } But this only draws a black square with a size of 40. What am I doing wrong? Edit: Point sprites work when i use the fixed function, but I need to use shaders because in the end the code will be for opengl es 2.0 glUseProgram(0); glEnable(GL_TEXTURE_2D); glEnable(GL_POINT_SPRITE); glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); glPointSize(40); glBegin(GL_POINTS); glVertex3f(0.0f,0.0f,0.0f); glEnd(); Is anyone able to get point sprites working with shader? If so, please share some code.
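    A state checklist worth trying on the shader path (desktop compatibility profile only; GLES2 needs none of it). This is a guess at what differs from the working fixed-function snippet rather than a confirmed fix: COORD_REPLACE is per-texture-unit state, and gl_PointCoord/gl_PointSize are only honoured with the point-sprite and program-point-size enables in place.

        glUseProgram( shader_particles );

        glEnable( GL_VERTEX_PROGRAM_POINT_SIZE );   // honour gl_PointSize written in the vertex shader
        glEnable( GL_POINT_SPRITE );

        glActiveTexture( GL_TEXTURE0 );             // COORD_REPLACE applies to the active unit
        glTexEnvi( GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE );

        glEnableVertexAttribArray( ATTRIB_P );
        glVertexAttribPointer( ATTRIB_P, 3, GL_FLOAT, GL_FALSE, 0, vertices );
        glDrawArrays( GL_POINTS, 0, 1 );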

    Read the article

  • exporting bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts). Or I'm missing out something in the math.. It's gonna be long, but here's as much background as I can think of. First, a few notes and assumptions: I'm using column-major order and multiply from right to left. So for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means whenever I export a matrix from blender through python, I export it (in text format) in 4 lines, each representing a column. This is so I can then I can read them back into my engine like this: if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[0], &skeleton.joints[currentJointIndex].inverseBindTransform.m[1], &skeleton.joints[currentJointIndex].inverseBindTransform.m[2], &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[4], &skeleton.joints[currentJointIndex].inverseBindTransform.m[5], &skeleton.joints[currentJointIndex].inverseBindTransform.m[6], &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[8], &skeleton.joints[currentJointIndex].inverseBindTransform.m[9], &skeleton.joints[currentJointIndex].inverseBindTransform.m[10], &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[12], &skeleton.joints[currentJointIndex].inverseBindTransform.m[13], &skeleton.joints[currentJointIndex].inverseBindTransform.m[14], &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) { I'm simplifying the code I show because otherwise it would make things unnecessarily harder (in the context of my question) to explain / follow. Please refrain from making remarks related to optimizations. This is not final code. Having said that, if I understand correctly, the basic idea of skinning/animation is: I have a a mesh made up of vertices I have the mesh model-world transform W I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj meaning matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1. I have keyframes each containing a set of current poses Cj for each joint J. These are initially imported to my engine in TQS format but after (S)LERPING them I compose them into Cj matrices which are equivalent to the Bjs (not the Bj^-1 ones) only that for the current spacial configurations of each joint at that frame. Given the above, the "skeletal animation algorithm is" On each frame: check how much time has elpased and compute the resulting current time in the animation, from 0 meaning frame 0 to 1, meaning the end of the animation. (Oh and I'm looping forever so the time is mod(total duration)) for each joint: 1 -calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1 2 -use the current animation time to LERP the componets of the TQS and come up with an interpolated current pose matrix Cj which should transform from the joints current configuration space to world space. 
Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj 3 -now that I have world versions of Bj and Cj, I store this joint's world- skinning matrix K_wj = Cj_w Bj_w^-1. The above is roughly implemented like so: - (void)update:(NSTimeInterval)elapsedTime { static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z)); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z)); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } } At this point, I should have my skinning palette. So on each frame in my vertex shader, I do: uniform mat4 modelMatrix; uniform mat4 projectionMatrix; uniform mat3 normalMatrix; uniform mat4 skinningPalette[6]; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { vec3 eyeNormal = normalize(normalMatrix * normal); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition; } The result: The mesh parts that are supposed to animate do animate and follow the expected motion, however, the rotations are messed up in terms of orientations. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of rotations seem to be off. 
So a few observations: In the above shader notice I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model to joint and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen. As far as exporting the joints from Blender, my python script exports for each armature bone in bind pose, it's matrix in this way: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: poseJoint = skeleton.pose.bones[joint.name] jointTransform = poseJoint.matrix.inverted() file.write('Joint ' + joint.name + ' Transform {\n') for col in jointTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') And for current / keyframe poses (assuming I'm in the right keyframe): def exportAnimations(filepath): # Only one skeleton per scene objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE'] if len(objList) == 0: return elif len(objList) > 1: return #raise exception? dialog box? skeleton = objList[0] jointNames = [bone.name for bone in skeleton.data.bones] for action in bpy.data.actions: # One animation clip per action in Blender, named as the action animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip" file = open(animationClipFilePath, 'w') file.write('target skeleton: ' + skeleton.name + '\n') file.write('joints count: {:d}'.format(len(jointNames)) + '\n') skeleton.animation_data.action = action keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves]) keyframes = [] for fcurve in action.fcurves: for keyframe in fcurve.keyframe_points: keyframes.append(keyframe.co[0]) keyframes = set(keyframes) keyframes = [kf for kf in keyframes] keyframes.sort() file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n') for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) joint = skeleton.pose.bones[i] jointCurrentPoseTransform = joint.matrix translationV = jointCurrentPoseTransform.to_translation() rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion() scaleV = jointCurrentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') file.close() Which I believe follow the theory explained at the beginning of my question. But then I checked out Blender's directX .x exporter for reference.. 
and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used so you can compare): if joint.parent: jointTransform = poseJoint.parent.matrix.inverted() else: jointTransform = Matrix() jointTransform *= poseJoint.matrix and exporting current keyframe poses like this: if joint.parent: jointCurrentPoseTransform = joint.parent.matrix.inverted() else: jointCurrentPoseTransform = Matrix() jointCurrentPoseTransform *= joint.matrix why are they using the parent's transform instead of the joint in question's? isn't the join transform assumed to exist in the context of a parent transform since after all it transforms from this joint's space to its parent's? Why are they concatenating in the same order for both bind poses and keyframe poses? If these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.

    Read the article

  • OpenGL ES 2 shaders for drawing buildings and roads like Google Maps does

    - by Pris
    I'm trying to create a shader that'll give me an effect similar to what buildings and roads look like on 3D Google Maps. You can see the effect interactively if you enable WebGL at maps.google.com, and I also found a couple of screenshots that illustrate what I'm trying to achieve: Thing I noticed: There's some kind of transparency thing going on with the roads/ground and the buildings, but not between the buildings themselves. It might be that they're rendering the ground and roads after the buildings with the right blend functions to achieve that effect. If you look closely, you'll see parts of the building profiles have an outline. The roads also have nice clean outlines. There are a lot of techniques for outlining things with shaders... but I'm curious to find out what might have been used in this case considering mobile hardware and a large number of entities with outlines (roads and buildings) I'm assuming that for the lighting, some sort of simple diffuse per-vertex shader is being used for the buildings though I could be wrong. I'm especially curious about the 'look' they achieved with buildings (clean, precise outlines/shading). It reminds me a little of what you'd see when designing stuff with CAD applications like SolidWorks: I'd appreciate any advice on achieving this kind of look with ES 2 shaders.

    Read the article

  • DirectX9 / HLSL Shader Model 3 - Passing Doubles between Shaders

    - by P. Avery
    I need higher precision on a few values within my vertex and pixel shaders...I'm currently using floats, so I would like to use doubles...I've read that HLSL Model 4 has two functions to convert a double into two unsigned integers and back again( asuint() and asdouble() ). These functions are only supported on HLSL 4 and I am using DirectX 9 which will only compile HLSL Model 3 and below... How can I pass a double between shaders? here is implementation for HLSL 4: struct VS_INPUT { float2 v; }; struct PS_INPUT { uint a; uint b; uint c; uint d; }; PS_INPUT VertexShader( VS_INPUT Input ) { PS_INPUT Output = ( PS_INPUT )0; double2 vPos = mul( Input.v, mWorld ).xy; asuint( vPos.x, Output.a, Output.b ); asuint( vPos.y, Output.c, Output.d ); return Output; } float4 PixelShader( PS_INPUT Input ) { double2 vPos; vPos.x = asdouble( Input.a, Input.b ); vPos.y = asdouble( Input.c, Input.d ); ... return 1; }
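    Under SM3 there is no 64-bit type to pass, so the usual workaround is a "double-single" pair: split each double into a high float plus a low float carrying the residual on the CPU, hand both to the shader, and reconstruct (or do emulated-precision math) there. A sketch of the CPU side (the struct and names are mine):

        struct DoubleSingle { float hi, lo; };

        inline DoubleSingle Split( double v )
        {
            DoubleSingle ds;
            ds.hi = ( float )v;                        // coarse part
            ds.lo = ( float )( v - ( double )ds.hi );  // residual the float dropped
            return ds;
        }

        // e.g. pack a high-precision 2D position into one float4 constant:
        //   DoubleSingle x = Split( worldPosX ), y = Split( worldPosY );
        //   D3DXVECTOR4 hiLo( x.hi, y.hi, x.lo, y.lo );
        //   pEffect->SetVector( "vPosHiLo", &hiLo );

    In the shader the value is then used as (hi + lo), keeping the two parts separate for any subtraction against another double-single value so the extra precision is not lost again.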

    Read the article

  • Transforming object world space matrix to a position in world space

    - by Fredrik Boston Westman
    I'm trying to make a function for picking objects with a bounding sphere, however I have run into a problem. First I check against my bounding sphere, and if that checks out I test against the vertices. I have already tested my vertex picking method and it works fine; however, when I check first with my bounding sphere method it doesn't register anything. My conclusion is that when I transform my sphere position into the position of the object in world space, the transformation goes wrong (I base this on the fact that the x coordinate always becomes 1, even though I translate none of my meshes along the x-axis to 1). So my question is: what is the proper way to transform an object's world space matrix into a position vector? This is how I do it now. First I set my position vector to 0: XMVECTOR meshPos = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); Then I transform it with my object's world matrix and add the offset to the center of the mesh: meshPos = XMVector3TransformCoord(meshPos, meshWorld) + centerOffset;
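    For reference, a sketch of the direct route: with row-major DirectXMath matrices the world-space position is simply the translation row r[3] of the world matrix, so there is no need to transform the origin at all. The second variant also rotates/scales the centre offset, which is a common reason the sphere lands in the wrong place when the offset is stored in model space (names follow the post; drop the DirectX namespace if you are on the older xnamath.h):

        #include <DirectXMath.h>
        using namespace DirectX;

        // World-space centre when centerOffset is already a world-space vector:
        inline XMVECTOR SphereCenter( FXMMATRIX meshWorld, FXMVECTOR centerOffset )
        {
            return XMVectorAdd( meshWorld.r[3], centerOffset );   // r[3] = (m41, m42, m43, 1)
        }

        // World-space centre when centerOffset is stored in model space:
        inline XMVECTOR SphereCenterModelOffset( FXMMATRIX meshWorld, FXMVECTOR centerOffset )
        {
            return XMVectorAdd( meshWorld.r[3],
                                XMVector3TransformNormal( centerOffset, meshWorld ) );
        }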

    Read the article

  • How can I run the pixel shader effect?

    - by Yashwinder
    Stated below is the code for my pixel shader, which runs after the vertex shader. I have set the worldViewProjection matrix in my program, but I don't know how to set the progress variable in my pixel shader file, which should make the image displayed on a quad give a transition effect. My pixel shader currently gives a static result, and now I want it to animate a transition. For this I have to add a progress variable to my pixel shader and set it through the constant table, i.e. constantTable.SetValue(D3DDevice, "progress", progress); I am having a problem using this function for progress in my program. Does anybody know how to set this variable from my program? My new pixel shader code is: float progress : register(C0); sampler2D implicitInput : register(s0); sampler2D oldInput : register(s1); struct VS_OUTPUT { float4 Position : POSITION; float4 Color : COLOR0; float2 UV : TEXCOORD0; }; float4 Blinds(float2 uv) { if(frac(uv.y * 5) < progress) { return tex2D(implicitInput, uv); } else { return tex2D(oldInput, uv); } } // Pixel Shader { return Blinds(input.UV); }
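    Assuming the shader is compiled with D3DXCompileShaderFromFile and the constant table from that call is kept around (as the SetValue line suggests), a sketch of the host-side update could look like the following; SetFloat is the single-float convenience method, and progress is stepped from 0 to 1 over the length of the transition (the function name and the 2-second duration are illustrative):

        #include <d3dx9.h>

        // pConstantTable comes from D3DXCompileShaderFromFile(..., &pConstantTable);
        void UpdateTransition( LPDIRECT3DDEVICE9 pDevice, LPD3DXCONSTANTTABLE pConstantTable,
                               float elapsedSeconds )
        {
            static float progress = 0.0f;
            progress += elapsedSeconds / 2.0f;          // 2-second wipe; pick any duration
            if( progress > 1.0f ) progress = 1.0f;

            D3DXHANDLE hProgress = pConstantTable->GetConstantByName( NULL, "progress" );
            pConstantTable->SetFloat( pDevice, hProgress, progress );
        }

    The two samplers still need their textures bound to stages 0 and 1 (the new and old images) with IDirect3DDevice9::SetTexture, and the compiled pixel shader set with SetPixelShader, before the quad is drawn.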

    Read the article

  • Understanding DeviceContext and Shaders in Direct3D/SlimDX

    - by Carson Myers
    I've been working through this tutorial about drawing triangles with SlimDX, and while it works, I've been trying to structure my program differently than in the tutorial. The tutorial just has everything in the main method, I'm trying to separate components into their own classes. But I'm not sure where certain components belong: namely, contexts and shaders. The tutorial (as it's just rendering one triangle) has one device, one swapchain, one device context and one set of shaders. intuition says that there is only one device/swapchain for one game, but with contexts I don't know. I made a Triangle class and put the vertex stuff in there. Should it also create a context? Should it load its own shaders? Or should I pass some global context and shaders to the triangle class when it is constructed? Or pass the shaders and construct a new context? I'm just getting started with 3D programming, so in addition to answering this question, if anyone knows of a tutorial or article or something about the larger-scale structure of a game, I'd be interested in seeing that as well.

    Read the article

  • how do I match movement of an object from 2d video into a 3d package ?

    - by George Profenza
    I'm trying to add objects in a 3d package(Blender) using recorded footage. I've played with Icarus and it's great to capture the camera movement. Also the Blender 2.41 importer script works in Blender 2.49 as well. The problem is I can't seem to get 3d coordinates for objects. I have tried Autodesk(RealVIZ) MatchMover 2011 and gone through the tutorials. Tutorial 3 shows how to link a vertex from a 3d mesh to a 2d trackpoint, but the setup is for camera movement. Tutorial 4 goes into Motion capture, but it uses 2 videos of the same motion taken with 2 cameras from different viewpoints. I've tried to bypass that using the same footage twice, but that failed, as the 3d coordinate system ends up messed up. What software do you recommend for this (mapping 3d coordinates to 2d tracked points and importing them into a 3d package) ? What is the recommended technique ? Any good examples out there ? Thanks, George

    Read the article

  • Writing the correct value in the depth buffer when using ray-casting

    - by hidayat
    I am doing a ray-casting in a 3d texture until I hit a correct value. I am doing the ray-casting in a cube and the cube corners are already in world coordinates so I don't have to multiply the vertices with the modelviewmatrix to get the correct position. Vertex shader world_coordinate_ = gl_Vertex; Fragment shader vec3 direction = (world_coordinate_.xyz - cameraPosition_); direction = normalize(direction); for (float k = 0.0; k < steps; k += 1.0) { .... pos += direction*delta_step; float thisLum = texture3D(texture3_, pos).r; if(thisLum > surface_) ... } Everything works as expected, what I now want is to sample the correct value to the depth buffer. The value that is now written to the depth buffer is the cube coordinate. But I want the value of pos in the 3d texture to be written. So lets say the cube is placed 10 away from origin in -z and the size is 10*10*10. My solution that does not work correctly is this: pos *= 10; pos.z += 10; pos.z *= -1; vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0); float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5; gl_FragDepth = depth;

    Read the article

  • Installed on a machine with EFI and after installation, it says the disk is not bootable

    - by Roy Hocknull
    I installed Ubuntu 11.10 and the installation runs through fine. It then says to reboot, and the machine says 'insert a boot disk', which means the hard disk isn't bootable. The primary hard disk is an EFI device, and nothing seems to work. The machine in question is an Acer Aspire M3970 desktop: Core i5 2300 with 8 GB RAM. The main boot drive is an SSD (Vertex 2E 60 GB). I am trying to install the 11.10 x64 version. I have tried the installation from both CD and USB stick. It goes through the install, lets you partition the drives, then installs all the packages. At the end it goes for a reboot and asks you to remove the installation media. The PC then restarts and says there is no bootable disk. I tried it many times. In the end I installed Fedora 15 x64, which works straight away with no messing about. Unless this issue is fixed I have to drop 11.10 as a viable option. From my experience F15 isn't quite as polished as Ubuntu, but in this case it works!! Is this a widespread problem or am I unique?

    Read the article

  • How to improve batching performance

    - by user4241
    Hello, I am developing a sprite-based 2D game for mobile platform(s) and I'm using OpenGL (well, actually Irrlicht) to render graphics. First I implemented sprite rendering in a simple way: every game object is rendered as a quad with its own GPU draw call, meaning that if I had 200 game objects, I made 200 draw calls per frame. Of course this was a bad choice and my game was completely CPU bound, because there is a little CPU overhead associated with every GPU draw call. The GPU stayed idle most of the time. I then thought I could improve performance by collecting objects into large batches and rendering these batches with only a few draw calls. I implemented batching (so that every game object sharing the same texture is rendered in the same batch) and thought my problems were gone... only to find out that my frame rate was even lower than before. Why? Well, I have 200 (or more) game objects, and they are updated 60 times per second. Every frame I have to recalculate new positions (translation and rotation) for the vertices on the CPU (the GPU on mobile platforms does not support instancing, so I can't do it there), and doing this calculation 48,000 times per second (200*60*4, since every sprite has 4 vertices) simply seems to be too slow. What could I do to improve performance? All game objects are moving/rotating (almost) every frame, so I really do have to recalculate vertex positions. The only optimization I could think of is a look-up table for rotations so that I wouldn't have to calculate them. Would point sprites help? Any nasty hacks? Anything else? Thanks.
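    One thing that usually helps before reaching for hacks: compute one sin/cos per sprite instead of per vertex, and write the four transformed corners straight into a pre-allocated client-side array that is uploaded once per frame per batch. A sketch of that per-sprite step (types and names are illustrative, not Irrlicht API):

        #include <cmath>
        #include <vector>

        struct Vtx { float x, y, u, v; };

        void AppendSprite( std::vector<Vtx> &batch, float cx, float cy,
                           float halfW, float halfH, float angle,
                           float u0, float v0, float u1, float v1 )
        {
            const float c = std::cos( angle ), s = std::sin( angle );  // once per sprite, not per vertex
            const float xs[4] = { -halfW,  halfW,  halfW, -halfW };
            const float ys[4] = { -halfH, -halfH,  halfH,  halfH };
            const float us[4] = { u0, u1, u1, u0 };
            const float vs[4] = { v0, v0, v1, v1 };
            for( int i = 0; i < 4; ++i )
            {
                Vtx v;
                v.x = cx + xs[i] * c - ys[i] * s;   // rotate about the centre, then translate
                v.y = cy + xs[i] * s + ys[i] * c;
                v.u = us[i];
                v.v = vs[i];
                batch.push_back( v );
            }
        }

    With batch.reserve(4 * spriteCount) done once and batch.clear() per frame, 200 sprites is 200 sin/cos pairs and 800 small multiply-adds per frame, which a mobile CPU generally handles easily; the remaining cost tends to be the buffer upload rather than the trigonometry.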

    Read the article

  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and 1.20 GLSL shaders: When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the data once and do N glDrawArrays() calls per each N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset, see point 3) When dealing with logical object (player, tree, cube etc), should I always use the same shader or should I customize (or be able to customize) the shaders per each object? Considering an entity class, should I create and define the shader at object initialization? When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO object at 0,0 and define an uniform offset to pass to the shader to calculate its real position? Could you make an example of the Data Oriented Design on creating a generic zombie class? Is the following good? Zombielist class: class ZombieList { GLuint vbo; // generic zombie vertex model std::vector<color>; // object default color std::vector<texture>; // objects textures std::vector<vector3D>; // objects positions public: unsigned int create(); // return object id void move(unsigned int objId, vector3D offset); void rotate(unsigned int objId, float angle); void setColor(unsigned int objId, color c); void setPosition(unsigned int objId, color c); void setTexture(unsigned int, unsigned int); ... void update(Player*); // move towards player, attack if near } Example: Player p; Zombielist zl; unsigned int first = zl.create(); zl.setPosition(first, vector3D(50, 50)); zl.setTexture(first, texture("zombie1.png")); ... while (running) { // main loop ... zl.update(&p); zl.draw(); // draw every zombie }
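    On points 1 and 3, here is a sketch of the GL 2.1 pattern the list above implies: the model lives in one static VBO that is bound once, and each zombie is a glDrawArrays call with a per-object offset (and angle, colour, ...) uniform, since there is no instancing to lean on. Names such as uOffset and the attribute location are assumptions:

        void drawZombies( GLuint program, GLuint vbo, GLsizei vertexCount,
                          const std::vector<vector3D> &positions )   // vector3D as in the class above
        {
            glUseProgram( program );
            glBindBuffer( GL_ARRAY_BUFFER, vbo );                    // shared model, bound once
            glEnableVertexAttribArray( 0 );                          // assuming position = location 0
            glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 0, ( void* )0 );

            const GLint uOffset = glGetUniformLocation( program, "uOffset" );
            for( size_t i = 0; i < positions.size(); ++i )
            {
                glUniform3f( uOffset, positions[i].x, positions[i].y, positions[i].z );
                glDrawArrays( GL_TRIANGLES, 0, vertexCount );        // one call per zombie, one VBO total
            }

            glDisableVertexAttribArray( 0 );
            glBindBuffer( GL_ARRAY_BUFFER, 0 );
        }

    The vertex shader then adds uOffset (and applies the rotation) before the usual modelview-projection multiply. On point 2, whether one shader is shared by all object types or specialised per type is mostly a question of how different their materials are; sharing one and branching on uniforms is the simpler default for a scene like this.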

    Read the article

  • bash process uses 90% CPU, comes back on computer restart

    - by Sano
    I’ve replaced the old HDD of my late 2008 unibody MacBook (8 GB of RAM, running OS X 10.7.4) with an OCZ Vertex 3 SSD. After doing this, I've installed Lion and restored my data from a Time Machine backup. Everything is fine, except for a process named “bash” that permanently uses about 90 % CPU. If I kill it via Activity Monitor, everything goes back to normal, but unfortunately the process comes back every time I restart the computer. I've tried do zap the PRAM, reinstall 10.7.4 from the combo package, and even simply wait for more than 2 hours, but the problem is still here.

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >