
  • Queries regarding Geometry Shaders

    - by maverick9888
    I am dealing with geometry shaders using the GL_ARB_geometry_shader4 extension. My code goes like this: GLfloat vertices[] = { 0.5,0.25,1.0, 0.5,0.75,1.0, -0.5,0.75,1.0, -0.5,0.25,1.0, 0.6,0.35,1.0, 0.6,0.85,1.0, -0.6,0.85,1.0, -0.6,0.35,1.0 }; glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES); glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP); glLinkProgram(psId); glBindAttribLocation(psId,0,"Position"); glEnableVertexAttribArray (0); glVertexAttribPointer(0, 3, GL_FLOAT, 0, 0, vertices); glDrawArrays(GL_TRIANGLE_STRIP,0,4); My vertex shader is: #version 150 in vec3 Position; void main() { gl_Position = vec4(Position,1.0); } The geometry shader is: #version 150 #extension GL_EXT_geometry_shader4 : enable in vec4 pos[3]; void main() { int i; vec4 vertex; gl_Position = pos[0]; EmitVertex(); gl_Position = pos[1]; EmitVertex(); gl_Position = pos[2]; EmitVertex(); gl_Position = pos[0] + vec4(0.3,0.0,0.0,0.0); EmitVertex(); EndPrimitive(); } Nothing is rendered with this code. What exactly should the mode in glDrawArrays() be? How will the GL_GEOMETRY_OUTPUT_TYPE_EXT parameter affect glDrawArrays()? What I expect is that 3 vertices will be passed on to the geometry shader, and using those we construct a primitive of size 4 (assuming GL_TRIANGLE_STRIP requires 4 vertices). Can somebody please shed some light on this?
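    A hedged sketch, for comparison, of how an EXT_geometry_shader4 setup is commonly ordered: attribute locations bound before the link, GL_GEOMETRY_VERTICES_OUT_EXT set explicitly (it is easy to leave at its default of 0, in which case nothing is emitted), and a draw mode that matches the declared input type. The vertex count constant is illustrative, not taken from the question.

    ```cpp
    // Sketch of a typical EXT_geometry_shader4 ordering (psId and vertices as in the question).
    const GLint kMaxEmittedVerts = 4;                       // illustrative value

    glBindAttribLocation(psId, 0, "Position");              // must precede the link to take effect
    glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT,  GL_TRIANGLES);
    glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
    glProgramParameteriEXT(psId, GL_GEOMETRY_VERTICES_OUT_EXT, kMaxEmittedVerts);
    glLinkProgram(psId);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);                       // feed whole triangles to the geometry stage
    ```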

    Read the article

  • Stencil buffer appears to not be decrementing values correctly

    - by Alex Ames
    I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running into. This is what I'm doing: A widget can pass a rectangle to the stencil clipper functions, which will increment the stencil buffer values that it covers. Then it will draw its children, which will only get drawn in the stencilled area (so that if they extend outside they'll be clipped). After a widget is done drawing its children, it pops that rectangle from the stack and in the process decrements the values in the stencil buffer that it has previously incremented. The slightly simplified code is below: static void drawStencil(Rect& rect, unsigned int ref) { // Save previous values of the color and depth masks GLboolean colorMask[4]; GLboolean depthMask; glGetBooleanv(GL_COLOR_WRITEMASK, colorMask); glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask); // Turn off drawing glColorMask(0, 0, 0, 0); glDepthMask(0); // Draw vertices here ... // Turn everything back on glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]); glDepthMask(depthMask); // Only render pixels in areas where the stencil buffer value == ref glStencilFunc(GL_EQUAL, ref, 0xFF); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); } void pushScissor(Rect rect) { // increment things only at the current stencil stack level glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF); glStencilOp(GL_KEEP, GL_INCR, GL_INCR); s_scissorStack.push_back(rect); drawStencil(rect, s_scissorStack.size()); } void popScissor() { // undo what was done in the previous push, // decrement things only at the current stencil stack level glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF); glStencilOp(GL_KEEP, GL_DECR, GL_DECR); Rect rect = s_scissorStack.back(); s_scissorStack.pop_back(); drawStencil(rect, s_scissorStack.size()); } And this is how it's being used by the widgets: if (m_clip) pushScissor(m_rect); drawInternal(target, states); for (auto child : m_children) target.draw(*child, states); if (m_clip) popScissor(); This is the result of the above code: There are two things on the screen, a giant test button, and a window with some buttons and text areas on it. The text area scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area is appearing on top of the button. The only reason I can think of that this would happen is if the stencil values were not getting decremented in the pop, and when it comes time to render the button, since those pixels don't have the right stencil value it doesn't draw over. But I can't figure out what's wrong with my code that would cause that to happen.
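    One way to test that suspicion directly, sketched here under the assumption of a desktop GL context with a stencil buffer, is to read the stencil values back after popScissor() and print them; if they are still incremented, the decrement pass is the place to look (a zeroed stencil write mask or a failing depth test during the pop are common culprits).

    ```cpp
    // Debugging sketch: dump stencil values in a small window region. Coordinates are illustrative.
    #include <cstddef>
    #include <cstdio>
    #include <vector>
    #include <GL/gl.h>

    void dumpStencil(int x, int y, int w, int h)
    {
        std::vector<GLubyte> stencil(static_cast<std::size_t>(w) * h);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(x, y, w, h, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, stencil.data());

        for (int row = h - 1; row >= 0; --row) {            // OpenGL rows run bottom-up
            for (int col = 0; col < w; ++col)
                std::printf("%u ", stencil[row * w + col]);
            std::printf("\n");
        }
    }
    ```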

    Read the article

  • Is chess-like AI really inapplicable in turn-based strategy games?

    - by Joh
    Obviously, trying to apply the min-max algorithm on the complete tree of moves works only for small games (I apologize to all chess enthusiasts; by "small" I do not mean "simplistic"). For typical turn-based strategy games where the board is often wider than 100 tiles and all pieces on a side can move simultaneously, the min-max algorithm is inapplicable. I was wondering if a partial min-max algorithm which limits itself to N board configurations at each depth couldn't be good enough? Using a genetic algorithm, it might be possible to find a number of board configurations that are good wrt the evaluation function. Hopefully, these configurations might also be good wrt long-term goals. I would be surprised if this hasn't been thought of before and tried. Has it? How does it work?
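    For what it's worth, the "N configurations per depth" idea is essentially a beam search bolted onto minimax. A minimal sketch with toy stand-ins for the board type, move generation and evaluation (none of which come from the question):

    ```cpp
    // Beam-limited minimax: only the beamWidth most promising successors
    // (ranked by the same static evaluation used at the leaves) are expanded.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Board { double score = 0.0; std::vector<Board> successors; };  // toy stand-in

    double evaluate(const Board& b) { return b.score; }
    std::vector<Board> generateMoves(const Board& b) { return b.successors; }

    double beamMinimax(const Board& b, int depth, bool maximizing, std::size_t beamWidth)
    {
        std::vector<Board> children = generateMoves(b);
        if (depth == 0 || children.empty())
            return evaluate(b);

        // Prune to the beamWidth best-looking children before recursing.
        std::sort(children.begin(), children.end(), [&](const Board& x, const Board& y) {
            return maximizing ? evaluate(x) > evaluate(y) : evaluate(x) < evaluate(y);
        });
        if (children.size() > beamWidth)
            children.resize(beamWidth);

        double best = maximizing ? -1e300 : 1e300;
        for (const Board& child : children) {
            const double v = beamMinimax(child, depth - 1, !maximizing, beamWidth);
            best = maximizing ? std::max(best, v) : std::min(best, v);
        }
        return best;
    }
    ```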

    Read the article

  • Map and fill texture using PBO (OpenGL 3.3)

    - by NtscCobalt
    I'm learning OpenGL 3.3 trying to do the following (as it is done in D3D)... Create Texture of Width, Height, Pixel Format Map texture memory Loop write pixels Unmap texture memory Set Texture Render Right now though it renders as if the entire texture is black. I can't find a reliable source for information on how to do this though. Almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does... (In this case it is generating an 1-byte Alpha Only texture but it is rendering it as the red channel for debugging) GLuint pixelBufferID; glGenBuffers(1, &pixelBufferID); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID); glBufferData(GL_PIXEL_UNPACK_BUFFER, 512 * 512 * 1, nullptr, GL_STREAM_DRAW); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); GLuint textureID; glGenTextures(1, &textureID); glBindTexture(GL_TEXTURE_2D, textureID); glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr); glBindTexture(GL_TEXTURE_2D, 0); glBindTexture(GL_TEXTURE_2D, textureID); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID); void *Memory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY); // Memory copied here, I know this is valid because it is the same loop as in my working D3D version glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); And then here is the render loop. // This chunk left in for completeness glUseProgram(glProgramId); glBindVertexArray(glVertexArrayId); glBindBuffer(GL_ARRAY_BUFFER, glVertexBufferId); glEnableVertexAttribArray(0); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, 0); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 20, 12); GLuint transformLocationID = glGetUniformLocation(3, 'transform'); glUniformMatrix4fv(transformLocationID , 1, true, somematrix) // Not sure if this is all I need to do glBindTexture(GL_TEXTURE_2D, pTex->glTextureId); GLuint textureLocationID = glGetUniformLocation(glProgramId, "texture"); glUniform1i(textureLocationID, 0); glDrawArrays(GL_TRIANGLES, Offset*3, Triangles*3); Vertex Shader #version 330 core in vec3 Position; in vec2 TexCoords; out vec2 TexOut; uniform mat4 transform; void main() { TexOut = TexCoords; gl_Position = vec4(Position, 1.0) * transform; } Pixel Shader #version 330 core uniform sampler2D texture; in vec2 TexCoords; out vec4 fragColor; void main() { // Output color fragColor.r = texture2D(texture, TexCoords).r; fragColor.g = 0.0f; fragColor.b = 0.0f; fragColor.a = 1.0; }
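    A hedged sketch of the PBO upload path as it is usually written, reusing the question's textureID and pixelBufferID names: the step that appears to be missing above is a glTexSubImage2D call issued while the unpack PBO is still bound, since mapping and writing the buffer fills the PBO but never transfers anything into the texture by itself.

    ```cpp
    // Sketch of a typical PBO -> texture update. With GL_PIXEL_UNPACK_BUFFER bound,
    // the "pixels" argument of glTexSubImage2D is a byte offset into the PBO.
    glBindTexture(GL_TEXTURE_2D, textureID);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);

    void* mem = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    // ... write the 512 * 512 bytes of pixel data into mem here ...
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    // Copy from the PBO into the texture (offset 0 into the buffer).
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RED, GL_UNSIGNED_BYTE, (const void*)0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, 0);
    ```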

    Read the article

  • Pixel alignment algorithm

    - by user42325
    I have a set of square blocks that I want to draw in a window. I am sure the coordinate calculation is correct, but on the screen some squares' edges overlap with others and some do not. I remember this problem is caused by pixel accuracy, and that there's a specific topic related to this kind of problem in 2D image rendering, but I don't remember what exactly it is or how to solve it. Look at this screenshot. Each block should have a fixed-width margin, but in the image the vertical white lines have different widths, though the horizontal lines look fine.
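    If the cause really is fractional block positions being rounded differently by the rasterizer, one common fix is to snap each block's edges to whole pixels before drawing. A small illustrative sketch; the BlockRect type and layout parameters are assumptions, not taken from the question.

    ```cpp
    // Sketch: snap a block's edges to integer pixels so every block rounds the
    // same way and the margins stay uniform.
    #include <cmath>

    struct BlockRect { int x, y, w, h; };

    BlockRect snapToPixels(float x, float y, float size, float margin)
    {
        const int left   = static_cast<int>(std::lround(x));
        const int top    = static_cast<int>(std::lround(y));
        const int right  = static_cast<int>(std::lround(x + size - margin));
        const int bottom = static_cast<int>(std::lround(y + size - margin));
        return { left, top, right - left, bottom - top };
    }
    ```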

    Read the article

  • Convex Hull for Concave Objects

    - by Lighthink
    I want to implement GJK and I want it to handle concave shapes too (almost all my shapes are concave). I've thought of decomposing the concave shape into convex shapes and then building a hierarchical tree out of the convex pieces, but I do not know how to do it. Nothing I could find on the Internet about it satisfied my needs, so maybe someone can point me in the right direction or give a full explanation.

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    Im trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found and ended up with trying to implement the approach NVIDIA used in their Cascade Demo (http://www.slideshare.net/icastano/cascades-demo-secrets) starting at Slide 82. At the moment I am completly stuck with following problem: When I am far away the displacement seems to work. But as more I move closer to my surface, the texture gets bent in x-axis and somehow it looks like there is a little bent in general in one direction. EDIT: I added a video: click I added some screen to illustrate the problem: Well I tried lots of things already and I am starting to get a bit frustrated as my ideas run out. I added my full VS and FS code: VS: #version 400 layout(location = 0) in vec3 IN_VS_Position; layout(location = 1) in vec3 IN_VS_Normal; layout(location = 2) in vec2 IN_VS_Texcoord; layout(location = 3) in vec3 IN_VS_Tangent; layout(location = 4) in vec3 IN_VS_BiTangent; uniform vec3 uLightPos; uniform vec3 uCameraDirection; uniform mat4 uViewProjection; uniform mat4 uModel; uniform mat4 uView; uniform mat3 uNormalMatrix; out vec2 IN_FS_Texcoord; out vec3 IN_FS_CameraDir_Tangent; out vec3 IN_FS_LightDir_Tangent; void main( void ) { IN_FS_Texcoord = IN_VS_Texcoord; vec4 posObject = uModel * vec4(IN_VS_Position, 1.0); vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz; vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz; //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz; vec3 binormalObject = normalize(cross(tangentObject, normalObject)); // uCameraDirection is the camera position, just bad named vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz); vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz ); IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection ); IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection ); IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection ); IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection ); IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection ); IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection ); gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0); } The VS just builds the TBN matrix, from incoming normal, tangent and binormal in world space. Calculates the light and eye direction in worldspace. And finally transforms the light and eye direction into tangent space. 
FS: #version 400 // uniforms uniform Light { vec4 fvDiffuse; vec4 fvAmbient; vec4 fvSpecular; }; uniform Material { vec4 diffuse; vec4 ambient; vec4 specular; vec4 emissive; float fSpecularPower; float shininessStrength; }; uniform sampler2D colorSampler; uniform sampler2D normalMapSampler; uniform sampler2D heightMapSampler; in vec2 IN_FS_Texcoord; in vec3 IN_FS_CameraDir_Tangent; in vec3 IN_FS_LightDir_Tangent; out vec4 color; vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){ vec2 NewCoords = coords; vec2 dUV = - dir.xy * height * 0.08; float SearchHeight = 1.0; float prev_hits = 0.0; float hit_h = 0.0; for(int i=0;i<10;i++){ SearchHeight -= 0.1; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV; vec2 Temp = NewCoords; SearchHeight = hit_h+0.1; float Start = SearchHeight; dUV *= 0.2; prev_hits = 0.0; hit_h = 0.0; for(int i=0;i<5;i++){ SearchHeight -= 0.02; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = Temp + dUV * (Start - hit_h) * 50.0f; return NewCoords; } void main( void ) { vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent ); vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent ); float mipmap = 0; vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap); //vec2 ddx = dFdx(NewCoord); //vec2 ddy = dFdy(NewCoord); vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz; BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0)); vec3 fvNormal = BumpMapNormal; float fNDotL = dot( fvNormal, fvLightDirection ); vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) ); vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap); vec4 fvTotalAmbient = fvAmbient * fvBaseColor; vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor; vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) ); color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) ); } The FS implements the displacement technique in TraceRay method, while always using mipmap level 0. Most of the code is from NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong in here. At the end it uses the modified UV coords for getting the displaced normal from the normal map and the color from the color map. I looking forward for some ideas. Thanks in advance! Edit: Here is the code loading the heightmap: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData); glGenerateMipmap(GL_TEXTURE_2D); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); Maybe something wrong in here?

    Read the article

  • Problem with Ogre::Camera lookAt function when target is directly below.

    - by PigBen
    I am trying to make a class which controls a camera. It's pretty basic right now; it looks like this: class HoveringCameraController { public: void init(Ogre::Camera & camera, AnimatedBody & target, Ogre::Real height); void update(Ogre::Real time_delta); private: Ogre::Camera * camera_; AnimatedBody * target_; Ogre::Real height_; }; HoveringCameraController.cpp void HoveringCameraController::init(Ogre::Camera & camera, AnimatedBody & target, Ogre::Real height) { camera_ = &camera; target_ = &target; height_ = height; update(0.0); } void HoveringCameraController::update(Ogre::Real time_delta) { auto position = target_->getPosition(); position.y += height_; camera_->setPosition(position); camera_->lookAt(target_->getPosition()); } AnimatedBody is just a class that encapsulates an entity, its animations and a scene node. The getPosition function is simply forwarded to its scene node. What I want (for now) is for the camera to simply follow the AnimatedBody overhead at the distance given (the height parameter), and look down at it. It follows the object around, but it doesn't look straight down; it's tilted quite a bit in the positive Z direction. Does anybody have any idea why it would do that? If I change this line: position.y += height_; to this: position.x += height_; or this: position.z += height_; it does exactly what I would expect. It follows the object from the side or front, and looks directly at it.
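    One thing worth checking, offered as an assumption rather than a certain diagnosis: Ogre cameras can be set to yaw around a fixed axis (commonly +Y), and when the view direction is parallel to that axis, as it is when looking straight down, lookAt() has no unique roll to pick, which can show up as an unexpected tilt. A sketch of a variant update() that disables the fixed yaw axis and resets the orientation before the lookAt() call:

    ```cpp
    // Sketch only; setFixedYawAxis/setOrientation are standard Ogre::Camera calls,
    // but whether the fixed yaw axis is the actual cause here is an assumption.
    void HoveringCameraController::update(Ogre::Real time_delta)
    {
        Ogre::Vector3 position = target_->getPosition();
        position.y += height_;

        camera_->setPosition(position);
        camera_->setFixedYawAxis(false);                      // don't force "up" to stay +Y
        camera_->setOrientation(Ogre::Quaternion::IDENTITY);  // clear any accumulated roll
        camera_->lookAt(target_->getPosition());
    }
    ```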

    Read the article

  • Changing State in PlayerControler from PlayerInput

    - by Jeremy Talus
    In my player input I want to change the "State" of my player controller, but I'm having some trouble doing it. My player input is declared like this: class ResistancePlayerInput extends PlayerInput within ResistancePlayerController config(ResistancePlayerInput); and in my PlayerController I have this: class ResistancePlayerController extends GamePlayerController; var name PreviousState; DefaultProperties { CameraClass = class 'ResistanceCamera' //Telling the player controller to use your custom camera script InputClass = class'ResistanceGame.ResistancePlayerInput' DefaultFOV = 90.f //Telling the player controller what the default field of view (FOV) should be } simulated event PostBeginPlay() { Super.PostBeginPlay(); } auto state Walking { event BeginState(name PreviousStateName) { Pawn.GroundSpeed = 200; `log("Player Walking"); } } state Running extends Walking { event BeginState(name PreviousStateName) { Pawn.GroundSpeed = 350; `log("Player Running"); } } state Sprinting extends Walking { event BeginState(name PreviousStateName) { Pawn.GroundSpeed = 800; `log("Player Sprinting"); } } I have tried to use PCOwner.GotoState(); and ResistancePlayerController(PCOwner).GotoState(); but they won't work. I have also tried a simple GotoState, and nothing happens. How can I call GotoState for the PC class from my player input?

    Read the article

  • Simple project - make a 3D box tumble and fall to the ground [closed]

    - by Dominic Bou-Samra
    Possible Duplicate: Resources to learn programming rigid body simulation Hi guys, I want to try learning rigid-body dynamic simulation. I have done a fluid and cloth simulation before, but never anything rigid. My maths knowledge is limited in that I don't know the notation that well. Are there any good cliff-notes, tutorials, guides on how I would accomplish a simple task like this? I don't want a super complex pdf that's only a little relevant. Thanks.

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?
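    Besides the half-texel offset question, a widely used complement is to dilate the finished lightmap so that empty texels bordering a chart copy their nearest filled neighbour; bilinear samples taken right on a chart edge then blend with chart colour instead of the black background. A hedged CPU-side sketch (the filled mask and the RGBA8 texel layout are assumptions about how the lightmap is stored); running the pass more than once widens the gutter.

    ```cpp
    // Sketch: one dilation pass over an RGBA8 lightmap. 'filled' marks texels the
    // chart rasterizer actually wrote; empty border texels copy a filled neighbour.
    #include <cstdint>
    #include <vector>

    void dilateLightmap(std::vector<std::uint32_t>& texels, std::vector<bool>& filled,
                        int width, int height)
    {
        std::vector<std::uint32_t> outTexels = texels;
        std::vector<bool> outFilled = filled;

        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                if (filled[y * width + x]) continue;          // already covered by a chart
                for (int dy = -1; dy <= 1 && !outFilled[y * width + x]; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        const int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                        if (filled[ny * width + nx]) {
                            outTexels[y * width + x] = texels[ny * width + nx];
                            outFilled[y * width + x] = true;
                            break;
                        }
                    }
            }

        texels = outTexels;
        filled = outFilled;
    }
    ```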

    Read the article

  • Overriding component behavior

    - by deft_code
    I was thinking of how to implement overriding of behaviors in a component-based entity system. A concrete example: an entity has a health component that can be damaged, healed, killed, etc. The entity also has an armor component that limits the amount of damage a character receives. Has anyone implemented behaviors like this in a component-based system before? How did you do it? If no one has ever done this before, why do you think that is? Is there anything particularly wrong-headed about overriding component behaviors? Below is a rough sketch of how I imagine it would work. Components in an entity are ordered. Those at the front get a chance to service an interface first. I don't detail how that is done, just assume it uses evil dynamic_casts (it doesn't but the end effect is the same without the need for RTTI). class IHealth { public: virtual float get_health( void ) const = 0; virtual void do_damage( float amount ) = 0; }; class Health : public Component, public IHealth { public: void do_damage( float amount ) { m_health -= amount; } private: float m_health; }; class Armor : public Component, public IHealth { public: float get_health( void ) const { return next<IHealth>().get_health(); } void do_damage( float amount ) { next<IHealth>().do_damage( amount / 2 ); } }; entity.add( new Health( 100 ) ); entity.add( new Armor() ); assert( entity.get<IHealth>().get_health() == 100 ); entity.get<IHealth>().do_damage( 10 ); assert( entity.get<IHealth>().get_health() == 95 ); Is there anything particularly naive about the way I'm proposing to do this?

    Read the article

  • Detecting if an object is following a path

    - by justin.m.chase
    I am attempting to take GPS data and track it on a map and see if it follows a given path. I have the path as a set of points and the GPS data streams in as a similar set of points. I am attempting to track the progression of the current position across the path and I am wondering if there are any well known algorithms for this. I have come up with my own that works ok but it is a complex enough problem that I would like to minimize the amount of re-inventing of the wheel that I do. What approach or algorithm would you recommend taking for this problem?
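    One common approach, sketched here as an illustration rather than a reconstruction of the asker's algorithm: project each GPS fix onto every path segment, keep the nearest one, and treat "segment index plus fraction" as progress for as long as the perpendicular (cross-track) distance stays under a tolerance.

    ```cpp
    // Sketch: nearest-segment projection with a cross-track error readout.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    struct Point { double x, y; };

    // Distance from p to segment ab; t receives the clamped parameter of the closest point.
    double pointSegmentDistance(const Point& p, const Point& a, const Point& b, double& t)
    {
        const double dx = b.x - a.x, dy = b.y - a.y;
        const double len2 = dx * dx + dy * dy;
        t = (len2 > 0.0) ? ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2 : 0.0;
        t = std::min(1.0, std::max(0.0, t));
        const double cx = a.x + t * dx - p.x;
        const double cy = a.y + t * dy - p.y;
        return std::sqrt(cx * cx + cy * cy);
    }

    // Progress along the path (index + fraction of the nearest segment); a fix is
    // considered "on the path" while crossTrack stays below some tolerance.
    double projectOntoPath(const std::vector<Point>& path, const Point& fix, double& crossTrack)
    {
        double bestProgress = 0.0;
        crossTrack = std::numeric_limits<double>::max();
        for (std::size_t i = 0; i + 1 < path.size(); ++i) {
            double t;
            const double d = pointSegmentDistance(fix, path[i], path[i + 1], t);
            if (d < crossTrack) { crossTrack = d; bestProgress = static_cast<double>(i) + t; }
        }
        return bestProgress;
    }
    ```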

    Read the article

  • Unexpected results for projection on to plane

    - by ravenspoint
    I want to use this projection matrix: GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane: glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); Now if I manually multiply these vertices with the matrix, I get: glBegin( GL_QUADS ); glVertex3f( 0.375,0,-0.375); glVertex3f( 0.375,0,-1.625); glVertex3f( 0,0,-2); glVertex3f( 0,0,0); glEnd(); which produces a reasonable display (camera at 0,5,0 looking down the y axis). So rather than do the calculation manually, I should be able to use the OpenGL model transformation. I write this code: glMatrixMode (GL_MODELVIEW); GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; glLoadMatrixf( shadow ); glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result (of course!).
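    On the debugging question: classic OpenGL has no built-in "print transformed vertices" mode, but the current matrix can be read back and applied on the CPU. One detail worth noting, offered as a likely rather than certain explanation for the discrepancy: glLoadMatrixf() consumes its 16 floats in column-major order, so an array written out row by row ends up transposed compared to the hand calculation.

    ```cpp
    // Debugging sketch: read back the modelview matrix and transform a point on the CPU.
    #include <cstdio>
    #include <GL/gl.h>

    void printTransformed(float x, float y, float z)
    {
        GLfloat m[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, m);                 // stored as m[col * 4 + row]
        const GLfloat v[4] = { x, y, z, 1.0f };
        GLfloat out[4];
        for (int row = 0; row < 4; ++row) {
            out[row] = 0.0f;
            for (int col = 0; col < 4; ++col)
                out[row] += m[col * 4 + row] * v[col];
        }
        std::printf("(%f, %f, %f, %f)\n", out[0], out[1], out[2], out[3]);
        // Divide by out[3] before comparing against the hand-derived points.
    }
    ```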

    Read the article

  • Transforming a primitive tetrahedron into a primitive icosahedron?

    - by Djentleman
    I've created a tetrahedron by creating a BoundingBox and building the faces of the tetrahedron within the bounding box as follows (see image as well): VertexPositionNormalTexture[] vertices = new VertexPositionNormalTexture[12]; BoundingBox box = new BoundingBox(new Vector3(-1f, 1f, 1f), new Vector3(1f, -1f, -1f)); vertices[0].Position = box.GetCorners()[0]; vertices[1].Position = box.GetCorners()[2]; vertices[2].Position = box.GetCorners()[7]; vertices[3].Position = box.GetCorners()[0]; vertices[4].Position = box.GetCorners()[5]; vertices[5].Position = box.GetCorners()[2]; vertices[6].Position = box.GetCorners()[5]; vertices[7].Position = box.GetCorners()[7]; vertices[8].Position = box.GetCorners()[2]; vertices[9].Position = box.GetCorners()[5]; vertices[10].Position = box.GetCorners()[0]; vertices[11].Position = box.GetCorners()[7]; What would I then have to do to transform this tetrahedron into an icosahedron? Similar to this image: I understand the concept but applying it is another thing entirely for me.
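    For reference, the usual direct construction (rather than a literal transformation of the tetrahedron) takes the 12 icosahedron vertices as the cyclic permutations of (0, ±1, ±phi), with phi the golden ratio. A small sketch in plain C++; the Vec3 type stands in for XNA's Vector3, and the 20 triangle faces would still need to be indexed afterwards.

    ```cpp
    // Sketch: generate the 12 vertices of a regular icosahedron.
    #include <cmath>
    #include <initializer_list>
    #include <vector>

    struct Vec3 { float x, y, z; };

    std::vector<Vec3> icosahedronVertices()
    {
        const float phi = (1.0f + std::sqrt(5.0f)) / 2.0f;   // golden ratio
        std::vector<Vec3> v;
        for (float a : { -1.0f, 1.0f })
            for (float b : { -phi, phi }) {
                v.push_back({ 0.0f, a,    b    });           // (0, ±1, ±phi)
                v.push_back({ a,    b,    0.0f });           // (±1, ±phi, 0)
                v.push_back({ b,    0.0f, a    });           // (±phi, 0, ±1)
            }
        return v;                                            // 12 vertices
    }
    ```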

    Read the article

  • What does Skyrim Creation Kit's NPC class do?

    - by pseudoname
    I'm trying to change it with the setclass console command. Based on the UESP wiki it looks like it just governs stat gain for leveling, but based on the Elder Scrolls wiki it seems to only control their combat AI. Obviously it does at least one or both of those - what does it actually do, and does it do anything else? Ex: if I change Lydia from warrior1handed to vigilantcombat1h with the console command 000A2C94.setclass 0010bfef, will it have any unintended side effects that aren't immediately apparent, other than letting her use the alteration and healing spells I just gave her with the console and setting her stats in a way that works for that? Will it do something weird like mess with her factions or ability to join as my follower? Or mess with her health scaling as she levels? Something hard to notice until a lot of time has gone by? @desaivv: I was trying to do it with 000A2C94.setclass 0010bfef; I wasn't sure if it'd cause hidden issues that only show up after hours of play or if I made a new character with the same bat file. The Creation Kit sounds like an idea too; I'll have to see how complicated it is. It might just show what the change would do, or have some easier way to change her behavior to add spells. I'll try it and see if anything obvious shows up short term - I just wasn't sure if it had known long-term problems.

    Read the article

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I wake up with a clear vision, but sadly my laptop card doesn't do displacement mapping nor glVertexAttribDivisor so I can't test it out; I'm left sharing here: With geomipmapping, the grid at any factor is transposable - if you pass in an offset - say as a uniform - you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And, if the scenery is closer, rather than exposing that the you have a world made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) to do super-smooth infinite zoom? So I have some questions: does it work? In theory, in practice? does anyone do it? Does this technique have a name? Papers, demos, anything I can look at? does glVertexAttribDivisor mean that you can have a single glMultiDrawElementsEXT or similar approach to draw all your terrain tiles in one call rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains? does a heightmap that is GL_LUMINANCE take just one byte per pixel(=vertex)? (On mainstream cards, obviously. Does storage vary in practice?) Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang? is mipmapping the displacement map going to work? On future cards? Is it going to introduce unsurmountable inaccuracies if it is enabled?

    Read the article

  • glColor3f Setting colour

    - by Aaron
    This draws a white vertical line from 640 to 768 at x = 512: glDisable(GL_TEXTURE_2D); glBegin(GL_LINES); glColor3f((double)R/255,(double)G/255,(double)B/255); glVertex3f(SX, -SPosY, 0); // origin of the line glVertex3f(SX, -EPosY, 0); // ending point of the line glEnd(); glEnable(GL_TEXTURE_2D); This works, but after having a problem where it wouldn't draw it white (or in any colour passed) I discovered that disabling GL_TEXTURE_2D before drawing the line, and then re-enabling it afterwards for other things, fixed it. I want to know: is this a normal step a programmer might take? Or is it highly inefficient? I don't want to be causing any slowdowns due to a mistake =) Thanks
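    Toggling GL_TEXTURE_2D around untextured geometry is a perfectly normal, cheap state change at this scale; the usual reason the colour "disappears" with texturing left on is that GL_MODULATE multiplies the vertex colour by whatever texel happens to be sampled. If the toggling ever becomes awkward, a hedged alternative sketch is to bind a 1x1 white texture for coloured lines (whiteTex is an illustrative name):

    ```cpp
    // Sketch: a 1x1 white texture leaves glColor3f() untouched under GL_MODULATE.
    GLuint whiteTex = 0;
    const unsigned char whitePixel[4] = { 255, 255, 255, 255 };
    glGenTextures(1, &whiteTex);
    glBindTexture(GL_TEXTURE_2D, whiteTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, whitePixel);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // When drawing the coloured line, bind it instead of disabling texturing:
    glBindTexture(GL_TEXTURE_2D, whiteTex);
    glColor3f(R / 255.0f, G / 255.0f, B / 255.0f);
    ```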

    Read the article

  • Lwjgl or opengl double pixels

    - by Philippe Paré
    I'm working in Java with LWJGL and trying to double all my pixels. I'm trying to draw in an area of 800x450 and then stretch the whole frame image to the full 1600x900 pixels without it getting blurred. I can't figure out how to do that in Java; everything I find is in C++... A hint would be great! Thanks a lot. EDIT: I've tried drawing to a texture created in OpenGL by attaching it to the framebuffer, but I can't find a way to use glGenTextures() in Java... so this is not working... I also thought about using a shader, but I would not be able to draw only in the smaller region...
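    A hedged sketch of one way this is commonly done, written as plain C-style GL (LWJGL exposes the same functions through its GL11/GL30 classes, including glGenTextures and glBlitFramebuffer): render into an 800x450 FBO colour texture, then blit that framebuffer to the 1600x900 default framebuffer with GL_NEAREST so the pixels stay crisp. sceneFbo is an illustrative handle for an already-created FBO.

    ```cpp
    // Sketch: nearest-filtered framebuffer blit doubles the pixels without blur.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFbo);   // FBO holding the 800x450 colour texture
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);          // default (window) framebuffer
    glBlitFramebuffer(0, 0, 800, 450,                   // source rectangle
                      0, 0, 1600, 900,                  // destination rectangle
                      GL_COLOR_BUFFER_BIT, GL_NEAREST); // GL_NEAREST keeps hard pixel edges
    ```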

    Read the article

  • Circular motion on low powered hardware

    - by Akroy
    I was thinking about platforms and enemies moving in circles in old 2D games, and I was wondering how that was done. I understand parametric equations, and it's trivial to use sin and cos to do it, but could an NES or SNES make real time trig calls? I admit heavy ignorance, but I thought those were expensive operations. Is there some clever way to calculate that motion more cheaply? I've been working on deriving an algorithm from trig sum identities that would only use precalculated trig, but that seems convoluted.
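    For comparison, the standard trick needs no runtime trig at all: precompute the sine and cosine of the fixed per-frame angle once and advance the offset vector with the rotation recurrence each frame. A small sketch in floats; the old consoles would have done the same thing with fixed-point arithmetic and a small sine table.

    ```cpp
    // Sketch: incremental circular motion with no trig calls in the per-frame update.
    #include <cmath>

    struct Orbiter {
        float ox, oy;        // current offset from the circle's centre
        float c, s;          // cos/sin of the per-step angle, computed once

        Orbiter(float radius, float stepAngle)
            : ox(radius), oy(0.0f), c(std::cos(stepAngle)), s(std::sin(stepAngle)) {}

        void step()          // per frame: worldPos = centre + (ox, oy), then step()
        {
            const float nx = ox * c - oy * s;
            const float ny = ox * s + oy * c;
            ox = nx;
            oy = ny;
        }
    };
    ```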

    Read the article

  • Tweaking AStar to find closest location to unreachable destination

    - by Shivan Dragon
    I've implemented AStar in Java and it works OK for an area with obstacles where the chosen destination is reachable. However, when the destination is unreachable, the calculated "path" does not lead to the location closest to the unreachable destination but is instead some seemingly random path. Is there a feasible way to tweak AStar into finding the path to the closest location to an unreachable destination?
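    The usual tweak is to remember, while the search runs, the expanded node with the smallest heuristic, and to build the path to that node when the open list empties without reaching the goal. An illustrative grid-based sketch (not the asker's Java implementation):

    ```cpp
    // Grid A* (4-connected, Manhattan heuristic) that falls back to the reachable
    // cell closest to the goal when the goal itself cannot be reached.
    #include <algorithm>
    #include <climits>
    #include <cstdlib>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    struct Cell { int x, y; };

    std::vector<Cell> aStarClosest(const std::vector<std::vector<bool>>& blocked,
                                   Cell start, Cell goal)
    {
        const int h = static_cast<int>(blocked.size());
        const int w = static_cast<int>(blocked[0].size());
        auto id   = [&](Cell c) { return c.y * w + c.x; };
        auto heur = [&](Cell c) { return std::abs(c.x - goal.x) + std::abs(c.y - goal.y); };

        std::vector<int> g(w * h, INT_MAX), parent(w * h, -1);
        using QItem = std::pair<int, int>;                    // (f, cell id)
        std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> open;

        g[id(start)] = 0;
        open.push({ heur(start), id(start) });

        int best = id(start), bestH = heur(start);            // closest node seen so far
        const int dx[] = { 1, -1, 0, 0 }, dy[] = { 0, 0, 1, -1 };

        while (!open.empty()) {
            const int cur = open.top().second; open.pop();
            const Cell c{ cur % w, cur / w };
            if (heur(c) < bestH) { bestH = heur(c); best = cur; }
            if (c.x == goal.x && c.y == goal.y) break;        // goal reached

            for (int k = 0; k < 4; ++k) {
                const Cell n{ c.x + dx[k], c.y + dy[k] };
                if (n.x < 0 || n.y < 0 || n.x >= w || n.y >= h || blocked[n.y][n.x]) continue;
                if (g[cur] + 1 < g[id(n)]) {
                    g[id(n)] = g[cur] + 1;
                    parent[id(n)] = cur;
                    open.push({ g[id(n)] + heur(n), id(n) });
                }
            }
        }

        // Reconstruct to the goal if it was reached, otherwise to the closest node found.
        const int end = (g[id(goal)] != INT_MAX) ? id(goal) : best;
        std::vector<Cell> path;
        for (int v = end; v != -1; v = parent[v]) path.push_back({ v % w, v / w });
        std::reverse(path.begin(), path.end());
        return path;
    }
    ```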

    Read the article

  • The practical cost of swapping effects

    - by sebf
    I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with appropriate parameters. I wondered if someone could explain exactly what is costly about this process? And put, if possible, 'relatively' into context? For example, say I wanted to use a short shader to help with picking, I would: Change the effect on every object, calculating a unique color to identify it and providing it to the shader. Draw all the objects to a render target in memory. Get the color from the target and use it to look up the selected object. What portion of the total time taken to complete that process would be spent swapping the shaders? My instincts would say that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?

    Read the article

  • Spherical harmonics lighting - what does it accomplish?

    - by TravisG
    From my understanding, spherical harmonics are sometimes used to approximate certain aspects of lighting (depending on the application). For example, it seems like you can approximate the diffuse lighting caused by a directional light source on a surface point, or parts of it, by calculating the SH coefficients for all bands you're using (for whatever accuracy you desire) in the direction of the surface normal and scaling it with whatever you need to scale it with (e.g. light color and intensity, dot(n,l), etc.). What I don't understand yet is what this is supposed to accomplish. What are the actual advantages of doing it this way as opposed to evaluating the diffuse BRDF the normal way? Do you save calculations somewhere? Is there some additional information contained in the SH representation that you can't get out of the scalar results of the normal evaluation?

    Read the article

  • XNA calculate normals for linesegment

    - by Gerhman
    I am quite new to 3D graphics programming and thus far only understand that normals somehow define the direction in which a vertex faces and therefore the direction in which light is reflected. I have no idea how they are calculated, though, only that they are defined by a Vector3. For a visualizer that I am creating I am importing a bunch of coordinates which represent layer upon layer of line segments. At the moment I am only using a vertex buffer, adding the start and end point of each line and then rendering a line list. The thing is, now I need to calculate the normals for the vertices of these line segments so that I can get some realistic lighting. I have no idea how to calculate these normals, but I know they all face sideways and not up or down. To calculate them, all I have are the start and end positions of each line segment. The image below is a representation of what I think I need to do in the case of an example layer: The red arrows represent the normals that should be calculated, the blue text represents the coordinates of the vertices, and the green numbers represent their indices. I would greatly appreciate it if someone could please explain to me how I should calculate these normals.
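    A hedged sketch of the usual construction: cross the segment direction with the axis the layers are stacked along and normalize, which yields a vector perpendicular to the segment that lies in the layer plane ("sideways"); the same normal is then assigned to both of the segment's vertices. The (0, 0, 1) stacking axis below is an assumption about how the layers are oriented, and Vec3 stands in for XNA's Vector3.

    ```cpp
    // Sketch: sideways normal for a line segment via cross(direction, stacking axis).
    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 segmentNormal(const Vec3& start, const Vec3& end)
    {
        const Vec3 d  { end.x - start.x, end.y - start.y, end.z - start.z };
        const Vec3 up { 0.0f, 0.0f, 1.0f };                  // assumed layer stacking axis
        Vec3 n { d.y * up.z - d.z * up.y,                    // cross(d, up)
                 d.z * up.x - d.x * up.z,
                 d.x * up.y - d.y * up.x };
        const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
        return n;                                            // assign to both segment vertices
    }
    ```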

    Read the article

  • Seek Steering Behavior with Target Direction for Group of Fighters

    - by SebastianStehle
    I am implementing steering algorithms with group management for spaceships (fighters). I select a leader and assign the target positions for the other spaceships based on the target position of the leader and an offset. This works well, but when my spaceships arrive they all face different directions. I want them all to end up looking in the same direction (target - start). I also want to combine this behavior with a minimum turning radius that is based on the speed. The only idea I have is to calculate a path for each spaceship with a point before the target position, so the ships have some time left to turn into the right orientation, but I don't know if this is a good idea. I guess there will be a lot of rare cases where this can cause a problem. So the question is whether anybody knows how to solve this problem and has some simple code or pseudocode for me, or at least a good explanation.
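    The "point before the target position" idea can be made concrete with an approach waypoint placed behind each slot along the desired facing direction: a fighter first seeks that waypoint, then covers the final stretch along the facing direction, so it arrives already aligned and has room to respect its turning radius. A minimal sketch; approachDistance is an illustrative tuning value.

    ```cpp
    // Sketch: compute an approach waypoint behind the assigned slot.
    #include <cmath>

    struct Vec2 { float x, y; };

    Vec2 approachPoint(Vec2 slot, Vec2 desiredFacing, float approachDistance)
    {
        const float len = std::sqrt(desiredFacing.x * desiredFacing.x +
                                    desiredFacing.y * desiredFacing.y);
        const Vec2 dir { desiredFacing.x / len, desiredFacing.y / len };
        // Back off from the slot against the facing direction; seeking this point
        // first makes the final leg run along desiredFacing.
        return { slot.x - dir.x * approachDistance,
                 slot.y - dir.y * approachDistance };
    }
    ```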

    Read the article
