Search Results

Search found 839 results on 34 pages for 'vertex'.


  • How to transform mesh components?

    - by Lea Hayes
    I am attempting to transform the components of a mesh directly using a 4x4 matrix. This is working for the vertex positions, but it is not working for the normals (and probably not the tangents either). Here is what I have: // Transform vertex positions - Works like a charm! vertices = mesh.vertices; for (int i = 0; i < vertices.Length; ++i) vertices[i] = transform.MultiplyPoint(vertices[i]); // Does not work, lighting is messed up on mesh normals = mesh.normals; for (int i = 0; i < normals.Length; ++i) normals[i] = transform.MultiplyVector(normals[i]); Note: The input matrix converts from local to world space and is needed to combine multiple meshes together.
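
    For what it is worth, the usual cause of this symptom is that normals must be transformed by the inverse transpose of the matrix's upper-left 3x3 (and re-normalized afterwards), not by the matrix itself, whenever the transform contains non-uniform scale. Below is a minimal sketch of the math, written in C++ with GLM purely for illustration since the question itself is Unity/C#:

    ```cpp
    #include <glm/glm.hpp>

    // Positions use the full 4x4 local-to-world matrix.
    glm::vec3 transformPosition(const glm::mat4& localToWorld, const glm::vec3& p)
    {
        return glm::vec3(localToWorld * glm::vec4(p, 1.0f));
    }

    // Normals use the inverse transpose of the upper-left 3x3 and are re-normalized,
    // which keeps them perpendicular to the surface under non-uniform scale.
    glm::vec3 transformNormal(const glm::mat4& localToWorld, const glm::vec3& n)
    {
        glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(localToWorld)));
        return glm::normalize(normalMatrix * n);
    }
    ```

    If I remember the Unity API correctly, the equivalent there is applying transform.localToWorldMatrix.inverse.transpose to each normal and normalizing the result; also note that mesh.normals returns a copy, so the modified array has to be assigned back to the mesh.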

    Read the article

  • GLSL, all in one or many shader programs?

    - by stjepano
    I am doing some 3D demos using OpenGL and I noticed that GLSL is somewhat "limited" (or is it just me?). Anyway, I have many different types of materials. Some materials have ambient and diffuse color, some have an ambient occlusion map, some have a specular map and a bump map, etc. Is it better to support everything in one vertex/fragment shader pair, or is it better to create many vertex/fragment shaders and select them based on the currently selected material? What is the usual shader strategy in OpenGL or D3D?
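
    A common middle ground between the two extremes is shader permutations: keep one source file with the optional features behind #ifdefs, compile it once per feature combination that actually appears in the scene, and cache the results. A minimal sketch of such a cache follows; the feature names are made up, compileAndLink stands in for your own glCompileShader/glLinkProgram wrapper, and the GLSL sources are assumed to omit their #version line so it can be prepended here:

    ```cpp
    #include <string>
    #include <unordered_map>
    #include <GL/glew.h>

    // Hypothetical material feature bits.
    enum MaterialFeature : unsigned {
        FEAT_DIFFUSE_MAP  = 1u << 0,
        FEAT_SPECULAR_MAP = 1u << 1,
        FEAT_NORMAL_MAP   = 1u << 2,
        FEAT_AO_MAP       = 1u << 3,
    };

    GLuint compileAndLink(const std::string& vs, const std::string& fs); // your own helper

    // One compiled program per feature combination, created lazily.
    static std::unordered_map<unsigned, GLuint> g_programCache;

    GLuint programForFeatures(unsigned features,
                              const std::string& vsSource,
                              const std::string& fsSource)
    {
        auto it = g_programCache.find(features);
        if (it != g_programCache.end())
            return it->second;

        std::string defines = "#version 330\n";
        if (features & FEAT_DIFFUSE_MAP)  defines += "#define USE_DIFFUSE_MAP\n";
        if (features & FEAT_SPECULAR_MAP) defines += "#define USE_SPECULAR_MAP\n";
        if (features & FEAT_NORMAL_MAP)   defines += "#define USE_NORMAL_MAP\n";
        if (features & FEAT_AO_MAP)       defines += "#define USE_AO_MAP\n";

        GLuint program = compileAndLink(defines + vsSource, defines + fsSource);
        g_programCache[features] = program;
        return program;
    }
    ```

    Sorting draw calls by program then keeps the number of shader switches per frame close to the number of distinct material types actually in view.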

    Read the article

  • How can I mark and/or group a specific set of vertices in a 3D file container like OBJ? (in Blender)

    - by user827992
    I would like to export a 3D model with each part having a name or a label, if you will. For example, I would like to export a model of a human body and name each part in specific vertex groups like: left hand, right hand, right foot, head, ears, ... you get the idea; so I can have a single 3D model that I can explode into its various parts if needed. If there is a better technique for marking vertex groups in a 3D file, please share your solution. I use Blender as my 3D editor.

    Read the article

  • Efficient skeletal animation

    - by Will
    I am looking at adopting a skeletal animation format (as prompted here) for an RTS game. The individual representation of each model on-screen will be small but there will be lots of them! In skeletal animation e.g. MD5 files, each individual vertex can be attached to an arbitrary number of joints. How can you efficiently support this whilst doing the interpolation in GLSL? Or do engines do their animation on the CPU? Or do engines set arbitrary limits on maximum joints per vertex and invoke nop multiplies for those joints that don't use the maximum number? Are there games that use skeletal animation in an RTS-like setting thus proving that on integrated graphics cards I have nothing to worry about in going the bones route?
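
    A common answer to the "arbitrary number of joints" problem is to cap influences at a fixed count (typically 4) per vertex at export time, re-normalizing or dropping the smallest weights, and to store 4 joint indices plus 4 weights as vertex attributes; vertices with fewer influences simply carry zero weights, at the cost of a few wasted multiplies. Here is a rough CPU-side sketch of that layout and the skinning loop, using GLM for the math; it is illustrative only and not tied to the MD5 format:

    ```cpp
    #include <glm/glm.hpp>
    #include <vector>

    // Fixed four influences per vertex; unused slots carry weight 0.
    struct SkinnedVertex {
        glm::vec3 position;    // bind-pose position
        int       joints[4];   // joint indices
        float     weights[4];  // blend weights, summing to 1
    };

    // jointMatrices[i] = currentPose[i] * inverseBindPose[i]
    void skinOnCpu(const std::vector<SkinnedVertex>& in,
                   const std::vector<glm::mat4>& jointMatrices,
                   std::vector<glm::vec3>& outPositions)
    {
        outPositions.resize(in.size());
        for (size_t i = 0; i < in.size(); ++i) {
            glm::mat4 skin(0.0f);  // start from the zero matrix
            for (int j = 0; j < 4; ++j)
                skin += jointMatrices[in[i].joints[j]] * in[i].weights[j];
            outPositions[i] = glm::vec3(skin * glm::vec4(in[i].position, 1.0f));
        }
    }
    ```

    The same loop maps directly onto a GLSL vertex shader with an ivec4/vec4 attribute pair and an array of joint matrices in uniforms, which is how most engines avoid doing this per frame on the CPU.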

    Read the article

  • Developing GLSL Shaders?

    - by skln
    I want to create shaders, but I need a tool to write them and see the visual result before I put them into my game, so I can determine whether a problem is in my game or in the shader I created. I've looked at a few, like RenderMonkey and OpenGL Shader Designer. From what I recall of RenderMonkey, it had a way to define your own attributes (now declared as "in" in vertex shaders for GLSL 330 and later) fairly easily, though I can't remember to what extent. Shader Designer requires a plugin that I didn't even bother to look at creating because it's an external process and plugin. Are there any tools out there that support a scripting language, where I could easily provide specific input such as float movement = sin(elapsedTime()); and then define in float movement; in the vertex shader? It'd be cool if anyone could share how they develop shaders, or whether they just code away and then plug it into their game hoping to get the result they wanted.

    Read the article

  • How to get the Dash and HUD to appear (and stop Unity spewing error messages)

    - by Ubuntiac
    I just installed Ubuntu 12.04 on my wife's Dell Inspiron 1501, which uses an R300 ATI graphics chip. Neither the Dash nor the HUD appears when pushing the appropriate key. When I try unity --reset & in the terminal, I see that over and over it's spitting out: r300: CS space validation failed. (not enough memory?) Skipping rendering. This is just after starting Ubuntu with no apps open, so I find it hard to believe that just rendering the Dash / HUD is completely blowing out the VRAM. Any suggestions on getting this working? /usr/lib/nux/unity_support_test -p shows OpenGL vendor string: X.Org R300 Project OpenGL renderer string: Gallium 0.4 on ATI RS480 OpenGL version string: 2.1 Mesa 8.0.2 Not software rendered: yes Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: yes All sections say "YES"

    Read the article

  • Using OpenCL to jiggle the pipe

    - by TOAOGG
    I've got the idea to use OpenCL to program a simple renderer. A clear downside is that this approach won't benefit from the hardware the way the built-in functions on the device do (I think). Would it be useful to do this in OpenCL? Let's say we want to cull as early as possible so we don't have many per-vertex operations. Is it correct that culling is done after the vertex shader? For static vertices that won't be affected by the shader, it could be interesting to cull them beforehand. Another idea would be a deferred renderer. So the main question is: would it make sense to program a renderer in OpenCL (setting aside the effort)? The resulting picture would be drawn in OpenGL.

    Read the article

  • Microsoft XNA code sample won't work with Blender model

    - by FreakinaBox
    I downloaded this code sample and integrated it into my game http://xbox.create.msdn.com/en-US/education/catalog/sample/mesh_instancing It works with the model that they supplied, but throws an exception whenever I use one of my models. The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing. I tried plugging my model into their original source code and got the same thing. My model is an FBX from Blender and has a texture. This is the function that throws the error GraphicsDevice.DrawInstancedPrimitives( PrimitiveType.TriangleList, 0, 0, meshPart.NumVertices, meshPart.StartIndex, meshPart.PrimitiveCount, instances.Length );

    Read the article

  • What file formats and conventions should I support to make my game engine artist-friendly?

    - by Avi
    I'm writing a game engine, and I want to know what I should do to make it more artist-friendly. I don't want to be too limiting in terms of what file formats I support, etc. Some specific questions: Are there specific formats artists like to model in? Does it not matter because the 3D modeler abstracts the data storage away? Is it okay if I don't support per-vertex coloration in my game engine? If I have to store a diffuse, specular, ambient, and emissive color value for each vertex, it doubles the size of vertices in the buffer. Is it reasonable to ask artists to do all these things in textures / maps? Any other tips you have about making it so that artists have to adapt their style to my specific engine as little as possible would be nice.
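
    As a rough sanity check on the size claim, here is a back-of-the-envelope comparison of two hypothetical vertex layouts; the exact factor depends on what else is in the vertex and on whether colors are stored as floats or bytes:

    ```cpp
    #include <cstdio>

    // A typical vertex without per-vertex material colors: 32 bytes.
    struct VertexPlain {
        float position[3];
        float normal[3];
        float uv[2];
    };

    // The same vertex with diffuse/specular/ambient/emissive as full floats: 96 bytes.
    struct VertexWithColors {
        float position[3];
        float normal[3];
        float uv[2];
        float diffuse[4];
        float specular[4];
        float ambient[4];
        float emissive[4];
    };

    int main() {
        std::printf("plain: %zu bytes, with colors: %zu bytes\n",
                    sizeof(VertexPlain), sizeof(VertexWithColors));
        return 0;
    }
    ```

    Packing the colors as 8-bit channels instead would cut the overhead to 16 bytes, which is one reason those terms usually end up in textures rather than in the vertex stream.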

    Read the article

  • Triangles in a C++ STL vector as an Objective-C member sometimes draw incorrectly in OpenGL ES

    - by Rahil627
    The polygons draw correctly 80% of the time. When it fails, a vertex is dislocated. The polygon is consistently drawn with the wrong vertex. I checked that the vector is correct during initialization, even when it's wrongly drawn. I'm using Cocos2d. The class member: @interface Polygon : CCSprite { std::vector<float> triangleVertices; } The draw function called in [Polygon draw]: + (void)drawTrianglesWithVertices:(const std::vector<float> &)v { //glEnableClientState(GL_VERTEX_ARRAY); glDisable(GL_TEXTURE_2D); glDisableClientState(GL_TEXTURE_COORD_ARRAY); glDisableClientState(GL_COLOR_ARRAY); glVertexPointer(2, GL_FLOAT, 0, &v[0]); glDrawArrays(GL_TRIANGLES, 0, v.size()); //glDisableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_COLOR_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glEnable(GL_TEXTURE_2D); } Any ideas?
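
    One detail that stands out here, offered as a guess rather than a confirmed diagnosis: glDrawArrays takes a count of vertices, not of floats, so with two floats per vertex the call above asks GL to read twice as many vertices as the vector actually holds, and the overrun can show up as an occasional garbage vertex. A corrected sketch of the draw helper:

    ```cpp
    #include <vector>
    #include <OpenGLES/ES1/gl.h>  // iOS OpenGL ES 1.x header

    static void drawTrianglesWithVertices(const std::vector<float>& v)
    {
        glDisable(GL_TEXTURE_2D);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);
        glEnableClientState(GL_VERTEX_ARRAY);  // make sure this really is enabled somewhere

        glVertexPointer(2, GL_FLOAT, 0, v.data());
        // Two floats per vertex, so the vertex count is size()/2, not size().
        glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(v.size() / 2));

        glEnableClientState(GL_COLOR_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnable(GL_TEXTURE_2D);
    }
    ```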

    Read the article

  • Parallax backgrounds in OpenGL ES on the iPhone

    - by Scott
    I've got basically a 2d game on the iPhone and I'm trying to set up multiple backgrounds that scroll at different speeds (known as parallax backgrounds). So my thought was to just stick the backgrounds BEHIND the foreground using different z-coordinate planes, and just make them bigger than the foreground (in size) to accommodate, so that the whole thing can be scrolled (just at a different speed). And (as far as I know) I basically implemented that. The only problem is that it seems to entirely ignore whatever z-value I give it, or rather it just zeroes all of them. I see the background (I've only tested ONE background so far, to keep it simple...so for now I just have a foreground and I want one background scrolling at a different speed), but it scrolls 1:1 with my foreground, so it obviously doesn't look right, and most of it is cut off (because it's bigger). And I've tried various z-values for the background and various near/far clipping planes...it's always the same. I'm probably just doing one simple thing wrong, but I can't figure it out. I'm wondering if it has to do with me using only 2 coordinates in glVertexPointer for the foreground? (Of course for the background I AM passing in 3) I'll post some code: This is some initial setup: glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 10.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glEnableClientState(GL_VERTEX_ARRAY); //glEnableClientState(GL_COLOR_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); //transparency glEnable (GL_BLEND); glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA); A little bit about my foreground's float array....it's interleaved. For my foreground it goes vertex x, vertex y, texture x, texture y, repeat. This all works just fine. This is my FOREGROUND rendering: glVertexPointer(2, GL_FLOAT, 4*sizeof(GLfloat), texes); glTexCoordPointer(2, GL_FLOAT, 4*sizeof(GLfloat), (GLvoid*)texes + 2*sizeof(GLfloat)); glDrawArrays(GL_TRIANGLES, 0, indexCount / 4); BACKGROUND rendering: Same drill here except this time it goes vertex x, vertex y, vertex z, texture x, texture y, repeat. Note the z value this time. I did make sure the data in this array was correct while debugging (getting the right z values). And again, it shows up...it's just not going far back in the distance like it should. glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), b1Texes); glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), (GLvoid*)b1Texes + 3*sizeof(GLfloat)); glDrawArrays(GL_TRIANGLES, 0, b1IndexCount / 5); And to move my camera, I just do a simple glTranslatef(x, y, 0.0f); I'm not understanding what I'm doing wrong because this seems like the most basic 3D function imaginable...things further away are smaller and don't move as fast when the camera moves. Not the case for me. Seems like it should be pretty basic and not even really be affected by my projection and all that (though I've even tried doing glFrustum just for fun, no success). Please help, I feel like it's just one dumb thing. I will post more code if necessary.
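
    One hedged observation on the setup above: it uses glOrthof, and an orthographic projection deliberately ignores depth when projecting, so no z value will ever produce a size or scroll-speed difference; perspective behaviour needs glFrustumf, or the parallax can simply be faked by translating each layer by a fraction of the camera movement. A minimal sketch of the manual approach for fixed-function OpenGL ES 1.x, where drawLayer is a placeholder for whatever submits that layer's vertex arrays:

    ```cpp
    #include <OpenGLES/ES1/gl.h>

    // Draw one background layer so it scrolls slower than the foreground.
    // factor < 1.0 means "further away": 0.5 scrolls at half the camera speed.
    void drawParallaxLayer(float cameraX, float cameraY, float factor,
                           void (*drawLayer)(void))
    {
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glTranslatef(-cameraX * factor, -cameraY * factor, 0.0f);
        drawLayer();
        glPopMatrix();
    }
    ```

    The foreground is then drawn with a factor of 1.0, and the layers only need to be drawn back to front (or with depth testing enabled) as usual.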

    Read the article

  • How do color attributes work in a VBO?

    - by Jayesh
    I am coding to OpenGL ES 2.0 (Webgl). I am using VBOs to draw primitives. I have vertex array, color array and array of indices. I have looked at sample codes, books and tutorial, but one thing I don't get - if color is defined per vertex how does it affect the polygonal surfaces adjacent to those vertices? (I am a newbie to OpenGL(ES)) I will explain with an example. I have a cube to draw. From what I read in OpenGLES book, the color is defined as an vertex attribute. In that case, if I want to draw 6 faces of the cube with 6 different colors how should I define the colors. The source of my confusion is: each vertex is common to 3 faces, then how will it help defining a color per vertex? (Or should the color be defined per index?). The fact that we need to subdivide these faces into triangles, makes it harder for me to understand how this relationship works. The same confusion goes for edges. Instead of drawing triangles, let's say I want to draw edges using LINES primitives. Each edge of different color. How am I supposed to define color attributes in that case? I have seen few working examples. Specifically this tutorial: http://learningwebgl.com/blog/?p=370 I see how color array is defined in the above example to draw a cube with 6 different colored faces, but I don't understand why is defined that way. (Why is each color copied 4 times into unpackedColors for instance?) Can someone explain how color attributes work in VBO? [The link above seems inaccessible, so I will post the relevant code here] cubeVertexPositionBuffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexPositionBuffer); vertices = [ // Front face -1.0, -1.0, 1.0, 1.0, -1.0, 1.0, 1.0, 1.0, 1.0, -1.0, 1.0, 1.0, // Back face -1.0, -1.0, -1.0, -1.0, 1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0, -1.0, // Top face -1.0, 1.0, -1.0, -1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, -1.0, // Bottom face -1.0, -1.0, -1.0, 1.0, -1.0, -1.0, 1.0, -1.0, 1.0, -1.0, -1.0, 1.0, // Right face 1.0, -1.0, -1.0, 1.0, 1.0, -1.0, 1.0, 1.0, 1.0, 1.0, -1.0, 1.0, // Left face -1.0, -1.0, -1.0, -1.0, -1.0, 1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0, ]; gl.bufferData(gl.ARRAY_BUFFER, new WebGLFloatArray(vertices), gl.STATIC_DRAW); cubeVertexPositionBuffer.itemSize = 3; cubeVertexPositionBuffer.numItems = 24; cubeVertexColorBuffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexColorBuffer); var colors = [ [1.0, 0.0, 0.0, 1.0], // Front face [1.0, 1.0, 0.0, 1.0], // Back face [0.0, 1.0, 0.0, 1.0], // Top face [1.0, 0.5, 0.5, 1.0], // Bottom face [1.0, 0.0, 1.0, 1.0], // Right face [0.0, 0.0, 1.0, 1.0], // Left face ]; var unpackedColors = [] for (var i in colors) { var color = colors[i]; for (var j=0; j < 4; j++) { unpackedColors = unpackedColors.concat(color); } } gl.bufferData(gl.ARRAY_BUFFER, new WebGLFloatArray(unpackedColors), gl.STATIC_DRAW); cubeVertexColorBuffer.itemSize = 4; cubeVertexColorBuffer.numItems = 24; cubeVertexIndexBuffer = gl.createBuffer(); gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, cubeVertexIndexBuffer); var cubeVertexIndices = [ 0, 1, 2, 0, 2, 3, // Front face 4, 5, 6, 4, 6, 7, // Back face 8, 9, 10, 8, 10, 11, // Top face 12, 13, 14, 12, 14, 15, // Bottom face 16, 17, 18, 16, 18, 19, // Right face 20, 21, 22, 20, 22, 23 // Left face ] gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new WebGLUnsignedShortArray(cubeVertexIndices), gl.STATIC_DRAW); cubeVertexIndexBuffer.itemSize = 1; cubeVertexIndexBuffer.numItems = 36;
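
    The short version of why each color is repeated four times: color is a per-vertex attribute, and in that cube every face has its own four vertices (24 in total, none shared between faces), so giving a whole face a single color means writing that color into each of its four vertices. A small C++ sketch of the same expansion the tutorial performs in JavaScript:

    ```cpp
    #include <vector>

    // One RGBA color per face: front, back, top, bottom, right, left.
    const float faceColors[6][4] = {
        {1,0,0,1}, {1,1,0,1}, {0,1,0,1}, {1,0.5f,0.5f,1}, {1,0,1,1}, {0,0,1,1}
    };

    // Expand to one RGBA per vertex: 6 faces * 4 vertices = 24 colors,
    // matching the 24 positions in the vertex buffer.
    std::vector<float> buildPerVertexColors()
    {
        std::vector<float> colors;
        colors.reserve(6 * 4 * 4);
        for (int face = 0; face < 6; ++face)
            for (int vert = 0; vert < 4; ++vert)
                colors.insert(colors.end(), faceColors[face], faceColors[face] + 4);
        return colors;
    }
    ```

    If the cube instead shared its eight corner vertices, each corner could hold only one color and it would be interpolated across every face that touches it; duplicating vertices per face (or per edge, when drawing LINES) is the standard way to get hard, unblended colors.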

    Read the article

  • List<element> initialization fires "Process is terminated due to StackOverflowException"

    - by netmajor
    I have structs like below and when I do that initialization: ArrayList nodesMatrix = null; List<vertex> vertexMatrix = null; List<bool> odwiedzone = null; List<element> priorityQueue = null; vertexMatrix = new List<vertex>(nodesNr + 1); nodesMatrix = new ArrayList(nodesNr + 1); odwiedzone = new List<bool>(nodesNr + 1); priorityQueue = new List<element>(); arr.NodesMatrix = nodesMatrix; arr.VertexMatrix = vertexMatrix; arr.Odwiedzone = odwiedzone; arr.PriorityQueue = priorityQueue; //only here i have exception debuger fires Process is terminated due to StackOverflowException :/ Some idea why this collection fires this exception ? private struct arrays { ArrayList nodesMatrix; public ArrayList NodesMatrix { get { return nodesMatrix; } set { nodesMatrix = value; } } List<vertex> vertexMatrix; public List<vertex> VertexMatrix { get { return vertexMatrix; } set { vertexMatrix = value; } } List<bool> odwiedzone; public List<bool> Odwiedzone { get { return odwiedzone; } set { odwiedzone = value; } } public List<element> PriorityQueue { get { return PriorityQueue; } set { PriorityQueue = value; } } } public struct element : IComparable { public double priority { get { return priority; } set { priority = value; } } public int node { get { return node; } set { node = value; } } public element(double _prio, int _node) { priority = _prio; node = _node; } #region IComparable Members public int CompareTo(object obj) { element elem = (element)obj; return priority.CompareTo(elem.priority); } #endregion

    Read the article

  • A two way minimum spanning tree of a directed graph

    - by mvid
    Given a directed graph with weighted edges, what algorithm can be used to give a sub-graph that has minimum weight but allows movement from any vertex to any other vertex in the graph (under the assumption that paths between any two vertices always exist)? Does such an algorithm exist?

    Read the article

  • Creating a Haskell Empty Set

    - by mvid
    I am attempting to pass back a Node type from this function, but I get the error that empty is out of scope: import Data.Set (Set) import qualified Data.Set as Set data Node = Vertex String (Set Node) deriving Show toNode :: String -> Node toNode x = Vertex x empty What am I doing wrong?

    Read the article

  • Best way to render tessellated objects (OpenGL)

    - by user146780
    I'm using the GLU tessellator for polygons. Right now the vertex callback calls glVertex2f and glTexCoord2f. Would it be better simply to collect the vertices from the vertex callback in a std::vector and then use glDrawArrays()? Or would this actually be less efficient, since it has to put the verts and texture coordinates in a vector? Thanks
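
    For reference, here is a rough sketch of the "collect, then draw" variant. It assumes the tessellator is forced to emit plain GL_TRIANGLES by registering an edge-flag callback (without one it may emit fans and strips, which cannot be concatenated into a single glDrawArrays call), and that gluTessVertex was given a pointer to the GLdouble coordinates as its data argument; the exact callback-pointer casts vary a little by platform:

    ```cpp
    #include <vector>
    #include <GL/glu.h>

    static std::vector<GLfloat> g_verts;  // x,y pairs collected from the tessellator

    // GLU_TESS_VERTEX_DATA callback: record the coordinates.
    static void tessVertex(void* vertexData, void* /*userData*/)
    {
        const GLdouble* v = static_cast<const GLdouble*>(vertexData);
        g_verts.push_back(static_cast<GLfloat>(v[0]));
        g_verts.push_back(static_cast<GLfloat>(v[1]));
    }

    // A no-op edge-flag callback makes the tessellator output triangles only.
    static void tessEdgeFlag(GLboolean) {}

    static void setupCallbacks(GLUtesselator* tess)
    {
        gluTessCallback(tess, GLU_TESS_VERTEX_DATA, (void (*)()) tessVertex);
        gluTessCallback(tess, GLU_TESS_EDGE_FLAG,   (void (*)()) tessEdgeFlag);
    }

    // After gluTessEndPolygon, submit everything in one call.
    static void drawCollected()
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, g_verts.data());
        glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(g_verts.size() / 2));
        glDisableClientState(GL_VERTEX_ARRAY);
    }
    ```

    The copy into the vector is cheap next to the per-vertex call overhead, and it also makes it easy to cache the triangulated result in a VBO when the polygon does not change every frame.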

    Read the article

  • C++ program debugs fine with Cygwin4 (under NetBeans 7.2) but not with MinGW (under Qt 4.8.1)

    - by GoldenAxe
    I have a C++ program which takes a map text file and outputs it into a graph data structure I have made. I am using Qt as I needed a cross-platform program and GUI, as well as a visual representation of the map. I have several maps of different sizes (8x8 to 4096x4096). I am using unordered_map with a vector as key and a vertex as value; I pass the hash(1) and equal functions which I wrote to the unordered_map on creation. Under Qt I am debugging my program with Qt 4.8.1 for desktop MinGW (Qt SDK); the program works and debugs well until I try the largest map of 4096x4096, then the program gets stuck with the following error: "the inferior stopped because it received a signal from the operating system". When debugging, the program halts at the hash function used inside the unordered_map, and not as part of an insertion but at a getter(2). Under NetBeans IDE 7.2 and Cygwin4 everything works fine (debug and run). Some code info: typedef std::vector<double> coordinate; typedef std::unordered_map<coordinate const*, Vertex<Element>*, container_hash, container_equal> vertexsContainer; vertexsContainer *m_vertexes (1) hash function: struct container_hash { size_t operator()(coordinate const *cord) const { size_t sum = 0; std::ostringstream ss; for ( auto it = cord->begin() ; it != cord->end() ; ++it ) { ss << *it; } sum = std::hash<std::string>()(ss.str()); return sum; } }; (2) the getter: template <class Element> Vertex<Element> *Graph<Element>::getVertex(const coordinate &cord) { try { Vertex<Element> *v = m_vertexes->at(&cord); return v; } catch (std::exception& e) { return NULL; } } I was thinking maybe it was some memory issue, so before trying NetBeans I checked it with Qt on my friend's PC with 16GB of RAM and got the same error. Thanks.
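
    Independent of the crash itself, building a string with std::ostringstream inside the hash function is very allocation-heavy for a 4096x4096 map; a boost-style hash_combine over the raw doubles avoids the allocations entirely. A hedged sketch of such a hasher for the coordinate type above:

    ```cpp
    #include <cstddef>
    #include <functional>
    #include <vector>

    typedef std::vector<double> coordinate;

    struct container_hash {
        std::size_t operator()(coordinate const* cord) const {
            std::size_t seed = cord->size();
            for (double d : *cord) {
                // boost::hash_combine-style mixing, no string building involved
                seed ^= std::hash<double>()(d) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
            }
            return seed;
        }
    };
    ```

    It is also worth double-checking that every coordinate whose address is stored as a key outlives the map; since the keys are pointers, handing at() the address of a short-lived object is only safe for the duration of that call, and a dangling stored key would crash exactly inside the hash or equality functor.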

    Read the article

  • Psychonauts crashes right after entering load save door

    - by user67974
    Psychonauts crashes right after entering the 'Load Save' door. Here is the terminal output: Shader assembly time: 0.88 seconds Found OpenAL device: 'Simple Directmedia Layer' Found OpenAL device: 'ALSA Software' Found OpenAL device: 'OSS Software' Found OpenAL device: 'PulseAudio Software' Opened OpenAL Device: '(null)' ERROR: CAudioDrv::CAudioDrv->alGenSources reports AL_INVALID_VALUE error. PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonfx.isb' to 'WorkResource/Sounds/commonfx.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonvoice.isb' to 'WorkResource/Sounds/commonvoice.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmusic.isb' to 'WorkResource/Sounds/commonmusic.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmentalfx.isb' to 'WorkResource/Sounds/commonmentalfx.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmenfxmem.isb' to 'WorkResource/Sounds/commonmenfxmem.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonfxmem.isb' to 'WorkResource/Sounds/commonfxmem.isb' GameApp::StartUp InitSoundFiles() completed in 0.15 seconds GameApp::StartUp Load some common textures completed in 0.00 seconds WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb GameApp::StartUp InitEntities() completed in 0.02 seconds PSYCHONAUTS UNIX FILENAME: corrected 'WorkResource/SavedGames/savegameprefs.ini' to 'WorkResource/SAVEDGAMES/savegameprefs.ini' PSYCHONAUTS UNIX FILENAME: corrected 'WorkResource/SavedGames/savegameprefs.ini' to 'WorkResource/SAVEDGAMES/savegameprefs.ini' GameApp::StartUp m_pSaveLoadInterface->Startup() completed in 0.00 seconds GameApp::StartUp m_UserInterface.Setup() completed in 0.00 seconds STUBBED: multisample at EDisplayOptionsWidget (/home/icculus/projects/psychonauts/Source/game/luatest/Game/UIPCDisplayOptions.cpp:97) STUBBED: VK_* at CheckVirtualKey (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:1443) Game: Engine Running hook startup Game: Engine -> SetupGlobalObjects Game: Engine -> SetupLevelMenu Game: Engine -> InitMath GameApp::StartUp InitLua2() completed in 0.00 seconds GameApp::StartUp SetupLevelMenu() completed in 0.00 seconds STUBBED: do we even use this? at InitSocket (/home/icculus/projects/psychonauts/Source/game/luatest/Game/Gameplaylogger.cpp:210) GameApp::StartUp Post-Install total completed in 0.20 seconds Start Up completed in 1.57 seconds UnixMain: StartUp successful.. Working directory: /opt/psychonauts STUBBED: dispatch SDL events at PCMainHandleAnyWindowsMessages (/home/icculus/projects/psychonauts/Source/game/luatest/UnixMain.cpp:56) STUBBED: write me at GetJoystickInput (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:428) STUBBED: write me at GetJoystickActionValue (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:613) PSYCHONAUTS UNIX FILENAME: corrected 'workresource/cutScenes/prerendered/dflogo.bik' to 'WorkResource/cutscenes/prerendered/DFLogo.bik' Prerender subtitle file: workresource\cutScenes\prerendered\dflogo.dfs not found PSYCHONAUTS UNIX FILENAME: corrected 'workresource/cutScenes/prerendered/dflogo.bik' to 'WorkResource/cutscenes/prerendered/DFLogo.bik' STUBBED: fixed function pipeline? at setColorOp (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2097) STUBBED: fixed function pipeline? 
at setColorArg1 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2106) STUBBED: fixed function pipeline? at setColorArg2 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2115) STUBBED: fixed function pipeline? at setAlphaOp (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2124) STUBBED: fixed function pipeline? at setAlphaArg1 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2133) STUBBED: fixed function pipeline? at setAlphaArg2 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2142) STUBBED: fixed function pipeline? at setProjected (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2223) LOC WARN: Could not open Localization file 'Localization/English/_StringTable.lub' STUBBED: memory status at UpdateMemoryTracking (/home/icculus/projects/psychonauts/Source/game/luatest/Game/GameApp.cpp:4884) WARN: Couldn't resize array to 128; out-of-bounds elements are still in use: Vertex Pool, 188 Loading new level 'STMU' STUBBED: Need multithreaded GL at DisplayLoadingScreen (/home/icculus/projects/psychonauts/Source/game/luatest/Game/LoadingScreen.cpp:83) ========================= Memory post unload level ========================= ========================= LOC WARN: Could not open Localization file 'Localization/English/ST_StringTable.lub' DaveD: Info: Texture pack file contains 137 textures Doing a texture readback for locking! Game: Engine Saved[GLOBAL]: InstaHintFord_HostileRecord = [table] Game: Engine Saved[GLOBAL]: InstaHintFord_HostileOrder = [table] WARN: Redundant packfile read: anims\thought_bubble\bubblefirestarting.jan WARN: Redundant packfile read: anims\thought_bubble\bubbleintothemind.jan WARN: Redundant packfile read: anims\thought_bubble\bubbleinvisibility.jan WARN: Redundant packfile read: anims\thought_bubble\bubblepopperfill.jan WARN: Redundant packfile read: anims\thought_bubble\bubbletelekinesis.jan Initializing level script (if there is one) PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/stfx.isb' to 'WorkResource/Sounds/stfx.isb' Game: Engine Reloading goals: Game: Engine Saved[GLOBAL]: NextEncouragement = '/GLZF014TO/ 10' Game: Engine Saved[GLOBAL]: bUsedSalts = 0 Game: Engine Saved[GLOBAL]: bSTEntered = 1 Game: Engine Saved[GLOBAL]: memoriesST = 1 Game: Engine Saved[GLOBAL]: PsiBallColor = 'red' Game: Engine Saved[ST]: lastSubLevel = 'STMU' Game: Engine LOADING LEVEL st.STMU Game: Engine Saved[CA]: CALevelState = 1 Game: Engine Cutscene progression: CS Script moving from state nil to state nil, resultant state nil. Time: 0.124746672809124. 
* Stack Trace 1: (null) (line -1, file '(none)) () 2: SpawnScript (line -1, file 'C) (global) 3: onBeginLevel (line -1, file '(none)) (field) 4: (null) (line -1, file '(none)) () WARN: Cannot call GetDirectoryListing when running from the DVD Game: Engine Raz spawning at DartStart startpoint VM : LevelScript could not find script 'doorrimlight1' * Stack Trace 1: (null) (line -1, file '(none)) () WARN: (none(-1) SetEntityAlpha LevelScript: NULL script object passed Game: Engine Saved[GLOBAL]: bLoadedFromMainMenu = 1 Game: Engine Saved[GLOBAL]: NextEncouragement = '/GLZF014TO/ 10' Game: Engine Saved[GLOBAL]: NeedRankIncrement = 0 STUBBED: Need multithreaded GL at HideLoadingScreen (/home/icculus/projects/psychonauts/Source/game/luatest/Game/LoadingScreen.cpp:110) WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb Game: Engine Saved[GLOBAL]: SplineFigmentTVSizex = 4.51434326171875 Game: Engine Saved[GLOBAL]: SplineFigmentTVSizey = 46.38104248046875 Game: Engine Saved[GLOBAL]: SplineFigmentTVSizez = 47.08810424804688 WARN: (none(-1) SetNewAction LevelScript: no string passed ====================== Asset load progression ====================== Initial: 2.518 MB Vertex, 8.688 MB Texture Level : 3.719 MB Vertex, 22.535 MB Texture Scripts: 3.747 MB Vertex, 22.848 MB Texture ====================== ====================== Memory post level load ====================== ====================== WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb DaveD: Level loaded in 0.14 seconds Anim: anims\objects\tk_arrow_idle.jan: loaded (1 frames latency) Anim: anims\dartnew\helmet\darthelmetdn.jan: loaded (1 frames latency) Anim: anims\thought_bubble\shieldloop.jan: loaded (1 frames latency) Anim: anims\dartnew\standready.jan: loaded (1 frames latency) Anim: anims\dartnew\walkmove.jan: loaded (1 frames latency) Anim: anims\janitor\hint_end.jan: loaded (1 frames latency) Anim: anims\thought_bubble\ballstatic.jan: loaded (1 frames latency) Anim: anims\dartnew\actionfall.jan: loaded (1 frames latency) Anim: anims\dartnew\standstill.jan: loaded (1 frames latency) Anim: anims\dartnew\pack\packbounce_lf_rt.jan: loaded (1 frames latency) Anim: anims\dartnew\pack\packbounce_up_dn.jan: loaded (1 frames latency) Anim: anims\dartnew\helmet\darthelmetdefpose.jan: loaded (1 frames latency) 1: 1 (number) 1: 1 (number) STUBBED: This is probably wrong at GetDt (/home/icculus/projects/psychonauts/Source/CommonLibs/DFUtil/Profiler.cpp:181) STUBBED: set specular highlights at setSpecularEnable (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Renderer.cpp:2035) Anim: anims\dartnew\trnrtcycle.jan: loaded (1 frames latency) Anim: anims\dartnew\run.jan: loaded (1 frames latency) Anim: anims\dartnew\walk.jan: loaded (1 frames latency) Anim: anims\thought_bubble\bubbledoublejump.jan: loaded (1 frames latency) Anim: anims\dartnew\longjump.jan: loaded (1 frames latency) Anim: anims\menubrain\door1crack.jan: loaded (1 frames latency) Anim: anims\menubrain\door1crackedidle.jan: loaded (1 frames latency) Anim: anims\menubrain\door1closedidle.jan: loaded (1 frames latency) Anim: anims\dartnew\180.jan: loaded (1 frames latency) Anim: anims\menubrain\door3crack.jan: loaded (1 frames latency) Anim: anims\menubrain\door3crackedidle.jan: loaded (1 frames latency) Anim: anims\menubrain\door3closedidle.jan: loaded (1 frames latency) Anim: anims\dartnew\railslide45angle.jan: loaded (1 frames latency) Anim: anims\dartnew\railslideflat.jan: loaded (1 frames latency) Anim: 
anims\dartnew\trnlfcycle.jan: loaded (1 frames latency) WARN: (none(-1) SetNewAction LevelScript: no string passed Anim: anims\dartnew\mainmenu_jump.jan: loaded (1 frames latency) Anim: anims\menubrain\door1open.jan: loaded (1 frames latency) ERROR: Assert in /home/icculus/projects/psychonauts/Source/game/luatest/../../CommonLibs/Include/../DFGraphics/Color.h, line 96 v.x >= 0.0f && v.x <= 1.0f && v.y >= 0.0f && v.y <= 1.0f && v.z >= 0.0f && v.z <= 1.0f && v.w >= 0.0f && v.w <= 1.0f Encountered Error: Psychonauts has encountered an error /home/icculus/projects/psychonauts/Source/game/luatest/../../CommonLibs/Include/../DFGraphics/Color.h, line 96 v.x >= 0.0f && v.x <= 1.0f && v.y >= 0.0f && v.y <= 1.0f && v.z >= 0.0f && v.z <= 1.0f && v.w >= 0.0f && v.w <= 1.0f Please contact technical support at http://www.doublefine.com. I am currently using Bumblebee for hybrid graphics, if that helps in any way.

    Read the article

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights. // Shadow term (1 = no shadow) float shadow = 1; // [Light Space -> Shadow Map Space] // Transform the surface into light space and project // NB: Could be done in the vertex shader, but doing it here keeps the // "light shader" abstraction and doesn't limit the number of shadowed lights float4x4 LightViewProjection = mul(LightView, LightProjection); float4 surf_tex = mul(position, LightViewProjection); // Re-homogenize // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized) surf_tex.xyz /= surf_tex.w; // Rescale viewport to be [0,1] (texture coordinate system) float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = -surf_tex.y * 0.5f + 0.5f; // Half texel offset //shadow_tex += (0.5 / 512); // Scaled distance to light (instead of 'surf_tex.z') float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; //float rescaled_dist_to_light = surf_tex.z; // [Variance Shadow Map Depth Calculation] // No filtering float2 moments = tex2D(ShadowSampler, shadow_tex).xy; // Flip the moments values to bring them back to their original values moments.x = 1.0 - moments.x; moments.y = 1.0 - moments.y; // Compute variance float E_x2 = moments.y; float Ex_2 = moments.x * moments.x; float variance = E_x2 - Ex_2; variance = max(variance, Bias.y); // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1) // One-tailed inequality valid if float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x); // Compute probabilistic upper bound (mean distance) float m_d = moments.x - rescaled_dist_to_light; // Chebychev's inequality float p = variance / (variance + m_d * m_d); p = ReduceLightBleeding(p, Bias.z); // Adjust the light color based on the shadow attenuation shadow *= max(lit_factor, p); This is what I know for certain so far: The lighting is correct if I do not try and calculate the shadow term. (No shadows) The shadow term is correct when calculated using forward rendered lighting. (VSM works with forward rendered lights) With the current rescaled light distance (lightAttenuation.y is the far plane value): float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; The light is correct and the shadow appears to be zoomed in and misses the blurring: When I do not rescale the light and use the homogenized 'surf_tex': float rescaled_dist_to_light = surf_tex.z; the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows: // [Position] float4 position; // [Screen Position] position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component position.z = 1.0 - position.z; position.w = 1.0; // 1.0 = position.w / position.w // [World Position] position = mul(position, CameraViewProjectionInverse); // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close) position.xyz /= position.w; position.w = 1.0; Using the inverse matrix of the camera's view x projection matrix does work for lighting but maybe it is incorrect for shadow calculation? 
EDIT: Light calculations for shadow including 'dist_to_light' // Work out the light position and direction in world space float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43); // Direction might need to be negated float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33); // Unnormalized light vector float3 dir_to_light = light_position - position; // Direction from vertex float dist_to_light = length(dir_to_light); // Normalise 'toLight' vector for lighting calculations dir_to_light = normalize(dir_to_light); EDIT2: These are the calculations for the moments (depth) //============================================= //---[Vertex Shaders]-------------------------- //============================================= DepthVSOutput depth_VS( float4 Position : POSITION, uniform float4x4 shadow_view, uniform float4x4 shadow_view_projection) { DepthVSOutput output = (DepthVSOutput)0; // First transform position into world space float4 position_world = mul(Position, World); output.position_screen = mul(position_world, shadow_view_projection); output.light_vec = mul(position_world, shadow_view).xyz; return output; } //============================================= //---[Pixel Shaders]--------------------------- //============================================= DepthPSOutput depth_PS(DepthVSOutput input) { DepthPSOutput output = (DepthPSOutput)0; // Work out the depth of this fragment from the light, normalized to [0, 1] float2 depth; depth.x = length(input.light_vec) / FarPlane; depth.y = depth.x * depth.x; // Flip depth values to avoid floating point inaccuracies depth.x = 1.0f - depth.x; depth.y = 1.0f - depth.y; output.depth = depth.xyxy; return output; } EDIT 3: I have tried the folloiwng: float4 pp; pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component pp.z = 1.0 - pp.z; pp.w = 1.0; // 1.0 = position.w / position.w // Determine the depth of the pixel with respect to the light float4x4 LightViewProjection = mul(LightView, LightProjection); float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection); float4 vPositionLightCS = mul(pp, matViewToLightViewProj); float fLightDepth = vPositionLightCS.z / vPositionLightCS.w; // Transform from light space to shadow map texture space. float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f); vShadowTexCoord.y = 1.0f - vShadowTexCoord.y; // Offset the coordinate by half a texel so we sample it correctly vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize This suffers the same problem as the second picture. I have tried storing the depth based on the view x projection matrix: output.position_screen = mul(position_world, shadow_view_projection); //output.light_vec = mul(position_world, shadow_view); output.light_vec = output.position_screen; depth.x = input.light_vec.z / input.light_vec.w; This gives a shadow that has lots surface acne due to horrible floating point precision errors. Everything is lit correctly though. EDIT 4: Found an OpenGL based tutorial here I have followed it to the letter and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scaled matrix to get the uv coordinates for the shadow map sampler /// <summary> /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region. 
/// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0. /// <summary> const float4x4 ScaleMatrix = float4x4 ( 0.5, 0.0, 0.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); I had to negate the 0.5 for the y scaling (M22) in order for it to work but the shadowing is still not correct. Is this really the correct way to scale? float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = surf_tex.y * -0.5f + 0.5f; The depth calculations are exactly the same as the source code yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

    Read the article

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to invalid lines in a shader code. I know that these messages are vendor specific but currently I have access only to AMD's video cards. I want to handle at least NVidia's and Intel's hardware, apart from AMD's. If you have video card from different vendor than AMD, could you please give me the output of following C++ program: #include <GL/glew.h> #include <GL/freeglut.h> #include <iostream> using namespace std; #define STRINGIFY(X) #X static const char* fs = STRINGIFY( out vec4 out_Color; mat4 m; void main() { vec3 v3 = vec3(1.0); vec2 v2 = v3; out_Color = vec4(5.0 * v2.x, 1.0); vec3 k = 3.0; float = 5; } ); static const char* vs = STRINGIFY( in vec3 in_Position; void main() { vec3 v(5); gl_Position = vec4(in_Position, 1.0); } ); void printShaderInfoLog(GLint shader) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog); cout << "Log:\n" << infoLog << endl; delete [] infoLog; } } void printProgramInfoLog(GLint program) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog); cout << "Program log:\n" << infoLog << endl; delete [] infoLog; } } void initShaders() { GLuint v = glCreateShader(GL_VERTEX_SHADER); GLuint f = glCreateShader(GL_FRAGMENT_SHADER); GLint vlen = strlen(vs); GLint flen = strlen(fs); glShaderSource(v, 1, &vs, &vlen); glShaderSource(f, 1, &fs, &flen); GLint compiled; glCompileShader(v); bool succ = true; glGetShaderiv(v, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Vertex shader not compiled." << endl; succ = false; } printShaderInfoLog(v); glCompileShader(f); glGetShaderiv(f, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Fragment shader not compiled." << endl; succ = false; } printShaderInfoLog(f); GLuint p = glCreateProgram(); glAttachShader(p, v); glAttachShader(p, f); glLinkProgram(p); glUseProgram(p); printProgramInfoLog(p); if (!succ) { exit(-1); } delete [] vs; delete [] fs; } int main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA); glutInitWindowSize(600, 600); glutCreateWindow("Triangle Test"); glewInit(); GLenum err = glewInit(); if (GLEW_OK != err) { cout << "glewInit failed, aborting." << endl; exit(1); } cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl; const GLubyte* renderer = glGetString(GL_RENDERER); const GLubyte* vendor = glGetString(GL_VENDOR); const GLubyte* version = glGetString(GL_VERSION); const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION); GLint major, minor; glGetIntegerv(GL_MAJOR_VERSION, &major); glGetIntegerv(GL_MINOR_VERSION, &minor); cout << "GL Vendor : " << vendor << endl; cout << "GL Renderer : " << renderer << endl; cout << "GL Version : " << version << endl; cout << "GL Version : " << major << "." << minor << endl; cout << "GLSL Version : " << glslVersion << endl; initShaders(); return 0; } On my video card it gives: Status: Using GLEW 1.7.0 GL Vendor : ATI Technologies Inc. GL Renderer : ATI Radeon HD 4250 GL Version : 3.3.11631 Compatibility Profile Context GL Version : 3.3 GLSL Version : 3.30 Vertex shader not compiled. 
Log: Vertex shader failed to compile with the following errors: ERROR: 0:1: error(#132) Syntax error: '5' parse error ERROR: error(#273) 1 compilation errors. No code generated Fragment shader not compiled. Log: Fragment shader failed to compile with the following errors: WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2. ERROR: 0:1: error(#174) Not enough data provided for construction constructor WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3. ERROR: 0:1: error(#132) Syntax error: '=' parse error ERROR: error(#273) 2 compilation errors. No code generated Program log: Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed. Or if you like, you could give me other compiler messages than proposed by me. To summarize, the question is: What are GLSL compiler messages formats (INFOs, WARNINGs, ERRORs) for different vendors? Please give me examples or pattern explanation. EDIT: Ok, it seems that this question is too broad, then shortly: How does NVidia's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this: ERROR: <position>:<line_number>: <message> WARNING: <position>:<line_number>: <message> (examples are above).

    Read the article

  • Texture will not apply to my 3D cube in DirectX

    - by numerical25
    I am trying to apply a texture onto my 3d cube but it is not showing up correctly. I believe that it might some what be working because the cube is all brown which is almost the same complexion as the texture. And I did not originally make the cube brown. These are the steps I've done to add the texture I first declared 2 new varibles ID3D10EffectShaderResourceVariable* pTextureSR; ID3D10ShaderResourceView* textureSRV; I also added a variable and a struct to my shader .fx file Texture2D tex2D; SamplerState linearSampler { Filter = MIN_MAG_MIP_LINEAR; AddressU = Wrap; AddressV = Wrap; }; I then grabbed the image from my local hard drive from within the .cpp file. I believe this was successful, I checked all varibles for errors, everything has a memory address. Plus I pulled resources before and never had a problem. D3DX10CreateShaderResourceViewFromFile(mpD3DDevice,L"crate.jpg",NULL,NULL,&textureSRV,NULL); I grabbed the tex2d varible from my fx file and placed into my resource varible pTextureSR = modelObject.pEffect->GetVariableByName("tex2D")->AsShaderResource(); And added the resource to the varible pTextureSR->SetResource(textureSRV); I also added the extra property to my vertex layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"NORMAL",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 24, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"TEXCOORD",0, DXGI_FORMAT_R32G32_FLOAT, 0 , 36, D3D10_INPUT_PER_VERTEX_DATA, 0} }; as well as my struct struct VertexPos { D3DXVECTOR3 pos; D3DXVECTOR4 color; D3DXVECTOR3 normal; D3DXVECTOR2 texCoord; }; Then I created a new pixel shader that adds the texture to it. Below is the code in its entirety matrix Projection; matrix WorldMatrix; Texture2D tex2D; float3 lightSource; float4 lightColor = {0.5, 0.5, 0.5, 0.5}; // PS_INPUT - input variables to the pixel shader // This struct is created and fill in by the // vertex shader struct PS_INPUT { float4 Pos : SV_POSITION; float4 Color : COLOR0; float4 Normal : NORMAL; float2 Tex : TEXCOORD; }; SamplerState linearSampler { Filter = MIN_MAG_MIP_LINEAR; AddressU = Wrap; AddressV = Wrap; }; //////////////////////////////////////////////// // Vertex Shader - Main Function /////////////////////////////////////////////// PS_INPUT VS(float4 Pos : POSITION, float4 Color : COLOR, float4 Normal : NORMAL, float2 Tex : TEXCOORD) { PS_INPUT psInput; // Pass through both the position and the color psInput.Pos = mul( Pos, Projection ); psInput.Normal = Normal; psInput.Tex = Tex; return psInput; } /////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////// float4 PS(PS_INPUT psInput) : SV_Target { float4 finalColor = 0; finalColor = saturate(dot(lightSource, psInput.Normal) * lightColor); return finalColor; } float4 textured( PS_INPUT psInput ) : SV_Target { return tex2D.Sample( linearSampler, psInput.Tex ); } // Define the technique technique10 Render { pass P0 { SetVertexShader( CompileShader( vs_4_0, VS() ) ); SetGeometryShader( NULL ); SetPixelShader( CompileShader( ps_4_0, textured() ) ); } } Below is my CPU code. It maybe a little sloppy. But I am just adding code anywhere cause I am just experimenting and playing around. You should find most of the texture code at the bottom createObject #include "MyGame.h" #include "OneColorCube.h" /* This code sets a projection and shows a turning cube. 
What has been added is the project, rotation and a rasterizer to change the rasterization of the cube. The issue that was going on was something with the effect file which was causing the vertices not to be rendered correctly.*/ typedef struct { ID3D10Effect* pEffect; ID3D10EffectTechnique* pTechnique; //vertex information ID3D10Buffer* pVertexBuffer; ID3D10Buffer* pIndicesBuffer; ID3D10InputLayout* pVertexLayout; UINT numVertices; UINT numIndices; }ModelObject; ModelObject modelObject; // World Matrix D3DXMATRIX WorldMatrix; // View Matrix D3DXMATRIX ViewMatrix; // Projection Matrix D3DXMATRIX ProjectionMatrix; ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL; ID3D10EffectMatrixVariable* pWorldMatrixVarible = NULL; ID3D10EffectVectorVariable* pLightVarible = NULL; ID3D10EffectShaderResourceVariable* pTextureSR; bool MyGame::InitDirect3D() { if(!DX3dApp::InitDirect3D()) { return false; } D3D10_RASTERIZER_DESC rastDesc; rastDesc.FillMode = D3D10_FILL_WIREFRAME; rastDesc.CullMode = D3D10_CULL_FRONT; rastDesc.FrontCounterClockwise = true; rastDesc.DepthBias = false; rastDesc.DepthBiasClamp = 0; rastDesc.SlopeScaledDepthBias = 0; rastDesc.DepthClipEnable = false; rastDesc.ScissorEnable = false; rastDesc.MultisampleEnable = false; rastDesc.AntialiasedLineEnable = false; ID3D10RasterizerState *g_pRasterizerState; mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState); //mpD3DDevice->RSSetState(g_pRasterizerState); // Set up the World Matrix D3DXMatrixIdentity(&WorldMatrix); D3DXMatrixLookAtLH(&ViewMatrix, new D3DXVECTOR3(0.0f, 10.0f, -20.0f), new D3DXVECTOR3(0.0f, 0.0f, 0.0f), new D3DXVECTOR3(0.0f, 1.0f, 0.0f)); // Set up the projection matrix D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f); if(!CreateObject()) { return false; } return true; } //These are actions that take place after the clearing of the buffer and before the present void MyGame::GameDraw() { static float rotationAngleY = 15.0f; static float rotationAngleX = 0.0f; static D3DXMATRIX rotationXMatrix; static D3DXMATRIX rotationYMatrix; D3DXMatrixIdentity(&rotationXMatrix); D3DXMatrixIdentity(&rotationYMatrix); // create the rotation matrix using the rotation angle D3DXMatrixRotationY(&rotationYMatrix, rotationAngleY); D3DXMatrixRotationX(&rotationXMatrix, rotationAngleX); rotationAngleY += (float)D3DX_PI * 0.0008f; rotationAngleX += (float)D3DX_PI * 0.0005f; WorldMatrix = rotationYMatrix * rotationXMatrix; // Set the input layout mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout); pWorldMatrixVarible->SetMatrix((float*)&WorldMatrix); // Set vertex buffer UINT stride = sizeof(VertexPos); UINT offset = 0; mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset); // Set primitive topology mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST); //ViewMatrix._43 += 0.005f; // Combine and send the final matrix to the shader D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix); pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix); // make sure modelObject is valid // Render a model object D3D10_TECHNIQUE_DESC techniqueDescription; modelObject.pTechnique->GetDesc(&techniqueDescription); // Loop through the technique passes for(UINT p=0; p < techniqueDescription.Passes; ++p) { modelObject.pTechnique->GetPassByIndex(p)->Apply(0); // draw the cube using all 36 vertices and 12 triangles mpD3DDevice->Draw(36,0); } } //Render actually incapsulates Gamedraw, so you can call data 
before you actually clear the buffer or after you //present data void MyGame::Render() { DX3dApp::Render(); } bool MyGame::CreateObject() { //Create Layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"NORMAL",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 24, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"TEXCOORD",0, DXGI_FORMAT_R32G32_FLOAT, 0 , 36, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = (sizeof(layout)/sizeof(layout[0])); modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos); for(int i = 0; i < modelObject.numVertices; i += 3) { D3DXVECTOR3 out; D3DXVECTOR3 v1 = vertices[0 + i].pos; D3DXVECTOR3 v2 = vertices[1 + i].pos; D3DXVECTOR3 v3 = vertices[2 + i].pos; D3DXVECTOR3 u = v2 - v1; D3DXVECTOR3 v = v3 - v1; D3DXVec3Cross(&out, &u, &v); D3DXVec3Normalize(&out, &out); vertices[0 + i].normal = out; vertices[1 + i].normal = out; vertices[2 + i].normal = out; } //Create buffer desc D3D10_BUFFER_DESC bufferDesc; bufferDesc.Usage = D3D10_USAGE_DEFAULT; bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices; bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER; bufferDesc.CPUAccessFlags = 0; bufferDesc.MiscFlags = 0; D3D10_SUBRESOURCE_DATA initData; initData.pSysMem = vertices; //Create the buffer HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer); if(FAILED(hr)) return false; /* //Create indices DWORD indices[] = { 0,1,3, 1,2,3 }; ModelObject.numIndices = sizeof(indices)/sizeof(DWORD); bufferDesc.ByteWidth = sizeof(DWORD) * ModelObject.numIndices; bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER; initData.pSysMem = indices; hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &ModelObject.pIndicesBuffer); if(FAILED(hr)) return false;*/ ///////////////////////////////////////////////////////////////////////////// //Set up fx files LPCWSTR effectFilename = L"effect.fx"; modelObject.pEffect = NULL; hr = D3DX10CreateEffectFromFile(effectFilename, NULL, NULL, "fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, mpD3DDevice, NULL, NULL, &modelObject.pEffect, NULL, NULL); if(FAILED(hr)) return false; pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix(); pWorldMatrixVarible = modelObject.pEffect->GetVariableByName("WorldMatrix")->AsMatrix(); pTextureSR = modelObject.pEffect->GetVariableByName("tex2D")->AsShaderResource(); ID3D10ShaderResourceView* textureSRV; D3DX10CreateShaderResourceViewFromFile(mpD3DDevice,L"crate.jpg",NULL,NULL,&textureSRV,NULL); pLightVarible = modelObject.pEffect->GetVariableByName("lightSource")->AsVector(); //Dont sweat the technique. Get it! LPCSTR effectTechniqueName = "Render"; D3DXVECTOR3 vLight(1.0f, 1.0f, 1.0f); pLightVarible->SetFloatVector(vLight); modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName); if(modelObject.pTechnique == NULL) return false; pTextureSR->SetResource(textureSRV); //Create Vertex layout D3D10_PASS_DESC passDesc; modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc); hr = mpD3DDevice->CreateInputLayout(layout, numElements, passDesc.pIAInputSignature, passDesc.IAInputSignatureSize, &modelObject.pVertexLayout); if(FAILED(hr)) return false; return true; } And here is my cube coordinates. I actually only added coordinates to one side. And that is the front side. 
To double-check, I flipped the cube in all directions just to make sure I didn't accidentally place the text on the incorrect side.

//Create vectors and put in vertices
// Create vertex buffer
VertexPos vertices[] =
{
    // BACK SIDES
    { D3DXVECTOR3(-5.0f,  5.0f,  5.0f), D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f), D3DXVECTOR2(1.0, 0.0) },
    { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 1.0) },
    { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },

    // 2 FRONT SIDE
    { D3DXVECTOR3(-5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(2.0, 0.0) },
    { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 2.0) },
    { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 2.0) },
    { D3DXVECTOR3( 5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(2.0, 0.0) },
    { D3DXVECTOR3( 5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(2.0, 2.0) },

    // 3
    { D3DXVECTOR3(-5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.0f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.0f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.0f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },

    // 4
    { D3DXVECTOR3(-5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f, 0.5f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f, 0.5f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f, 0.5f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f, -5.0f,  5.0f), D3DXVECTOR4(1.0f, 0.5f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f, 0.5f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f, -5.0f, -5.0f), D3DXVECTOR4(1.0f, 0.5f, 0.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },

    // 5
    { D3DXVECTOR3( 5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.5f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.5f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.5f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.5f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.5f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3( 5.0f, -5.0f,  5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.5f, 0.0f), D3DXVECTOR2(0.0, 0.0) },

    // 6
    { D3DXVECTOR3(-5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.5f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.5f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.5f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f,  5.0f,  5.0f), D3DXVECTOR4(0.5f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.5f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
    { D3DXVECTOR3(-5.0f, -5.0f,  5.0f), D3DXVECTOR4(0.5f, 0.0f, 1.0f, 0.0f), D3DXVECTOR2(0.0, 0.0) },
};
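
For comparison, here is a minimal sketch (not the poster's code) of how one face is commonly laid out so that the whole texture shows up: each corner of the quad gets a distinct UV in the 0..1 range, and the two triangles reuse those corner UVs. It assumes the same VertexPos layout as above (position, colour, texture coordinate). Note that in the buffer above most faces give every vertex the coordinate (0.0, 0.0), so those faces end up sampling a single corner of the texture.

// Sketch only: one face (two triangles) with a conventional 0..1 UV layout,
// using the same vertex order and winding as the front face above
VertexPos frontFaceExample[] =
{
    { D3DXVECTOR3(-5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0f, 0.0f) }, // top-left
    { D3DXVECTOR3( 5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(1.0f, 0.0f) }, // top-right
    { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0f, 1.0f) }, // bottom-left

    { D3DXVECTOR3(-5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(0.0f, 1.0f) }, // bottom-left
    { D3DXVECTOR3( 5.0f,  5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(1.0f, 0.0f) }, // top-right
    { D3DXVECTOR3( 5.0f, -5.0f, -5.0f), D3DXVECTOR4(0.0f, 1.0f, 0.0f, 0.0f), D3DXVECTOR2(1.0f, 1.0f) }, // bottom-right
};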

    Read the article

  • In GLSL is it possible to offset vertices based on height map colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding height map. In my vertex shader I am trying to offset the Y position of each vertex based upon the colour of the heightmap, with white vertices ending up higher than black ones.

    //Vertex Shader Code
    #version 330

    uniform mat4 modelMatrix;
    uniform mat4 viewMatrix;
    uniform mat4 projectionMatrix;
    uniform sampler2D heightmap;

    layout (location=0) in vec4 vertexPos;
    layout (location=1) in vec4 vertexColour;
    layout (location=3) in vec2 vertexTextureCoord;
    layout (location=4) in float offset;

    out vec4 fragCol;
    out vec4 fragPos;
    out vec2 fragTex;

    void main()
    {
        // Retrieve the current pixel's colour
        vec4 hmColour = texture(heightmap, vertexTextureCoord);

        // Offset the y position by the value of the current texel's colour value?
        vec4 offset = vec4(vertexPos.x, vertexPos.y + hmColour.r, vertexPos.z, 1.0);

        // Final Position
        gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset;

        // Data sent to Fragment Shader.
        fragCol = vertexColour;
        fragPos = vertexPos;
        fragTex = vertexTextureCoord;
    }

    However, the code I have produced only creates a grid with none of the Y vertices higher than any others. This is the C++ code that generates the grid and texture coordinates, which I believe to be correct since the texture is mapped onto the grid (hence the white blob in the middle). The grid lines are generated in the fragment shader, sorry for any confusion. I have tried multiplying the r value of hmColour by 1000, but unfortunately that had no effect. The only other problem it could be is that the texture coordinate data is incorrect?

    for (int z = 0; z < MAP_Z; z++)
    {
        for (int x = 0; x < MAP_X; x++)
        {
            //Generate Vertex Buffer
            vertexData[iVertex++] = float(x) * MAP_X;
            vertexData[iVertex++] = 0;
            vertexData[iVertex++] = -(float)(z) * MAP_Z;

            //Colour Buffer NOT NEEDED
            colourData[iColour++] = 255.0f; // R
            colourData[iColour++] = 1.0f;   // G
            colourData[iColour++] = 0.0f;   // B

            //Texture Buffer
            textureData[iTexture++] = (float)x * (1.0f / MAP_X);
            textureData[iTexture++] = (float)z * (1.0f / MAP_Z);
        }
    }

    The heightmap texture I am trying to use appears like so (without grid lines). This is the corresponding fragment shader:

    // Fragment Shader Code
    #version 330

    uniform sampler2D hmTexture;
    layout (location=0) out vec4 fragColour;

    in vec2 fragTex;
    in vec4 pos;

    void main(void)
    {
        vec2 line = fragTex * 32;

        // Without Gridlines
        fragColour = texture(hmTexture, fragTex);

        // With grid lines
        // + mix(vec4(0.0, 0.0, 1.0, 0.0), vec4(1.0, 1.0, 1.0, 1.0),
        //   smoothstep(0.05, fract(line.y), 0.99) * smoothstep(0.05, fract(line.x), 0.99));
    }
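
    Not a fix for the poster's exact setup, but a minimal sketch of the displacement step under two assumptions: the heightmap sampler uniform is actually bound to the texture unit the heightmap texture is attached to (glUniform1i for the sampler location, plus glActiveTexture/glBindTexture before drawing), and the sampled height is scaled so the offset is visible against grid positions that are spaced MAP_X units apart. The heightScale uniform is hypothetical; textureLod is used because a vertex shader has no screen-space derivatives for automatic mip selection.

    #version 330
    // Sketch only: GLSL 330 vertex shader with heightmap displacement

    uniform mat4 modelMatrix;
    uniform mat4 viewMatrix;
    uniform mat4 projectionMatrix;
    uniform sampler2D heightmap;
    uniform float heightScale;   // hypothetical uniform, e.g. 50.0, set from the application

    layout (location = 0) in vec4 vertexPos;
    layout (location = 3) in vec2 vertexTextureCoord;

    out vec2 fragTex;

    void main()
    {
        // Sample the red channel explicitly at mip level 0
        float height = textureLod(heightmap, vertexTextureCoord, 0.0).r;

        // Displace along Y by a scaled amount so it is visible at this grid scale
        vec4 displaced = vec4(vertexPos.x, vertexPos.y + height * heightScale, vertexPos.z, 1.0);

        gl_Position = projectionMatrix * viewMatrix * modelMatrix * displaced;
        fragTex = vertexTextureCoord;
    }

    If the fetch still returns zero, the first thing worth checking on the C++ side is that the heightmap uniform was set with glUniform1i to the unit the texture is bound to; a sampler left pointing at an unbound unit typically reads as black, which would also explain why multiplying by 1000 changed nothing.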

    Read the article

  • Tessellation Texture Coordinates

    - by Stuart Martin
    Firstly, some info: I'm using DirectX 11 and C++, and I'm a fairly good programmer but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have hit a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. When I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high-detail model, but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain, and finally into the pixel shader for rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this produces the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as the texturing. Below is a copy of my hull shader, which I think does not work correctly because of the way I am passing the data through. If anyone can help me out, either by suggesting an alternative way to get the required data into the pixel shader or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!

    cbuffer TessellationBuffer
    {
        float tessellationAmount;
        float3 padding;
    };

    struct HullInputType
    {
        float3 position : POSITION;
        float2 tex : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        float3 binormal : BINORMAL;
        float2 tex2 : TEXCOORD1;
    };

    struct ConstantOutputType
    {
        float edges[3] : SV_TessFactor;
        float inside : SV_InsideTessFactor;
    };

    struct HullOutputType
    {
        float3 position : POSITION;
        float2 tex : TEXCOORD0;
        float3 normal : NORMAL;
        float3 tangent : TANGENT;
        float3 binormal : BINORMAL;
        float2 tex2 : TEXCOORD1;
        float4 depthPosition : TEXCOORD2;
    };

    ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
    {
        ConstantOutputType output;
        output.edges[0] = tessellationAmount;
        output.edges[1] = tessellationAmount;
        output.edges[2] = tessellationAmount;
        output.inside = tessellationAmount;
        return output;
    }

    [domain("tri")]
    [partitioning("integer")]
    [outputtopology("triangle_cw")]
    [outputcontrolpoints(3)]
    [patchconstantfunc("ColorPatchConstantFunction")]
    HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
    {
        HullOutputType output;
        output.position = patch[pointId].position;
        output.tex = patch[pointId].tex;
        output.tex2 = patch[pointId].tex2;
        output.normal = patch[pointId].normal;
        output.tangent = patch[pointId].tangent;
        output.binormal = patch[pointId].binormal;
        return output;
    }

    Edited to include the domain shader:

    [domain("tri")]
    PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
    {
        float3 vertexPosition;
        PixelInputType output;

        // Determine the position of the new vertex.
        vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position;

        output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
        output.position = mul(output.position, viewMatrix);
        output.position = mul(output.position, projectionMatrix);
        output.depthPosition = output.position;

        output.tex = patch[0].tex;
        output.tex2 = patch[0].tex2;
        output.normal = patch[0].normal;
        output.tangent = patch[0].tangent;
        output.binormal = patch[0].binormal;

        return output;
    }
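
    Not the poster's code, but a sketch of the usual fix for the flat patches: in the domain shader, interpolate each per-vertex attribute with the same barycentric SV_DomainLocation weights already used for the position, rather than copying everything from patch[0]. It assumes the HullOutputType and PixelInputType fields shown above. The hull shader's per-control-point pass-through looks reasonable; taking patch[0] for every generated vertex is consistent with each patch rendering as one solid colour.

    // Sketch only: replaces the five patch[0] assignments at the end of the domain shader
    output.tex      = uvwCoord.x * patch[0].tex      + uvwCoord.y * patch[1].tex      + uvwCoord.z * patch[2].tex;
    output.tex2     = uvwCoord.x * patch[0].tex2     + uvwCoord.y * patch[1].tex2     + uvwCoord.z * patch[2].tex2;
    output.normal   = normalize(uvwCoord.x * patch[0].normal   + uvwCoord.y * patch[1].normal   + uvwCoord.z * patch[2].normal);
    output.tangent  = normalize(uvwCoord.x * patch[0].tangent  + uvwCoord.y * patch[1].tangent  + uvwCoord.z * patch[2].tangent);
    output.binormal = normalize(uvwCoord.x * patch[0].binormal + uvwCoord.y * patch[1].binormal + uvwCoord.z * patch[2].binormal);

    The normals, tangents and binormals are renormalised after interpolation, which should also tighten up the lighting that was described as slightly off.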

    Read the article
