Search Results

Search found 1333 results on 54 pages for 'geometry shader'.

Page 26 of 54

  • Speeding up procedural texture generation

    - by FalconNL
    Recently I've begun working on a game that takes place in a procedurally generated solar system. After a bit of a learning curve (having worked with neither Scala, OpenGL ES 2, nor libGDX before), I have a basic tech demo going where you spin around a single procedurally textured planet. The problem I'm running into is the performance of the texture generation.

    A quick overview of what I'm doing: a planet is a cube that has been deformed to a sphere. To each side, an n x n (e.g. 256 x 256) texture is applied, and the six sides are bundled into one 8n x n texture that is sent to the fragment shader. The last two slots are not used; they're only there to make sure the width is a power of 2. The texture is currently generated on the CPU, using the updated 2012 version of the simplex noise algorithm linked to in the paper 'Simplex noise demystified'.

    The scene I'm using to test the algorithm contains two spheres: the planet and the background. Both use a greyscale texture consisting of six octaves of 3D simplex noise, so for example if we choose 128 x 128 as the texture size there are 128 x 128 x 6 x 2 x 6 = about 1.2 million calls to the noise function. The closest you will get to the planet is about what's shown in the screenshot, and since the game's target resolution is 1280x720 that means I'd prefer to use 512 x 512 textures. Combine that with the fact that the actual textures will of course be more complicated than basic noise (there will be a day and a night texture, blended in the fragment shader based on sunlight, and a specular mask; I need noise for continents, terrain color variation, clouds, city lights, etc.) and we're looking at something like 512 x 512 x 6 x 3 x 15 = 70 million noise calls for the planet alone.

    In the final game there will be activities when traveling between planets, so a wait of 5 or 10 seconds, possibly 20, would be acceptable, since I can calculate the texture in the background while traveling, though obviously the faster the better. Getting back to our test scene, performance on my PC isn't too terrible, though still too slow considering the final result is going to be about 60 times worse:

        128x128 : 0.1s
        256x256 : 0.4s
        512x512 : 1.7s

    This is after I moved all performance-critical code to Java, since trying to do it in Scala was a lot worse. Running this on my phone (a Samsung Galaxy S3), however, produces a more problematic result:

        128x128 : 2s
        256x256 : 7s
        512x512 : 29s

    Already far too long, and that's not even factoring in that it'll be minutes instead of seconds in the final version. Clearly something needs to be done. Personally, I see a few potential avenues, though I'm not particularly keen on any of them yet:

    - Don't precalculate the textures, but let the fragment shader calculate everything. Probably not feasible: at one point I had the background as a fullscreen quad with a pixel shader, and I got about 1 fps on my phone.
    - Use the GPU to render the texture once, store it, and use the stored texture from then on (see the sketch after this list). Upside: might be faster than doing it on the CPU, since the GPU is supposed to be faster at floating-point calculations. Downside: effects that cannot (easily) be expressed as functions of simplex noise (e.g. gas planet vortices, moon craters) are a lot more difficult to code in GLSL than in Scala/Java.
    - Calculate a large number of noise textures and ship them with the application. I'd like to avoid this if at all possible.
    - Lower the resolution. Buys me a 4x performance gain, which isn't really enough, plus I lose a lot of quality.
    - Find a faster noise algorithm. If anyone has one I'm all ears, but simplex is already supposed to be faster than Perlin.
    - Adopt a pixel-art style, allowing for lower-resolution textures and fewer noise octaves. While I originally envisioned the game in this style, I've come to prefer the realistic approach.
    - I'm doing something wrong, and the performance should already be one or two orders of magnitude better. If this is the case, please let me know.

    If anyone has any suggestions, tips, workarounds, or other comments regarding this problem, I'd love to hear them.
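    A minimal sketch of that second avenue under OpenGL ES 2.0 assumptions: run the noise fragment shader into a framebuffer-attached texture exactly once, then sample the stored result forever after. Here noiseProgram and drawFullScreenQuad() stand in for the asker's own shader and quad setup; they are not real libGDX API.

        void drawFullScreenQuad(void); // assumed: binds a unit quad and issues the draw

        GLuint bakePlanetTexture(GLuint noiseProgram, int n)
        {
            GLuint tex = 0, fbo = 0;

            // target texture: the same 8n x n cube-face atlas described above
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 8 * n, n, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            // render the noise shader into the texture exactly once
            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex, 0);
            glViewport(0, 0, 8 * n, n);
            glUseProgram(noiseProgram); // fragment shader evaluates the six octaves
            drawFullScreenQuad();       // one quad, one pass

            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            glDeleteFramebuffers(1, &fbo);
            return tex; // sample this from now on; the CPU never touches the noise again
        }

    Only effects expressible in GLSL benefit, which is the downside already noted; hand-authored details would still have to be composited in afterwards.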

    Read the article

  • Problems with texture orientation in space

    - by frankie
    I am currently drawing textures in 3D space and have some problems with their orientation. I'd like my textures always to be oriented front-face to the user. My desired result looks like this (screenshot omitted): the text size stays unchanged while the world rotates, and the text stays oriented front-face to the user. At the moment I can draw text in 3D space, but it is not front-facing; it rotates with the world. I got these results with the following shaders:

    Vertex shader:

        uniform vec3 Position;

        void main() {
            gl_Position = vec4(Position, 1.0);
        }

    Geometry shader:

        layout(points) in;
        layout(triangle_strip, max_vertices = 4) out;

        out vec2 fsTextureCoordinates;

        uniform mat4 projectionMatrix;
        uniform mat4 modelViewMatrix;
        uniform sampler2D og_texture0;
        uniform float og_highResolutionSnapScale;
        uniform vec2 u_originScale;

        void main() {
            vec2 halfSize = vec2(textureSize(og_texture0, 0)) * 0.5 * og_highResolutionSnapScale;
            vec4 center = gl_in[0].gl_Position;
            center.xy += (u_originScale * halfSize);

            vec4 v0 = vec4(center.xy - halfSize, center.z, 1.0);
            vec4 v1 = vec4(center.xy + vec2(halfSize.x, -halfSize.y), center.z, 1.0);
            vec4 v2 = vec4(center.xy + vec2(-halfSize.x, halfSize.y), center.z, 1.0);
            vec4 v3 = vec4(center.xy + halfSize, center.z, 1.0);

            gl_Position = projectionMatrix * modelViewMatrix * v0;
            fsTextureCoordinates = vec2(0.0, 0.0);
            EmitVertex();

            gl_Position = projectionMatrix * modelViewMatrix * v1;
            fsTextureCoordinates = vec2(1.0, 0.0);
            EmitVertex();

            gl_Position = projectionMatrix * modelViewMatrix * v2;
            fsTextureCoordinates = vec2(0.0, 1.0);
            EmitVertex();

            gl_Position = projectionMatrix * modelViewMatrix * v3;
            fsTextureCoordinates = vec2(1.0, 1.0);
            EmitVertex();
        }

    Fragment shader:

        in vec2 fsTextureCoordinates;
        out vec4 fragmentColor;

        uniform sampler2D og_texture0;
        uniform vec3 u_color;

        void main() {
            vec4 color = texture(og_texture0, fsTextureCoordinates);
            if (color.a == 0.0) {
                discard;
            }
            fragmentColor = vec4(color.rgb * u_color.rgb, color.a);
        }

    Any ideas how to get my desired result?

    EDIT 1: I made an edit in my geometry shader and got part of the label drawn in a corner of the screen, but it is not rotating:

        ..........
        vec4 centerProjected = projectionMatrix * modelViewMatrix * center;
        centerProjected /= centerProjected.w;

        vec4 v0 = vec4(centerProjected.xy - halfSize, 0.0, 1.0);
        vec4 v1 = vec4(centerProjected.xy + vec2(halfSize.x, -halfSize.y), 0.0, 1.0);
        vec4 v2 = vec4(centerProjected.xy + vec2(-halfSize.x, halfSize.y), 0.0, 1.0);
        vec4 v3 = vec4(centerProjected.xy + halfSize, 0.0, 1.0);

        gl_Position = og_viewportOrthographicMatrix * v0;
        ..........
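    For reference, a hedged sketch of the usual screen-aligned fix, embedded as a C string the way other listings on this page embed shaders: project only the center, then add the per-corner offset in clip space, pre-scaled by w so it survives the perspective divide unchanged. og_viewport (viewport size in pixels) and cornerOffsetPx (the signed halfSize of the corner being emitted) are assumed names, not the asker's.

        // inside the geometry shader, per emitted corner
        const char* billboardCornerGlsl =
            "vec4 centerClip = projectionMatrix * modelViewMatrix * center;        \n"
            "vec2 offsetNdc  = cornerOffsetPx / (0.5 * og_viewport);               \n" // pixels -> NDC
            "gl_Position = centerClip + vec4(offsetNdc * centerClip.w, 0.0, 0.0);  \n";

    Because the offset is added after projection, the quad no longer rotates with the world, yet it keeps the center's depth and depth-tests normally.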

    Read the article

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into forward, side, and up vectors suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix and move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) to perform directional lighting in my vertex shader.

    The problem is that the light direction, (0, 1, 0) for example, is relative to where the camera is looking and not to actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly.

    Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudocode for my rendering loop:

        pMatrix = new Matrix();
        pMatrix = makePerspective(...)

        mvMatrix = new Matrix()
        camera.apply(mvMatrix); // Calls gluLookAt

        // Move the object into position.
        mvMatrix.translatev(position);
        mvMatrix.rotatef(rotation.x, 1, 0, 0);
        mvMatrix.rotatef(rotation.y, 0, 1, 0);
        mvMatrix.rotatef(rotation.z, 0, 0, 1);

        var nMatrix = new Matrix();
        nMatrix.set(mvMatrix.get().getInverse().getTranspose());

        // Set vertex shader uniforms.
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false,
            new Float32Array(pMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false,
            new Float32Array(mvMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false,
            new Float32Array(nMatrix.getFlattened()));

        // ...
        gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    And the corresponding vertex shader:

        // Attributes
        attribute vec3 aVertexPosition;
        attribute vec4 aVertexColor;
        attribute vec3 aVertexNormal;

        // Uniforms
        uniform mat4 uMVMatrix;
        uniform mat4 uNMatrix;
        uniform mat4 uPMatrix;

        // Varyings
        varying vec4 vColor;

        // Constants
        const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons.
        const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

        float ComputeLighting() {
            vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
            transformedNormal = uNMatrix * transformedNormal;
            float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
            return max(base, 0.0);
        }

        void main(void) {
            gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
            float lightWeight = ComputeLighting();
            vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
        }

    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL if it's missing would be appreciated.
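    A hedged sketch of the usual bookkeeping, written with GLM for brevity (the two functions port directly to the Matrix class above): keep deriving the normal matrix from the full model-view, but rotate the world-space light direction into the same eye space before the dot product, instead of hard-coding it as a shader constant.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_inverse.hpp>

        // Directions use w = 0 so they rotate with the view but never translate.
        glm::vec3 eyeSpaceLightDir(const glm::mat4& viewMatrix, const glm::vec3& worldLightDir)
        {
            return glm::normalize(glm::vec3(viewMatrix * glm::vec4(worldLightDir, 0.0f)));
        }

        // Inverse-transpose of the upper 3x3 of the model-view: the matrix normals
        // should be multiplied by. (Note that building a vec4 with w = 1.0, as in
        // ComputeLighting above, makes the normal wrongly pick up the translation
        // column of a full 4x4 normal matrix.)
        glm::mat3 normalMatrix(const glm::mat4& modelViewMatrix)
        {
            return glm::inverseTranspose(glm::mat3(modelViewMatrix));
        }

    Uploaded as a uniform each frame, the eye-space light direction replaces the LIGHT_DIRECTION constant, and the lighting becomes independent of where the camera points.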

    Read the article

  • C++ OpenGL wireframe cube rendering blank

    - by caleb.breckon
    I'm just trying to draw a bunch of lines that make up a "cube". I can't for the life of me figure out why this is producing a black screen. The debugger does not break at any point. I'm sure it's a problem with my pointers, as I'm only decent with them in regular C++, and in OpenGL it gets even worse.

        const char* vertexSource =
            "#version 150\n"
            "in vec3 position;"
            "void main() {"
            "   gl_Position = vec4(position, 1.0);"
            "}";

        const char* fragmentSource =
            "#version 150\n"
            "out vec4 outColor;"
            "void main() {"
            "   outColor = vec4(1.0, 1.0, 1.0, 1.0);"
            "}";

        int main()
        {
            initializeGLFW();

            // Initialize GLEW
            glewExperimental = GL_TRUE;
            glewInit();

            // Create Vertex Array Object
            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            // Create a Vertex Buffer Object and copy the vertex data to it
            GLuint vbo;
            glGenBuffers(1, &vbo);

            float vertices[] = {
                 1.0f,  1.0f,  1.0f, // Vertex 0 (X, Y, Z)
                -1.0f,  1.0f,  1.0f, // Vertex 1 (X, Y, Z)
                -1.0f, -1.0f,  1.0f, // Vertex 2 (X, Y, Z)
                 1.0f, -1.0f,  1.0f, // Vertex 3 (X, Y, Z)
                 1.0f,  1.0f, -1.0f, // Vertex 4 (X, Y, Z)
                -1.0f,  1.0f, -1.0f, // Vertex 5 (X, Y, Z)
                -1.0f, -1.0f, -1.0f, // Vertex 6 (X, Y, Z)
                 1.0f, -1.0f, -1.0f  // Vertex 7 (X, Y, Z)
            };

            GLuint indices[] = {
                0, 1, 1, 2, 2, 3, 3, 0,
                4, 5, 5, 6, 6, 7, 7, 4,
                0, 4, 1, 5, 2, 6, 3, 7
            };

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
            //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo);
            //glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

            // Create and compile the vertex shader
            GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(vertexShader, 1, &vertexSource, NULL);
            glCompileShader(vertexShader);

            // Create and compile the fragment shader
            GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
            glCompileShader(fragmentShader);

            // Link the vertex and fragment shader into a shader program
            GLuint shaderProgram = glCreateProgram();
            glAttachShader(shaderProgram, vertexShader);
            glAttachShader(shaderProgram, fragmentShader);
            glBindFragDataLocation(shaderProgram, 0, "outColor");
            glLinkProgram(shaderProgram);
            glUseProgram(shaderProgram);

            // Specify the layout of the vertex data
            GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
            glEnableVertexAttribArray(posAttrib);
            glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);

            // Main loop
            while (glfwGetWindowParam(GLFW_OPENED))
            {
                // Clear the screen to black
                glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
                glClear(GL_COLOR_BUFFER_BIT);

                // Draw lines from 2 vertices
                glDrawElements(GL_LINES, sizeof(indices), GL_UNSIGNED_INT, indices);

                // Swap buffers
                glfwSwapBuffers();
            }

            // Clean up
            glDeleteProgram(shaderProgram);
            glDeleteShader(fragmentShader);
            glDeleteShader(vertexShader);
            //glDeleteBuffers(1, &ebo);
            glDeleteBuffers(1, &vbo);
            glDeleteVertexArrays(1, &vao);

            glfwTerminate();
            exit(EXIT_SUCCESS);
        }
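    Two things stand out against the core-profile context the #version 150 shaders imply, sketched below for reference: a core profile rejects client-side index arrays, so the commented-out GL_ELEMENT_ARRAY_BUFFER lines are actually required (with their own buffer, not vbo), and glDrawElements takes an index count, whereas sizeof(indices) is a byte size (96 here, not 24).

        // at setup, while the VAO is bound (the index-buffer binding is recorded in it)
        GLuint ebo;
        glGenBuffers(1, &ebo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

        // in the main loop: 24 indices, read from offset 0 of the bound index buffer
        glDrawElements(GL_LINES, 24, GL_UNSIGNED_INT, 0);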

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • MSTest on x64 C++/CLI

    - by Oyvind
    I've got a problem using MSTest on x64: the test project depends on a couple of C++/CLI assemblies and fails to load for some reason. In Visual Studio I get (stripped down):

        Error loading D:\xxx\Xxx.Test.dll: Unable to load the test container 'D:\xxx\Xxx.Test.dll' or one of its dependencies. Error details: System.BadImageFormatException: Could not load file or assembly 'Common.Geometry.Native, Version=1.1.4574.22395, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    Running MSTest manually in a command prompt, I get:

        Unable to load the test container 'D:\xxx\Xxx.Test.dll' or one of its dependencies. Error details: System.IO.FileNotFoundException: Could not load file or assembly 'Common.Geometry.Native, Version=1.1.4574.22395, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.

    Details worth mentioning:

    - The test project itself is compiled using 'Any CPU'.
    - I use an x64-specific testrunconfig.
    - Dependency Walker shows no missing native dependencies in the C++/CLI assembly (Common.Geometry.Native).
    - Even more interesting, there is another test project in the same solution using the same C++/CLI assembly (Common.Geometry.Native), and it runs without any problems.
    - I have also verified that there are no 32-bit assemblies/DLLs interfering.

    Any suggestions are welcome!

    Read the article

  • Ogre material scripts; how do I give a technique multiple lod_indexes?

    - by BlueNovember
    I have an Ogre material script that defines four rendering techniques: one using GLSL shaders, then three others that just use textures of different resolutions. I want to use the GLSL shader unconditionally if the graphics card supports it, and the other three textures depending on camera distance. At the moment my script is:

        material foo
        {
            lod_distances 1600 2000

            technique shaders
            {
                lod_index 0
                lod_index 1
                lod_index 2
                // various passes here
            }
            technique high_res
            {
                lod_index 0
                // various passes here
            }
            technique medium_res
            {
                lod_index 1
                // various passes here
            }
            technique low_res
            {
                lod_index 2
                // various passes here
            }
        }

    Extra information: the Ogre manual says:

        "Increasing indexes denote lower levels of detail. You can (and often will) assign more than one technique to the same LOD index; what this means is that OGRE will pick the best technique of the ones listed at the same LOD index. OGRE determines which one is 'best' by which one is listed first."

    Currently, on a machine supporting the GLSL version I am using, the script behaves as follows:

        Camera > 2000          : Shader technique
        Camera 1600 <= 2000    : Medium
        Camera <= 1600         : High

    If I change the lod order in the shader technique to

        {
            lod_index 2
            lod_index 1
            lod_index 0
        }

    the behaviour becomes:

        Camera > 2000          : Low
        Camera 1600 <= 2000    : Medium
        Camera <= 1600         : Shader

    implying only the last lod_index is used. If I change it to lod_index 0 1 2, it shouts at me:

        Compiler error: fewer parameters expected in foo.material(#): lod_index only supports 1 argument

    So how do I specify a technique to have three lod_indexes? Duplication works:

        technique shaders
        {
            lod_index 0
            // various passes here
        }
        technique shaders1
        {
            lod_index 1
            // passes repeated here
        }
        technique shaders2
        {
            lod_index 2
            // passes repeated here
        }

    ...but it's ugly.

    Read the article

  • How To: Using spatial data with Entity Framework and Connector/Net

    - by GABMARTINEZ
    One of the new features introduced in Entity Framework 5.0 is the incorporation of some new types of data within an Entity Data Model: the spatial data types. These types allow us to perform operations on coordinate values in an easier way. There's no need to add stored routines or functions for every operation among these geometry types; the user now has the alternative to put this logic in the application or keep it in the database. In the new 6.7.4 version this feature is also incorporated into the Connector/Net library, so our users can start exploring it and provide us feedback or comments about this new functionality. Through this tutorial on how to create a Code First entity model with a geometry column, we'll show an example of using geometry types and some common operations when working with them inside an application.

    Requirements:

    - Connector/Net 6.7.4
    - Entity Framework 5.0
    - .NET Framework 4.5
    - Basic understanding of Entity Framework and the C# language
    - An installed and running instance of MySQL Server 5.5.x or 5.6.10
    - Visual Studio 2012

    Step One: Create a new Console Application

    Inside Visual Studio, select the File -> New Project menu option and select the Console Application template. Also make sure the .NET 4.5 version is selected, so the new features for EF 5.0 will work with the application.

    Step Two: Add the Entity Framework package

    For adding the Entity Framework package there is more than one option: the Package Manager Console or the Manage NuGet Packages dialog. If you want to open the Package Manager Console, go to the Tools menu -> Library Package Manager -> Package Manager Console. On the Package Manager Console, type:

        Install-Package EntityFramework

    This will add to the project a reference to the latest released non-alpha version of Entity Framework.

    Step Three: Adding the entity class and DbContext

    We'll add a simple class that represents a table entity for saving some places and their locations, using a DbGeometry column that will be mapped to a Geometry type in MySQL. After that, some operations can be performed using this data.

        public class MyPlace
        {
            [Key]
            public int Id { get; set; }
            public string name { get; set; }
            public DbGeometry location { get; set; }
        }

        public class JourneyDb : DbContext
        {
            public DbSet<MyPlace> MyPlaces { get; set; }
        }

    Also make sure to add the connection string to the App.config file, as in the example:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <configSections>
            <!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
            <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
          </configSections>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
          </startup>
          <connectionStrings>
            <add name="JourneyDb" connectionString="server=localhost;userid=root;pwd=;database=journeydb" providerName="MySql.Data.MySqlClient"/>
          </connectionStrings>
          <entityFramework>
          </entityFramework>
        </configuration>

    Note also that the <entityFramework> section is empty.

    Step Four: Adding some new records

    In the Program.cs file, add the following code to the Main method so the database gets created and some new data is added to the new table. This code adds some records containing specific locations. After they are added, a distance function is used to find out how far each location is from the Queens Village Station in New York.

        static void Main(string[] args)
        {
            using (JourneyDb cxt = new JourneyDb())
            {
                cxt.Database.Delete();
                cxt.Database.Create();

                cxt.MyPlaces.Add(new MyPlace()
                {
                    name = "JFK INTERNATIONAL AIRPORT OF NEW YORK",
                    location = DbGeometry.FromText("POINT(40.644047 -73.782291)"),
                });

                cxt.MyPlaces.Add(new MyPlace()
                {
                    name = "ALLEY POND PARK",
                    location = DbGeometry.FromText("POINT(40.745696 -73.742638)"),
                });

                cxt.MyPlaces.Add(new MyPlace()
                {
                    name = "CUNNINGHAM PARK",
                    location = DbGeometry.FromText("POINT(40.735031 -73.768387)"),
                });

                cxt.MyPlaces.Add(new MyPlace()
                {
                    name = "QUEENS VILLAGE STATION",
                    location = DbGeometry.FromText("POINT(40.717957 -73.736501)"),
                });

                cxt.SaveChanges();

                var points = (from p in cxt.MyPlaces
                              select new { p.name, p.location });

                foreach (var item in points)
                {
                    Console.WriteLine("Location " + item.name + " has a distance in Km from Queens Village Station "
                        + DbGeometry.FromText("POINT(40.717957 -73.736501)").Distance(item.location) * 100);
                }
                Console.ReadKey();
            }
        }

    Output:

        Location JFK INTERNATIONAL AIRPORT OF NEW YORK has a distance from Queens Village Station 8.69448802402959 Km.
        Location ALLEY POND PARK has a distance from Queens Village Station 2.84097675104912 Km.
        Location CUNNINGHAM PARK has a distance from Queens Village Station 3.61695793727275 Km.
        Location QUEENS VILLAGE STATION has a distance from Queens Village Station 0 Km.

    Conclusion: Adding spatial data to a table is easier than before with Entity Framework 5.0. This new Entity Framework feature, which handles spatial data columns within the data layer, has a lot of integrated functions and methods to ease this type of task.

    Notes: This version of Connector/Net is released as GA, so it is stable enough to be used in a production environment. Please send us your comments or questions using this blog or at the Forums, where we keep answering any questions you have about Connector/Net and MySQL Server. A copy of this sample project can be downloaded here. This application does not include any libraries, so you will have to add them before running it. Happy MySQL/.NET coding.

    Read the article

  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground in full-screen mode, any applications on my second monitor (such as YouTube videos; it's not app-specific) drop their frame rate to about 2-3 FPS. It seems like some sort of power-management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280 FPS when the frame rate is uncapped; if I cap it at 60 FPS using the in-game option, it has no effect on the performance of the background app.

    Summary:

        Operating System: Windows 8 Pro 64-bit (Desktop)
        CPU: Intel Core i7 3820 @ 3.60GHz, 42 °C (Sandy Bridge-E, 32nm)
        RAM: 12.0GB Triple-Channel DDR3 @ 533MHz (7-7-7-20)
        Motherboard: Gigabyte Technology Co., Ltd. X79-UD3 (SOCKET 0), 37 °C
        Graphics: 2x DELL U2713HM (2560x1440@59Hz), 1280MB NVIDIA GeForce GTX 570 (Gigabyte), 58 °C
        Hard Drives: 212GB Volume0 (RAID); 2x 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA), 36 °C / 34 °C
        Optical Drives: No optical disk drives detected
        Audio: ASUS Xonar Essence STX Audio Device

    Monitors 1 and 2 (both DELL U2713HM on the GeForce GTX 570, one Primary, one Secondary):

        Current Resolution: 2560x1440 pixels; Work Resolution: 2560x1400 pixels
        State: Enabled; output devices support multiple displays, Extended
        Monitor Width/Height: 2560x1440, 32 bits per pixel, 59 Hz
        Devices: \\.\DISPLAY4\Monitor0 and \\.\DISPLAY5\Monitor0

    NVIDIA GeForce GTX 570:

        Manufacturer: NVIDIA; Model: GeForce GTX 570; GPU: GF110; Device ID: 10DE-1086; Revision: A2
        Subvendor: Gigabyte (1458); Series: GeForce GTX 500
        Current Performance Level: Level 3
        Current GPU Clock: 845 MHz; Memory Clock: 1900 MHz; Shader Clock: 1690 MHz
        Voltage: 0.988 V; Technology: 40 nm; Die Size: 520 mm²; Release Date: Dec 07, 2010
        DirectX Support: 11.0; OpenGL Support: 5.0; Bus Interface: PCI Express x16
        Temperature: 57 °C; Driver version: 9.18.13.2018; BIOS Version: 70.10.55.00.01
        ROPs: 40; Shaders: 512 unified; Memory: 1280 MB GDDR5, 64x5 (320 bit) bus
        Filtering Modes: 16x Anisotropic; Noise Level: Moderate; Max Power Draw: 219 Watts
        Performance levels (3):
          Level 1 - "Default":         GPU 50 MHz,  Memory 135 MHz,  Shader 101 MHz
          Level 2 - "2D Desktop":      GPU 405 MHz, Memory 324 MHz,  Shader 810 MHz
          Level 3 - "3D Applications": GPU 845 MHz, Memory 1900 MHz, Shader 1690 MHz

    Things I've tried:

    1) Updating the graphics driver
    2) Setting the Windows power mode to High Performance
    3) Resetting the NVIDIA global performance settings to default

    Read the article

  • using a texture mesh and wireframe mesh in threejs

    - by Andy Poes
    I'm trying to draw a wireframe mesh and a textured mesh in three.js, but when both are added to my scene the textured mesh doesn't display. I'm having trouble creating two meshes that share the same geometry where one of the materials is wireframe and the other is a texture. If one of the materials is wireframe and the other is just a color fill, it works fine; but as soon as I make the second material a texture, it stops working. If I comment out scene.add( wireMesh );, then the textured mesh shows up. Code below:

        var wireMat = new THREE.MeshBasicMaterial( { color: 0x00FFFF, wireframe: true, transparent: true, overdraw: true } );
        var wireMesh = new THREE.Mesh( geometry, wireMat );
        scene.add( wireMesh );

        var texture = THREE.ImageUtils.loadTexture( 'textures/world.jpg' );
        var imageMat = new THREE.MeshBasicMaterial( { color: 0xffffff, map: texture } );
        var fillMesh = new THREE.Mesh( geometry, imageMat );
        scene.add( fillMesh );

    Read the article

  • What's the best way to convert a .eps (CMYK) to a .jpg (RGB) with Image Magick

    - by Slinky
    Hi All, I have a bunch of .eps files (CMYK) that I need to convert to .jpg (RGB) files. The following command sometimes gives me under- or over-saturated .jpg images when compared to the source EPS file:

        $cmd = "convert -density 300 -quality 100% -colorspace RGB ".$epsURL." -flatten -strip ".$convertedURL;

    Is there a smarter way to do this, such that the converted image has the same qualities as the source EPS file? Here is an example of the source file info:

        Image: rejm.eps
        Format: PS (PostScript)
        Class: DirectClass
        Geometry: 537x471
        Base geometry: 1074x941
        Type: ColorSeparation
        Endianess: Undefined
        Colorspace: CMYK
        Channel depth: Cyan: 8-bit, Magenta: 8-bit, Yellow: 8-bit, Black: 8-bit
        Channel statistics:
          Cyan:    Min: 0 (0)  Max: 255 (1)         Mean: 161.913 (0.634955)  Standard deviation: 72.8257 (0.285591)
          Magenta: Min: 0 (0)  Max: 255 (1)         Mean: 184.261 (0.722591)  Standard deviation: 75.7933 (0.297229)
          Yellow:  Min: 0 (0)  Max: 255 (1)         Mean: 70.6607 (0.277101)  Standard deviation: 39.8677 (0.156344)
          Black:   Min: 0 (0)  Max: 195 (0.764706)  Mean: 34.4382 (0.135052)  Standard deviation: 38.1863 (0.14975)
        Total ink density: 292%
        Colors: 210489
        Rendering intent: Undefined
        Resolution: 28.35x28.35
        Units: PixelsPerCentimeter
        Filesize: 997.727kb
        Interlace: None
        Background color: white
        Border color: #DFDFDFDFDFDF
        Matte color: grey74
        Page geometry: 537x471+0+0
        Dispose: Undefined
        Iterations: 0
        Compression: Undefined
        Orientation: Undefined
        Signature: 8ea00688cb5ae496812125e8a5aea40b0f0e69c9b49b2dc4eb028b22f76f2964
        Profile-iptc: 19738 bytes

    Thanks

    Read the article

  • Remove unnecessary error raised by paperclip - filename is not recognized by the 'identify' command.

    - by kgpdeveloper
    I've been searching for how to solve this issue but could not find any real, working solution. User model:

        validates_attachment_content_type :avatar,
          :content_type => ['image/jpeg', 'image/pjpeg', 'image/x-png', 'image/png', 'image/gif']

    An unnecessary error shows up when an invalid file type is uploaded:

        Avatar /tmp/sample,23283,0.txt is not recognized by the 'identify' command.

    Server details: Ubuntu and Debian, on which `which identify` resolves to /usr/bin/identify. Specifying the command path does not solve the issue:

        Paperclip.options[:command_path] = "/usr/bin"

    Any ideas? I looked at the geometry.rb file, and there seems to be something wrong with it:

        def self.from_file file
          file = file.path if file.respond_to? "path"
          geometry = begin
            Paperclip.run("identify", %Q[-format "%wx%h" "#{file}"[0]])
          rescue PaperclipCommandLineError
            ""
          end
          parse(geometry) ||
            raise(NotIdentifiedByImageMagickError.new("#{file} is not recognized by the 'identify' command."))
        end

    Let me know if anyone has solved the same issue and how. Many thanks.

    Read the article

  • glTexParameter and filtering in OpenGL and GLSL?

    - by sharoz
    I have a couple of questions about glTexParameter and filtering.

    1) What is the scope when applying a glTexParameter (specifically the filtering)? Here's a scenario:

    - Bind a texture. Set the filters to LINEAR.
    - Set the texture to "Sampler1" of a shader.
    - Bind another texture. Set its filters to NEAREST.
    - Set that texture to "Sampler2" of a shader.
    - Draw.

    When I use the textures in a shader, will one be linear and the other be nearest? Or will they both be nearest because it was called last? (A sketch of what actually happens follows below.)

    2) Is it possible to set the filtering method in GLSL?

    Thanks in advance!
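    A hedged sketch of question 1 (texA and texB are assumed to be textures created earlier): glTexParameteri writes into the currently bound texture object, not into global state, so each texture keeps its own filter, and the draw samples one linearly and the other nearest. As for question 2, plain GLSL cannot select the filter; from GL 3.3 onward, sampler objects (glGenSamplers/glBindSampler) can override the per-texture settings, which is the closest thing to choosing filtering at bind time.

        void setupSamplers(GLuint texA, GLuint texB)
        {
            // texture object A: linear, feeding "Sampler1" via unit 0
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texA);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            // texture object B: nearest, feeding "Sampler2" via unit 1 -- these
            // calls write into texB's own state and leave texA untouched
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, texB);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        }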

    Read the article

  • Unity3D draw call optimization : static batching VS manually draw mesh with MaterialPropertyBlock

    - by Heisenbug
    I've read the Unity3D draw call batching documentation. I understand it, and I want to use it (or something similar) to optimize my application. My situation is the following:

    - I'm drawing hundreds of 3D buildings.
    - Each building can be represented using a Mesh (or a SubMesh for each building, but I don't think this will affect performance).
    - Each building can be textured with several combinations of texture patterns (walls, windows, ...). Textures are stored in an atlas for optimization (see Texture2D.PackTextures).
    - Texture mapping and facade pattern generation are done in the fragment shader. The shader can be the same (except for a few values) for all buildings, so I'd like to use a sharedMaterial to optimize the parameters passed to the GPU.

    The main problem is that, even if I use an atlas, share the material, and declare the objects as static to use static batching, there are a few parameters (very few; it could be even just a float, I guess) that need to be different for every draw call. I don't know exactly how to manage this situation in Unity3D. I'm trying two different solutions, neither of them completely implemented.

    Solution 1:

    1. Build a GameObject for each building (I don't like the overhead of a GameObject very much, anyway...).
    2. Prepare each GameObject to be static batched with StaticBatchingUtility.Combine.
    3. Pack all textures into an atlas.
    4. Assign the parent GameObject of the combined batched objects the Material (basically the shader and the atlas).
    5. Change some properties in the material before drawing an object.

    The problem is point 5. Let's say I have to assign a different id to an object before drawing it; how can I do this? If I use a different material for each object, I can't benefit from static batching. If I use a sharedMaterial and I modify a material property, all GameObjects will reference the same modified variable.

    Solution 2:

    1. Build a Mesh for every building (sounds better: no GameObject overhead).
    2. Pack all textures into an atlas.
    3. Draw each mesh manually using Graphics.DrawMesh.
    4. Customize each DrawMesh call using a MaterialPropertyBlock.

    This would solve the issue of slightly modifying material properties for each draw call, but the documentation isn't clear on the following point: do several consecutive calls to Graphics.DrawMesh with different MaterialPropertyBlocks cause a new material to be instanced? Or can Unity understand that I'm modifying just a few parameters while using the same material, and optimize that (in such a way that the big atlas is passed just once to the GPU)?

    Read the article

  • Rain effect using DirectX 9 capabilities

    - by teodron
    Is it possible to achieve something similar to nVidia's rain demo using only shader model 3.0 capabilities? If yes, could you point out a few documents/web resources that are suitable candidates and do not require a heavy programming load (e.g. not more than two hard weeks of programming for a single person)? It would be nice if the answer also contained a pro/con phrase for each proposed idea (e.g. post-processing rain shader vs. a particle-based effect).

    Read the article

  • XNA `tex2Dlod` always returns transparent black

    - by feralin
    I want to sample a texture in a vertex shader, so at first I just tried:

        float2 texcoords = ...;
        color = tex2D(texture, texcoords);

    But apparently I cannot use tex2D in a vertex shader and must use tex2Dlod, so I changed the above code to:

        color = tex2Dlod(texture, float4(texcoords, 0, 0));

    But now color is always float4(0, 0, 0, 0) (i.e. transparent black). Why is this, and how can I fix it?

    EDIT: I know for a fact that the texture does not contain just transparent black pixels.

    Read the article

  • gpgpu vs. physX for physics simulation

    - by notabene
    Hello. First a theoretical question: what is better (faster), developing your own GPGPU techniques for physics simulation (cloth, fluids, collisions...) or using PhysX? (By 'develop' I mean implement existing algorithms like Navier-Stokes...) I don't care about what will take more time to develop; what will be faster for the end user?

    As I understand it, PhysX is accelerated through PPU units in the GPU; does that mean physical simulation can run in parallel with rasterization? Are PPUs different units from the unified shader units used as vertex/geometry/pixel/GPGPU shader units?

    And a little non-theoretical question: is PhysX able to do sophisticated simulation equal to, let's say, Autodesk's Maya fluid solver? Are there any C++ GPU-accelerated physics frameworks to try? (I am interested in both PhysX and GPGPU; commercial engines are OK too.)

    Read the article

  • Is there a standard way to store 3D meshes to easily communicate between libraries?

    - by awiebe
    In a 3D game, lots of different systems need to know about geometry data; however, the only representation they seem able to agree on is an array of triangles. Can anyone recommend a good geometry manipulation library that will allow me to easily integrate the drawing library (OpenGL), the physics engine (Bullet), serialization (several 3D file formats), and my own code (Objective-C++)? Focus on the representation between the drawing library and the physics engine. Also, if the library can triangulate a mesh definition, that would be very helpful. My code can work around what exists already.

    Read the article

  • OpenGL and atlas

    - by user30088
    I'm trying to draw elements from a texture atlas with OpenGL ES 2. Currently I'm drawing my elements using something like this in the shader:

        uniform mat4 uCamera;
        uniform mat4 uModel;

        attribute vec4 aPosition;
        attribute vec4 aColor;
        attribute vec2 aTextCoord;

        uniform vec2 offset;
        uniform vec2 scale;

        varying lowp vec4 vColor;
        varying lowp vec2 vUV;

        void main() {
            vUV = offset + aTextCoord * scale;
            gl_Position = (uCamera * uModel) * aPosition;
            vColor = aColor;
        }

    For each element to draw, I send its offset and scale to the shader. The problem with this method: I can't rotate the element, but that's not a problem for now. I would like to know what is better for performance (a third option is sketched below):

    - Send uniforms like that for each element on every frame
    - Update the quad geometry (UVs) for each element

    Thanks!
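    Neither per-element uniform updates nor rewriting UVs scales well past a few hundred draw calls, because the cost is the calls themselves. A third option, sketched here under the assumption that ES 3 (or the EXT_instanced_arrays extension) is available, moves offset and scale into a per-instance attribute so the whole atlas batch goes out in one call. SpriteUV and attrUvRect are illustrative names, not the asker's.

        #include <vector>

        struct SpriteUV { float offsetX, offsetY, scaleX, scaleY; };

        void drawSprites(const std::vector<SpriteUV>& sprites, GLuint attrUvRect)
        {
            GLuint instanceVbo = 0;
            glGenBuffers(1, &instanceVbo);
            glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
            glBufferData(GL_ARRAY_BUFFER, sprites.size() * sizeof(SpriteUV),
                         sprites.data(), GL_DYNAMIC_DRAW);

            glEnableVertexAttribArray(attrUvRect);
            glVertexAttribPointer(attrUvRect, 4, GL_FLOAT, GL_FALSE, sizeof(SpriteUV), 0);
            glVertexAttribDivisor(attrUvRect, 1); // advance per instance, not per vertex

            // one call for every element in the atlas; the shader computes
            // vUV = attr.xy + aTextCoord * attr.zw instead of reading uniforms
            glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, (GLsizei)sprites.size());
            glDeleteBuffers(1, &instanceVbo);
        }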

    Read the article

  • Using SurfaceFormat.Single and HLSL for GPGPU with XNA

    - by giancarlo todone
    I'm trying to implement a so-called ping-pong technique in XNA: you basically have two RenderTarget2Ds, A and B, and at each iteration you use one as texture and the other as target (and vice versa) for a quad rendered through an HLSL pixel shader:

        step 1: A --PS--> B
        step 2: B --PS--> A
        step 3: A --PS--> B
        ...

    In my setup, both render targets are SurfaceFormat.Single. In my .fx file, I have a technique to do the update and another to render the "current buffer" to the screen. Before starting the ping-pong, buffer A is filled with test data with the SetData<float>(float[]) function. This seems to work properly, because if I render a quad on the screen through the "Draw" pixel shader, I do see the test data being correctly rendered. However, if I do update buffer B, something does not function properly and the next render to screen is all black.

    For debug purposes, I replaced the "Update" HLSL pixel shader with one that should simply copy buffer A into B (or B into A, depending on which of the "ping" and "pong" phases we are in...). From some examples I found on the net, I see that in order to correctly fetch a float value from a texture sampler in HLSL code, I should only need to care about the red channel. So basically the debug "Update" HLSL function is:

        float4 ComputePS(float2 inPos : TEXCOORD0) : COLOR0
        {
            float v1 = tex2D(bufSampler, inPos.xy).r;
            return float4(v1, 0, 0, 1);
        }

    which still doesn't work and results in an all-zeroes output. Here's the "Draw" function that seems to properly display the initial data:

        float4 DrawPS(float2 inPos : TEXCOORD0) : COLOR0
        {
            float v1 = tex2D(bufSampler, inPos.xy).r;
            return float4(v1, v1, v1, 1);
        }

    Now: playing around with the HLSL doesn't change anything, so maybe I'm missing something on the C# side of this, so here's the infamous Update() function:

        _effect.Parameters["bufTexture"].SetValue(buf[_currentBuf]);
        _graphicsDevice.SetRenderTarget(buf[1 - _currentBuf]);
        _graphicsDevice.Clear(Color.Black); // probably not needed since RenderTargetUsage is DiscardContents
        _effect.CurrentTechnique = _computeTechnique;
        _computeTechnique.Passes[0].Apply();
        _quadRender.Render();
        _graphicsDevice.SetRenderTarget(null);
        _currentBuf = 1 - _currentBuf;

    Any clue?

    Read the article

  • Shadow mapping with deferred shading for directional lights - shadow map projection problem

    - by Harry
    I'm trying to add shadow mapping to my engine. I started with directional lights because they seemed to be the easiest, but I was wrong :) I have implemented deferred shading, and I retrieve position from depth. I think the biggest problem is there, but the code looks OK to me.

    Now more about the problem: the shadow map projected onto meshes looks badly scaled and translated, and some information from the shadow-map texture isn't visible. You can see it on this screen: http://img5.imageshack.us/img5/2254/93dn.png The yellow frustum is the light frustum, and I have mixed the shadow-map preview with the actual scene. As you can see, the shadows are in the wrong place, and the shadows of the cone and the sphere aren't visible. Could you look at my code and tell me where I have made a mistake?

        // create shadow map
        if(!_shd) glGenTextures(1, &_shd);
        glBindTexture(GL_TEXTURE_2D, _shd);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 1024, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL); // shadow map size
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _shd, 0);
        glDrawBuffer(GL_NONE);

        // setting camera
        Vector dire = Vector(0, 0, 1);
        ACamera.setLookAt(dire, Vector(0));
        ACamera.setPerspectiveView(60.0f, 1, 0.1f, 10.0f);

        // currently needed for proper frustum corners calculation
        Vector min(ACamera._point[0]), max(ACamera._point[0]);
        for(int i = 0; i < 8; i++){
            max = Max(max, ACamera._point[i]);
            min = Min(min, ACamera._point[i]);
        }
        ACamera.setOrthogonalView(min.x, max.x, min.y, max.y, -max.z, -min.z);

        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _s_buffer); // framebuffer for shadow map
        // rendering to depth buffer

        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _g_buffer);
        Shaders["DirLight"].set(true);
        Matrix4 bias;
        bias.x.set(0.5, 0.0, 0.0, 0.0);
        bias.y.set(0.0, 0.5, 0.0, 0.0);
        bias.z.set(0.0, 0.0, 0.5, 0.0);
        bias.w.set(0.5, 0.5, 0.5, 1.0);
        Shaders["DirLight"].set("textureMatrix", ACamera.matrix * Projection3D * bias);
        // order of multiplications is 100% correct; everything gives me the same result as using glm
        glActiveTexture(GL_TEXTURE5);
        glBindTexture(GL_TEXTURE_2D, _shd);
        lightDir(dir); // light calculations

    The vertex shader does nothing related to shadow calculations. The pixel-shader function which calculates whether a pixel is in shadow or not:

        float readShadowMap(vec3 eyeDir)
        {
            // retrieve depth of pixel
            float z = texture2D(depth, gl_FragCoord.xy / screen).z;
            vec3 pos = vec3(gl_FragCoord.xy / screen, z);

            // transform by the projection and view inverse
            vec4 worldSpace = inverse(View) * inverse(ProjectionMatrix) * vec4(pos * 2 - 1, 1);
            worldSpace /= worldSpace.w;

            vec4 coord = textureMatrix * worldSpace;
            float vis = 1.0f;
            if(texture2D(shadow, coord.xy).z < coord.z - 0.001) vis = 0.2f;
            return vis;
        }

    I also have a question specifically about shadows for directional lights. Currently I always look at the (0,0,0) position, and in a further implementation I will have to move the light frustum along with the camera frustum. I've found how to do this here: http://www.gamedev.net/topic/505893-orthographic-projection-for-shadow-mapping/ but it doesn't give me what I want. Maybe that's because of the problems mentioned above, but I'd like to know your opinion.

    EDIT: vec4 worldSpace is the position read from the depth of the scene (not the shadow map). Maybe I wasn't precise, so I'll quickly explain what is what: View is the camera view matrix, ProjectionMatrix is the camera projection. First I try to get the world-space position from the depth map, and then multiply it by textureMatrix, which is light view * light projection * bias. The rest of the code is the same as in many tutorials. I can't use the vertex shader to do something like gl_Position = textureMatrix * gl_Vertex and get it interpolated in the fragment shader, because of the deferred rendering in use, so I want to get it from the depth buffer.

    EDIT 2: I also tried doing it as in the Coding Labs tutorial about shadow mapping with deferred rendering, but unfortunately that also works incorrectly.
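    For comparison, a hedged GLM sketch of the conventional world-to-shadow-map chain; lightView and lightProj are assumed names for the light camera's matrices. In GLM's column-vector convention the bias is applied last, so an engine whose Matrix4 multiplies with row vectors would write the same chain in the reverse order, as the code above does.

        #include <glm/glm.hpp>

        glm::mat4 shadowTextureMatrix(const glm::mat4& lightView, const glm::mat4& lightProj)
        {
            // maps clip-space [-1,1] to texture-space [0,1] (columns listed in order)
            const glm::mat4 bias(
                0.5f, 0.0f, 0.0f, 0.0f,
                0.0f, 0.5f, 0.0f, 0.0f,
                0.0f, 0.0f, 0.5f, 0.0f,
                0.5f, 0.5f, 0.5f, 1.0f);
            return bias * lightProj * lightView; // used as: coord = textureMatrix * worldSpace
        }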

    Read the article

  • XNA Deferred Shading, Replace BasicEffect

    - by Alex
    I have implemented deferred shading in my XNA 4.0 project, meaning that I need all objects to start out with the same shader, "RenderGBuffer.fx". How can I use a custom content processor to:

    - not load any textures by default (I want to do this manually), and
    - use "RenderGBuffer.fx" as the default shader instead of BasicEffect?

    Below is the progress so far:

        public class DeferredModelProcessor : ModelProcessor
        {
            EffectMaterialContent deferredShader;

            public DeferredModelProcessor() { }

            protected override MaterialContent ConvertMaterial(MaterialContent material, ContentProcessorContext context)
            {
                deferredShader = new EffectMaterialContent();
                deferredShader.Effect = new ExternalReference<EffectContent>("DeferredShading/RenderGBuffer.fx");
                return context.Convert<MaterialContent, MaterialContent>(deferredShader, typeof(MaterialProcessor).Name);
            }
        }

    Read the article

  • "Marching cubes" voxel terrain - triplanar texturing with depth?

    - by Dan the Man
    I am currently working on a voxel terrain that uses the marching cubes algorithm for polygonizing the scalar field of voxels. I am using a triplanar texturing shader. Say I have a grass texture set to the Y axis and a dirt texture for both the X and Z axes. Now, when my player digs downwards, it still appears as grass. How would I make it appear as dirt? I have been thinking about this for a while, and the only thing I can think of to achieve this effect would be to mark dug vertices with a certain vertex color. When a vertex has that color, the shader would apply the dirt texture to it. Is there a better method?

    Read the article

  • *DX11, HLSL* - Colour as 4 floats or one UINT

    - by Paul
    With the DX11 pipeline, would it be much quicker for the vertex buffer to pass one single UINT with one byte per channel to the input assembler, as opposed to four floats? The vertex shader would then convert the four bytes to four floats, which I guess is the required colour format for the pipeline. In this instance, colour accuracy isn't an issue.

    The vertex buffer would need to be updated many times per frame, so using a single UINT and saving 12 bytes for every vertex could well be worth it: quicker uploads to VRAM and also less memory used. But the cost is the extra shader work for every vertex to convert each 8 bits of the input UINT into a float. Anyone have an idea whether it might be worth doing? Or is it possible for the pipeline to be set to just use a four-byte colour format internally? The swap-chain buffer has been initialised as DXGI_FORMAT_R8G8B8A8_UNORM, so ultimately that's how the colour will be written. Thanks!
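    The input assembler can in fact do the conversion for free: declaring the packed colour as an 8-bit UNORM input element makes the IA expand it to a float4 in [0,1] before the vertex shader runs, so no manual unpacking is needed. A hedged sketch of the input-layout declaration (the 12-byte offset assumes a float3 position precedes the colour in the same vertex):

        #include <d3d11.h>

        // 16-byte vertex: 12 bytes of position + 4 bytes of packed RGBA colour
        const D3D11_INPUT_ELEMENT_DESC layout[] = {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0,
              D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM,  0, 12,
              D3D11_INPUT_PER_VERTEX_DATA, 0 },
        };
        // HLSL side: 'float4 color : COLOR;' arrives already normalised, so the
        // UNORM element costs 4 bytes per vertex and zero extra shader work.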

    Read the article

  • Is there any difference between storing textures and baked lighting for environment meshes?

    - by Ben Hymers
    I assume that when texturing environments, one or several textures will be used, and the UVs of the environment geometry will likely overlap on these textures, so that e.g. a tiling brick texture can be used by many parts of the environment, rather than UV-unwrapping the entire thing and having several areas of the texture be identical. If my assumption is wrong, please let me know!

    Now, when thinking about baking lighting, clearly this can't be done the same way: lighting in general will be unique to every face, so the environment must be UV-unwrapped without overlap, and lighting must be baked onto unique areas of one or several textures, to give each surface its own texture space to store its lighting. My questions are:

    - Have I got this wrong? If so, how?
    - Isn't baking lighting going to use a lot of texture space?
    - Will the geometry need two UV sets, one used for the colour/normal texture and one for the lighting texture?
    - Anything else you'd like to add? :)

    Read the article
