Search Results

Search found 1725 results on 69 pages for 'compute shader'.

Page 17/69

  • GLSL Error: failed to preprocess the source. How can I troubleshoot this?

    - by Brent Parker
    I'm trying to learn to play with OpenGL GLSL shaders. I've written a very simple program to simply create a shader and compile it. However, whenever I get to the compile step, I get the error: Error: Preprocessor error Error: failed to preprocess the source. Here's my very simple code: #include <GL/gl.h> #include <GL/glu.h> #include <GL/glut.h> #include <GL/glext.h> #include <time.h> #include <stdio.h> #include <iostream> #include <stdlib.h> using namespace std; const int screenWidth = 640; const int screenHeight = 480; const GLchar* gravity_shader[] = { "#version 140" "uniform float t;" "uniform mat4 MVP;" "in vec4 pos;" "in vec4 vel;" "const vec4 g = vec4(0.0, 0.0, -9.80, 0.0);" "void main() {" " vec4 position = pos;" " position += t*vel + t*t*g;" " gl_Position = MVP * position;" "}" }; double pointX = (double)screenWidth/2.0; double pointY = (double)screenWidth/2.0; void initShader() { GLuint shader = glCreateShader(GL_VERTEX_SHADER); glShaderSource(shader, 1, gravity_shader, NULL); glCompileShader(shader); GLint compiled = true; glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled); if(!compiled) { GLint length; GLchar* log; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &length); log = (GLchar*)malloc(length); glGetShaderInfoLog(shader, length, &length, log); std::cout << log <<std::endl; free(log); } exit(0); } bool myInit() { initShader(); glClearColor(1.0f, 1.0f, 1.0f, 0.0f); glColor3f(0.0f, 0.0f, 0.0f); glPointSize(1.0); glLineWidth(1.0f); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluOrtho2D(0.0, (GLdouble) screenWidth, 0.0, (GLdouble) screenHeight); glEnable(GL_DEPTH_TEST); return true; } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); glutInitWindowSize(screenWidth, screenHeight); glutInitWindowPosition(100, 150); glutCreateWindow("Mouse Interaction Display"); myInit(); glutMainLoop(); return 0; } Where am I going wrong? If it helps, I am trying to do this on a Acer Aspire One with an atom processor and integrated Intel video running the latest Ubuntu. It's not very powerful, but then again, this is a very simple shader. Thanks a lot for taking a look!
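    For reference, a minimal C++ sketch of the same create/compile/log pattern, with each source line explicitly ending in "\n" (adjacent C string literals are concatenated with no separator, so without the newlines the preprocessor sees the whole shader as a single line):

        #include <GL/gl.h>
        #include <cstdio>
        #include <vector>

        // Each literal ends in "\n"; otherwise the concatenated source becomes one long line.
        static const char* kGravityShader =
            "#version 140\n"
            "uniform float t;\n"
            "uniform mat4 MVP;\n"
            "in vec4 pos;\n"
            "in vec4 vel;\n"
            "const vec4 g = vec4(0.0, 0.0, -9.80, 0.0);\n"
            "void main() {\n"
            "    vec4 position = pos + t*vel + t*t*g;\n"
            "    gl_Position = MVP * position;\n"
            "}\n";

        GLuint compileVertexShader()
        {
            GLuint shader = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(shader, 1, &kGravityShader, NULL);
            glCompileShader(shader);

            GLint compiled = GL_FALSE;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
            if (!compiled) {
                GLint length = 0;
                glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &length);
                std::vector<GLchar> log(length);
                glGetShaderInfoLog(shader, length, &length, log.data());
                std::printf("%s\n", log.data());   // inspect the driver's error message
            }
            return shader;
        }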

    Read the article

  • F# Objects – Integration with the other .Net Languages – Part 2

    - by MarkPearl
    So in part one of my posting I covered the real basics of object creation. Today I will hopefully dig a little deeper… My expert F# book brings up an interesting point – properties in F# are just syntactic sugar for method calls. This makes sense… for instance assume I had the following object with the property exposed called Firstname. type Person(Firstname : string, Lastname : string) = member v.Firstname = Firstname I could extend the Firstname property with the following code and everything would be hunky dory… type Person(Firstname : string, Lastname : string) = member v.Firstname = Console.WriteLine("Side Effect") Firstname   All that this would do is each time I use the property Firstname, I would see the side effect printed to the screen saying “Side Effect”. Member methods have a very similar look & feel to properties, in fact the only difference really is that you declare that parameters are being passed in. type Person(Firstname : string, Lastname : string) = member v.FullName(middleName) = Firstname + " " + middleName + " " + Lastname   In the code above, FullName requires the parameter middleName, and if viewed from another project in C# would show as a method and not a property. Precomputation Optimizations Okay, so something that is obvious once you think of it but that poses an interesting side effect of mutable value holders is pre-computation of results. All it is, is a slight difference in code but can result in quite a huge saving in performance. Basically pre-computation means you would not need to compute a value every time a method is called – but could perform the computation at the creation of the object (I hope I have got it right). In a way I battle to differentiate this from lazy evaluation but I will show an example to explain the principle. Let me try and show an example to illustrate the principle… assume the following F# module namespace myNamespace open System module myMod = let Add val1 val2 = Console.WriteLine("Compute") val1 + val2 type MathPrecompute(val1 : int, val2 : int) = let precomputedsum = Add val1 val2 member v.Sum = precomputedsum type MathNormalCompute(val1 : int, val2 : int) = member v.Sum = Add val1 val2 Now assume you have a C# console app that makes use of the objects with code similar to the following… using System; using myNamespace; namespace CSharpTest { class Program { static void Main(string[] args) { Console.WriteLine("Constructing Objects"); var myObj1 = new myMod.MathNormalCompute(10, 11); var myObj2 = new myMod.MathPrecompute(10, 11); Console.WriteLine(""); Console.WriteLine("Normal Compute Sum..."); Console.WriteLine(myObj1.Sum); Console.WriteLine(myObj1.Sum); Console.WriteLine(myObj1.Sum); Console.WriteLine(""); Console.WriteLine("Pre Compute Sum..."); Console.WriteLine(myObj2.Sum); Console.WriteLine(myObj2.Sum); Console.WriteLine(myObj2.Sum); Console.ReadKey(); } } } The output when running the console application would be as follows…. You will notice with the normal compute object that the system would call the Add function every time the method was called. With the Precompute object it only called the compute method when the object was created. Subtle, but something that could lead to major performance benefits. So… this post has gone off in a slight tangent but still related to F# objects.

    Read the article

  • How to decide between using PLINQ and LINQ at runtime?

    - by Hamish Grubijan
    Or decide between a parallel and a sequential operation in general. It is hard to know without testing whether parallel or sequential implementation is best due to overhead. Obviously it will take some time to train "the decider" which method to use. I would say that this method cannot be perfect, so it is probabilistic in nature. The x,y,z do influence "the decider". I think a very naive implementation would be to give both 1/2 chance at the beginning and then start favoring them according to past performance. This disregards x,y,z, however. I suspect that this question would be better answered by academics than practitioners. Anyhow, please share your heuristic, your experience if any, your tips on this. Sample code: public interface IComputer { decimal Compute(decimal x, decimal y, decimal z); } public class SequentialComputer : IComputer { public decimal Compute( ... // sequential implementation } public class ParallelComputer : IComputer { public decimal Compute( ... // parallel implementation } public class HybridComputer : IComputer { private SequentialComputer sc; private ParallelComputer pc; private TheDecider td; // Helps to decide between the two. public HybridComputer() { sc = new SequentialComputer(); pc = new ParallelComputer(); td = TheDecider(); } public decimal Compute(decimal x, decimal y, decimal z) { decimal result; decimal time; if (td.PickOneOfTwo() == 0) { // Time this and save result into time. result = sc.Compute(...); } else { // Time this and save result into time. result = pc.Compute(); } td.Train(time); return result; } }
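    The shape of such a decider is language-agnostic; below is a small C++ sketch of the idea (the class and member names are illustrative, not part of the C# interface above): start with a coin flip, keep a running-average time per implementation, and mostly pick the faster one while still exploring occasionally.

        #include <chrono>
        #include <functional>
        #include <random>

        // Illustrative epsilon-greedy chooser between two implementations of the same work.
        class HybridComputer {
        public:
            HybridComputer(std::function<double()> sequential, std::function<double()> parallel)
                : impls_{std::move(sequential), std::move(parallel)} {}

            double compute() {
                int pick = choose();
                auto start = std::chrono::steady_clock::now();
                double result = impls_[pick]();
                double ms = std::chrono::duration<double, std::milli>(
                                std::chrono::steady_clock::now() - start).count();
                // Exponential moving average keeps the estimate cheap to update.
                avgMs_[pick] = (calls_[pick] == 0) ? ms : 0.9 * avgMs_[pick] + 0.1 * ms;
                ++calls_[pick];
                return result;
            }

        private:
            int choose() {
                // Explore occasionally so a once-slow option gets re-tested as inputs change.
                std::uniform_real_distribution<double> u(0.0, 1.0);
                if (calls_[0] == 0 || calls_[1] == 0 || u(rng_) < 0.1)
                    return u(rng_) < 0.5 ? 0 : 1;
                return avgMs_[0] <= avgMs_[1] ? 0 : 1;   // exploit the faster one
            }

            std::function<double()> impls_[2];
            double avgMs_[2] = {0.0, 0.0};
            long calls_[2] = {0, 0};
            std::mt19937 rng_{std::random_device{}()};
        };

    As the excerpt notes, this version disregards the inputs x, y, z; a less naive decider would bucket its timing statistics by input size.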

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests and parses them out to “worker nodes”. This is useful in cases such as scientific simulations, drug research, MATLAB work, and other situations where large compute loads are required. It’s not the immediate-result type of computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers, and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical of the way many mainframe computing use-cases work. You can use commodity-based computers to create an HPC cluster, such as a Linux Beowulf cluster, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster, that you can read more about here. The issue with HPC (from any vendor) that some organizations have is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic. Implementation: Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding Compute Nodes in Windows Azure, using a “Broker Node”. You can then purchase time for the added machines, and stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC, but need to “burst” that load count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations. References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx  Links for further research on HPC, including Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx 

    Read the article

  • Problem with loading .FBX meshes in DirectX 10

    - by N0xus
    I'm trying to load in meshes into DirectX 10. I've created a bunch of classes that handle it and allow me to call in a mesh with only a single line of code in my main game class. How ever, when I run the program this is what renders: In the debug output window the following errors keep appearing: D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX ] D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that the input stage requires Semantic/Index (POSITION,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ] The thing is, I've no idea how to fix this. The code I'm using does work and I've simply brought all of that code into a new project of mine. There are no build errors and this only appears when the game is running The .fx file is as follows: float4x4 matWorld; float4x4 matView; float4x4 matProjection; struct VS_INPUT { float4 Pos:POSITION; float2 TexCoord:TEXCOORD; }; struct PS_INPUT { float4 Pos:SV_POSITION; float2 TexCoord:TEXCOORD; }; Texture2D diffuseTexture; SamplerState diffuseSampler { Filter = MIN_MAG_MIP_POINT; AddressU = WRAP; AddressV = WRAP; }; // // Vertex Shader // PS_INPUT VS( VS_INPUT input ) { PS_INPUT output=(PS_INPUT)0; float4x4 viewProjection=mul(matView,matProjection); float4x4 worldViewProjection=mul(matWorld,viewProjection); output.Pos=mul(input.Pos,worldViewProjection); output.TexCoord=input.TexCoord; return output; } // // Pixel Shader // float4 PS(PS_INPUT input ) : SV_Target { return diffuseTexture.Sample(diffuseSampler,input.TexCoord); //return float4(1.0f,1.0f,1.0f,1.0f); } RasterizerState NoCulling { FILLMODE=SOLID; CULLMODE=NONE; }; technique10 Render { pass P0 { SetVertexShader( CompileShader( vs_4_0, VS() ) ); SetGeometryShader( NULL ); SetPixelShader( CompileShader( ps_4_0, PS() ) ); SetRasterizerState(NoCulling); } } In my game, the .fx file and model are called and set as follows: Loading in shader file //Set the shader flags - BMD DWORD dwShaderFlags = D3D10_SHADER_ENABLE_STRICTNESS; #if defined( DEBUG ) || defined( _DEBUG ) dwShaderFlags |= D3D10_SHADER_DEBUG; #endif ID3D10Blob * pErrorBuffer=NULL; if( FAILED( D3DX10CreateEffectFromFile( TEXT("TransformedTexture.fx" ), NULL, NULL, "fx_4_0", dwShaderFlags, 0, md3dDevice, NULL, NULL, &m_pEffect, &pErrorBuffer, NULL ) ) ) { char * pErrorStr = ( char* )pErrorBuffer->GetBufferPointer(); //If the creation of the Effect fails then a message box will be shown MessageBoxA( NULL, pErrorStr, "Error", MB_OK ); return false; } //Get the technique called Render from the effect, we need this for rendering later on m_pTechnique=m_pEffect->GetTechniqueByName("Render"); //Number of elements in the layout UINT numElements = TexturedLitVertex::layoutSize; //Get the Pass description, we need this to bind the vertex to the pipeline D3D10_PASS_DESC PassDesc; m_pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc ); //Create Input layout to describe the incoming buffer to the input assembler if (FAILED(md3dDevice->CreateInputLayout( TexturedLitVertex::layout, numElements,PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &m_pVertexLayout ) ) ) { return false; } model loading: 
m_pTestRenderable=new CRenderable(); //m_pTestRenderable->create<TexturedVertex>(md3dDevice,8,6,vertices,indices); m_pModelLoader = new CModelLoader(); m_pTestRenderable = m_pModelLoader->loadModelFromFile( md3dDevice,"armoredrecon.fbx" ); m_pGameObjectTest = new CGameObject(); m_pGameObjectTest->setRenderable( m_pTestRenderable ); // Set primitive topology, how are we going to interpet the vertices in the vertex buffer md3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); if ( FAILED( D3DX10CreateShaderResourceViewFromFile( md3dDevice, TEXT( "armoredrecon_diff.png" ), NULL, NULL, &m_pTextureShaderResource, NULL ) ) ) { MessageBox( NULL, TEXT( "Can't load Texture" ), TEXT( "Error" ), MB_OK ); return false; } m_pDiffuseTextureVariable = m_pEffect->GetVariableByName( "diffuseTexture" )->AsShaderResource(); m_pDiffuseTextureVariable->SetResource( m_pTextureShaderResource ); Finally, the draw function code: //All drawing will occur between the clear and present m_pViewMatrixVariable->SetMatrix( ( float* )m_matView ); m_pWorldMatrixVariable->SetMatrix( ( float* )m_pGameObjectTest->getWorld() ); //Get the stride(size) of the a vertex, we need this to tell the pipeline the size of one vertex UINT stride = m_pTestRenderable->getStride(); //The offset from start of the buffer to where our vertices are located UINT offset = m_pTestRenderable->getOffset(); ID3D10Buffer * pVB=m_pTestRenderable->getVB(); //Bind the vertex buffer to input assembler stage - md3dDevice->IASetVertexBuffers( 0, 1, &pVB, &stride, &offset ); md3dDevice->IASetIndexBuffer( m_pTestRenderable->getIB(), DXGI_FORMAT_R32_UINT, 0 ); //Get the Description of the technique, we need this in order to loop through each pass in the technique D3D10_TECHNIQUE_DESC techDesc; m_pTechnique->GetDesc( &techDesc ); //Loop through the passes in the technique for( UINT p = 0; p < techDesc.Passes; ++p ) { //Get a pass at current index and apply it m_pTechnique->GetPassByIndex( p )->Apply( 0 ); //Draw call md3dDevice->DrawIndexed(m_pTestRenderable->getNumOfIndices(),0,0); //m_pD3D10Device->Draw(m_pTestRenderable->getNumOfVerts(),0); } Is there anything I've clearly done wrong or are missing? Spent 2 weeks trying to workout what on earth I've done wrong to no avail. Any insight a fresh pair eyes could give on this would be great.
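    Both error messages point at the input layout disagreeing with what the vertex shader declares. For comparison, a layout whose elements line up with the VS_INPUT above (POSITION as three floats, TEXCOORD as two) would look roughly like the sketch below; the struct and offsets are illustrative, not the actual TexturedLitVertex from the post:

        #include <d3d10.h>

        // Hypothetical vertex matching the .fx file's VS_INPUT (position + texture coordinate).
        struct PosTexVertex
        {
            float px, py, pz;   // POSITION
            float u, v;         // TEXCOORD
        };

        // One D3D10_INPUT_ELEMENT_DESC per semantic the vertex shader consumes.
        static const D3D10_INPUT_ELEMENT_DESC kPosTexLayout[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
        };
        static const UINT kPosTexLayoutSize = sizeof(kPosTexLayout) / sizeof(kPosTexLayout[0]);

    Whatever vertex format the FBX loader actually produces, the vertex struct, the input element array, and VS_INPUT all have to agree, which is what the DEVICE_SHADER_LINKAGE errors are complaining about.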

    Read the article

  • LWJGL - OpenGL - Texture shading

    - by Trixmix
    I want to use LWJGL to create a shader whose only job is to change the color of a given texture. For example, I tell it to draw the letter A using a sprite sheet, then I can tell the shader to draw that letter in a certain color. How would you do something like this without needing to create differently colored letter sprite sheets? Task for the shader: simply change all pixels in the texture to a certain color. Input: color, texture. Output: it draws the newly colored texture onto the screen. How do I accomplish such a thing?

    Read the article

  • How can I simulate objects floating on water without a physics engine?

    - by user1075940
    In my game the water movement is done in a shader using Gerstner equations. The water movement looks realistic enough for a school project, but I ran into a serious problem when I wanted to do sailing on waves (similar to this). I managed to do collision with land by calculating the vertices and normals of the quad beneath the ship; however, the same method cannot be applied to water because XZ is displaced and Y is calculated in a shader :( How should I approach this problem? Is it possible to retrieve the transformed grid from the shader? Unfortunately no external physics libraries can be used.
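    A common approach is to mirror the wave function on the CPU and sample it under the hull each frame. A C++ sketch of a single Gerstner wave height evaluation is below; the parameter names are placeholders and would have to match whatever values the shader uses, and because Gerstner waves also displace XZ, the height at a queried point is only approximate unless the query position is iterated:

        #include <cmath>

        // Parameters of one Gerstner wave; these must mirror the values fed to the shader.
        struct GerstnerWave
        {
            float dirX, dirZ;    // normalized wave direction in the XZ plane
            float amplitude;     // A
            float wavelength;    // L
            float speed;         // phase speed
            float steepness;     // Q, used for the horizontal displacement (omitted here)
        };

        // Vertical displacement of the water surface at (x, z) and time t for a single wave.
        // Summing several waves reproduces the surface the shader displaces, so the ship's
        // buoyancy points can be pushed up or down to follow it.
        float gerstnerHeight(const GerstnerWave& w, float x, float z, float t)
        {
            const float pi = 3.14159265f;
            float frequency = 2.0f * pi / w.wavelength;
            float phase = w.speed * frequency;
            float theta = frequency * (w.dirX * x + w.dirZ * z) + phase * t;
            return w.amplitude * std::sin(theta);
        }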

    Read the article

  • Where to store shaders

    - by Mark Ingram
    I have an OpenGL renderer which has a Scene member variable. The Scene object can contain N SceneObjects. I use these SceneObjects for storing the vertex position and any transforms. My question is, where should shaders be stored in this arrangement? I guess they need to be in a central location because multiple objects can use the same shader. But then each object needs access to the shader because it needs to set attributes into the shader. Does anyone have any advice?
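    One common arrangement (names below are illustrative) is a central shader library owned by the renderer or the scene: objects look shaders up by key and keep a shared handle, while per-object uniforms and attributes are still set by each object at draw time. A C++ sketch:

        #include <map>
        #include <memory>
        #include <string>

        // Stand-in for whatever wraps a compiled and linked GLSL program.
        struct ShaderProgram
        {
            unsigned int glHandle = 0;
            // bind(), setUniform(...), etc. would live here
        };

        // Central cache so two SceneObjects using "lit_textured" share one compiled program.
        class ShaderLibrary
        {
        public:
            std::shared_ptr<ShaderProgram> get(const std::string& name)
            {
                auto it = cache_.find(name);
                if (it != cache_.end())
                    return it->second;
                auto program = std::make_shared<ShaderProgram>();
                // ... compile and link the sources associated with 'name' here ...
                cache_[name] = program;
                return program;
            }

        private:
            std::map<std::string, std::shared_ptr<ShaderProgram>> cache_;
        };

        // Each SceneObject keeps a handle to the shared program and sets its own uniforms.
        struct SceneObject
        {
            std::shared_ptr<ShaderProgram> shader;
            // vertex data, transform, per-object uniform values ...
        };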

    Read the article

  • How to categorize textures into atlases

    - by Esa
    I am going to use texture atlasing for the first time in my games, and at first it seemed like a great idea to split textures into atlases by terrain theme, e.g. ForestTextures, WinterTextures, etc. But that could cause a problem when, for example, a flower has to use a transparency shader while other models use a diffuse shader, so those cannot be atlased into the same texture. Thus, would atlasing textures into themes as mentioned before and then splitting them by shader, like ForestDiffuse and ForestTransparent, be a good approach? Or is there a better way to categorize and build them?

    Read the article

  • What is the practical use of IBOs / degenerate vertex in OpenGL?

    - by 0xFAIL
    Vertices in 3D models CAN get cut (degenerate vertices) in the process of optimizing 3D geometry by 3D graphics software (Blender, ...) when exporting, because they aren't needed when a vertex is reused for multiple triangles. (In the current case the 3D data is exported from Blender as .ply and read by a simple application that displays the 3D model.) Every vertex has a few attributes like position, color, normal, tangent, ... But the data for each vertex that is cut through vertex sharing is lost and is missing in the vertex shader. Modern shader techniques like bump or normal mapping require normals/tangents per vertex, which are also cut. Does that mean IBOs must not be used for complex shader techniques? Or is there a way to use IBOs and retain the per-vertex data that was originally lost?

    Read the article

  • Which will be faster? Switching shaders, or ignoring that some cases don't need the full code?

    - by PolGraphic
    I have two types of 2D objects. In the first case (about 70% of objects), I need this code in the shader: float2 texCoord = input.TexCoord + textureCoord.xy But in the second case I have to use: float2 texCoord = fmod(input.TexCoord, texCoordM.xy - textureCoord.xy) + textureCoord.xy I could use the second code for the first case as well, but it would be a little slower (the fmod is useless there, since input.TexCoord will always be lower than textureCoord.xy - textureCoord.xy). My question is, which way will be faster: Making two independent shaders for the two types of rectangles, grouping rectangles by type and switching shaders during rendering. Making one shader and using an if statement. Making one shader and ignoring that sometimes (70% of cases) I don't need to use fmod.

    Read the article

  • Basics of drawing in 2d with OpenGL 3 shaders

    - by davidism
    I am new to OpenGL 3 and graphics programming, and want to create some basic 2D graphics. I have the following scenario of how I might go about drawing a basic (but general) 2D rectangle. I'm not sure if this is the correct way to think about it, or, if it is, how to implement it. In my head, here's how I imagine doing it: (1) t = make_rectangle(width, height) builds a general VBO, centered at 0, 0; (2) optionally: t.set_scale(2); (3) optionally: t.set_angle(30); (4) t.draw_at(x, y) calculates some sort of scale/rotate/translate matrix (or matrices) and passes the VBO and the matrix to a shader program; (5) something happens to clip the world to the view visible on screen. I'm really unclear on how steps 4 and 5 will work. The main problem is that all the tutorials I find either use the fixed-function pipeline, are for 3D, or are unclear on how to do something this "simple". Can someone provide me with either a better way to think of / do this, or some concrete code detailing how to perform the transformations in a shader and how to construct and pass the data required for this shader transformation?
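    Steps 4 and 5 usually come down to two matrices: a model matrix built from the rectangle's scale/rotation/position, and an orthographic projection that maps the visible region of the world onto clip space (which is what effectively clips the world to the view). A sketch, assuming column-major 4x4 matrices and a vertex shader that computes projection * model * vertex; the function names are illustrative:

        #include <cmath>

        // Column-major 4x4, as expected by glUniformMatrix4fv with transpose = GL_FALSE.
        struct Mat4 { float m[16]; };

        // Maps world-space x in [left,right], y in [bottom,top] onto clip space [-1,1].
        Mat4 ortho2D(float left, float right, float bottom, float top)
        {
            Mat4 o = {{ 2.0f/(right-left), 0, 0, 0,
                        0, 2.0f/(top-bottom), 0, 0,
                        0, 0, -1, 0,
                        -(right+left)/(right-left), -(top+bottom)/(top-bottom), 0, 1 }};
            return o;
        }

        // Scale, then rotate, then translate a unit rectangle centered at the origin.
        Mat4 model2D(float x, float y, float angleRad, float scale)
        {
            float c = std::cos(angleRad), s = std::sin(angleRad);
            Mat4 m = {{  c*scale,  s*scale, 0, 0,
                        -s*scale,  c*scale, 0, 0,
                         0,        0,       1, 0,
                         x,        y,       0, 1 }};
            return m;
        }

        // The draw call then boils down to something like:
        //   glUniformMatrix4fv(projLoc,  1, GL_FALSE, ortho2D(0, screenW, 0, screenH).m);
        //   glUniformMatrix4fv(modelLoc, 1, GL_FALSE, model2D(x, y, angle, scale).m);
        //   ...bind the VAO/VBO and glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);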

    Read the article

  • Bad texture on model with different GPU

    - by Pacha
    I have some kind of distortion on the texture of my 3D model. It works perfectly well on an AMD GPU, but when testing on an integrated Intel HD graphics card it has a weird issue. I don't have a problem with the rest of my entities, as they are not scaled. The models with the problems are scaled, as my engine supports different sizes for the platforms. I am using Ogre3D as the rendering engine, and GLSL as the shader language. Vertex shader: #version 120 varying vec2 UV; void main() { UV = gl_MultiTexCoord0; gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment shader: #version 120 varying vec2 UV; uniform sampler2D diffuseMap; void main(void) { gl_FragColor = texture(diffuseMap, UV); } Screenshot (the error is on the right and left sides; the top and bottom parts are rendered perfectly well).

    Read the article

  • Using normals in DirectX 10

    - by Dave
    I've got a working OBJ loader that loads vertices, indices, texture coordinates, and normals. As of right now it doesn't process the texture coordinates or normals, but it stores them in arrays and creates a valid mesh with the vertices and indices. Now I am trying to figure out how I can make the shader use the correct normal in the array for the current vertex if I can't call setnormals() on my mesh. If I were to just use an index into my array of normals corresponding to the index of the vertex, how would I retrieve the current index the shader is processing? BTW: I am trying to write a Blinn-Phong shader technique. Also, when I create the input layout and I've added the semantic NORMAL to it, how would I list the multiple semantics in that single parameter? Would I just separate them with a space? PS: If you need to see any code, just let me know.
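    On the input-layout part of the question: in Direct3D 10 each semantic normally gets its own entry in the D3D10_INPUT_ELEMENT_DESC array rather than being packed into one space-separated string, and the shader then receives the per-vertex normal through its input struct instead of indexing a separate array. A rough sketch with illustrative names:

        #include <d3d10.h>

        // Hypothetical vertex produced by the OBJ loader: position, normal, texture coordinate.
        struct ObjVertex
        {
            float px, py, pz;   // POSITION
            float nx, ny, nz;   // NORMAL
            float u,  v;        // TEXCOORD
        };

        // One array entry per semantic; AlignedByteOffset is the attribute's offset in ObjVertex.
        static const D3D10_INPUT_ELEMENT_DESC kObjLayout[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
            { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D10_INPUT_PER_VERTEX_DATA, 0 },
        };

        // In the HLSL, the matching input struct would declare
        //   float3 Norm : NORMAL;
        // alongside POSITION and TEXCOORD, so the Blinn-Phong code gets the interpolated
        // normal per vertex without having to look it up by index.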

    Read the article

  • HLSL: An array of textures and sampler states

    - by nate142
    The shader must switch between multiple textures depending on the Alpha value of the original texture for each pixel. Now this would work fine if I didn't have to worry about SamplerStates. I have created my array of textures and can select a texture based on the Alpha value of the pixel. But how do I create an array of SamplerStates and link it to my array of textures? I attempted to treat the SamplerState as a function by adding the (int i), but that didn't work. Also, I can't use Texture.Sample since this is shader model 2.0. //shader model 2.0 (DX9) texture subTextures[255]; SamplerState MeshTextureSampler(int i) { Texture = (subTextures[i]); }; float4 SampleCompoundTexture(float2 texCoord, float4 diffuse) { float4 SelectedColor = SAMPLE_TEXTURE(Texture, texCoord); int i = SelectedColor.a; texture SelectedTx = subTextures[i]; return tex2D(MeshTextureSampler(i), texCoord) * diffuse; }

    Read the article

  • Fast lighting with multiple lights

    - by codymanix
    How can I implement fast lighting with multiple lights? I don't want to restrain the player; he can place an unlimited number of possibly overlapping (point) lights into the level. The problem is that shaders containing the dynamic loops that would be necessary to calculate the lighting tend to be very slow. My idea was to compile the shader n times, where n is the number of lights: if n is known at compile time, the loops can be unrolled automatically. Is it possible to generate n versions of the same shader that differ only in the number of lights? At runtime I could then decide which shader to use for which part of the level.
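    Generating the n variants at runtime can be as simple as prepending a #define before compiling and caching the compiled result per light count. A GLSL-flavoured C++ sketch (the shader body is only a stub, and the same idea works with HLSL macros):

        #include <GL/gl.h>
        #include <map>
        #include <string>

        // Shader body written against a compile-time light count; the loop can be unrolled
        // by the GLSL compiler because NUM_LIGHTS is a constant.
        static const char* kLitFragmentBody =
            "uniform vec3 lightPos[NUM_LIGHTS];\n"
            "void main() {\n"
            "    vec3 acc = vec3(0.0);\n"
            "    for (int i = 0; i < NUM_LIGHTS; ++i) {\n"
            "        // ... accumulate the contribution of light i ...\n"
            "    }\n"
            "    gl_FragColor = vec4(acc, 1.0);\n"
            "}\n";

        std::map<int, GLuint> g_shaderForLightCount;

        GLuint shaderForLights(int numLights)
        {
            auto it = g_shaderForLightCount.find(numLights);
            if (it != g_shaderForLightCount.end())
                return it->second;

            std::string defines = "#define NUM_LIGHTS " + std::to_string(numLights) + "\n";
            const char* sources[] = { defines.c_str(), kLitFragmentBody };

            GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(shader, 2, sources, NULL);   // defines first, body second
            glCompileShader(shader);
            // ... check GL_COMPILE_STATUS, attach to a program, link ...

            g_shaderForLightCount[numLights] = shader;
            return shader;
        }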

    Read the article

  • GPU optimization question: pre-computed or procedural?

    - by Jay
    Good morning, I'm learning shader programming and need some general direction. I want to add noise to my laser beam (like this). Which is the best way to handle it? I could pre-compute an image and pass it to the shader; I could then use the image to change the opacity and easily animate the smoke by changing the offset of the texture lookup. I could also generate the noise in the shader and use it the same way the texture would have been used. Is it generally better to avoid I/O to the graphics card, or the opposite? Thanks!
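    For the "pre-compute an image" option, the CPU side can be as small as filling a buffer once at load time and uploading it; a rough OpenGL sketch (single-channel noise stored as RGBA for simplicity):

        #include <GL/gl.h>
        #include <cstdlib>
        #include <vector>

        // Fill a small RGBA texture with random grey values once, at load time.
        // The shader then samples it (with an animated UV offset) instead of computing noise.
        GLuint createNoiseTexture(int size)
        {
            std::vector<unsigned char> pixels(size * size * 4);
            for (int i = 0; i < size * size; ++i) {
                unsigned char v = static_cast<unsigned char>(std::rand() % 256);
                pixels[i*4+0] = pixels[i*4+1] = pixels[i*4+2] = v;
                pixels[i*4+3] = 255;
            }

            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size, size, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
            return tex;
        }

    The upload happens once, so the recurring cost is one texture fetch per fragment; generating the noise in the shader trades that fetch for ALU work, which is why the better choice tends to depend on the target hardware.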

    Read the article

  • Entity System and rendering

    - by hayer
    Okay, what I know so far: the entity contains a component (data storage) which holds information like texture/sprite, shader, etc. And then I have a renderer system which draws all of this. But what I don't understand is how the renderer should be designed. Should I have one component for each "visual type": one component without a shader, one with a shader, etc.? I just need some input on what the "correct way" to do this is, plus tips and pitfalls to watch out for.
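    One way to avoid a component type per visual variant (names below are illustrative) is to keep a single data-only render component where the shader is just an optional field, and let the render system bucket by shader/texture when it draws:

        #include <optional>
        #include <string>
        #include <vector>

        // Data-only component: what to draw and with which material.
        struct RenderComponent
        {
            std::string textureName;
            std::optional<std::string> shaderName;   // empty = renderer's default shader
            // sprite rect, layer, tint, etc.
        };

        // The render system owns the "how": it sorts by material and issues the draw calls.
        class RenderSystem
        {
        public:
            void submit(const RenderComponent& rc) { queue_.push_back(&rc); }

            void drawAll()
            {
                // Sort or bucket by shaderName/textureName here to minimise state changes,
                // then bind the shader (or the default), bind the texture, and draw.
                for (const RenderComponent* rc : queue_) {
                    (void)rc; // ... bind & draw ...
                }
                queue_.clear();
            }

        private:
            std::vector<const RenderComponent*> queue_;
        };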

    Read the article

  • How does the Ubuntu cloud version enforce the "no root login" over ssh?

    - by Maxim Veksler
    Hello, I'm looking to tweak the Ubuntu cloud version's default setup, which denies root login. Attempting to connect to such a machine yields: maxim@maxim-desktop:~/workspace/integration/deployengine$ ssh [email protected] The authenticity of host 'ec2-204-236-252-95.compute-1.amazonaws.com (204.236.252.95)' can't be established. RSA key fingerprint is 3f:96:f4:b3:b9:4b:4f:21:5f:00:38:2a:bb:41:19:1a. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ec2-204-236-252-95.compute-1.amazonaws.com' (RSA) to the list of known hosts. Please login as the ubuntu user rather than root user. Connection to ec2-204-236-252-95.compute-1.amazonaws.com closed. I would like to know where this is set up and how I can change the printed message. Thank you, Maxim.

    Read the article

  • WebGL: adding projection doesn't display object

    - by dazed3confused
    I am having a look at web gl, and trying to render a cube, but I am having a problem when I try to add projection into the vertex shader. I have added an attribute, but when I use it to multiple the modelview and position, it stops displaying the cube. Im not sure why and was wondering if anyone could help? Ive tried looking at a few examples but just cant get this to work vertex shader attribute vec3 aVertexPosition; uniform mat4 uMVMatrix; uniform mat4 uPMatrix; void main(void) { gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0); //gl_Position = uMVMatrix * vec4(aVertexPosition, 1.0); } fragment shader #ifdef GL_ES precision highp float; // Not sure why this is required, need to google it #endif uniform vec4 uColor; void main() { gl_FragColor = uColor; } function init() { // Get a reference to our drawing surface canvas = document.getElementById("webglSurface"); gl = canvas.getContext("experimental-webgl"); /** Create our simple program **/ // Get our shaders var v = document.getElementById("vertexShader").firstChild.nodeValue; var f = document.getElementById("fragmentShader").firstChild.nodeValue; // Compile vertex shader var vs = gl.createShader(gl.VERTEX_SHADER); gl.shaderSource(vs, v); gl.compileShader(vs); // Compile fragment shader var fs = gl.createShader(gl.FRAGMENT_SHADER); gl.shaderSource(fs, f); gl.compileShader(fs); // Create program and attach shaders program = gl.createProgram(); gl.attachShader(program, vs); gl.attachShader(program, fs); gl.linkProgram(program); // Some debug code to check for shader compile errors and log them to console if (!gl.getShaderParameter(vs, gl.COMPILE_STATUS)) console.log(gl.getShaderInfoLog(vs)); if (!gl.getShaderParameter(fs, gl.COMPILE_STATUS)) console.log(gl.getShaderInfoLog(fs)); if (!gl.getProgramParameter(program, gl.LINK_STATUS)) console.log(gl.getProgramInfoLog(program)); /* Create some simple VBOs*/ // Vertices for a cube var vertices = new Float32Array([ -0.5, 0.5, 0.5, // 0 -0.5, -0.5, 0.5, // 1 0.5, 0.5, 0.5, // 2 0.5, -0.5, 0.5, // 3 -0.5, 0.5, -0.5, // 4 -0.5, -0.5, -0.5, // 5 -0.5, 0.5, -0.5, // 6 -0.5,-0.5, -0.5 // 7 ]); // Indices of the cube var indicies = new Int16Array([ 0, 1, 2, 1, 2, 3, // front 5, 4, 6, 5, 6, 7, // back 0, 1, 5, 0, 5, 4, // left 2, 3, 6, 6, 3, 7, // right 0, 4, 2, 4, 2, 6, // top 5, 3, 1, 5, 3, 7 // bottom ]); // create vertices object on the GPU vbo = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, vbo); gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW); // Create indicies object on th GPU ibo = gl.createBuffer(); gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo); gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indicies, gl.STATIC_DRAW); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.enable(gl.DEPTH_TEST); // Render scene every 33 milliseconds setInterval(render, 33); } var mvMatrix = mat4.create(); var pMatrix = mat4.create(); function render() { // Set our viewport and clear it before we render gl.viewport(0, 0, canvas.width, canvas.height); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); gl.useProgram(program); // Bind appropriate VBOs gl.bindBuffer(gl.ARRAY_BUFFER, vbo); gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo); // Set the color for the fragment shader program.uColor = gl.getUniformLocation(program, "uColor"); gl.uniform4fv(program.uColor, [0.3, 0.3, 0.3, 1.0]); // // code.google.com/p/glmatrix/wiki/Usage program.uPMatrix = gl.getUniformLocation(program, "uPMatrix"); program.uMVMatrix = gl.getUniformLocation(program, "uMVMatrix"); mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 
1.0, 10.0, pMatrix); mat4.identity(mvMatrix); mat4.translate(mvMatrix, [0.0, -0.25, -1.0]); gl.uniformMatrix4fv(program.uPMatrix, false, pMatrix); gl.uniformMatrix4fv(program.uMVMatrix, false, mvMatrix); // Set the position for the vertex shader program.aVertexPosition = gl.getAttribLocation(program, "aVertexPosition"); gl.enableVertexAttribArray(program.aVertexPosition); gl.vertexAttribPointer(program.aVertexPosition, 3, gl.FLOAT, false, 3*4, 0); // position // Render the Object gl.drawElements(gl.TRIANGLES, 36, gl.UNSIGNED_SHORT, 0); } Thanks in advance for any help

    Read the article

  • Build config file into executable?

    - by REM
    I am currently working on a little graphics demo (using DirectX) which is primarily based around an HLSL shader I am working on. Using D3DX10CreateEffectFromFile, I am loading (and compiling) the shader at runtime, as I find that easier for tweaking. However, once I am done I'd like to do some combination of the following: pre-compile the shader so the demo starts up faster for the user, and bury (compile into the executable) the compiled shader (or maybe just the source if necessary). Primarily, I want to do this because I want the demo to be just one file that can be very easily copied around. One thing I could easily do is put the source text right into a .cpp, but that would be very tedious if I needed to update it later. Is it possible to do something like this (using Visual Studio, DirectX, HLSL)?
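    One route that covers both points is to compile the .fx offline (for example with the DirectX SDK's fxc compiler) and embed the resulting bytecode in the executable as a byte array, then create the effect from memory at startup. A sketch; the array contents are placeholders and the exact creation call should be verified against the d3d10 headers:

        #include <d3d10.h>

        // Bytecode produced offline by compiling TransformedTexture.fx; the contents below
        // are placeholders, not real compiled effect bytes.
        static const BYTE g_TransformedTextureFx[] = { /* ...compiled effect bytes... */ 0 };

        bool createEffectFromEmbeddedBlob(ID3D10Device* device, ID3D10Effect** outEffect)
        {
            // No file I/O and no runtime HLSL compile: the blob ships inside the .exe.
            HRESULT hr = D3D10CreateEffectFromMemory(
                (void*)g_TransformedTextureFx, sizeof(g_TransformedTextureFx),
                0,          // FXFlags
                device,
                NULL,       // no effect pool
                outEffect);
            return SUCCEEDED(hr);
        }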

    Read the article

  • New DataCenter Options for Windows Azure

    - by ScottKlein
    Effective immediately, new compute and storage resource options are now available when selecting data center options in the Windows Azure Portal. "West US" and "East US" options are now available for Compute and Storage. SQL Azure options for these two data centers will be available in the next few months. The official announcement can be found here. In terms of geo-replication: US East and West are paired together for Windows Azure Storage geo-replication, and US North and South are paired together for Windows Azure Storage geo-replication. These two new data centers are now visible in the Windows Azure Management Portal, effective immediately. Compute and Storage pricing remains the same across all data centers. Get started with Windows Azure through the free 90-day trial.

    Read the article

  • Running Mixed Physical and Virtual Exalogic Elastic Cloud Software Versions in an Exalogic Rack is now Supported

    - by csoto
    Although it was not supported on older versions, as of EECS 2.0.6 an Exalogic rack can be configured in a mixed mode: half virtual and half physical Linux. This gives you the flexibility to have physical and virtual environments on the same rack, for example production on physical and test/dev on virtual. Exalogic Control manages the virtual compute nodes on the rack; physical compute nodes are managed manually (including PKeys). There is an option to change a full physical rack to hybrid, and a hybrid rack to full virtual, and the user can choose either the top or bottom nodes for physical or virtual deployment. For further information about how the compute nodes can be split up on the rack (into the bottom or top half) to run either Oracle Virtual Server (OVS "hypervisor") or Oracle Linux, please take a look at MOS Note 1536945.1. Note: Solaris is not yet supported in the mixed configuration.

    Read the article
