Search Results

Search found 1725 results on 69 pages for 'compute shader'.

Page 5/69 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How can I go about learning to write a shader?

    - by Donutz
    So here's the background: I'm writing a game, just for my own amusement and education really. I've already come to the conclusion that XNA was the way to go for graphics, I've bought a couple of books, I've gotten basic game graphics going, and that's great. Now I'm starting to get a little in-depth and I'm starting to need to do stuff not covered in my (beginner) books. In particular, I need to display a sprite using a mask. Actually, what I need to do is display a generic sprite with a different color for each player. After banging around on the web, it seems the way to go is to have a color texture (one for each player) which I display using the mask, then display the generic part of the sprite. This has to be done dynamically, i.e. at runtime because there are too many sprites to keep in memory if I try to generate all the permutations at startup. So, I need to use a shader. Fine. I've downloaded a sample shader program, and managed to hit it with a hammer until it does something close enough to what I want so that I know I'm on the right track. And here, we come to my problem... I have no friggin' clue what I'm doing. While there are a lot of samples and such about shaders, no one ever actually explains what's going on. For instance, I can't find any real docs on Tex2D. I feel like the guys in Zoolander poking at the computer. So, my question (yes, I have a question) -- where is a good URL or what is a good book to take me from dumskie to reasonably competent to write a basic shader?

    Read the article

  • Oracle Virtual Compute Appliance Launch Channel Update Webcast - May 28

    - by Roxana Babiciu
    Join us for an Oracle Virtual Compute Appliance (OVCA) launch update for the channel. This training webcast is a follow-up to the OVCA launch on April 16. We will provide a brief product overview of OVCA followed by some great OPN program content, resell criteria, OPN Incentive Program and Demo Equipment Program details. There will be two sessions to accommodate each region. Additionally, don’t miss the latest Oracle Virtual Compute Appliance article packed with great information!

    Read the article

  • XNA 4.0 - Normal mapping shader - strange texture artifacts

    - by Taylor
    I recently started using custom shader. Shader can do diffuse and specular lighting and normal mapping. But normal mapping is causing really ugly artifacts (some sort of pixeling noise) for textures in greater distance. It looks like this: Image link This is HLSL code: // Matrix float4x4 World : World; float4x4 View : View; float4x4 Projection : Projection; //Textury texture2D ColorMap; sampler2D ColorMapSampler = sampler_state { Texture = <ColorMap>; MinFilter = Anisotropic; MagFilter = Linear; MipFilter = Linear; MaxAnisotropy = 16; }; texture2D NormalMap; sampler2D NormalMapSampler = sampler_state { Texture = <NormalMap>; MinFilter = Anisotropic; MagFilter = Linear; MipFilter = Linear; MaxAnisotropy = 16; }; // Light float4 AmbientColor : Color; float AmbientIntensity; float3 DiffuseDirection : LightPosition; float4 DiffuseColor : Color; float DiffuseIntensity; float4 SpecularColor : Color; float3 CameraPosition : CameraPosition; float Shininess; // The input for the VertexShader struct VertexShaderInput { float4 Position : POSITION0; float2 TexCoord : TEXCOORD0; float3 Normal : NORMAL0; float3 Binormal : BINORMAL0; float3 Tangent : TANGENT0; }; // The output from the vertex shader, used for later processing struct VertexShaderOutput { float4 Position : POSITION0; float2 TexCoord : TEXCOORD0; float3 View : TEXCOORD1; float3x3 WorldToTangentSpace : TEXCOORD2; }; // The VertexShader. VertexShaderOutput VertexShaderFunction(VertexShaderInput input, float3 Normal : NORMAL) { VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4 viewPosition = mul(worldPosition, View); output.Position = mul(viewPosition, Projection); output.TexCoord = input.TexCoord; output.WorldToTangentSpace[0] = mul(normalize(input.Tangent), World); output.WorldToTangentSpace[1] = mul(normalize(input.Binormal), World); output.WorldToTangentSpace[2] = mul(normalize(input.Normal), World); output.View = normalize(float4(CameraPosition,1.0) - worldPosition); return output; } // The Pixel Shader float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float4 color = tex2D(ColorMapSampler, input.TexCoord); float3 normalMap = 2.0 *(tex2D(NormalMapSampler, input.TexCoord)) - 1.0; normalMap = normalize(mul(normalMap, input.WorldToTangentSpace)); float4 normal = float4(normalMap,1.0); float4 diffuse = saturate(dot(-DiffuseDirection,normal)); float4 reflect = normalize(2*diffuse*normal-float4(DiffuseDirection,1.0)); float4 specular = pow(saturate(dot(reflect,input.View)), Shininess); return color * AmbientColor * AmbientIntensity + color * DiffuseIntensity * DiffuseColor * diffuse + color * SpecularColor * specular; } // Techniques technique Lighting { pass Pass1 { VertexShader = compile vs_2_0 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } Any advice? Thanks!

    Read the article

  • Encode two integers into colour values and compare them in a HLSL shader

    - by Ben Slinger
    I am writing a 2D point and click adventure game in Monogame, and I'd like to be able to create an image mask for every room which defines which parts of the background a character can walk behind, and at which Y value a character needs to be at for the background to be drawn above the character. I haven't done any shader work before but after doing some reading I thought the following solution should work: Create a mask for the room with different walk behind areas painted in a colour that defines the baseline Y value (Walk Behind Mask) Render all objects to a RenderTarget2D (Base Texture) Render all objects to a different RenderTarget2D, but changing every pixel of each object to a colour that defines its Y value (Position Mask) Pass these two textures plus the image mask into the shader, and for each pixel compare the colour of the Position Mask to the colour of the Walk Behind Mask - if the Position Mask pixel is larger (thus lower on the screen and closer to the camera) than the Walk Behind Mask, draw the pixel from the Base Texture, otherwise draw a transparent pixel (allowing the background to show through). I've got it mostly working, but I'm having trouble packing and unpacking the Y values into colours and retrieving them correctly in the shader. Here are some code examples of how I'm doing it so far: (When drawing to the Position Mask RenderTarget2D) Color posColor = new Color(((int)Position.Y >> 16) & 255, ((int)Position.Y >> 8) & 255, (int)Position.Y & 255); So as far as I can tell, this should be taking the first 3 bytes of the position integer and encoding them into a 4 byte colour (ignoring the alpha as the 4th byte). This seems to work fine, as when my character is at Y = 600, the resulting Color from this is: {[Color: R=0, G=2, B=88, A=255, PackedValue=4283957760]}. I then have an area in my Walk Behind Mask that I only want the character to be displayed behind if his Y value is lower than 655, so I've painted it with R=0, G=2, B=143, A=255. Now, I think I have the shader OK as well, here's what I have: sampler BaseTexture : register(s0); sampler MaskTexture : register(s1); sampler PositionTexture : register(s2); float4 mask( float2 coords : TEXCOORD0 ) : COLOR0 { float4 color = tex2D(BaseTexture, coords); float4 maskColor = tex2D(MaskTexture, coords); float4 positionColor = tex2D(PositionTexture, coords); float maskCompare = (maskColor.r * pow(2,24)) + (maskColor.g * pow(2,16)) + (maskColor.b * pow(2,8)); float positionCompare = (positionColor.r * pow(2,24)) + (positionColor.g * pow(2,16)) + (positionColor.b * pow(2,8)); return positionCompare < maskCompare ? float4(0,0,0,0) : color; } technique Technique1 { pass NoEffect { PixelShader = compile ps_3_0 mask(); } } This isn't working, however - currently all characters are displayed behind the walk behind area, regardless of their Y value. I tried printing out some debug info by grabbing the pixel from both the Position Mask and the Walk Behind Mask under the current mouse position, and it seems like maybe the colours aren't being rendered to the Position Mask correctly? When calculating the colour in that code above I'm getting R=0, G=2, B=88, A=255, but when I mouseover my character I get R=0, G=0, B=30, A=255. Any ideas what I'm doing wrong? It seems like maybe I'm losing some information when rendering to the RenderTarget2D, but I'm not knowledgeable enough to figure out what's happening. Also, I should probably ask, is this an efficient way to do this? Will there be a performance impact?
Edit: Whoops, turns out there was a bug that I'd introduced myself, I was drawing out the Position Mask with the position Color, left over from some early testing I was doing. So this solution is working perfectly, though I'm still interested in whether this is an efficient solution performance wise.
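
    For reference, a minimal C++ sketch of the pack/unpack round trip described above, assuming the Y value fits in 24 bits and each colour channel stores 8 bits (names are illustrative, not from the post):

        #include <cassert>
        #include <cstdint>

        // Pack a 24-bit integer into three 8-bit colour channels (R = high byte).
        struct PackedColor { uint8_t r, g, b; };

        PackedColor packY(uint32_t y) {
            return { uint8_t((y >> 16) & 255), uint8_t((y >> 8) & 255), uint8_t(y & 255) };
        }

        // Recover the integer; this mirrors the channel weighting used in the pixel shader.
        uint32_t unpackY(PackedColor c) {
            return (uint32_t(c.r) << 16) | (uint32_t(c.g) << 8) | uint32_t(c.b);
        }

        int main() {
            PackedColor p = packY(600);
            assert(p.r == 0 && p.g == 2 && p.b == 88);   // matches the post: R=0, G=2, B=88
            assert(unpackY(p) == 600);
            return 0;
        }

    Because tex2D returns channels normalised to 0..1, the shader-side compare only needs to apply the same weighting to both masks, which is what the pow(2, n) terms in the pixel shader above do; the comparison stays valid even though the absolute values differ from the packed integers.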

    Read the article

  • GLSL Shader Texture Performance

    - by Austin
    I currently have a project that renders OpenGL video using a vertex and fragment shader. The shaders work fine as-is, but in trying to add in texturing, I am running into performance issues and can't figure out why. Before adding texturing, my program ran just fine and loaded my CPU between 0%-4%. When adding texturing (specifically textures AND color -- noted by comment below), my CPU is 100% loaded. The only code I have added is the relevant texturing code to the shader, and the "glBindTexture()" calls to the rendering code. Here are my shaders and relevant rending code. Vertex Shader: #version 150 uniform mat4 mvMatrix; uniform mat4 mvpMatrix; uniform mat3 normalMatrix; uniform vec4 lightPosition; uniform float diffuseValue; layout(location = 0) in vec3 vertex; layout(location = 1) in vec3 color; layout(location = 2) in vec3 normal; layout(location = 3) in vec2 texCoord; smooth out VertData { vec3 color; vec3 normal; vec3 toLight; float diffuseValue; vec2 texCoord; } VertOut; void main(void) { gl_Position = mvpMatrix * vec4(vertex, 1.0); VertOut.normal = normalize(normalMatrix * normal); VertOut.toLight = normalize(vec3(mvMatrix * lightPosition - gl_Position)); VertOut.color = color; VertOut.diffuseValue = diffuseValue; VertOut.texCoord = texCoord; } Fragment Shader: #version 150 smooth in VertData { vec3 color; vec3 normal; vec3 toLight; float diffuseValue; vec2 texCoord; } VertIn; uniform sampler2D tex; layout(location = 0) out vec3 colorOut; void main(void) { float diffuseComp = max( dot(normalize(VertIn.normal), normalize(VertIn.toLight)) ), 0.0); vec4 color = texture2D(tex, VertIn.texCoord); colorOut = color.rgb * diffuseComp * VertIn.diffuseValue + color.rgb * (1 - VertIn.diffuseValue); // FOLLOWING LINE CAUSES PERFORMANCE ISSUES colorOut *= VertIn.color; } Relevant Rendering Code: // 3 textures have been successfully pre-loaded, and can be used // texture[0] is a 1x1 white texture to effectively turn off texturing glUseProgram(program); // Draw squares glBindTexture(GL_TEXTURE_2D, texture[1]); // Set attributes, uniforms, etc glDrawArrays(GL_QUADS, 0, 6*4); // Draw triangles glBindTexture(GL_TEXTURE_2D, texture[0]); // Set attributes, uniforms, etc glDrawArrays(GL_TRIANGLES, 0, 3*4); // Draw reference planes glBindTexture(GL_TEXTURE_2D, texture[0]); // Set attributes, uniforms, etc glDrawArrays(GL_LINES, 0, 4*81*2); // Draw terrain glBindTexture(GL_TEXTURE_2D, texture[2]); // Set attributes, uniforms, etc glDrawArrays(GL_TRIANGLES, 0, 501*501*6); // Release glBindTexture(GL_TEXTURE_2D, 0); glUseProgram(0); Any help is greatly appreciated!

    Read the article

  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images, including the background surface and the foreground display object's bitmap. The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I must mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updated, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendered. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient, which is why I am trying to do it through a blendShader so Flash determines when it needs rendered automatically. Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendered? The limitation is mentioned with no explanation in the last note in this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts.In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, use add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."

    Read the article

  • GLSL shader render to texture not saving alpha value

    - by quadelirus
    I am rendering to a texture using a GLSL shader and then sending that texture as input to a second shader. For the first texture I am using RGB channels to send color data to the second GLSL shader, but I want to use the alpha channel to send a floating point number that the second shader will use as part of its program. The problem is that when I read the texture in the second shader the alpha value is always 1.0. I tested this in the following way: at the end of the first shader I did this: gl_FragColor = vec4(r, g, b, 0.1); and then in the second shader I read the value of the first texture using something along the lines of vec4 f = texture2D(previous_tex, pos); if (f.a != 1.0) { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); return; } No pixels in my output are black, whereas if I change the above code to read gl_FragColor = vec4(r, g, 0.1, 1.0); //Notice I'm now sending 0.1 for blue and in the second shader vec4 f = texture2D(previous_tex, pos); if (f.b != 1.0) { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); return; } All the appropriate pixels are black. This means that for some reason when I set the alpha value to something other than 1.0 in the first shader and render to a texture, it is still seen as being 1.0 by the second shader. Before I render to texture I glDisable(GL_BLEND); It seems pretty clear to me that the problem has to do with OpenGL handling alpha values in some way that isn't obvious to me since I can use the blue channel in the way I want, and I figured someone out there will instantly see the problem.
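
    As a point of comparison, here is a minimal C++ sketch of creating the first pass's colour attachment with an internal format that actually stores alpha (GL_RGBA8 is an assumption about the desired precision, and the names are placeholders). If the attachment is created with an RGB-only internal format, anything the first shader writes to gl_FragColor.a is discarded before the second shader ever samples it:

        #include <GL/glew.h>   // assumes an extension loader such as GLEW is already initialised

        // Create an RGBA texture and attach it to a framebuffer object as the render target.
        GLuint makeRGBATarget(int width, int height, GLuint& fboOut) {
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

            glGenFramebuffers(1, &fboOut);
            glBindFramebuffer(GL_FRAMEBUFFER, fboOut);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex, 0);

            // Blending and the colour mask can also clobber the alpha the shader writes.
            glDisable(GL_BLEND);
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
            return tex;
        }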

    Read the article

  • How do I get the current color of a fragment?

    - by Mason Wheeler
    I'm trying to wrap my head around shaders in GLSL, and I've found some useful resources and tutorials, but I keep running into a wall for something that ought to be fundamental and trivial: how does my fragment shader retrieve the color of the current fragment? You set the final color by saying gl_FragColor = whatever, but apparently that's an output-only value. How do you get the original color of the input so you can perform calculations on it? That's got to be in a variable somewhere, but if anyone out there knows its name, they don't seem to have recorded it in any tutorial or documentation that I've run across so far, and it's driving me up the wall.

    Read the article

  • Strange ATI vs Nvidia TRIANGLE_STRIP issue

    - by chriscisco
    I have this code, which I am using as a test for the Engine I am working on. On my NVIDIA NVS 4200M it displays the GL_TRIANGLE_STRIP as expected. On my ATI Radeon 5800 it appears to draw a triangle. shader.begin(); Matrix4<float> temp = getActiveCamera()->getProjectionMatrix() * getActiveCamera()->getObjectToWorld().fastInverse(); glUniformMatrix4fv(shader["mvp"], 1, GL_TRUE, temp.getArray()); glBegin(GL_TRIANGLE_STRIP); glVertexAttrib3f(shader["colour"],0,1,0); glVertexAttrib3f(shader["coord3d"],-.5,-.5,0); glVertexAttrib3f(shader["colour"],1,1,0); glVertexAttrib3f(shader["coord3d"],0.5,-.5,0); glVertexAttrib3f(shader["colour"],1,0,1); glVertexAttrib3f(shader["coord3d"],-.5,.5,0); glVertexAttrib3f(shader["colour"],0,1,1); glVertexAttrib3f(shader["coord3d"],.5,.5,0); glEnd(); shader.end(); Here is what it actually looks like on my two computers. https://www.dropbox.com/s/sgm2j978tx2ipnp/not%20working.png https://www.dropbox.com/s/27idv0b8k0p4pcx/working.png

    Read the article

  • Fragment shader seems to floor() imprecisely

    - by Peter K.
    I'm trying to interpolate coordinates in my fragment shader. Unfortunately, if close to the upper edge, the interpolated value of fVertexInteger seems to be rounded up instead of being floored. This happens above approximately fVertexInteger >= x.97. Example: floor(64.7) returns 64.0 -- correct floor(64.98) returns 65.0 -- incorrect The same happens on ceiling close above x.0, where ceil(65.02) returns 65.0 instead of 66.0. Q: Any ideas how to solve this? Note: GL ES 2.0 with GLSL 1.0; highp floats are not supported in fragment shaders on my hardware; flat varying hasn't been a solution, because I'm drawing TRIANGLE_STRIP and can't redeclare the provoking vertex (only OpenGL 3.2+) Fragment Shader: varying float fVertexInteger; varying float fVertexFraction; void main() { // Fix vertex integer float fixedVertexInteger = floor(fVertexInteger); // Fragment color gl_FragColor = vec4( fixedVertexInteger / 65025.0, fract(fixedVertexInteger / 255.0), fVertexFraction, 1.0 ); }

    Read the article

  • Bloom shader makes it impossible to render black?

    - by Mathias Lykkegaard Lorenzen
    I am playing around with the bloom shader from the XNA sample page, to do some glow shading. I am rendering primitive vector-ish squares of linelists/linestrips, on a background. However, I am facing a few problems. With a black background and white squares, I can actually see the squares. However, with a white background and black squares, I can't see them at all. Why is this happening, and is there any way of me fixing it? Can I modify my bloom shader to also "glow" dark elements, if that's what is causing it?

    Read the article

  • OpenGL ES Basic Fragment Shader help with transparency

    - by Chris
    I have just spent my first half hour playing with the shader language. I have modified the basic program I have which renders the texture, to allow me to colour the texture. varying vec2 texCoord; uniform sampler2D texSampler; /* Given the texture coordinates, our pixel shader grabs the corresponding * color from the texture. */ void main() { //gl_FragColor = texture2D(texSampler, texCoord); gl_FragColor = vec4(0,1,0,1)*vec4(texture2D(texSampler,texCoord).xyz,1); } I have noticed how this affects my transparent textures, and I believe I am losing the alpha channel which would explain why previously transparent areas appear totally black. If I use the following line instead, I am shown the transparent areas gl_FragColor = vec4(0,1,0,1)*vec4(texture2D(texSampler,texCoord).aaa,1); How can I retain the transparency after this modification to the colour? I have seen various things about a .w property, and also luminous, but my tweaks with those and the .aaa property are not working XD

    Read the article

  • Heightmap in Shader not working

    - by CSharkVisibleBasix
    I'm trying to implement GPU based geometry clipmapping and have problems to apply a simple heightmap to my terrain. For the heightmap I use a simple texture with the surface format "single". I've taken the texture from here. To apply it to my terrain, I use the following shader code: texture Heightmap; sampler2D HeightmapSampler = sampler_state { Texture = <Heightmap>; MinFilter = Point; MagFilter = Point; MipFilter = Point; AddressU = Mirror; AddressV = Mirror; }; Vertex Shader: float4 worldPos = mul(float4(pos.x,0.0f,pos.y, 1.0f), worldMatrix); float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x, worldPos.z,0,0)); worldPos.y = elevation * 128; The complete vertex shader (also containig clipmapping transforms) looks like this: float4x4 View : View; float4x4 Projection : Projection; float3 CameraPos : CameraPosition; float LODscale; //The LOD ring index 0:highest x:lowest float2 Position; //The position of the part in the world texture Heightmap; sampler2D HeightmapSampler = sampler_state { Texture = <Heightmap>; MinFilter = Point; MagFilter = Point; MipFilter = Point; AddressU = Mirror; AddressV = Mirror; }; //Accept only float2 because we extract 3rd dim out of Heightmap float4 wireframe_VS(float2 pos : POSITION) : POSITION{ float4x4 worldMatrix = float4x4( LODscale, 0, 0, 0, 0, LODscale, 0, 0, 0, 0, LODscale, 0, - Position.x * 64 * LODscale + CameraPos.x, 0, Position.y * 64 * LODscale + CameraPos.z, 1); float4 worldPos = mul(float4(pos.x,0.0f,pos.y, 1.0f), worldMatrix); float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x, worldPos.z,0,0)); worldPos.y = elevation * 128; float4 viewPos = mul(worldPos, View); float4 projPos = mul(viewPos, Projection); return projPos; }

    Read the article

  • 2D shader to draw representation of rotating sphere.

    - by TheBigO
    I want to display a 3D textured sphere, and then rotate it in one direction. The direction will never change, and the camera will never move. One way is to actually create a spherical mesh, map a texture to it, rotate the sphere, and render in 3D. My question is: is there a way to display a 2D circle that looks like a rotating sphere, with just a 2D shader? In other words, can someone think of a trick, like mapping a texture to the circle in a particular way, to give the appearance of an in-place rotating sphere, that is always viewed from the side? I don't need exact shader code, I'm just looking for the right idea.

    Read the article

  • Writing to a structured buffer with a compute shader (D3D11)

    - by Vertexwahn
    I have some problems writing to a structured buffer. First I create a structured buffer that is filled with float values from 0 to 99. Afterwards, a copy of the structured buffer to a CPU-accessible buffer is made to print the content of the structured buffer to the console. The output is as expected (the numbers 0 to 99 appear on the console). Afterwards I use a compute shader that should change the contents of the structured buffer: RWStructuredBuffer<float> Result : register( u0 ); [numthreads(1, 1, 1)] void CS_main( uint3 GroupId : SV_GroupID ) { Result[GroupId.x] = GroupId.x * 10; } But the compute shader does not change the contents of the structured buffer. The source code can be found here (main.cpp): https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/main.cpp?at=default FillCS.hlsl: https://bitbucket.org/Vertexwahn/cmakedemos/src/4abb067afd5781b87a553c4c720956668adca22a/D3D11ComputeShader/src/FillCS.hlsl?at=default
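
    For reference, a minimal C++/D3D11 sketch of the host-side calls that typically drive a compute shader like CS_main: bind the UAV to u0, dispatch one thread group per element, then copy into a staging buffer so the CPU can read the results. The object names are placeholders and the resource creation is assumed to match the linked main.cpp:

        #include <d3d11.h>

        void runFillCS(ID3D11DeviceContext* context,
                       ID3D11ComputeShader* fillCS,
                       ID3D11UnorderedAccessView* uav,
                       ID3D11Buffer* structuredBuffer,
                       ID3D11Buffer* stagingBuffer,
                       UINT elementCount)
        {
            context->CSSetShader(fillCS, nullptr, 0);
            context->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);   // register u0

            // One group per element, matching [numthreads(1, 1, 1)] in FillCS.hlsl.
            context->Dispatch(elementCount, 1, 1);

            // Unbind the UAV so the buffer can be copied afterwards.
            ID3D11UnorderedAccessView* nullUAV = nullptr;
            context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);

            // Read the results back through a CPU-accessible staging copy.
            context->CopyResource(stagingBuffer, structuredBuffer);
            D3D11_MAPPED_SUBRESOURCE mapped = {};
            if (SUCCEEDED(context->Map(stagingBuffer, 0, D3D11_MAP_READ, 0, &mapped))) {
                const float* values = static_cast<const float*>(mapped.pData);
                // ... inspect values[0] .. values[elementCount - 1] ...
                context->Unmap(stagingBuffer, 0);
            }
        }

    If any of these steps is skipped, in particular binding the UAV before Dispatch, the writes in CS_main silently go nowhere and the buffer keeps its original contents.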

    Read the article

  • Outline Shader Effect for Orthogonal Geometry in XNA

    - by Griffin
    I just recently started learning the art of shading, but I can't give an outline width to 2D, concave geometry when restrained to a single vertex/pixel shader technique (thanks to XNA). The shape I need to give an outline to has smooth, per-vertex coloring, as well as opacity. The outline, which has smooth, per-vertex coloring, variable width, and opacity, cannot interfere with the original shape's colors. A pixel depth border detection algorithm won't work because pixel depth isn't a 3.0 semantic. Expanding geometry / redrawing won't work because it interferes with the original shape's colors. I'm wondering if I can do something with the stencil/depth buffer outside of the shader functions since I have access to that through the graphics device. But I don't believe I'm able to manipulate actual values. How might I do this?

    Read the article

  • GLSL, all in one or many shader programs?

    - by stjepano
    I am doing some 3D demos using OpenGL and I noticed that GLSL is somewhat "limited" (or is it just me?). Anyway I have many different types of materials. Some materials have ambient and diffuse color, some materials have ambient occlusion map, some have specular map and bump map etc. Is it better to support everything in one vertex/fragment shader pair or is it better to create many vertex/fragment shaders and select them based on currently selected material? What is the usual shader strategy in OpenGL or D3D?
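
    If the single-source route is chosen, one common pattern is to compile a small set of variants from the same file by prepending #define lines, and then pick the linked program per material. A rough C++ sketch of the idea (the define names, material flags and caching scheme are made up for illustration, and error checking is omitted):

        #include <GL/glew.h>
        #include <string>
        #include <unordered_map>

        // Compile one fragment shader source into a variant by prepending #defines.
        GLuint compileFragmentVariant(const std::string& defines, const char* commonSource) {
            std::string header = "#version 330 core\n" + defines;   // e.g. "#define USE_NORMAL_MAP\n"
            const char* sources[2] = { header.c_str(), commonSource };
            GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(shader, 2, sources, nullptr);
            glCompileShader(shader);
            return shader;
        }

        // Hypothetical material flags; one linked program is cached per combination.
        enum MaterialFlags { USE_SPECULAR_MAP = 1, USE_NORMAL_MAP = 2, USE_AO_MAP = 4 };
        std::unordered_map<unsigned, GLuint> programCache;   // built at load time

        void drawWithMaterial(unsigned flags) {
            glUseProgram(programCache[flags]);
            // ... set uniforms, bind textures, issue the draw call ...
        }

    The usual trade-off: one uber-shader minimises program switches but pays for branches and unused features, per-material programs cost more switches, and compiling variants up front sits in between.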

    Read the article

  • Using a texture as an integer array (OpenGL 3.3, shader version 3.3)

    - by Cubic
    I'm trying to have something like an integer array uniform for my fragment shader (I only need read access). It's a fairly large chunk of data (not so large that uploading it in every frame would be impossible, but enough to make me rather not want to do it), so essentially I want to just pass the shader a uniform telling it where this "array" is. I believe I can use a 1D texture for this, but I don't know how (actually, I don't know how to do many things because I just can't seem to find a reference for GLSL 3.3, I only ever find references for the C API). This sounds like a rather basic question and I'm sure it's been answered already somewhere, but I keep searching and can't quite find what I'm looking for.
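
    Since the post mentions only finding references for the C API, here is a rough sketch of the upload side in C++ for OpenGL 3.3, assuming the data really is 32-bit integers (names are placeholders). On the GLSL 3.30 side the matching declaration would be a uniform isampler1D, read with texelFetch(dataTex, index, 0).r, which is an exact, unfiltered lookup:

        #include <GL/glew.h>   // assumes an extension loader such as GLEW is already initialised
        #include <cstdint>
        #include <vector>

        // Upload a read-only int array as a 1D integer texture.
        GLuint uploadIntArray(const std::vector<int32_t>& data) {
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_1D, tex);
            // GL_R32I + GL_RED_INTEGER keeps the values as integers instead of normalising them.
            glTexImage1D(GL_TEXTURE_1D, 0, GL_R32I, (GLsizei)data.size(), 0,
                         GL_RED_INTEGER, GL_INT, data.data());
            // Integer textures must use NEAREST filtering.
            glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            return tex;
        }

        // Tell the shader which texture unit holds the "array".
        void bindIntArray(GLuint program, GLuint tex, GLint unit) {
            glUseProgram(program);
            glActiveTexture(GL_TEXTURE0 + unit);
            glBindTexture(GL_TEXTURE_1D, tex);
            glUniform1i(glGetUniformLocation(program, "dataTex"), unit);
        }

    The width of a 1D texture is capped by GL_MAX_TEXTURE_SIZE, so for very large arrays a buffer texture (samplerBuffer, also core in 3.3) is the usual alternative.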

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

    Read the article

  • Quickly compute added and removed lines

    - by Philippe Marschall
    I'm trying to compare two text files. I want to compute how many lines were added and removed. Basically what git diff --stat is doing. Bonus points for not having to store the entire file contents in memory. The approach I currently have in mind is: read each line of the old file compute a hash (probably MD5 or SHA-1) for each line store the hashes in a set do the same for each line in the new file every hash from the old file set that's missing in the new file set was removed every hash from the new file set that's missing in the old file set was added I'll probably want to exclude empty and all-whitespace lines. There is a small issue with duplicated lines. This can either be solved by additionally storing how often a hash appears or comparing the number of lines in the old and new file and adjusting either the added or removed lines so that the numbers add up. Do you see room for improvements or a better approach?
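
    A compact C++ sketch of that approach, with std::hash standing in for MD5/SHA-1 and a count kept per hash so duplicated lines are handled; only the hashes stay in memory, never the file contents:

        #include <fstream>
        #include <functional>
        #include <iostream>
        #include <string>
        #include <unordered_map>

        // Hash every non-blank line of a file and count how often each hash occurs.
        std::unordered_map<std::size_t, long> hashLineCounts(const std::string& path) {
            std::unordered_map<std::size_t, long> counts;
            std::ifstream in(path);
            std::string line;
            std::hash<std::string> hasher;
            while (std::getline(in, line)) {
                if (line.find_first_not_of(" \t\r") == std::string::npos)
                    continue;                       // skip empty / whitespace-only lines
                ++counts[hasher(line)];
            }
            return counts;
        }

        int main(int argc, char** argv) {
            if (argc < 3) { std::cerr << "usage: diffstat OLD NEW\n"; return 1; }
            auto oldCounts = hashLineCounts(argv[1]);
            auto newCounts = hashLineCounts(argv[2]);

            long added = 0, removed = 0;
            for (const auto& [h, n] : oldCounts) {   // seen more often in old than in new -> removed
                auto it = newCounts.find(h);
                long inNew = (it == newCounts.end()) ? 0 : it->second;
                if (n > inNew) removed += n - inNew;
            }
            for (const auto& [h, n] : newCounts) {   // seen more often in new than in old -> added
                auto it = oldCounts.find(h);
                long inOld = (it == oldCounts.end()) ? 0 : it->second;
                if (n > inOld) added += n - inOld;
            }
            std::cout << "added: " << added << ", removed: " << removed << "\n";
            return 0;
        }

    As with git diff --stat, a changed line simply counts as one removal plus one addition, and a 64-bit std::hash collision would at worst miscount by a line or two.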

    Read the article

  • FREE Windows Azure Platform Compute and Storage through the Cloud Essentials Pack for Partners

    - by Eric Nelson
    It can be difficult to find something to look forward to in January – but this year it was a little easier as a) I got lots of great Xbox 360 games and b) the Windows Azure Platform element of the Cloud Essentials Pack for Microsoft Partner Network partners went live. I have previously explained what the Cloud Essentials Pack is and how you can access it – but at the time I couldn’t share the details of the Windows Azure Platform element. The Windows Azure Platform element is now available. It gives you each month, for FREE: Windows Azure: 750 hours of extra small compute instance 25 hours of small compute instance 3GB of storage and 250,000 storage transactions SQL Azure: 1 SQL Azure Web Edition database (5GB) Windows Azure AppFabric: App Fabric with 100,000 Access Control transactions and 2 Service Bus connections Plus: Data Transfer: 3GB in and 6GB out (More details of the offer) To activate this offer you need to: Sign your company up to Microsoft Platform Ready (NB: there are other routes to get this benefit – but I know about MPR) Read about Microsoft Platform Ready Visit http://www.microsoftcloudpartner.com/ and sign up.

    Read the article

  • My vertex shader doesn't affect texture coords or diffuse info but works for position

    - by tina nyaa
    I am new to 3D and DirectX - in the past I have only used abstractions for 2D drawing. Over the past month I've been studying really hard and I'm trying to modify and adapt some of the shaders as part of my personal 'study project'. Below I have a shader, modified from one of the Microsoft samples. I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? // // Skinned Mesh Effect file // Copyright (c) 2000-2002 Microsoft Corporation. All rights reserved. // float4 lhtDir = {0.0f, 0.0f, -1.0f, 1.0f}; //light Direction float4 lightDiffuse = {0.6f, 0.6f, 0.6f, 1.0f}; // Light Diffuse float4 MaterialAmbient : MATERIALAMBIENT = {0.1f, 0.1f, 0.1f, 1.0f}; float4 MaterialDiffuse : MATERIALDIFFUSE = {0.8f, 0.8f, 0.8f, 1.0f}; // Matrix Pallette static const int MAX_MATRICES = 100; float4x3 mWorldMatrixArray[MAX_MATRICES] : WORLDMATRIXARRAY; float4x4 mViewProj : VIEWPROJECTION; /////////////////////////////////////////////////////// struct VS_INPUT { float4 Pos : POSITION; float4 BlendWeights : BLENDWEIGHT; float4 BlendIndices : BLENDINDICES; float3 Normal : NORMAL; float3 Tex0 : TEXCOORD0; }; struct VS_OUTPUT { float4 Pos : POSITION; float4 Diffuse : COLOR; float2 Tex0 : TEXCOORD0; }; float3 Diffuse(float3 Normal) { float CosTheta; // N.L Clamped CosTheta = max(0.0f, dot(Normal, lhtDir.xyz)); // propogate scalar result to vector return (CosTheta); } VS_OUTPUT VShade(VS_INPUT i, uniform int NumBones) { VS_OUTPUT o; float3 Pos = 0.0f; float3 Normal = 0.0f; float LastWeight = 0.0f; // Compensate for lack of UBYTE4 on Geforce3 int4 IndexVector = D3DCOLORtoUBYTE4(i.BlendIndices); // cast the vectors to arrays for use in the for loop below float BlendWeightsArray[4] = (float[4])i.BlendWeights; int IndexArray[4] = (int[4])IndexVector; // calculate the pos/normal using the "normal" weights // and accumulate the weights to calculate the last weight for (int iBone = 0; iBone < NumBones-1; iBone++) { LastWeight = LastWeight + BlendWeightsArray[iBone]; Pos += mul(i.Pos, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; Normal += mul(i.Normal, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; } LastWeight = 1.0f - LastWeight; // Now that we have the calculated weight, add in the final influence Pos += (mul(i.Pos, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); Normal += (mul(i.Normal, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); // transform position from world space into view and then projection space //o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Diffuse.x = 0.0f; o.Diffuse.y = 0.0f; o.Diffuse.z = 0.0f; o.Diffuse.w = 0.0f; o.Tex0 = float2(0,0); return o; } technique t0 { pass p0 { VertexShader = compile vs_3_0 VShade(4); } } I am currently using the SlimDX .NET wrapper around DirectX, but the API is extremely similar: public void Draw() { var device = vertexBuffer.Device; device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0); device.SetRenderState(RenderState.Lighting, true); device.SetRenderState(RenderState.DitherEnable, true); device.SetRenderState(RenderState.ZEnable, true); device.SetRenderState(RenderState.CullMode, Cull.Counterclockwise); device.SetRenderState(RenderState.NormalizeNormals, true); device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Anisotropic); 
device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Anisotropic); device.SetTransform(TransformState.World, Matrix.Identity * Matrix.Translation(0, -50, 0)); device.SetTransform(TransformState.View, Matrix.LookAtLH(new Vector3(-200, 0, 0), Vector3.Zero, Vector3.UnitY)); device.SetTransform(TransformState.Projection, Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); var material = new Material(); material.Ambient = material.Diffuse = material.Emissive = material.Specular = new Color4(Color.White); material.Power = 1f; device.SetStreamSource(0, vertexBuffer, 0, vertexSize); device.VertexDeclaration = vertexDeclaration; device.Indices = indexBuffer; device.Material = material; device.SetTexture(0, texture); var param = effect.GetParameter(null, "mWorldMatrixArray"); var boneWorldTransforms = bones.OrderedBones.OrderBy(x => x.Id).Select(x => x.CombinedTransformation).ToArray(); effect.SetValue(param, boneWorldTransforms); effect.SetValue(effect.GetParameter(null, "mViewProj"), Matrix.Identity);// Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); effect.SetValue(effect.GetParameter(null, "MaterialDiffuse"), material.Diffuse); effect.SetValue(effect.GetParameter(null, "MaterialAmbient"), material.Ambient); effect.Technique = effect.GetTechnique(0); var passes = effect.Begin(FX.DoNotSaveState); for (var i = 0; i < passes; i++) { effect.BeginPass(i); device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, skin.Vertices.Length, 0, skin.Indicies.Length / 3); effect.EndPass(); } effect.End(); } Again, I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? Also, whatever I set in the bone transformation matrices doesn't seem to have an effect on my model. If I set every bone transformation to a zero matrix, the model still shows up as if nothing had happened, but changing the Pos field in shader output makes the model disappear. I don't understand why I'm getting this kind of behaviour. Thank you!

    Read the article

  • Ambient occlusion shader just shows models as all white

    - by dvds414
    Okay so I have this shader for ambient occlusion. It loads to world correctly, but it just shows all the models as being white. I do not know why. I am just running the shader while the model is rendering, is that correct? or do I need to make a render target or something? If so then how? I'm using C++. Here is my shader: float sampleRadius; float distanceScale; float4x4 xProjection; float4x4 xView; float4x4 xWorld; float3 cornerFustrum; struct VS_OUTPUT { float4 pos : POSITION; float2 TexCoord : TEXCOORD0; float3 viewDirection : TEXCOORD1; }; VS_OUTPUT VertexShaderFunction( float4 Position : POSITION, float2 TexCoord : TEXCOORD0) { VS_OUTPUT Out = (VS_OUTPUT)0; float4 WorldPosition = mul(Position, xWorld); float4 ViewPosition = mul(WorldPosition, xView); Out.pos = mul(ViewPosition, xProjection); Position.xy = sign(Position.xy); Out.TexCoord = (float2(Position.x, -Position.y) + float2( 1.0f, 1.0f ) ) * 0.5f; float3 corner = float3(-cornerFustrum.x * Position.x, cornerFustrum.y * Position.y, cornerFustrum.z); Out.viewDirection = corner; return Out; } texture depthTexture; texture randomTexture; sampler2D depthSampler = sampler_state { Texture = <depthTexture>; ADDRESSU = CLAMP; ADDRESSV = CLAMP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; sampler2D RandNormal = sampler_state { Texture = <randomTexture>; ADDRESSU = WRAP; ADDRESSV = WRAP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; float4 PixelShaderFunction(VS_OUTPUT IN) : COLOR0 { float4 samples[16] = { float4(0.355512, -0.709318, -0.102371, 0.0 ), float4(0.534186, 0.71511, -0.115167, 0.0 ), float4(-0.87866, 0.157139, -0.115167, 0.0 ), float4(0.140679, -0.475516, -0.0639818, 0.0 ), float4(-0.0796121, 0.158842, -0.677075, 0.0 ), float4(-0.0759516, -0.101676, -0.483625, 0.0 ), float4(0.12493, -0.0223423, -0.483625, 0.0 ), float4(-0.0720074, 0.243395, -0.967251, 0.0 ), float4(-0.207641, 0.414286, 0.187755, 0.0 ), float4(-0.277332, -0.371262, 0.187755, 0.0 ), float4(0.63864, -0.114214, 0.262857, 0.0 ), float4(-0.184051, 0.622119, 0.262857, 0.0 ), float4(0.110007, -0.219486, 0.435574, 0.0 ), float4(0.235085, 0.314707, 0.696918, 0.0 ), float4(-0.290012, 0.0518654, 0.522688, 0.0 ), float4(0.0975089, -0.329594, 0.609803, 0.0 ) }; IN.TexCoord.x += 1.0/1600.0; IN.TexCoord.y += 1.0/1200.0; normalize (IN.viewDirection); float depth = tex2D(depthSampler, IN.TexCoord).a; float3 se = depth * IN.viewDirection; float3 randNormal = tex2D( RandNormal, IN.TexCoord * 200.0 ).rgb; float3 normal = tex2D(depthSampler, IN.TexCoord).rgb; float finalColor = 0.0f; for (int i = 0; i < 16; i++) { float3 ray = reflect(samples[i].xyz,randNormal) * sampleRadius; //if (dot(ray, normal) < 0) // ray += normal * sampleRadius; float4 sample = float4(se + ray, 1.0f); float4 ss = mul(sample, xProjection); float2 sampleTexCoord = 0.5f * ss.xy/ss.w + float2(0.5f, 0.5f); sampleTexCoord.x += 1.0/1600.0; sampleTexCoord.y += 1.0/1200.0; float sampleDepth = tex2D(depthSampler, sampleTexCoord).a; if (sampleDepth == 1.0) { finalColor ++; } else { float occlusion = distanceScale* max(sampleDepth - depth, 0.0f); finalColor += 1.0f / (1.0f + occlusion * occlusion * 0.1); } } return float4(finalColor/16, finalColor/16, finalColor/16, 1.0f); } technique SSAO { pass P0 { VertexShader = compile vs_3_0 VertexShaderFunction(); PixelShader = compile ps_3_0 PixelShaderFunction(); } }

    Read the article

  • glsl shader to allow color change of skydome ogre3d

    - by Tim
    I'm still very new to all this but learning a lot. I'm putting together an application using Ogre3d as the rendering engine. So far I've got it running, with a simple scene, a day/night cycle system which is working okay. I'm now moving on to looking at changing the color of the skydome material based on the time of day. What I've done so far is to create a struct to hold the ColourValues for the different aspects of the scene. struct todColors { Ogre::ColourValue sky; Ogre::ColourValue ambient; Ogre::ColourValue sun; }; I created an array to store all the colours todColors sceneColours [4]; I populated the array with the colours I want to use for the various times of the day. For instance DayTime (when the sun is high in the sky) sceneColours[2].sky = Ogre::ColourValue(135/255, 206/255, 235/255, 255); sceneColours[2].ambient = Ogre::ColourValue(135/255, 206/255, 235/255, 255); sceneColours[2].sun = Ogre::ColourValue(135/255, 206/255, 235/255, 255); I've got code to work out the time of the day using a float currentHours to store the current hour of the day 10.5 = 10:30 am. This updates constantly and updates the sun as required. I am then calculating the appropriate colours for the time of day when relevant using else if( currentHour >= 4 && currentHour < 7) { // Lerp from night to morning Ogre::ColourValue lerp = Ogre::Math::lerp<Ogre::ColourValue, float>(sceneColours[GT_TOD_NIGHT].sky , sceneColours[GT_TOD_MORNING].sky, (currentHour - 4) / (7 - 4)); } My original attempt to get this to work was to dynamically generate a material with the new colour and apply that material to the skydome. This, as you can probably guess... didn't go well. I know it's possible to use shaders where you can pass information such as colour to the shader from the code but I am unsure if there is an existing simple shader to change a colour like this or if I need to create one. What is involved in creating a shader and material definition that would allow me to change the colour of a material without the overheads of dynamically generating materials all the time? EDIT : I've created a glsl vertex and fragment shaders as follows. Vertex uniform vec4 newColor; void main() { gl_FrontColor = newColor; gl_Position = ftransform(); } Fragment void main() { gl_FragColor = gl_Color; } I can pass a colour to it using ShaderDesigner and it seems to work. I now need to investigate how to use it within Ogre as a material. EDIT : I created a material file like this : vertex_program colour_vs_test glsl { source test.vert default_params { param_named newColor float4 0.0 0.0 0.0 1 } } fragment_program colour_fs_glsl glsl { source test.frag } material Test/SkyColor { technique { pass { lighting off fragment_program_ref colour_fs_glsl { } vertex_program_ref colour_vs_test { } } } } In the code I have tried : Ogre::MaterialPtr material = Ogre::MaterialManager::getSingleton().getByName("Test/SkyColor"); Ogre::GpuProgramParametersSharedPtr params = material->getTechnique(0)->getPass(0)->getVertexProgramParameters(); params->setNamedConstant("newcolor", Ogre::Vector4(0.7, 0.5, 0.3, 1)); I've set that as the Skydome material which seems to work initially. I am doing the same with the code that is attempting to lerp between colours, but when I include it there, it all goes black. Seems like there is now a problem with my colour lerping.

    Read the article

  • HLSL Shader not working right?

    - by dvds414
    Okay so I have this shader for ambient occlusion. It loads to world correctly, but it just shows all the models as being white. I do not know why. I am just running the shader while the model is rendering, is that correct? or do I need to make a render target or something? if so then how? I'm using C++. Here is my shader. float sampleRadius; float distanceScale; float4x4 xProjection; float4x4 xView; float4x4 xWorld; float3 cornerFustrum; struct VS_OUTPUT { float4 pos : POSITION; float2 TexCoord : TEXCOORD0; float3 viewDirection : TEXCOORD1; }; VS_OUTPUT VertexShaderFunction( float4 Position : POSITION, float2 TexCoord : TEXCOORD0) { VS_OUTPUT Out = (VS_OUTPUT)0; float4 WorldPosition = mul(Position, xWorld); float4 ViewPosition = mul(WorldPosition, xView); Out.pos = mul(ViewPosition, xProjection); Position.xy = sign(Position.xy); Out.TexCoord = (float2(Position.x, -Position.y) + float2( 1.0f, 1.0f ) ) * 0.5f; float3 corner = float3(-cornerFustrum.x * Position.x, cornerFustrum.y * Position.y, cornerFustrum.z); Out.viewDirection = corner; return Out; } texture depthTexture; texture randomTexture; sampler2D depthSampler = sampler_state { Texture = <depthTexture>; ADDRESSU = CLAMP; ADDRESSV = CLAMP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; sampler2D RandNormal = sampler_state { Texture = <randomTexture>; ADDRESSU = WRAP; ADDRESSV = WRAP; MAGFILTER = LINEAR; MINFILTER = LINEAR; }; float4 PixelShaderFunction(VS_OUTPUT IN) : COLOR0 { float4 samples[16] = { float4(0.355512, -0.709318, -0.102371, 0.0 ), float4(0.534186, 0.71511, -0.115167, 0.0 ), float4(-0.87866, 0.157139, -0.115167, 0.0 ), float4(0.140679, -0.475516, -0.0639818, 0.0 ), float4(-0.0796121, 0.158842, -0.677075, 0.0 ), float4(-0.0759516, -0.101676, -0.483625, 0.0 ), float4(0.12493, -0.0223423, -0.483625, 0.0 ), float4(-0.0720074, 0.243395, -0.967251, 0.0 ), float4(-0.207641, 0.414286, 0.187755, 0.0 ), float4(-0.277332, -0.371262, 0.187755, 0.0 ), float4(0.63864, -0.114214, 0.262857, 0.0 ), float4(-0.184051, 0.622119, 0.262857, 0.0 ), float4(0.110007, -0.219486, 0.435574, 0.0 ), float4(0.235085, 0.314707, 0.696918, 0.0 ), float4(-0.290012, 0.0518654, 0.522688, 0.0 ), float4(0.0975089, -0.329594, 0.609803, 0.0 ) }; IN.TexCoord.x += 1.0/1600.0; IN.TexCoord.y += 1.0/1200.0; normalize (IN.viewDirection); float depth = tex2D(depthSampler, IN.TexCoord).a; float3 se = depth * IN.viewDirection; float3 randNormal = tex2D( RandNormal, IN.TexCoord * 200.0 ).rgb; float3 normal = tex2D(depthSampler, IN.TexCoord).rgb; float finalColor = 0.0f; for (int i = 0; i < 16; i++) { float3 ray = reflect(samples[i].xyz,randNormal) * sampleRadius; //if (dot(ray, normal) < 0) // ray += normal * sampleRadius; float4 sample = float4(se + ray, 1.0f); float4 ss = mul(sample, xProjection); float2 sampleTexCoord = 0.5f * ss.xy/ss.w + float2(0.5f, 0.5f); sampleTexCoord.x += 1.0/1600.0; sampleTexCoord.y += 1.0/1200.0; float sampleDepth = tex2D(depthSampler, sampleTexCoord).a; if (sampleDepth == 1.0) { finalColor ++; } else { float occlusion = distanceScale* max(sampleDepth - depth, 0.0f); finalColor += 1.0f / (1.0f + occlusion * occlusion * 0.1); } } return float4(finalColor/16, finalColor/16, finalColor/16, 1.0f); } technique SSAO { pass P0 { VertexShader = compile vs_3_0 VertexShaderFunction(); PixelShader = compile ps_3_0 PixelShaderFunction(); } }

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >