How to make other semantics behave like SV_Position?
- by object
I'm having a lot of trouble with shadow mapping, and I believe I've found the problem.
When passing vectors from the vertex shader to the pixel shader, does the hardware automatically change any of the values based on the semantic?
I've compiled a barebones pair of shaders which should illustrate the problem.
Vertex shader:
struct Vertex {
float3 position : POSITION;
};
struct Pixel {
float4 position : SV_Position;
float4 light_position : POSITION;
};
cbuffer Matrices {
matrix projection;
};
Pixel RenderVertexShader(Vertex input) {
Pixel output;
output.position = mul(float4(input.position, 1.0f), projection);
output.light_position = output.position;
// We simply pass the same clip-space position through two different semantics.
return output;
}
And a simple pixel shader to go along with it:
struct Pixel {
float4 position : SV_Position;
float4 light_position : POSITION;
};
float4 RenderPixelShader(Pixel input) : SV_Target {
// At this point, (input.position.z / input.position.w) is a normal depth value.
// However, (input.light_position.z / input.light_position.w) is 0.999f or similar.
// If the primitive is touching the near plane, it very quickly goes to 0.
return (0.0f).rrrr;
}
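For context, that 0.999 value is consistent with how nonlinear post-divide depth is. Here's a quick numeric sketch of a standard left-handed D3D-style projection (the near/far values of 0.1 and 1000 are assumptions for illustration, not my actual setup):

```python
# Post-divide depth (z/w) for a standard left-handed D3D projection.
# near and far are assumed values, not from my real scene.
near, far = 0.1, 1000.0

def post_divide_depth(z_view):
    # Clip-space values for this projection:
    #   z_clip = z_view * far/(far - near) - far*near/(far - near)
    #   w_clip = z_view
    z_clip = z_view * far / (far - near) - far * near / (far - near)
    return z_clip / z_view  # the depth value after the perspective divide

# Depth is 0 at the near plane and 1 at the far plane, but it is
# already ~0.999 only a tenth of the way into the view volume:
for z_view in (near, 1.0, 10.0, 100.0, far):
    print(z_view, post_divide_depth(z_view))
```

So a value near 0.999 for most of the scene, dropping off sharply near the near plane, matches what I'm seeing.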
How can I make the hardware treat light_position the same way position is treated between the vertex and pixel shaders?
EDIT: Aha! (input.position.z), without dividing by w, is the same as (input.light_position.z / input.light_position.w). Not sure why this is.
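If I understand the rasterizer correctly, this would follow from perspective-correct interpolation: non-SV attributes are divided by w, interpolated linearly in screen space, then divided by the interpolated 1/w, so light_position arrives as the correct clip-space position for the pixel. Dividing its z by its w then reproduces the linearly-interpolated z/w, which is exactly what SV_Position.z holds after the divide and viewport transform. A Python sketch of that math, with made-up clip-space vertex values:

```python
# Sketch of perspective-correct interpolation between two vertices.
# v0, v1 are clip-space positions (x, y, z, w); t is the screen-space
# interpolation parameter. These particular numbers are made up.

def lerp(a, b, t):
    return a + (b - a) * t

def perspective_interp(v0, v1, t):
    # The rasterizer interpolates attribute/w and 1/w linearly in
    # screen space, then divides to recover the attribute.
    inv_w = lerp(1.0 / v0[3], 1.0 / v1[3], t)
    return tuple(lerp(c0 / v0[3], c1 / v1[3], t) / inv_w
                 for c0, c1 in zip(v0, v1))

v0 = (0.0, 0.0, 0.5, 1.0)   # z/w = 0.5
v1 = (4.0, 0.0, 9.0, 10.0)  # z/w = 0.9
mid = perspective_interp(v0, v1, 0.5)

# Recovered z divided by recovered w gives the linearly-interpolated
# depth, lerp(0.5, 0.9, 0.5) = 0.7 -- the same value the hardware
# would put in SV_Position.z at this pixel.
print(mid[2] / mid[3])
```

So (light_position.z / light_position.w) and SV_Position.z should always agree, which is what I observed.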