I'm calculating my shadow term using variance shadow mapping (VSM). This works correctly with forward-rendered lights but fails with deferred lights.
// Shadow term (1 = no shadow)
float shadow = 1;
// [Light Space -> Shadow Map Space]
// Transform the surface into light space and project
// NB: Could be done in the vertex shader, but doing it here keeps the
// "light shader" abstraction and doesn't limit the number of shadowed lights
float4x4 LightViewProjection = mul(LightView, LightProjection);
float4 surf_tex = mul(position, LightViewProjection);
// Re-homogenize
// 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized)
surf_tex.xyz /= surf_tex.w;
// Rescale viewport to be [0,1] (texture coordinate system)
float2 shadow_tex;
shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
shadow_tex.y = -surf_tex.y * 0.5f + 0.5f;
// Half texel offset
//shadow_tex += (0.5 / 512);
// Scaled distance to light (instead of 'surf_tex.z')
float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;
//float rescaled_dist_to_light = surf_tex.z;
// [Variance Shadow Map Depth Calculation]
// No filtering
float2 moments = tex2D(ShadowSampler, shadow_tex).xy;
// Flip the moments values to bring them back to their original values
moments.x = 1.0 - moments.x;
moments.y = 1.0 - moments.y;
// Compute variance
float E_x2 = moments.y;
float Ex_2 = moments.x * moments.x;
float variance = E_x2 - Ex_2;
variance = max(variance, Bias.y);
// Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1)
// One-tailed inequality: only valid when the surface lies beyond the stored mean depth (rescaled_dist_to_light > moments.x)
float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x);
// Compute probabilistic upper bound (mean distance)
float m_d = moments.x - rescaled_dist_to_light;
// Chebychev's inequality
float p = variance / (variance + m_d * m_d);
p = ReduceLightBleeding(p, Bias.z);
// Adjust the light color based on the shadow attenuation
shadow *= max(lit_factor, p);
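For reference, ReduceLightBleeding is not shown above; I am assuming the standard light-bleeding reduction from the VSM articles, which clips the low end of the Chebyshev upper bound with a linear step (a minimal sketch, with Bias.z playing the role of the 'amount' parameter):
float linstep(float low, float high, float v)
{
    // Linearly remap 'v' from [low, high] to [0, 1], clamped
    return saturate((v - low) / (high - low));
}
float ReduceLightBleeding(float p_max, float amount)
{
    // Treat everything below 'amount' as fully shadowed and rescale the rest
    return linstep(amount, 1.0f, p_max);
}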
This is what I know for certain so far:
The lighting is correct if I do not calculate the shadow term at all. (No shadows.)
The shadow term is correct when calculated with forward-rendered lighting. (VSM works with forward-rendered lights.)
With the current rescaled light distance (LightAttenuation.y is the far-plane value):
float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;
The lighting is correct, but the shadow appears to be zoomed in and is missing its blurring:
When I do not rescale the light distance and instead use the homogenized 'surf_tex':
float rescaled_dist_to_light = surf_tex.z;
the shadows are blurred correctly, but the lighting is incorrect and the cube model is no longer lit.
Why does scaling by the far-plane value (LightAttenuation.y) make the shadow appear zoomed in?
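For clarity, these are the two depth values I have tried comparing against the stored moments (naming them here just for illustration):
// Radial, linear depth: world-space distance to the light, rescaled by the far-plane value
float depth_radial = dist_to_light / LightAttenuation.y;
// Projective depth: post-projection z after the homogeneous divide (non-linear)
float depth_projective = surf_tex.z;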
The only other factor involved is my world pixel position, which is calculated as follows:
// [Position]
float4 position;
// [Screen Position]
position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above
position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
position.z = 1.0 - position.z;
position.w = 1.0; // 1.0 = position.w / position.w
// [World Position]
position = mul(position, CameraViewProjectionInverse);
// Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close)
position.xyz /= position.w;
position.w = 1.0;
Using the inverse of the camera's view x projection matrix does work for lighting, but maybe it is incorrect for the shadow calculation?
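One sanity check for the reconstruction (a debug sketch only, assuming I can temporarily change the light shader's output) is to output the reconstructed world position as a colour and compare it against a forward pass that writes the interpolated world position directly:
// Debug sketch: visualise the reconstructed world position.
// frac() keeps the values in [0, 1) so they are displayable; a matching forward
// pass returning frac(world_position.xyz) should produce an identical image.
return float4(frac(position.xyz), 1.0f);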
EDIT:
Light calculations for the shadow term, including 'dist_to_light':
// Work out the light position and direction in world space
float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43);
// Direction might need to be negated
float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33);
// Unnormalized light vector
float3 dir_to_light = light_position - position.xyz; // Direction from the surface point to the light
float dist_to_light = length(dir_to_light);
// Normalise 'dir_to_light' for lighting calculations
dir_to_light = normalize(dir_to_light);
EDIT2:
These are the calculations for the moments (depth)
//=============================================
//---[Vertex Shaders]--------------------------
//=============================================
DepthVSOutput depth_VS(
float4 Position : POSITION,
uniform float4x4 shadow_view,
uniform float4x4 shadow_view_projection)
{
DepthVSOutput output = (DepthVSOutput)0;
// First transform position into world space
float4 position_world = mul(Position, World);
output.position_screen = mul(position_world, shadow_view_projection);
output.light_vec = mul(position_world, shadow_view).xyz;
return output;
}
//=============================================
//---[Pixel Shaders]---------------------------
//=============================================
DepthPSOutput depth_PS(DepthVSOutput input)
{
DepthPSOutput output = (DepthPSOutput)0;
// Work out the depth of this fragment from the light, normalized to [0, 1]
float2 depth;
depth.x = length(input.light_vec) / FarPlane;
depth.y = depth.x * depth.x;
// Flip depth values to avoid floating point inaccuracies
depth.x = 1.0f - depth.x;
depth.y = 1.0f - depth.y;
output.depth = depth.xyxy;
return output;
}
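Since 'light_vec' is the surface position in light view space and the light sits at the origin of that space, length(input.light_vec) equals the world-space distance from the surface to the light, so the stored moment corresponds to dist_to_light / FarPlane. A sketch of the matching value in the light shader (assuming LightAttenuation.y holds the same far-plane value as FarPlane here):
// Comparison value in the same units as the stored moments (after they are flipped back)
float rescaled_dist_to_light = distance(position.xyz, light_position) / LightAttenuation.y;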
EDIT 3:
I have tried the following:
float4 pp;
pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above
pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
pp.z = 1.0 - pp.z;
pp.w = 1.0; // 1.0 = position.w / position.w
// Determine the depth of the pixel with respect to the light
float4x4 LightViewProjection = mul(LightView, LightProjection);
float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection);
float4 vPositionLightCS = mul(pp, matViewToLightViewProj);
float fLightDepth = vPositionLightCS.z / vPositionLightCS.w;
// Transform from light space to shadow map texture space.
float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f);
vShadowTexCoord.y = 1.0f - vShadowTexCoord.y;
// Offset the coordinate by half a texel so we sample it correctly
vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize
This suffers the same problem as the second picture.
I have tried storing the depth based on the view x projection matrix:
output.position_screen = mul(position_world, shadow_view_projection);
//output.light_vec = mul(position_world, shadow_view);
output.light_vec = output.position_screen;
depth.x = input.light_vec.z / input.light_vec.w;
This gives a shadow that has lots of surface acne due to severe floating-point precision errors. Everything is lit correctly, though.
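If I keep the projective z/w, one thing I could try (a sketch, untested) is applying the same 1 - depth flip that depth_PS already uses against floating-point inaccuracies, since the light shader flips the sampled moments back anyway:
// Store the flipped projective depth, mirroring the flip in depth_PS
float proj_depth = input.light_vec.z / input.light_vec.w;
depth.x = 1.0f - proj_depth;
depth.y = 1.0f - (proj_depth * proj_depth);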
EDIT 4:
I found an OpenGL-based tutorial here.
I have followed it to the letter, and it would seem that the UV coordinates used to look up the shadow map are incorrect.
The source uses a scale matrix to get the UV coordinates for the shadow map sampler:
/// <summary>
/// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region.
/// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0.
/// </summary>
const float4x4 ScaleMatrix = float4x4
(
0.5, 0.0, 0.0, 0.0,
0.0, -0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
I had to negate the 0.5 for the y scaling (M22) in order for it to work but the shadowing is still not correct.
Is this really the correct way to scale?
float2 shadow_tex;
shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
shadow_tex.y = surf_tex.y * -0.5f + 0.5f;
The depth calculations are exactly the same as in the source code, yet they still do not work, which makes me believe something about the UV calculation above is incorrect.
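For reference, expanding that ScaleMatrix multiply by hand (row-vector convention, i.e. mul(position, matrix) as used elsewhere in the shader, and assuming 'p' is the position in light clip space before the divide by w):
float4 p_scaled = mul(p, ScaleMatrix);
//   p_scaled.x = p.x *  0.5 + p.w * 0.5
//   p_scaled.y = p.y * -0.5 + p.w * 0.5
//   p_scaled.z = p.z *  0.5 + p.w * 0.5
//   p_scaled.w = p.w
float2 uv_from_matrix = p_scaled.xy / p_scaled.w;
// == float2( (p.x / p.w) * 0.5 + 0.5, (p.y / p.w) * -0.5 + 0.5 ),
// i.e. the same mapping as the manual '* 0.5 + 0.5' rescale with the negated y.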