Use depth bias for shadows in deferred shading
Posted by cubrman on Game Development, 2013-11-12
Tags: depth-buffer | shadow-mapping
We are building a deferred shading engine and we have a problem with shadows.
To add shadows we use two maps: the first one stores the depth of the scene captured by the player's camera and the second one stores the depth of the scene captured by the light's camera. We then run a shader that compares the two maps and outputs a third one containing the shadowed areas for the current frame.
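For reference, here is roughly what that comparison pass looks like. This is a simplified sketch rather than our actual code; names like SceneDepthMap, LightDepthMap and InverseViewProjection are just placeholders:

// Deferred shadow pass: compares the scene depth (player's camera) against
// the light's depth map and outputs 1 for lit pixels, 0 for shadowed ones.
// All resource and parameter names are placeholders.
sampler SceneDepthMap : register(s0); // depth from the player's camera
sampler LightDepthMap : register(s1); // depth from the light's camera

float4x4 InverseViewProjection; // camera clip space -> world space
float4x4 LightViewProjection;   // world space -> light clip space
float    DepthBias;             // constant bias, tweaked by hand

float4 ShadowPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Reconstruct the world-space position from the stored scene depth.
    float  sceneDepth = tex2D(SceneDepthMap, uv).r;
    float4 clipPos    = float4(uv.x * 2 - 1, (1 - uv.y) * 2 - 1, sceneDepth, 1);
    float4 worldPos   = mul(clipPos, InverseViewProjection);
    worldPos /= worldPos.w;

    // Project the position into the light's clip space and sample its depth map.
    float4 lightClip  = mul(worldPos, LightViewProjection);
    float2 shadowUV   = lightClip.xy / lightClip.w * float2(0.5, -0.5) + 0.5;
    float  lightDepth = tex2D(LightDepthMap, shadowUV).r;
    float  pixelDepth = lightClip.z / lightClip.w;

    // Shadowed if the pixel is farther from the light than the stored depth.
    float lit = (pixelDepth - DepthBias) <= lightDepth ? 1.0 : 0.0;
    return float4(lit, lit, lit, 1);
}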
The problem we face is a classic one: self-shadowing (shadow acne).
A standard way to solve this is to use slope-scale depth bias and depth offsets; however, since we are doing things in a deferred way, we cannot employ this algorithm directly. Any attempt to set a depth bias while capturing the light's view depth produced either no results or unsatisfying ones.
So here is my question: the MSDN article gives a rather convoluted explanation of the slope-scale bias:
bias = (m × SlopeScaleDepthBias) + DepthBias
Where m is the maximum depth slope of the triangle being rendered, defined as:
m = max( abs(delta z / delta x), abs(delta z / delta y) )
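From what I understand, the two derivatives can be approximated with the ddx/ddy intrinsics in a pixel shader, so the bias could be applied manually while writing the light's depth map. Something like this (just a sketch under the assumption that depth is output from the pixel shader; SlopeScaleDepthBias and DepthBias are tweakable constants):

// Depth-write shader for the light's camera: applies the slope-scale bias
// per pixel instead of relying on the rasterizer's depth-bias state.
float SlopeScaleDepthBias; // e.g. 2.0
float DepthBias;           // e.g. 0.0005

float4 LightDepthPS(float depth : TEXCOORD0) : COLOR0
{
    // m = max(|dz/dx|, |dz/dy|), approximated with screen-space derivatives.
    float m    = max(abs(ddx(depth)), abs(ddy(depth)));
    float bias = m * SlopeScaleDepthBias + DepthBias;

    // Push the stored depth away from the light so the later comparison
    // no longer shadows the surface with itself.
    return float4(depth + bias, 0, 0, 1);
}

Is that a correct reading of the formula?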
Could you explain how I can implement this algorithm manually in a shader? Maybe there are better ways to fix this problem for deferred shadows?
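For example, would something along these lines be a reasonable alternative, where the bias is applied in the comparison pass and scaled by a slope derived from the G-buffer normal and the light direction? Again, just a rough, untested sketch:

// Bias computed in the deferred comparison pass from the G-buffer normal,
// so no bias is needed while rendering the light's depth map.
// lightDir points from the light toward the pixel.
float ComputeBias(float3 normal, float3 lightDir,
                  float constantBias, float maxSlopeBias)
{
    // The slope is tan of the angle between the surface normal and the
    // light direction: it grows as the surface turns away from the light.
    float NdotL = saturate(dot(normal, -lightDir));
    float slope = sqrt(1.0 - NdotL * NdotL) / max(NdotL, 0.001);
    return constantBias + saturate(slope) * maxSlopeBias;
}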