Computing a pixel's screen position in a vertex shader: right or wrong?

Posted by cubrman on Game Development, 2013-10-22.

I am building a deferred rendering engine and I have a question. The article I took the sample code from suggests computing the screen position of a pixel as follows:

VertexShaderFunction()
{
    ...
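    // Transform to clip space and pass the undivided clip-space position to the PS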
    output.Position = mul(worldViewProj, input.Position);
    output.ScreenPosition = output.Position;
}

PixelShaderFunction()
{
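    // Perspective divide, then map NDC [-1, 1] to [0, 1] texture space (Y flipped)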
    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x,-input.ScreenPosition.y) + 1);
    ...
}
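
For context, later in the full pixel shader I use this TexCoord to sample my G-buffer, roughly like this (the sampler names here are just placeholders of mine, not the article's):

    // Rough sketch of how TexCoord is used afterwards; normalSampler and
    // depthSampler are placeholder names for my G-buffer textures.
    float4 normalData = tex2D(normalSampler, TexCoord);  // packed normals
    float  depth      = tex2D(depthSampler, TexCoord).r; // depth for position reconstruction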

My question: if I compute the screen position in the vertex shader instead (which should improve performance, since the vertex shader runs far fewer times than the pixel shader), would I end up with per-vertex lighting? Here is how I want to do it:

VertexShaderFunction()
{
    ...
    output.Position = mul(worldViewProj, input.Position);
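    // Perspective divide moved to the vertex shader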
    output.ScreenPosition.xy = output.Position.xy / output.Position.w;
}

PixelShaderFunction()
{
    float2 TexCoord = 0.5f * (float2(input.ScreenPosition.x,-input.ScreenPosition.y) + 1);
    ...
}

What exactly happens to the data I pass from the VS to the PS? How exactly is it interpolated? Will it still give me the correct per-pixel result in this case? I tried running the game both ways and saw no visual difference. Is my assumption right?
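
For reference, my current (possibly wrong) understanding is that the rasterizer interpolates a varying perspective-correctly: it interpolates v/w linearly in screen space and then divides by the interpolated 1/w. Along one edge that would look something like the helper below (just an illustration of my assumption, not code I actually run):

    // Illustration only: perspective-correct interpolation along an edge.
    // v0/v1 are a varying at two vertices, w0/w1 their clip-space w values,
    // t is the screen-space interpolation factor.
    float4 PerspectiveCorrectLerp(float4 v0, float w0, float4 v1, float w1, float t)
    {
        float4 overW = lerp(v0 / w0, v1 / w1, t);   // v/w is linear in screen space
        float  invW  = lerp(1.0 / w0, 1.0 / w1, t); // so is 1/w
        return overW / invW;                        // recover the perspective-correct value
    }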

Thanks.

P.S. I am optimizing the point-light shader, so I actually pass sphere geometry (the light volume) into the VS.
