I have been debugging a problem in a single shader file containing two functions, using DirectX 11 with the vs_5_0 and ps_5_0 profiles. I stripped the shaders down to their basic components to understand what was going wrong: the named inputs of the pixel shader were receiving data from the wrong vertex shader outputs. Here is the vertex shader:
void QuadVertex ( inout float4 position : SV_Position,
                  inout float4 color    : COLOR0,
                  inout float2 tex      : TEXCOORD0 ) {
    // ViewProjection is a 4x4 matrix, included here only
    // to show the simple passthrough of the data
    position = mul(position, ViewProjection);
}
And here is the pixel shader:
float4 QuadPixel ( float4 color : COLOR0,
                   float2 tex   : TEXCOORD0 ) : SV_Target0 {
    // color arrives filled with position data, and tex is
    // filled with the color values from the vertex shader
    return color;
}
The ID3D11InputLayout and the associated C++ code compile and bind correctly, and the shaders are fed some simple primitive data:
data[0].Position.x = 0.0f * 210;
data[0].Position.y = 1.0f * 160;
data[0].Position.z = 0.0f;
data[1].Position.x = 0.0f * 210;
data[1].Position.y = 0.0f * 160;
data[1].Position.z = 0.0f;
data[2].Position.x = 1.0f * 210;
data[2].Position.y = 1.0f * 160;
data[2].Position.z = 0.0f;
data[0].Colour = Colors::Red;
data[1].Colour = Colors::Red;
data[2].Colour = Colors::Red;
data[0].Texture = Vector2::Zero;
data[1].Texture = Vector2::Zero;
data[2].Texture = Vector2::Zero;
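For reference, the input layout describes this vertex struct byte for byte. A minimal sketch of the offsets involved (the struct and field types here are an assumption based on the initialisers above, i.e. a float3 position, float4 colour, and float2 texture coordinate, tightly packed; it is not my exact code):

```cpp
#include <cstddef> // offsetof

// Hypothetical vertex layout matching the data[] initialisation above.
struct QuadVertexData {
    float Position[3]; // semantic SV_Position, offset 0
    float Colour[4];   // semantic COLOR0,      offset 12
    float Texture[2];  // semantic TEXCOORD0,   offset 28
};

// These offsets are what the AlignedByteOffset fields of the matching
// D3D11_INPUT_ELEMENT_DESC entries would have to be.
static_assert(offsetof(QuadVertexData, Position) == 0,  "position at 0");
static_assert(offsetof(QuadVertexData, Colour)   == 12, "colour at 12");
static_assert(offsetof(QuadVertexData, Texture)  == 28, "texture at 28");
```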
With this setup, the pixel shader's float4 color always ended up holding the position data, and the float2 tex always ended up holding the color data. After a while I figured out that the pixel shader's input signature has to match the vertex shader's output signature element for element, in both order and format, regardless of the semantics:
float4 QuadPixel ( float4 pos   : SV_Position,
                   float4 color : COLOR0,
                   float2 tex   : TEXCOORD0 ) : SV_Target0 {
    return color;
}
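For what it's worth, the workaround I've seen most often is to share a single struct between the two stages so the signatures cannot drift apart. A sketch of what that would look like here (assuming the same ViewProjection constant as above):

```hlsl
// One struct used as both the vertex shader output and the pixel shader
// input: the compiler lays out the elements identically on both sides,
// so the signatures always line up.
struct QuadData {
    float4 Position : SV_Position;
    float4 Color    : COLOR0;
    float2 Tex      : TEXCOORD0;
};

QuadData QuadVertex ( QuadData input ) {
    input.Position = mul(input.Position, ViewProjection);
    return input;
}

float4 QuadPixel ( QuadData input ) : SV_Target0 {
    return input.Color;
}
```

This doesn't answer why the positional matching happens, but it does make it impossible to get the ordering wrong by accident.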
Having found this out, my question is: why don't the semantics map the appropriate components when going from the vertex shader to the pixel shader? Is there any way to make certain semantics always map to other semantics, or do I always have to follow the rigid shader signature (in this case Position, Color, and Texture)?
As a side note on why I'm asking: I know that in XNA my shader signatures could differ in parameter order, and could even drop items between the vertex shader and pixel shader parameters (for example, using only the COLOR0 and TEXCOORD0 components), and everything would still match up correctly. However, I also know that XNA was built on the DX9 (and maybe a little DX10) implementation, so perhaps that kind of flexibility no longer exists in DX11?