Can someone explain the (reasons for the) implications of column- vs row-major order in matrix multiplication/concatenation?
- by sebf
I am trying to learn how to construct view and projection matrices, but I keep running into difficulties in my implementation owing to my confusion about the two conventions for matrices.
I know how to multiply matrices, and I can see that transposing the operands before multiplication would completely change the result, hence the need to multiply in the opposite order.
What I don't understand, though, is what is meant by it being only a 'notational convention': from the articles here and here, the authors appear to assert that it makes no difference to how the matrix is stored or transferred to the GPU. Yet on the second page, that matrix is clearly not laid out the way a row-major matrix would be in memory; and if I look at a populated matrix in my program, I see the translation components occupying the 4th, 8th and 12th elements.
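To make the layout question concrete, here is roughly what I mean, using a flat array in place of my Matrix4 type (the layout comments are my own assumptions about the two conventions, not any particular library's documentation):

    float tx = 1f, ty = 2f, tz = 3f;

    // Row-major storage with the row-vector convention (v * M):
    // the translation sits in the last row, elements 13, 14, 15.
    float[] rowVectorLayout =
    {
        1f, 0f, 0f, 0f,
        0f, 1f, 0f, 0f,
        0f, 0f, 1f, 0f,
        tx, ty, tz, 1f,
    };

    // Row-major storage with the column-vector convention (M * v):
    // the translation sits in the last column, at the 4th, 8th and
    // 12th elements, which is what I see in my own matrices.
    float[] columnVectorLayout =
    {
        1f, 0f, 0f, tx,
        0f, 1f, 0f, ty,
        0f, 0f, 1f, tz,
        0f, 0f, 0f, 1f,
    };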
Given that:
"post-multiplying with column-major matrices produces the same result
as pre-multiplying with row-major matrices.
"
Why, in the following snippet of code:
Matrix4 r = t3 * t2 * t1;
Matrix4 r2 = t1.Transpose() * t2.Transpose() * t3.Transpose();
does r != r2? And why does pos3 != pos for:
Vector4 pos = wvpM * new Vector4(0f, 15f, 15f, 1);
Vector4 pos3 = wvpM.Transpose() * new Vector4(0f, 15f, 15f, 1);
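In case it helps, here is a stripped-down version of that second check, with plain arrays standing in for my Matrix4/Vector4 types and a simple translation by (0, 0, 5) in place of my real WVP matrix, so the numbers are easy to follow:

    static float[] MulMatVec(float[,] m, float[] v)
    {
        // Treats v as a column vector: r = m * v.
        var r = new float[4];
        for (int i = 0; i < 4; i++)
            for (int k = 0; k < 4; k++)
                r[i] += m[i, k] * v[k];
        return r;
    }

    // Column-vector convention: translation in the last column.
    float[,] m =
    {
        { 1f, 0f, 0f, 0f },
        { 0f, 1f, 0f, 0f },
        { 0f, 0f, 1f, 5f },
        { 0f, 0f, 0f, 1f },
    };
    float[] v = { 0f, 15f, 15f, 1f };

    // MulMatVec(m, v) gives (0, 15, 20, 1): the point moves along z.
    // Swapping in the transpose of m (translation in the last row) gives
    // (0, 15, 15, 76): the translation leaks into w instead.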
Does the multiplication process itself change depending on whether the matrices are row- or column-major, or is it just the order (for an equivalent effect)?
One thing that isn't helping this become any clearer: when provided to DirectX, my column-major WVP matrix successfully transforms vertices via the HLSL call mul(vector, matrix), which should mean the vector is treated as a row vector. So how can the column-major matrix provided by my math library work?
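For reference, the shader side looks roughly like this (the names are my own placeholders; the point is just that the vector is the first argument to mul):

    // Vertex shader sketch. With mul(vector, matrix), HLSL treats 'pos'
    // as a row vector, which is what confuses me given my column-major data.
    float4x4 wvp;

    float4 VSMain(float4 pos : POSITION) : SV_Position
    {
        return mul(pos, wvp);
    }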