I'm trying to get an object from object space into projected space using these intermediate matrices:
The first matrix (I) is the one that transforms from object space into inertial space, but since my object is not rotated or translated in any way inside the object space, this matrix is the 4x4 identity matrix.
The second matrix (W) is the one that transforms from inertial space into world space; it's just a uniform scale by a factor of a = 14.1 on all coordinates, since the inertial-space origin coincides with the world-space origin.
    /a 0 0 0\
W = |0 a 0 0|
    |0 0 a 0|
    \0 0 0 1/
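As a minimal sketch (the helper name `mat_vec` is illustrative, not from any library), W can be built as a nested list and applied to a homogeneous column vector like this:

```python
a = 14.1  # uniform scale factor from the question

# W: uniform scale on x, y, z; w left untouched
W = [
    [a, 0, 0, 0],
    [0, a, 0, 0],
    [0, 0, a, 0],
    [0, 0, 0, 1],
]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A point at (1, 2, 3) in inertial space scales to (14.1, 28.2, 42.3).
scaled = mat_vec(W, [1, 2, 3, 1])
print(scaled)
```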
The third matrix (C) is the one that transforms from world space into camera space. It's a translation by (0, 0, 10): I want the camera to be located behind the object, so the object must be positioned 10 units along the z axis.
    /1 0 0 0\
C = |0 1 0 0|
    |0 0 1 10|
    \0 0 0 1/
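A quick sketch of C under the same column-vector convention (again, `mat_vec` is just an assumed helper name):

```python
# C: translation by (0, 0, 10); the translation sits in the last column
# because vertices are multiplied as column vectors on the right.
C = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 10],
    [0, 0, 0, 1],
]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# The world-space origin lands 10 units down the z axis in camera space.
moved = mat_vec(C, [0, 0, 0, 1])
print(moved)  # [0, 0, 10, 1]
```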
And finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of camera space and the projection plane is defined by z = 1, the projection matrix is:
    /1 0 0 0\
P = |0 1 0 0|
    |0 0 1 0|
    \0 0 1/d 0/
where d is the distance from the eye to the projection plane, so d = 1.
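A small sketch of what P does (names are illustrative): the last row copies z/d into w, so the later divide-by-w performs the perspective foreshortening onto the plane z = d.

```python
d = 1.0  # distance from the eye to the projection plane

# P: passes x, y, z through and sets w = z/d
P = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 1 / d, 0],
]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A camera-space point at (2, 4, 8): after P, w = 8, and dividing by w
# gives the on-plane coordinates (0.25, 0.5).
x, y, z, w = mat_vec(P, [2, 4, 8, 1])
print(x / w, y / w)
```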
I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates as a column vector:
    /x\
V = |y|
    |z|
    \1/
After I get the result, I divide x and y coordinates by w to get the actual screen coordinates.
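Put together, the whole chain looks like this sketch (all names are assumptions for illustration, following the conventions above: column vectors, matrices composed as P·C·W·I, then a divide-by-w):

```python
a, d = 14.1, 1.0

I = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
W = [[a, 0, 0, 0], [0, a, 0, 0], [0, 0, a, 0], [0, 0, 0, 1]]
C = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 10], [0, 0, 0, 1]]
P = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1 / d, 0]]

def mat_mul(m, n):
    """Multiply two 4x4 matrices."""
    return [[sum(m[r][k] * n[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project(v):
    """Object-space homogeneous point -> 2D screen-plane coordinates."""
    M = mat_mul(mat_mul(mat_mul(P, C), W), I)  # (((P x C) x W) x I)
    x, y, z, w = mat_vec(M, v)
    return x / w, y / w                        # divide by w

# An object-space vertex at (1, 1, 1) scales to (14.1, 14.1, 14.1),
# translates to z = 24.1, and projects to (14.1/24.1, 14.1/24.1).
sx, sy = project([1, 1, 1, 1])
print(sx, sy)
```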
Apparently, I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon:
Also, I should add that this is a software renderer, so there's no DirectX or OpenGL involved here.