How can I specify interleaved vertex attributes and vertex indices?
- by freefallr
I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes arguments to the shader (vertex position, vertex normal, tex coords, etc.), then links the shader stages into a shader program for use with glDrawArrays.
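For context, the compile/attach/link path inside ShaderProgram follows the usual GL pattern; this is only a minimal sketch (error checking omitted; the function and attribute names are illustrative, not my exact code):

GLuint CompileStage( GLenum eType, const std::string& sSrc )
{
    GLuint nShader = glCreateShader( eType );
    const char* pSrc = sSrc.c_str();

    glShaderSource( nShader, 1, &pSrc, NULL );
    glCompileShader( nShader );
    return nShader;
}

GLuint BuildProgram( const std::string& sVertSrc, const std::string& sFragSrc )
{
    GLuint nProgram = glCreateProgram();

    glAttachShader( nProgram, CompileStage( GL_VERTEX_SHADER, sVertSrc ) );
    glAttachShader( nProgram, CompileStage( GL_FRAGMENT_SHADER, sFragSrc ) );

    // attribute locations must be bound *before* glLinkProgram
    // for glBindAttribLocation to take effect
    glBindAttribLocation( nProgram, 0, "VertexPosition" ); // name illustrative

    glLinkProgram( nProgram );
    return nProgram;
}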
My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer:
class CustomVertex
{
public:
    float m_Position[3];  // x, y, z       offset 0,  size = 3*sizeof(float) = 12 bytes
    float m_TexCoords[2]; // u, v          offset 12, size = 2*sizeof(float) = 8 bytes
    float m_Normal[3];    // nx, ny, nz    offset 20, size = 3*sizeof(float) = 12 bytes
    float colour[4];      // r, g, b, a    offset 32, size = 4*sizeof(float) = 16 bytes
    float padding[20];    // padded for performance; total struct size = 128 bytes
};
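The byte offsets used throughout the code below can also be derived from this struct with offsetof rather than being hard-coded; a quick sketch (CustomVertex is standard-layout, so offsetof is well-defined here):

#include <cstddef> // offsetof

const GLsizei nStride       = sizeof( CustomVertex );                 // 128 bytes
const size_t  nPosOffset    = offsetof( CustomVertex, m_Position );   // 0
const size_t  nTexOffset    = offsetof( CustomVertex, m_TexCoords );  // 12
const size_t  nNormalOffset = offsetof( CustomVertex, m_Normal );     // 20
const size_t  nColourOffset = offsetof( CustomVertex, colour );       // 32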
I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is interleaved. It renders successfully with the following code (a sketch of the buffer-creation step follows the Draw() listing):
void VertexBufferObject::Draw()
{
    if( ! m_bInitialized )
        return;

    glBindBuffer( GL_ARRAY_BUFFER, m_nVboId );
    glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex );

    glEnableClientState( GL_VERTEX_ARRAY );
    glEnableClientState( GL_TEXTURE_COORD_ARRAY );
    glEnableClientState( GL_NORMAL_ARRAY );
    glEnableClientState( GL_COLOR_ARRAY );

    // component counts and byte offsets match the CustomVertex layout above
    glVertexPointer(   3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0)  );
    glTexCoordPointer( 2, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12) );
    glNormalPointer(      GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20) );
    glColorPointer(    4, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32) );

    glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) );

    glDisableClientState( GL_VERTEX_ARRAY );
    glDisableClientState( GL_TEXTURE_COORD_ARRAY );
    glDisableClientState( GL_NORMAL_ARRAY );
    glDisableClientState( GL_COLOR_ARRAY );

    glBindBuffer( GL_ARRAY_BUFFER, 0 );
    glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );
}
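The buffer creation inside VertexBufferObject is the standard glBufferData pair; a simplified sketch of that step (pVertices, pIndices and nNumVertices are illustrative names, not my exact code):

glGenBuffers( 1, &m_nVboId );
glBindBuffer( GL_ARRAY_BUFFER, m_nVboId );
glBufferData( GL_ARRAY_BUFFER,
              nNumVertices * sizeof( CustomVertex ),   // interleaved CustomVertex array
              pVertices,
              GL_STATIC_DRAW );

glGenBuffers( 1, &m_nVboIdIndex );
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex );
glBufferData( GL_ELEMENT_ARRAY_BUFFER,
              m_nNumIndices * sizeof( unsigned int ),  // GL_UNSIGNED_INT indices
              pIndices,
              GL_STATIC_DRAW );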
Back to the vertex array object, though. My code for creating the VAO is as follows. This runs before the ShaderProgram linking stage, and no GL errors are reported after any of its steps.
// Specify the shader arg locations (i.e. their order in the shader code);
// this happens before the program is linked
for( int n = 0; n < vShaderArgs.size(); n ++ )
    glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() );

// Create and bind a vertex array object, which stores the relationship between
// the buffer and the input attributes
glGenVertexArrays( 1, &m_nVaoHandle );
glBindVertexArray( m_nVaoHandle );

// Bind the interleaved vertex buffer and the index buffer
glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId );
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId );

// Enable and describe each vertex attribute (interleaved array, since it's faster)
for( int n = 0; n < vShaderArgs.size(); n ++ )
{
    glEnableVertexAttribArray( n );
    glVertexAttribPointer(
        n,
        vShaderArgs[n].nFieldSize,
        GL_FLOAT,
        GL_FALSE,
        vShaderArgs[n].nStride,
        (GLubyte *) NULL + vShaderArgs[n].nFieldOffset
    );

    AppLog::Ref().OutputGlErrors();
}
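For completeness, each vShaderArgs entry carries the attribute name, component count, stride and byte offset fed into the calls above. For the CustomVertex layout the values work out to something like this (a sketch; the attribute name strings are illustrative, and nVbo/nIdx stand in for the buffer ids):

struct ShaderArg
{
    std::string sFieldName;   // attribute name as declared in the vertex shader
    int         nFieldSize;   // component count for glVertexAttribPointer
    int         nStride;      // sizeof(CustomVertex)
    int         nFieldOffset; // byte offset within CustomVertex
    GLuint      nVboId;       // interleaved vertex buffer
    GLuint      nVboIndexId;  // index buffer
};

// name                size  stride                 offset  buffers
ShaderArg vArgs[] =
{
    { "VertexPosition", 3,    sizeof(CustomVertex),  0,      nVbo, nIdx },
    { "VertexTexCoord", 2,    sizeof(CustomVertex),  12,     nVbo, nIdx },
    { "VertexNormal",   3,    sizeof(CustomVertex),  20,     nVbo, nIdx },
    { "VertexColour",   4,    sizeof(CustomVertex),  32,     nVbo, nIdx },
};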
This doesn't render correctly at all: I get a pattern of white specks on screen, in the overall shape of the terrain rectangle, but no coherent triangles or lines. Here's the code I use for rendering:
void ShaderProgram::Draw()
{
    using namespace AntiMatter;

    if( ! m_nShaderProgramId || ! m_nVaoHandle )
    {
        AppLog::Ref().LogMsg("ShaderProgram::Draw() Couldn't draw object, as initialization of ShaderProgram is incomplete");
        return;
    }

    glUseProgram( m_nShaderProgramId );
    glBindVertexArray( m_nVaoHandle );

    glDrawArrays( GL_TRIANGLES, 0, m_nNumTris );

    glBindVertexArray( 0 );
    glUseProgram( 0 );
}
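In case it's relevant: the bound attribute locations can be verified after linking with a check along these lines (a sketch; glGetAttribLocation returns -1 for attributes the linker optimised away):

for( size_t n = 0; n < vShaderArgs.size(); n ++ )
{
    GLint nLoc = glGetAttribLocation( m_nProgramId, vShaderArgs[n].sFieldName.c_str() );

    if( nLoc != (GLint) n )
        AppLog::Ref().LogMsg( "ShaderProgram: attribute location mismatch" );
}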
Can anyone see errors or omissions in either the VAO creation code or rendering code?
thanks!