Search Results

Search found 145 results on 6 pages for 'normals'.

Page 2 of 6

  • Any reliable polygon normal calculation code?

    - by Jenko
    I'm currently calculating the normal vector of a polygon using this code, but for some faces here and there it calculates a wrong normal. I don't really know what's going on or where it fails but it's not reliable. Do you have any polygon normal calculation that's tested and found to be reliable? // calculate normal of a polygon using all points var n:int = points.length; var x:Number = 0; var y:Number = 0; var z:Number = 0 // ensure all points above 0 var minx:Number = 0, miny:Number = 0, minz:Number = 0; for (var p:int = 0, pl:int = points.length; p < pl; p++) { var po:_Point3D = points[p] = points[p].clone(); if (po.x < minx) { minx = po.x; } if (po.y < miny) { miny = po.y; } if (po.z < minz) { minz = po.z; } } for (p = 0; p < pl; p++) { po = points[p]; po.x -= minx; po.y -= miny; po.z -= minz; } var cur:int = 1, prev:int = 0, next:int = 2; for (var i:int = 1; i <= n; i++) { // using Newell method x += points[cur].y * (points[next].z - points[prev].z); y += points[cur].z * (points[next].x - points[prev].x); z += points[cur].x * (points[next].y - points[prev].y); cur = (cur+1) % n; next = (next+1) % n; prev = (prev+1) % n; } // length of the normal var length:Number = Math.sqrt(x * x + y * y + z * z); // turn large values into a unit vector if (length != 0){ x = x / length; y = y / length; z = z / length; }else { throw new Error("Cannot calculate normal since triangle has an area of 0"); }
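    A hedged C++ sketch of the edge-based Newell formulation, which is the form usually quoted as robust for arbitrary (even slightly non-planar) polygons; it accumulates over edges rather than point triples, so it needs no pre-translation of the points and no modular prev/cur/next bookkeeping. Vec3 is a hypothetical minimal vector type standing in for _Point3D.

        #include <cmath>
        #include <stdexcept>
        #include <vector>

        struct Vec3 { double x, y, z; };  // hypothetical stand-in for _Point3D

        // Newell's method: accumulate the normal edge by edge, then normalize.
        Vec3 polygonNormal(const std::vector<Vec3>& pts) {
            Vec3 n{0.0, 0.0, 0.0};
            const std::size_t count = pts.size();
            for (std::size_t i = 0; i < count; ++i) {
                const Vec3& cur  = pts[i];
                const Vec3& next = pts[(i + 1) % count];
                n.x += (cur.y - next.y) * (cur.z + next.z);
                n.y += (cur.z - next.z) * (cur.x + next.x);
                n.z += (cur.x - next.x) * (cur.y + next.y);
            }
            const double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            if (len == 0.0)
                throw std::runtime_error("cannot calculate normal: polygon has zero area");
            n.x /= len; n.y /= len; n.z /= len;
            return n;
        }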

    Read the article

  • What is wrong with my specular phong shading

    - by Thijser
    I'm sorry if this should be placed on Stack Overflow instead; however, seeing as this is graphics-related, I was hoping you guys could help me: I'm attempting to write a phong shader and currently working on the specular. I came across the following formula: base*pow(dot(V,R),shininess) and attempted to implement it (V is the position of the viewer and R the reflective vector). This gave the following result and code: Vec3Df phongSpecular(const Vec3Df & vertexPos, Vec3Df & normal, const Vec3Df & lightPos, const Vec3Df & cameraPos, unsigned int index) { Vec3Df relativeLightPos=(lightPos-vertexPos); relativeLightPos.normalize(); Vec3Df relativeCameraPos= (cameraPos-vertexPos); relativeCameraPos.normalize(); int DotOfNormalAndLight = Vec3Df::dotProduct(normal,relativeLightPos); Vec3Df reflective =(relativeLightPos-(2*DotOfNormalAndLight*normal))*-1; reflective.normalize(); float phongyness= Vec3Df::dotProduct(reflective,relativeCameraPos); if (phongyness<0){ phongyness=0; } float shininess= Shininess[index]; float speculair = powf(phongyness,shininess); return Ks[index]*speculair; } I'm looking for something more like this:
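    A hedged sketch of the likely fix, reusing the asker's Vec3Df type and the Ks/Shininess arrays from the code above. The probable culprit is `int DotOfNormalAndLight`: N.L lies in [-1, 1], so storing it in an int truncates it to 0 (or +/-1) and wrecks the reflection vector. Kept as a float, reflecting L about N is R = 2(N.L)N - L.

        Vec3Df phongSpecular(const Vec3Df& vertexPos, Vec3Df& normal,
                             const Vec3Df& lightPos, const Vec3Df& cameraPos,
                             unsigned int index)
        {
            Vec3Df L = lightPos - vertexPos;   L.normalize();
            Vec3Df V = cameraPos - vertexPos;  V.normalize();

            float nDotL = Vec3Df::dotProduct(normal, L);  // float, not int!
            Vec3Df R = 2.0f * nDotL * normal - L;         // reflect L about N
            R.normalize();

            float rDotV = Vec3Df::dotProduct(R, V);
            if (rDotV < 0.0f) rDotV = 0.0f;
            return Ks[index] * powf(rDotV, Shininess[index]);
        }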

    Read the article

  • Surface normal to screen angle

    - by Tannz0rz
    I've been struggling to get this working. I simply wish to take a surface normal and convert it to a screen angle. As an example, assuming we're working with the highlighted surface on the sphere below, where the arrow is the normal, the 2D angle would obviously be PI/4 radians. Here's one of the many things I've tried to no avail: float4 A = v.vertex; float4 B = v.vertex + float4(v.normal, 0.0); A = mul(VP, A); B = mul(VP, B); A.xy = (0.5 * (A.xy / A.w)) + 0.5; B.xy = (0.5 * (B.xy / B.w)) + 0.5; o.theta = atan2(B.y - A.y, B.x - A.x); I'm finally at my wit's end. Thanks for any and all help.
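    Two things worth ruling out, sketched below in GLM-flavored C++ (the math is the same in a shader): the matrix must take the input all the way from the model's space to clip space (a bare view-projection matrix silently drops the model transform), and after the w-divide the x and y axes have different on-screen scales unless you correct by the viewport aspect ratio before calling atan2. Keeping the normal offset small also avoids pushing the second point behind the near plane. Here p, n, mvp, and aspect are assumed inputs.

        #include <cmath>
        #include <glm/glm.hpp>

        float screenAngle(const glm::vec3& p, const glm::vec3& n,
                          const glm::mat4& mvp, float aspect /* width/height */)
        {
            glm::vec4 a = mvp * glm::vec4(p, 1.0f);
            glm::vec4 b = mvp * glm::vec4(p + 0.01f * n, 1.0f); // small offset along n
            glm::vec2 sa = glm::vec2(a) / a.w;                  // NDC, [-1, 1]
            glm::vec2 sb = glm::vec2(b) / b.w;
            sa.x *= aspect; sb.x *= aspect;                     // equalize pixel scale
            return std::atan2(sb.y - sa.y, sb.x - sa.x);
        }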

    Read the article

  • Make Gameobject Stand On Surface Facing Certain Direction

    - by Julian
    I want to make a biped character stand on any surface I click on. Surfaces have up vectors along any of positive or negative X, Y, or Z. So imagine a cube with each face being a gameobject whose up vector points directly away from the cube. If my character is facing "forward" and I click on a surface which is to the left or right of me (left or right walls), I want my character to now be standing on that surface but still be facing in the direction he initially was. If I click on a wall which is in the forward path of my character I want him to now be standing on that surface and his forward to now be what was once "up" relative to my character. Here is the code I am working with now. void Update() { if (Input.GetMouseButtonUp (0)) { RaycastHit hit; var ray = Camera.main.ScreenPointToRay(Input.mousePosition); if (Physics.Raycast(ray, out hit)) { Vector3 upVectBefore = transform.up; Vector3 forwardVectBefore = transform.forward; Quaternion rotationVectBefore = transform.rotation; Vector3 hitPosition = hit.transform.position; transform.position = hitPosition; float lookDifference = Vector3.Distance(hit.transform.up, forwardVectBefore); if(Vector3.Distance(hit.transform.up, upVectBefore) < .23) //Same normal { transform.rotation = rotationVectBefore; } else if(lookDifference > 1.412 && lookDifference <= 1.70607) //side wall { transform.up = hit.transform.up; transform.forward = forwardVectBefore; } else //head on wall { transform.up = hit.transform.up; transform.forward = upVectBefore; } } } } The first case "Same normal" works fine, however the other two do not work as I would like them to. Sometimes my character is lying down on the surface or on the wrong side of the surface. Does anyone know a nice way of solving this problem?
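    One way to make the last two cases fall out automatically is to stop special-casing them: build the shortest-arc ("from-to") rotation that carries the character's current up onto the surface normal and compose it with the current rotation. Because the shortest arc adds no twist around the new up axis, heading is preserved whenever that is geometrically possible. In Unity this is Quaternion.FromToRotation(transform.up, hit.normal) * transform.rotation (and hit.point/hit.normal are usually safer than the hit object's transform). Below is a hedged C++ sketch of the underlying math, with minimal hypothetical Vec3/Quat types.

        #include <cmath>

        struct Vec3 { float x, y, z; };
        struct Quat { float w, x, y, z; };

        static Vec3 cross(const Vec3& a, const Vec3& b) {
            return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        }
        static float dot(const Vec3& a, const Vec3& b) {
            return a.x*b.x + a.y*b.y + a.z*b.z;
        }

        // Shortest-arc rotation taking unit vector `from` onto unit vector `to`.
        Quat fromToRotation(const Vec3& from, const Vec3& to) {
            Vec3 axis = cross(from, to);
            Quat q{ 1.0f + dot(from, to), axis.x, axis.y, axis.z };
            // (w near 0 means opposite vectors; pick any axis perpendicular to `from`)
            float len = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
            q.w /= len; q.x /= len; q.y /= len; q.z /= len;
            return q;  // apply as newRotation = q * oldRotation
        }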

    Read the article

  • Saving new indices, triangles and normals after WPF 3D transform

    - by sklitzz
    Hi, I have a 3D model which is currently lying flat; I wish for it to be rotated 90 degrees around the X axis. I have no problem doing this with the transforms, but to my knowledge the transforms are all just a bunch of matrices multiplied together. I would like the transform to really alter all the coordinates of the vertices of the model. Is there a way to "save changes" after I apply a transform?
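    WPF won't fold a Transform3D back into the mesh for you, but the recipe is mechanical: run every position through the matrix, every normal through the inverse-transpose of its upper-left 3x3, write the results back, then reset the model's transform to identity. A hedged, GLM-flavored C++ sketch of that bake step (in WPF the per-point part would be Transform3D.Transform over MeshGeometry3D.Positions):

        #include <glm/glm.hpp>
        #include <vector>

        // Bake a transform into mesh data; afterwards the object's transform
        // should be reset to identity.
        void bakeTransform(std::vector<glm::vec3>& positions,
                           std::vector<glm::vec3>& normals,
                           const glm::mat4& m)
        {
            const glm::mat3 nrm = glm::transpose(glm::inverse(glm::mat3(m)));
            for (auto& p : positions) p = glm::vec3(m * glm::vec4(p, 1.0f));
            for (auto& n : normals)   n = glm::normalize(nrm * n);
        }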

    Read the article

  • How to calculate vertex normals for a mesh in Java in an OpenGL ES application?

    - by alan mc
    Can someone point me to Java code (in Java, not C or C++) that calculates all the normals for all the vertices of a mesh for an OpenGL ES application? I need this for lighting. Let's say I have a cube with the following vertices and indices: float vertices[] = { -width, -height, -depth, // 0 width, -height, -depth, // 1 width, height, -depth, // 2 -width, height, -depth, // 3 -width, -height, depth, // 4 width, -height, depth, // 5 width, height, depth, // 6 -width, height, depth // 7 }; short indices[] = { 0, 2, 1, 0, 3, 2, 1,2,6, 6,5,1, 4,5,6, 6,7,4, 2,3,6, 6,3,7, 0,7,3, 0,4,7, 0,1,5, 0,5,4 }; In the above specific example, how many normals do we need?
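    For reference: with shared vertices as above you need one normal per vertex (8 here) for smooth shading, computed by summing the face normals of the triangles that touch each vertex; for a flat-shaded cube with hard edges you would instead duplicate vertices per face and end up with 24. A hedged sketch of the smooth case, in C++ for consistency with the other examples on this page; the loops translate line for line to Java float[]/short[] arrays. Flip the cross product if your winding is clockwise.

        #include <cmath>

        void computeVertexNormals(const float* verts, int vertexCount,
                                  const short* indices, int indexCount,
                                  float* outNormals /* 3*vertexCount floats, zeroed */)
        {
            for (int i = 0; i < indexCount; i += 3) {
                const int a = 3 * indices[i], b = 3 * indices[i+1], c = 3 * indices[i+2];
                const float e1x = verts[b]-verts[a], e1y = verts[b+1]-verts[a+1], e1z = verts[b+2]-verts[a+2];
                const float e2x = verts[c]-verts[a], e2y = verts[c+1]-verts[a+1], e2z = verts[c+2]-verts[a+2];
                const float nx = e1y*e2z - e1z*e2y;   // face normal = cross(e1, e2)
                const float ny = e1z*e2x - e1x*e2z;
                const float nz = e1x*e2y - e1y*e2x;
                for (int v : { a, b, c }) {           // accumulate into each corner
                    outNormals[v] += nx; outNormals[v+1] += ny; outNormals[v+2] += nz;
                }
            }
            for (int v = 0; v < 3 * vertexCount; v += 3) {  // normalize the sums
                const float len = std::sqrt(outNormals[v]*outNormals[v] +
                                            outNormals[v+1]*outNormals[v+1] +
                                            outNormals[v+2]*outNormals[v+2]);
                if (len > 0.0f) {
                    outNormals[v] /= len; outNormals[v+1] /= len; outNormals[v+2] /= len;
                }
            }
        }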

    Read the article

  • OpenGL ES Polygon with Normals rendering (Note the 'ES!')

    - by MarqueIV
    Ok... imagine I have a relatively simple solid that has six distinct normals but actually has close to 48 faces (8 faces per direction) and there are a LOT of shared vertices between faces. What's the most efficient way to render that in OpenGL? I know I can place the vertices in an array, then use an index array to render them, but I have to keep breaking my rendering steps down to change the normals (i.e. set normal 1... render 8 faces... set normal 2... render 8 faces, etc.) Because of that I have to maintain an array of index arrays... one for each normal! Not good! The other way I can do it is to use separate normal and vertex arrays (or even interleave them) but that means I need to have a one-to-one ratio for normals to vertices and that means the normals would be duplicated 8 times more than they need to be! On something with a spherical or even curved surface, every normal most likely is different, but for this, it really seems like a waste of memory. In a perfect world I'd like to have my vertex and normal arrays have different lengths, then, when I go to draw my triangles or quads, specify an index into each array for that vertex. Now the OBJ file format lets you specify exactly that... a vertex array and a normal array of different lengths, then when you specify the face you are rendering, you specify a vertex and a normal index (as well as a UV coord if you are using textures too) which seems like the perfect solution! 48 vertices but only 8 normals, then pairs of indexes defining the shapes' faces. But I'm not sure how to render that in OpenGL ES (again, note the 'ES'.) Currently I have to 'denormalize' (sorry for the SQL pun there) the normals back to a 1-to-1 with the vertex array, then render. Just wastes memory to me. Can anyone help? I hope I'm missing something very simple here. Mark
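    For what it's worth, OpenGL (ES included) really does support only a single index array, so the OBJ-style pair of indices has to be "welded" at load time: every distinct (position index, normal index) pair becomes one GL vertex, and faces are re-indexed against those pairs. The memory cost is bounded by the number of unique pairs actually used, not vertices times normals. A hedged C++17 sketch, assuming flat float arrays of xyz triples:

        #include <cstdint>
        #include <map>
        #include <utility>
        #include <vector>

        struct Welded {
            std::vector<float>    interleaved;  // x,y,z, nx,ny,nz per GL vertex
            std::vector<uint16_t> indices;      // one index per face corner
        };

        Welded weld(const std::vector<float>& pos, const std::vector<float>& nrm,
                    const std::vector<std::pair<int,int>>& corners /* (vi, ni) */)
        {
            Welded out;
            std::map<std::pair<int,int>, uint16_t> seen;
            for (auto [vi, ni] : corners) {
                auto [it, fresh] = seen.try_emplace({vi, ni},
                                                    (uint16_t)seen.size());
                if (fresh) {  // first time this (position, normal) pair appears
                    for (int k = 0; k < 3; ++k) out.interleaved.push_back(pos[3*vi + k]);
                    for (int k = 0; k < 3; ++k) out.interleaved.push_back(nrm[3*ni + k]);
                }
                out.indices.push_back(it->second);
            }
            return out;  // upload once, draw with glDrawElements
        }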

    Read the article

  • Can I use a vertex shader to display a model's normals?

    - by geowar
    I'm currently using a VBO for the texture coordinates, normals and the vertices of a (3DS) model I'm drawing with "glDrawArrays(GL_TRIANGLES, ...);". For debugging I want to (temporarily) show the normals when drawing my model. Do I have to use immediate mode to draw each line from vert to vert+normal -OR- stuff another VBO with vert and vert+normal to draw all the normals… -OR- is there a way for the vertex shader to use the vertex and normal data already passed in when drawing the model to compute the V+N used when drawing the normals?
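    A plain vertex shader can't emit extra vertices, so drawing one line per vertex from inside the model's own draw call would need a geometry shader (GL 3.2+). The portable fallback is the second option: build a small GL_LINES buffer once from the data you already have. A hedged C++ sketch:

        #include <vector>

        // Two endpoints (v and v + scale*n) per vertex; draw the result with
        // glDrawArrays(GL_LINES, 0, 2 * vertexCount).
        std::vector<float> makeNormalLines(const float* verts, const float* normals,
                                           int vertexCount, float scale)
        {
            std::vector<float> lines;
            lines.reserve(vertexCount * 6);
            for (int i = 0; i < vertexCount; ++i) {
                const float* v = verts   + 3 * i;
                const float* n = normals + 3 * i;
                lines.insert(lines.end(), { v[0], v[1], v[2],
                                            v[0] + scale * n[0],
                                            v[1] + scale * n[1],
                                            v[2] + scale * n[2] });
            }
            return lines;
        }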

    Read the article

  • How to transform mesh components?

    - by Lea Hayes
    I am attempting to transform the components of a mesh directly using a 4x4 matrix. This is working for the vertex positions, but it is not working for the normals (and probably not the tangents either). Here is what I have: // Transform vertex positions - Works like a charm! vertices = mesh.vertices; for (int i = 0; i < vertices.Length; ++i) vertices[i] = transform.MultiplyPoint(vertices[i]); // Does not work, lighting is messed up on mesh normals = mesh.normals; for (int i = 0; i < normals.Length; ++i) normals[i] = transform.MultiplyVector(normals[i]); Note: The input matrix converts from local to world space and is needed to combine multiple meshes together.
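    The usual explanation: positions and directions transform differently. Normals need the inverse-transpose of the matrix's upper-left 3x3 (a plain MultiplyVector is only correct for rotations and uniform scale), and they should be renormalized afterwards. A hedged GLM-flavored sketch of the fix; in terms of the asker's Matrix4x4 it is roughly transform.inverse.transpose applied to the normals:

        // localToWorld is the same 4x4 used for the positions.
        const glm::mat3 normalMatrix =
            glm::transpose(glm::inverse(glm::mat3(localToWorld)));
        for (std::size_t i = 0; i < normals.size(); ++i)
            normals[i] = glm::normalize(normalMatrix * normals[i]);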

    Read the article

  • Getting normal information from OpenGL render output

    - by okamiueru
    I'll try to keep this simple. I want a way to access the normal information of the scene, from the Frame Buffer output (or similar). The same way one is able to access the Depth Buffer using glGetTexImage and GL_DEPTH_COMPONENT. I know I could set up a fragment shader which outputs the normal information in RGB color space, which could in turn be read from the rendered image. I'm wondering however if there is a way to do this within the OpenGL API. I'll clarify anything upon request as best I can. Thank you!
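    As far as I know there is no GL_NORMAL_COMPONENT analogue; the pipeline never stores per-pixel normals on its own, so the shader route described above is the standard one (render normals into a color attachment, typically a float format so they survive untouched). Reading them back then mirrors the depth case. A hedged fragment, assuming normalAttachmentTex is an RGB16F color attachment the normals were rendered into:

        std::vector<float> pixels(width * height * 3);
        glBindTexture(GL_TEXTURE_2D, normalAttachmentTex);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_FLOAT, pixels.data());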

    Read the article

  • Bump mapping Problem GLSL

    - by jmfel1926
    I am having a slight problem with my Bump Mapping project. Although everything works OK (at least from what I know) there is a slight mistake somewhere and I get incorrect shading on the brick wall when the light goes to the one side or the other as seen in the picture below: The light is on the right side so the shading on the wall should be the other way. I have provided the shaders to help find the issue (I do not have much experience with shaders). Shaders: varying vec3 viewVec; varying vec3 position; varying vec3 lightvec; attribute vec3 tangent; attribute vec3 binormal; uniform vec3 lightpos; uniform mat4 cameraMat; void main() { gl_TexCoord[0] = gl_MultiTexCoord0; gl_Position = ftransform(); position = vec3(gl_ModelViewMatrix * gl_Vertex); lightvec = vec3(cameraMat * vec4(lightpos,1.0)) - position ; vec3 eyeVec = vec3(gl_ModelViewMatrix * gl_Vertex); viewVec = normalize(-eyeVec); } uniform sampler2D colormap; uniform sampler2D normalmap; varying vec3 viewVec; varying vec3 position; varying vec3 lightvec; vec3 vv; uniform float diffuset; uniform float specularterm; uniform float ambientterm; void main() { vv=viewVec; vec3 normals = normalize(texture2D(normalmap,gl_TexCoord[0].st).rgb * 2.0 - 1.0); normals.y = -normals.y; //normals = (normals * gl_NormalMatrix).xyz ; vec3 distance = lightvec; float dist_number =length(distance); float final_dist_number = 2.0/pow(dist_number,diffuset); vec3 light_dir=normalize(lightvec); vec3 Halfvector = normalize(light_dir+vv); float angle=max(dot(Halfvector,normals),0.0); angle= pow(angle,specularterm); vec3 specular=vec3(angle,angle,angle); float diffuseterm=max(dot(light_dir,normals),0.0); vec3 diffuse = diffuseterm * texture2D(colormap,gl_TexCoord[0].st).rgb; vec3 ambient = ambientterm *texture2D(colormap,gl_TexCoord[0].st).rgb; vec3 diffusefinal = diffuse * final_dist_number; vec3 finalcolor=diffusefinal+specular+ambient; gl_FragColor = vec4(finalcolor, 1.0); }
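    One thing that stands out: tangent and binormal are declared but never used, so the sampled normal stays in tangent space while lightvec and viewVec are in eye space; the shading frames only agree where the surface happens to line up with the map, which would produce exactly this kind of one-sided look. The usual fix is to rotate the light and view vectors into tangent space per vertex with the TBN basis. The change of basis is just three dot products, sketched here in GLM-flavored C++ with the GLSL equivalent in the comments:

        // With T, B, N in eye space, a tangent-space copy of any eye-space
        // vector v is its projection onto the three basis vectors.
        glm::vec3 toTangentSpace(const glm::vec3& v, const glm::vec3& T,
                                 const glm::vec3& B, const glm::vec3& N)
        {
            return glm::vec3(glm::dot(v, T), glm::dot(v, B), glm::dot(v, N));
        }
        // Vertex shader equivalent:
        //   lightvec = vec3(dot(L, T), dot(L, B), dot(L, N));
        //   viewVec  = vec3(dot(V, T), dot(V, B), dot(V, N));
        // The fragment shader can then use the sampled normal unchanged.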

    Read the article

  • How to calculate the normal of points on a 3D cubic Bézier curve given normals for its start and end points?

    - by Robert
    I'm trying to render a "3D ribbon" using a single 3D cubic Bézier curve to describe it (the width of the ribbon is some constant). The first and last control points have a normal vector associated with them (which are always perpendicular to the tangents at those points, and describe the surface normal of the ribbon at those points), and I'm trying to smoothly interpolate the normal vector over the course of the curve. For example, given a curve which forms the letter 'C', with the first and last control points both having surface normals pointing upwards, the ribbon should start flat, parallel to the ground, slowly turn, and then end flat again, facing the same way as the first control point. To do this "smoothly", it would have to face outwards half-way through the curve. At the moment (for this case), I've only been able to get all the surfaces facing upwards (and not outwards in the middle), which creates an ugly transition in the middle. It's quite hard to explain, I've attached some images below of this example with what it currently looks like (all surfaces facing upwards, sharp flip in the middle) and what it should look like (smooth transition, surfaces slowly rotate round). Silver faces represent the front, black faces the back. Incorrect, what it currently looks like: Correct, what it should look like: All I really need is to be able to calculate this "hybrid normal vector" for any point on the 3D cubic bézier curve, and I can generate the polygons no problem, but I can't work out how to get them to smoothly rotate round as depicted. Completely stuck as to how to proceed!
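    The tool usually reached for here is a rotation-minimizing frame: propagate the start normal along sampled points of the curve so it turns with the tangent but never twists, then compare the propagated normal at the far end with the prescribed end normal and distribute that residual angle phi as a twist of phi * t_i about the tangent at each sample. A hedged GLM-flavored sketch of the propagation step, using the double-reflection method (Wang et al.); p and tg are assumed precomputed curve samples and unit tangents:

        #include <glm/glm.hpp>
        #include <vector>

        std::vector<glm::vec3> rmfNormals(const std::vector<glm::vec3>& p,
                                          const std::vector<glm::vec3>& tg,
                                          const glm::vec3& n0)
        {
            std::vector<glm::vec3> n(p.size());
            n[0] = n0;
            for (std::size_t i = 0; i + 1 < p.size(); ++i) {
                glm::vec3 v1 = p[i + 1] - p[i];              // reflection 1
                float c1 = glm::dot(v1, v1);
                glm::vec3 nL = n[i]  - (2.0f / c1) * glm::dot(v1, n[i])  * v1;
                glm::vec3 tL = tg[i] - (2.0f / c1) * glm::dot(v1, tg[i]) * v1;
                glm::vec3 v2 = tg[i + 1] - tL;               // reflection 2
                float c2 = glm::dot(v2, v2);
                n[i + 1] = nL - (2.0f / c2) * glm::dot(v2, nL) * v2;
            }
            return n;  // then twist n[i] about tg[i] by phi * t_i to meet n1
        }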

    Read the article

  • texture mapping with lib3ds and SOIL help

    - by Adam West
    I'm having trouble with my project for loading a texture map onto a model. Any insight into what is going wrong with my code is fantastic. Right now the code only renders a teapot which I have assinged after creating it in 3DS Max. 3dsloader.cpp #include "3dsloader.h" Object::Object(std:: string filename) { m_TotalFaces = 0; m_model = lib3ds_file_load(filename.c_str()); // If loading the model failed, we throw an exception if(!m_model) { throw strcat("Unable to load ", filename.c_str()); } // set properties of texture coordinate generation for both x and y coordinates glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR); glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR); // if not already enabled, enable texture generation if(! glIsEnabled(GL_TEXTURE_GEN_S)) glEnable(GL_TEXTURE_GEN_S); if(! glIsEnabled(GL_TEXTURE_GEN_T)) glEnable(GL_TEXTURE_GEN_T); } Object::~Object() { if(m_model) // if the file isn't freed yet lib3ds_file_free(m_model); //free up memory glDisable(GL_TEXTURE_GEN_S); glDisable(GL_TEXTURE_GEN_T); } void Object::GetFaces() { m_TotalFaces = 0; Lib3dsMesh * mesh; // Loop through every mesh. for(mesh = m_model->meshes;mesh != NULL;mesh = mesh->next) { // Add the number of faces this mesh has to the total number of faces. m_TotalFaces += mesh->faces; } } void Object::CreateVBO() { assert(m_model != NULL); // Calculate the number of faces we have in total GetFaces(); // Allocate memory for our vertices and normals Lib3dsVector * vertices = new Lib3dsVector[m_TotalFaces * 3]; Lib3dsVector * normals = new Lib3dsVector[m_TotalFaces * 3]; Lib3dsTexel* texCoords = new Lib3dsTexel[m_TotalFaces * 3]; Lib3dsMesh * mesh; unsigned int FinishedFaces = 0; // Loop through all the meshes for(mesh = m_model->meshes;mesh != NULL;mesh = mesh->next) { lib3ds_mesh_calculate_normals(mesh, &normals[FinishedFaces*3]); // Loop through every face for(unsigned int cur_face = 0; cur_face < mesh->faces;cur_face++) { Lib3dsFace * face = &mesh->faceL[cur_face]; for(unsigned int i = 0;i < 3;i++) { memcpy(&texCoords[FinishedFaces*3 + i], mesh->texelL[face->points[ i ]], sizeof(Lib3dsTexel)); memcpy(&vertices[FinishedFaces*3 + i], mesh->pointL[face->points[ i ]].pos, sizeof(Lib3dsVector)); } FinishedFaces++; } } // Generate a Vertex Buffer Object and store it with our vertices glGenBuffers(1, &m_VertexVBO); glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, vertices, GL_STATIC_DRAW); // Generate another Vertex Buffer Object and store the normals in it glGenBuffers(1, &m_NormalVBO); glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, normals, GL_STATIC_DRAW); // Generate a third VBO and store the texture coordinates in it. 
glGenBuffers(1, &m_TexCoordVBO); glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsTexel) * 3 * m_TotalFaces, texCoords, GL_STATIC_DRAW); // Clean up our allocated memory delete vertices; delete normals; delete texCoords; // We no longer need lib3ds lib3ds_file_free(m_model); m_model = NULL; } void Object::applyTexture(const char*texfilename) { float imageWidth; float imageHeight; glGenTextures(1, & textureObject); // allocate memory for one texture textureObject = SOIL_load_OGL_texture(texfilename,SOIL_LOAD_AUTO,SOIL_CREATE_NEW_ID,SOIL_FLAG_MIPMAPS); glPixelStorei(GL_UNPACK_ALIGNMENT,1); glBindTexture(GL_TEXTURE_2D, textureObject); // use our newest texture glGetTexLevelParameterfv(GL_TEXTURE_2D,0,GL_TEXTURE_WIDTH,&imageWidth); glGetTexLevelParameterfv(GL_TEXTURE_2D,0,GL_TEXTURE_HEIGHT,&imageHeight); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // give the best result for texture magnification glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); //give the best result for texture minification glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // don't repeat texture glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat textureglTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat texture glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_MODULATE); glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,imageWidth,imageHeight,0,GL_RGB,GL_UNSIGNED_BYTE,& textureObject); } void Object::Draw() const { // Enable vertex, normal and texture-coordinate arrays. glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_NORMAL_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); // Bind the VBO with the normals. glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO); // The pointer for the normals is NULL which means that OpenGL will use the currently bound VBO. glNormalPointer(GL_FLOAT, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO); glTexCoordPointer(2, GL_FLOAT, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO); glVertexPointer(3, GL_FLOAT, 0, NULL); // Render the triangles. glDrawArrays(GL_TRIANGLES, 0, m_TotalFaces * 3); glDisableClientState(GL_VERTEX_ARRAY); glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_TEXTURE_COORD_ARRAY); } 3dsloader.h #include "main.h" #include "lib3ds/file.h" #include "lib3ds/mesh.h" #include "lib3ds/material.h" class Object { public: Object(std:: string filename); virtual ~Object(); virtual void Draw() const; virtual void CreateVBO(); void applyTexture(const char*texfilename); protected: void GetFaces(); unsigned int m_TotalFaces; Lib3dsFile * m_model; Lib3dsMesh* Mesh; GLuint textureObject; GLuint m_VertexVBO, m_NormalVBO, m_TexCoordVBO; }; Called in the main cpp file with: VBO,apply texture and draw (pretty simple, how ironic) and thats it, please help me forum :)
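    A couple of things in the code above look like the actual trouble, quite apart from the model loading. First, the constructor enables GL_TEXTURE_GEN_S/T with eye-linear texgen, which makes OpenGL generate texture coordinates and ignore the texelL array being so carefully copied into the VBO. Second, applyTexture generates a texture handle, overwrites it with the one SOIL already created and uploaded, and then calls glTexImage2D with &textureObject (a pointer to the handle) as if it were pixel data. A hedged corrected sketch (SOIL does the upload; also note the arrays should be freed with delete[], not delete):

        void Object::applyTexture(const char* texfilename)
        {
            // SOIL_load_OGL_texture creates *and uploads* the texture.
            textureObject = SOIL_load_OGL_texture(texfilename, SOIL_LOAD_AUTO,
                                                  SOIL_CREATE_NEW_ID,
                                                  SOIL_FLAG_MIPMAPS);
            if (textureObject == 0) return;  // SOIL reports failure as 0

            glBindTexture(GL_TEXTURE_2D, textureObject);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                            GL_LINEAR_MIPMAP_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
            // No glTexImage2D here, and no texgen: disable GL_TEXTURE_GEN_S/T
            // so the per-vertex texture coordinates are actually used.
        }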

    Read the article

  • OpenGL 3.x Assimp trouble implementing phong shading (normals?)

    - by Defcronyke
    I'm having trouble getting phong shading to look right. I'm pretty sure there's something wrong with either my OpenGL calls, or the way I'm loading my normals, but I guess it could be something else since 3D graphics and Assimp are both still very new to me. When trying to load .obj/.mtl files, the problems I'm seeing are: The models seem to be lit too intensely (less phong-style and more completely washed out, too bright). Faces that are lit seem to be lit equally all over (with the exception of a specular highlight showing only when the light source position is moved to be practically right on top of the model) Because of problems 1 and 2, spheres look very wrong: picture of sphere And things with larger faces look (less-noticeably) wrong too: picture of cube I could be wrong, but to me this doesn't look like proper phong shading. Here's the code that I think might be relevant (I can post more if necessary): file: assimpRenderer.cpp #include "assimpRenderer.hpp" namespace def { assimpRenderer::assimpRenderer(std::string modelFilename, float modelScale) { initSFML(); initOpenGL(); if (assImport(modelFilename)) // if modelFile loaded successfully { initScene(); mainLoop(modelScale); shutdownScene(); } shutdownOpenGL(); shutdownSFML(); } assimpRenderer::~assimpRenderer() { } void assimpRenderer::initSFML() { windowWidth = 800; windowHeight = 600; settings.majorVersion = 3; settings.minorVersion = 3; app = NULL; shader = NULL; app = new sf::Window(sf::VideoMode(windowWidth,windowHeight,32), "OpenGL 3.x Window", sf::Style::Default, settings); app->setFramerateLimit(240); app->setActive(); return; } void assimpRenderer::shutdownSFML() { delete app; return; } void assimpRenderer::initOpenGL() { GLenum err = glewInit(); if (GLEW_OK != err) { /* Problem: glewInit failed, something is seriously wrong. */ std::cerr << "Error: " << glewGetErrorString(err) << std::endl; } // check the OpenGL context version that's currently in use int glVersion[2] = {-1, -1}; glGetIntegerv(GL_MAJOR_VERSION, &glVersion[0]); // get the OpenGL Major version glGetIntegerv(GL_MINOR_VERSION, &glVersion[1]); // get the OpenGL Minor version std::cout << "Using OpenGL Version: " << glVersion[0] << "." 
<< glVersion[1] << std::endl; return; } void assimpRenderer::shutdownOpenGL() { return; } void assimpRenderer::initScene() { // allocate heap space for VAOs, VBOs, and IBOs vaoID = new GLuint[scene->mNumMeshes]; vboID = new GLuint[scene->mNumMeshes*2]; iboID = new GLuint[scene->mNumMeshes]; glClearColor(0.4f, 0.6f, 0.9f, 0.0f); glEnable(GL_DEPTH_TEST); glDepthFunc(GL_LEQUAL); glEnable(GL_CULL_FACE); shader = new Shader("shader.vert", "shader.frag"); projectionMatrix = glm::perspective(60.0f, (float)windowWidth / (float)windowHeight, 0.1f, 100.0f); rot = 0.0f; rotSpeed = 50.0f; faceIndex = 0; colorArrayA = NULL; colorArrayD = NULL; colorArrayS = NULL; normalArray = NULL; genVAOs(); return; } void assimpRenderer::shutdownScene() { delete [] iboID; delete [] vboID; delete [] vaoID; delete shader; } void assimpRenderer::renderScene(float modelScale) { sf::Time elapsedTime = clock.getElapsedTime(); clock.restart(); if (rot > 360.0f) rot = 0.0f; rot += rotSpeed * elapsedTime.asSeconds(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT); viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, -3.0f, -10.0f)); // move back a bit modelMatrix = glm::scale(glm::mat4(1.0f), glm::vec3(modelScale)); // scale model modelMatrix = glm::rotate(modelMatrix, rot, glm::vec3(0, 1, 0)); //modelMatrix = glm::rotate(modelMatrix, 25.0f, glm::vec3(0, 1, 0)); glm::vec3 lightPosition( 0.0f, -100.0f, 0.0f ); float lightPositionArray[3]; lightPositionArray[0] = lightPosition[0]; lightPositionArray[1] = lightPosition[1]; lightPositionArray[2] = lightPosition[2]; shader->bind(); int projectionMatrixLocation = glGetUniformLocation(shader->id(), "projectionMatrix"); int viewMatrixLocation = glGetUniformLocation(shader->id(), "viewMatrix"); int modelMatrixLocation = glGetUniformLocation(shader->id(), "modelMatrix"); int ambientLocation = glGetUniformLocation(shader->id(), "ambientColor"); int diffuseLocation = glGetUniformLocation(shader->id(), "diffuseColor"); int specularLocation = glGetUniformLocation(shader->id(), "specularColor"); int lightPositionLocation = glGetUniformLocation(shader->id(), "lightPosition"); int normalMatrixLocation = glGetUniformLocation(shader->id(), "normalMatrix"); glUniformMatrix4fv(projectionMatrixLocation, 1, GL_FALSE, &projectionMatrix[0][0]); glUniformMatrix4fv(viewMatrixLocation, 1, GL_FALSE, &viewMatrix[0][0]); glUniformMatrix4fv(modelMatrixLocation, 1, GL_FALSE, &modelMatrix[0][0]); glUniform3fv(lightPositionLocation, 1, lightPositionArray); for (unsigned int i = 0; i < scene->mNumMeshes; i++) { colorArrayA = new float[3]; colorArrayD = new float[3]; colorArrayS = new float[3]; material = scene->mMaterials[scene->mNumMaterials-1]; normalArray = new float[scene->mMeshes[i]->mNumVertices * 3]; unsigned int normalIndex = 0; for (unsigned int j = 0; j < scene->mMeshes[i]->mNumVertices * 3; j+=3, normalIndex++) { normalArray[j] = scene->mMeshes[i]->mNormals[normalIndex].x; // x normalArray[j+1] = scene->mMeshes[i]->mNormals[normalIndex].y; // y normalArray[j+2] = scene->mMeshes[i]->mNormals[normalIndex].z; // z } normalIndex = 0; glUniformMatrix3fv(normalMatrixLocation, 1, GL_FALSE, normalArray); aiColor3D ambient(0.0f, 0.0f, 0.0f); material->Get(AI_MATKEY_COLOR_AMBIENT, ambient); aiColor3D diffuse(0.0f, 0.0f, 0.0f); material->Get(AI_MATKEY_COLOR_DIFFUSE, diffuse); aiColor3D specular(0.0f, 0.0f, 0.0f); material->Get(AI_MATKEY_COLOR_SPECULAR, specular); colorArrayA[0] = ambient.r; colorArrayA[1] = ambient.g; colorArrayA[2] = ambient.b; colorArrayD[0] = diffuse.r; 
colorArrayD[1] = diffuse.g; colorArrayD[2] = diffuse.b; colorArrayS[0] = specular.r; colorArrayS[1] = specular.g; colorArrayS[2] = specular.b; // bind color for each mesh glUniform3fv(ambientLocation, 1, colorArrayA); glUniform3fv(diffuseLocation, 1, colorArrayD); glUniform3fv(specularLocation, 1, colorArrayS); // render all meshes glBindVertexArray(vaoID[i]); // bind our VAO glDrawElements(GL_TRIANGLES, scene->mMeshes[i]->mNumFaces*3, GL_UNSIGNED_INT, 0); glBindVertexArray(0); // unbind our VAO delete [] normalArray; delete [] colorArrayA; delete [] colorArrayD; delete [] colorArrayS; } shader->unbind(); app->display(); return; } void assimpRenderer::handleEvents() { sf::Event event; while (app->pollEvent(event)) { if (event.type == sf::Event::Closed) { app->close(); } if ((event.type == sf::Event::KeyPressed) && (event.key.code == sf::Keyboard::Escape)) { app->close(); } if (event.type == sf::Event::Resized) { glViewport(0, 0, event.size.width, event.size.height); } } return; } void assimpRenderer::mainLoop(float modelScale) { while (app->isOpen()) { renderScene(modelScale); handleEvents(); } } bool assimpRenderer::assImport(const std::string& pFile) { // read the file with some example postprocessing scene = importer.ReadFile(pFile, aiProcess_CalcTangentSpace | aiProcess_Triangulate | aiProcess_JoinIdenticalVertices | aiProcess_SortByPType); // if the import failed, report it if (!scene) { std::cerr << "Error: " << importer.GetErrorString() << std::endl; return false; } return true; } void assimpRenderer::genVAOs() { int vboIndex = 0; for (unsigned int i = 0; i < scene->mNumMeshes; i++, vboIndex+=2) { mesh = scene->mMeshes[i]; indexArray = new unsigned int[mesh->mNumFaces * sizeof(unsigned int) * 3]; // convert assimp faces format to array faceIndex = 0; for (unsigned int t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; std::memcpy(&indexArray[faceIndex], face->mIndices, sizeof(float) * 3); faceIndex += 3; } // generate VAO glGenVertexArrays(1, &vaoID[i]); glBindVertexArray(vaoID[i]); // generate IBO for faces glGenBuffers(1, &iboID[i]); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID[i]); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint) * mesh->mNumFaces * 3, indexArray, GL_STATIC_DRAW); // generate VBO for vertices if (mesh->HasPositions()) { glGenBuffers(1, &vboID[vboIndex]); glBindBuffer(GL_ARRAY_BUFFER, vboID[vboIndex]); glBufferData(GL_ARRAY_BUFFER, mesh->mNumVertices * sizeof(GLfloat) * 3, mesh->mVertices, GL_STATIC_DRAW); glEnableVertexAttribArray((GLuint)0); glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0); } // generate VBO for normals if (mesh->HasNormals()) { normalArray = new float[scene->mMeshes[i]->mNumVertices * 3]; unsigned int normalIndex = 0; for (unsigned int j = 0; j < scene->mMeshes[i]->mNumVertices * 3; j+=3, normalIndex++) { normalArray[j] = scene->mMeshes[i]->mNormals[normalIndex].x; // x normalArray[j+1] = scene->mMeshes[i]->mNormals[normalIndex].y; // y normalArray[j+2] = scene->mMeshes[i]->mNormals[normalIndex].z; // z } normalIndex = 0; glGenBuffers(1, &vboID[vboIndex+1]); glBindBuffer(GL_ARRAY_BUFFER, vboID[vboIndex+1]); glBufferData(GL_ARRAY_BUFFER, mesh->mNumVertices * sizeof(GLfloat) * 3, normalArray, GL_STATIC_DRAW); glEnableVertexAttribArray((GLuint)1); glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0); delete [] normalArray; } // tex coord stuff goes here // unbind buffers glBindVertexArray(0); glBindBuffer(GL_ARRAY_BUFFER, 0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); delete [] indexArray; } 
vboIndex = 0; return; } } file: shader.vert #version 150 core in vec3 in_Position; in vec3 in_Normal; uniform mat4 projectionMatrix; uniform mat4 viewMatrix; uniform mat4 modelMatrix; uniform vec3 lightPosition; uniform mat3 normalMatrix; smooth out vec3 vVaryingNormal; smooth out vec3 vVaryingLightDir; void main() { // derive MVP and MV matrices mat4 modelViewProjectionMatrix = projectionMatrix * viewMatrix * modelMatrix; mat4 modelViewMatrix = viewMatrix * modelMatrix; // get surface normal in eye coordinates vVaryingNormal = normalMatrix * in_Normal; // get vertex position in eye coordinates vec4 vPosition4 = modelViewMatrix * vec4(in_Position, 1.0); vec3 vPosition3 = vPosition4.xyz / vPosition4.w; // get vector to light source vVaryingLightDir = normalize(lightPosition - vPosition3); // Set the position of the current vertex gl_Position = modelViewProjectionMatrix * vec4(in_Position, 1.0); } file: shader.frag #version 150 core out vec4 out_Color; uniform vec3 ambientColor; uniform vec3 diffuseColor; uniform vec3 specularColor; smooth in vec3 vVaryingNormal; smooth in vec3 vVaryingLightDir; void main() { // dot product gives us diffuse intensity float diff = max(0.0, dot(normalize(vVaryingNormal), normalize(vVaryingLightDir))); // multiply intensity by diffuse color, force alpha to 1.0 out_Color = vec4(diff * diffuseColor, 1.0); // add in ambient light out_Color += vec4(ambientColor, 1.0); // specular light vec3 vReflection = normalize(reflect(-normalize(vVaryingLightDir), normalize(vVaryingNormal))); float spec = max(0.0, dot(normalize(vVaryingNormal), vReflection)); if (diff != 0) { float fSpec = pow(spec, 128.0); // Set the output color of our current pixel out_Color.rgb += vec3(fSpec, fSpec, fSpec); } } I know it's a lot to look through, but I'm putting most of the code up so as not to assume where the problem is. Thanks in advance to anyone who has some time to help me pinpoint the problem(s)! I've been trying to sort it out for two days now and I'm not getting anywhere on my own.
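    One bug jumps out in renderScene above: the uniform named normalMatrix is a mat3 change-of-basis in the vertex shader, but it is being filled with the mesh's per-vertex normal array (glUniformMatrix3fv reads exactly 9 floats from it). That alone would explain lighting that looks washed out and direction-independent. The normal matrix should come from the model-view matrix; with GLM, which is already used here:

        // Build the real normal matrix from the model-view transform.
        glm::mat4 modelView    = viewMatrix * modelMatrix;
        glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(modelView)));
        glUniformMatrix3fv(normalMatrixLocation, 1, GL_FALSE, &normalMatrix[0][0]);

    Two smaller suspects, offered tentatively: every mesh samples scene->mMaterials[scene->mNumMaterials-1] instead of its own mesh->mMaterialIndex, and the fragment shader's out_Color += vec4(ambientColor, 1.0) adds ambient at full strength (and pushes alpha past 1.0), which would contribute to the washed-out look.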

    Read the article

  • Opengl problem with texture in model from obj

    - by subSeven
    Hello! I writing small program in OpenGL, and I have problem ( textures are skew, and I dont know why, this model work in another obj viewer) What I have: http://img696.imageshack.us/i/obrazo.png/ What I want http://img88.imageshack.us/i/obraz2d.jpg/ This code where I load texture: bool success; ILuint texId; GLuint image; ilGenImages(1, &texId); ilBindImage(texId); success = ilLoadImage((WCHAR*)fileName.c_str()); if(success) { success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE); if(!success) { return false; } } else { return false; } glGenTextures(1, &image); glBindTexture(GL_TEXTURE_2D, image); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData()); ilDeleteImages(1, &texId); Code to load obj: triangles.clear(); std::ifstream in; std::string cmd; in.open (fileName.c_str()); if (in.is_open()) { while(!in.eof()) { in>>cmd; if(cmd=="v") { Vector3d vector; in>>vector.x; in>>vector.y; in>>vector.z; points.push_back(vector); } if(cmd=="vt") { Vector2d texcord; in>>texcord.x; in>>texcord.y; texcords.push_back(texcord); } if(cmd=="vn") { Vector3d normal; in>>normal.x; in>>normal.y; in>>normal.z; normals.push_back(normal); } if(cmd=="f") { Triangle triangle; std::string str; std::string str1,str2,str3; std::string delimeter("/"); int pos; int n; std::stringstream ss (std::stringstream::in | std::stringstream::out); in>>str; pos = str.find(delimeter); str1 = str.substr(0,pos); str2 = str.substr(pos+delimeter.length()); pos = str2.find(delimeter); str3 = str2.substr(pos+delimeter.length()); str2 = str2.substr(0,pos); ss<<str1; ss>>n; triangle.a= n-1; ss.clear(); ss<<str3; ss>>n; triangle.an =n-1; ss.clear(); ss<<str2; ss>>n; ss.clear(); triangle.atc = n-1; in>>str; pos = str.find(delimeter); str1 = str.substr(0,pos); str2 = str.substr(pos+delimeter.length()); pos = str2.find(delimeter); str3 = str2.substr(pos+delimeter.length()); str2 = str2.substr(0,pos); ss<<str1; ss>>n; triangle.b= n-1; ss.clear(); ss<<str3; ss>>n; triangle.bn =n-1; ss.clear(); ss<<str2; ss>>n; ss.clear(); triangle.btc = n-1; in>>str; pos = str.find(delimeter); str1 = str.substr(0,pos); str2 = str.substr(pos+delimeter.length()); pos = str2.find(delimeter); str3 = str2.substr(pos+delimeter.length()); str2 = str2.substr(0,pos); ss<<str1; ss>>n; triangle.c= n-1; ss.clear(); ss<<str3; ss>>n; triangle.cn =n-1; ss.clear(); ss<<str2; ss>>n; ss.clear(); triangle.ctc = n-1; triangles.push_back(triangle); } cmd = ""; } in.close(); return true; } return false; Code to draw model: glEnable(GL_TEXTURE_2D); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL); glBindTexture(GL_TEXTURE_2D,image); glBegin(GL_TRIANGLES); for(int i=0;i<triangles.size();i++) { glTexCoord2f(texcords[triangles[i].ctc].x, texcords[triangles[i].ctc].y); glNormal3f(normals[triangles[i].cn].x, normals[triangles[i].cn].y, normals[triangles[i].cn].z); glVertex3f( points[triangles[i].c].x, points[triangles[i].c].y, points[triangles[i].c].z); glTexCoord2f(texcords[triangles[i].btc].x, texcords[triangles[i].btc].y); glNormal3f(normals[triangles[i].bn].x, normals[triangles[i].bn].y, normals[triangles[i].bn].z); glVertex3f( points[triangles[i].b].x, points[triangles[i].b].y, points[triangles[i].b].z); glTexCoord2f(texcords[triangles[i].atc].x, texcords[triangles[i].atc].y); glNormal3f(normals[triangles[i].an].x, 
normals[triangles[i].an].y, normals[triangles[i].an].z); glVertex3f( points[triangles[i].a].x, points[triangles[i].a].y, points[triangles[i].a].z); } glEnd(); glDisable(GL_TEXTURE_2D); Maybe someone can find the mistake in this.
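    A hedged guess, since nothing in the parsing looks obviously off: DevIL loads image rows top-down by default while OpenGL samples textures bottom-up, which distorts or mirrors the mapping even though the same OBJ looks fine in viewers that compensate. Worth ruling out first, either at image load time or by flipping V while parsing:

        // Option 1: ask DevIL for GL-style row order before ilLoadImage.
        ilOriginFunc(IL_ORIGIN_LOWER_LEFT);
        ilEnable(IL_ORIGIN_SET);

        // Option 2: flip the V coordinate once, in the "vt" branch of the loader.
        texcord.y = 1.0f - texcord.y;

    Note also that the loader reads exactly three corners per "f" line, so any quads in the file would silently lose a corner and skew the mapping; triangulating the model on export rules that out.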

    Read the article

  • When should I use indexed arrays of OpenGL vertices?

    - by Tartley
    I'm trying to get a clear idea of when I should be using indexed arrays of OpenGL vertices, drawn with gl[Multi]DrawElements and the like, versus when I should simply use contiguous arrays of vertices, drawn with gl[Multi]DrawArrays. (Update: The consensus in the replies I got is that one should always be using indexed vertices.) I have gone back and forth on this issue several times, so I'm going to outline my current understanding, in the hopes someone can either tell me I'm now finally more or less correct, or else point out where my remaining misunderstandings are. Specifically, I have three conclusions, in bold. Please correct them if they are wrong. One simple case is if my geometry consists of meshes to form curved surfaces. In this case, the vertices in the middle of the mesh will have identical attributes (position, normal, color, texture coord, etc) for every triangle which uses the vertex. This leads me to conclude that: 1. For geometry with few seams, indexed arrays are a big win. Follow rule 1 always, except: For geometry that is very 'blocky', in which every edge represents a seam, the benefit of indexed arrays is less obvious. To take a simple cube as an example, although each vertex is used in three different faces, we can't share vertices between them, because for a single vertex, the surface normals (and possible other things, like color and texture co-ord) will differ on each face. Hence we need to explicitly introduce redundant vertex positions into our array, so that the same position can be used several times with different normals, etc. This means that indexed arrays are of less use. e.g. When rendering a single face of a cube: 0 1 o---o |\ | | \ | | \| o---o 3 2 (this can be considered in isolation, because the seams between this face and all adjacent faces mean than none of these vertices can be shared between faces) if rendering using GL_TRIANGLE_FAN (or _STRIP), then each face of the cube can be rendered thus: verts = [v0, v1, v2, v3] colors = [c0, c0, c0, c0] normal = [n0, n0, n0, n0] Adding indices does not allow us to simplify this. From this I conclude that: 2. When rendering geometry which is all seams or mostly seams, when using GL_TRIANGLE_STRIP or _FAN, then I should never use indexed arrays, and should instead always use gl[Multi]DrawArrays. (Update: Replies indicate that this conclusion is wrong. Even though indices don't allow us to reduce the size of the arrays here, they should still be used because of other performance benefits, as discussed in the comments) The only exception to rule 2 is: When using GL_TRIANGLES (instead of strips or fans), then half of the vertices can still be re-used twice, with identical normals and colors, etc, because each cube face is rendered as two separate triangles. Again, for the same single cube face: 0 1 o---o |\ | | \ | | \| o---o 3 2 Without indices, using GL_TRIANGLES, the arrays would be something like: verts = [v0, v1, v2, v2, v3, v0] normals = [n0, n0, n0, n0, n0, n0] colors = [c0, c0, c0, c0, c0, c0] Since a vertex and a normal are often 3 floats each, and a color is often 3 bytes, that gives, for each cube face, about: verts = 6 * 3 floats = 18 floats normals = 6 * 3 floats = 18 floats colors = 6 * 3 bytes = 18 bytes = 36 floats and 18 bytes per cube face. (I understand the number of bytes might change if different types are used, the exact figures are just for illustration.) 
With indices, we can simplify this a little, giving: verts = [v0, v1, v2, v3] (4 * 3 = 12 floats) normals = [n0, n0, n0, n0] (4 * 3 = 12 floats) colors = [c0, c0, c0, c0] (4 * 3 = 12 bytes) indices = [0, 1, 2, 2, 3, 0] (6 shorts) = 24 floats + 12 bytes, and maybe 6 shorts, per cube face. See how in the latter case, vertices 0 and 2 are used twice, but only represented once in each of the verts, normals and colors arrays. This sounds like a small win for using indices, even in the extreme case of every single geometry edge being a seam. This leads me to conclude that: 3. When using GL_TRIANGLES, one should always use indexed arrays, even for geometry which is all seams. Please correct my conclusions in bold if they are wrong.

    Read the article

  • OBJ model loaded in LWJGL has a black area with no texture

    - by gambiting
    I have a problem with loading an .obj file in LWJGL and its textures. The object is a tree(it's a paid model from TurboSquid, so I can't post it here,but here's the link if you want to see how it should look like): http://www.turbosquid.com/FullPreview/Index.cfm/ID/701294 I wrote a custom OBJ loader using the LWJGL tutorial from their wiki. It looks like this: public class OBJLoader { public static Model loadModel(File f) throws FileNotFoundException, IOException { BufferedReader reader = new BufferedReader(new FileReader(f)); Model m = new Model(); String line; Texture currentTexture = null; while((line=reader.readLine()) != null) { if(line.startsWith("v ")) { float x = Float.valueOf(line.split(" ")[1]); float y = Float.valueOf(line.split(" ")[2]); float z = Float.valueOf(line.split(" ")[3]); m.verticies.add(new Vector3f(x,y,z)); }else if(line.startsWith("vn ")) { float x = Float.valueOf(line.split(" ")[1]); float y = Float.valueOf(line.split(" ")[2]); float z = Float.valueOf(line.split(" ")[3]); m.normals.add(new Vector3f(x,y,z)); }else if(line.startsWith("vt ")) { float x = Float.valueOf(line.split(" ")[1]); float y = Float.valueOf(line.split(" ")[2]); m.texVerticies.add(new Vector2f(x,y)); }else if(line.startsWith("f ")) { Vector3f vertexIndicies = new Vector3f(Float.valueOf(line.split(" ")[1].split("/")[0]), Float.valueOf(line.split(" ")[2].split("/")[0]), Float.valueOf(line.split(" ")[3].split("/")[0])); Vector3f textureIndicies = new Vector3f(Float.valueOf(line.split(" ")[1].split("/")[1]), Float.valueOf(line.split(" ")[2].split("/")[1]), Float.valueOf(line.split(" ")[3].split("/")[1])); Vector3f normalIndicies = new Vector3f(Float.valueOf(line.split(" ")[1].split("/")[2]), Float.valueOf(line.split(" ")[2].split("/")[2]), Float.valueOf(line.split(" ")[3].split("/")[2])); m.faces.add(new Face(vertexIndicies,textureIndicies,normalIndicies,currentTexture.getTextureID())); }else if(line.startsWith("g ")) { if(line.length()>2) { String name = line.split(" ")[1]; currentTexture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/" + name + ".png")); System.out.println(currentTexture.getTextureID()); } } } reader.close(); System.out.println(m.verticies.size() + " verticies"); System.out.println(m.normals.size() + " normals"); System.out.println(m.texVerticies.size() + " texture coordinates"); System.out.println(m.faces.size() + " faces"); return m; } } Then I create a display list for my model using this code: objectDisplayList = GL11.glGenLists(1); GL11.glNewList(objectDisplayList, GL11.GL_COMPILE); Model m = null; try { m = OBJLoader.loadModel(new File("res/untitled4.obj")); } catch (Exception e1) { e1.printStackTrace(); } int currentTexture=0; for(Face face: m.faces) { if(face.texture!=currentTexture) { currentTexture = face.texture; GL11.glBindTexture(GL11.GL_TEXTURE_2D, currentTexture); } GL11.glColor3f(1f, 1f, 1f); GL11.glBegin(GL11.GL_TRIANGLES); Vector3f n1 = m.normals.get((int) face.normal.x - 1); GL11.glNormal3f(n1.x, n1.y, n1.z); Vector2f t1 = m.texVerticies.get((int) face.textures.x -1); GL11.glTexCoord2f(t1.x, t1.y); Vector3f v1 = m.verticies.get((int) face.vertex.x - 1); GL11.glVertex3f(v1.x, v1.y, v1.z); Vector3f n2 = m.normals.get((int) face.normal.y - 1); GL11.glNormal3f(n2.x, n2.y, n2.z); Vector2f t2 = m.texVerticies.get((int) face.textures.y -1); GL11.glTexCoord2f(t2.x, t2.y); Vector3f v2 = m.verticies.get((int) face.vertex.y - 1); GL11.glVertex3f(v2.x, v2.y, v2.z); Vector3f n3 = m.normals.get((int) face.normal.z - 1); GL11.glNormal3f(n3.x, 
n3.y, n3.z); Vector2f t3 = m.texVerticies.get((int) face.textures.z -1); GL11.glTexCoord2f(t3.x, t3.y); Vector3f v3 = m.verticies.get((int) face.vertex.z - 1); GL11.glVertex3f(v3.x, v3.y, v3.z); GL11.glEnd(); } GL11.glEndList(); The currentTexture is an int - it contains the ID of the currently used texture. So my model looks absolutely fine without textures: (sorry I cannot post hyperlinks since I am a new user) i.imgur.com/VtoK0.png But look what happens if I enable GL_TEXTURE_2D: i.imgur.com/z8Kli.png i.imgur.com/5e9nn.png i.imgur.com/FAHM9.png As you can see an entire side of the tree appears to be missing - and it's not transparent, since it's not in the colour of the background - it's rendered black. It's not a problem with the model - if I load it using Kanji's OBJ loader it works fine(but the thing is,that I need to write my own OBJ loader) i.imgur.com/YDATo.png this is my OpenGL init section: //init display try { Display.setDisplayMode(new DisplayMode(Support.SCREEN_WIDTH, Support.SCREEN_HEIGHT)); Display.create(); Display.setVSyncEnabled(true); } catch (LWJGLException e) { e.printStackTrace(); System.exit(0); } GL11.glLoadIdentity(); GL11.glEnable(GL11.GL_TEXTURE_2D); GL11.glClearColor(1.0f, 0.0f, 0.0f, 1.0f); GL11.glShadeModel(GL11.GL_SMOOTH); GL11.glEnable(GL11.GL_DEPTH_TEST); GL11.glDepthFunc(GL11.GL_LESS); GL11.glDepthMask(true); GL11.glEnable(GL11.GL_NORMALIZE); GL11.glMatrixMode(GL11.GL_PROJECTION); GLU.gluPerspective (90.0f,800f/600f, 1f, 500.0f); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glEnable(GL11.GL_CULL_FACE); GL11.glCullFace(GL11.GL_BACK); //enable lighting GL11.glEnable(GL11.GL_LIGHTING); ByteBuffer temp = ByteBuffer.allocateDirect(16); temp.order(ByteOrder.nativeOrder()); GL11.glMaterial(GL11.GL_FRONT, GL11.GL_DIFFUSE, (FloatBuffer)temp.asFloatBuffer().put(lightDiffuse).flip()); GL11.glMaterialf(GL11.GL_FRONT, GL11.GL_SHININESS,(int)material_shinyness); GL11.glLight(GL11.GL_LIGHT2, GL11.GL_DIFFUSE, (FloatBuffer)temp.asFloatBuffer().put(lightDiffuse2).flip()); // Setup The Diffuse Light GL11.glLight(GL11.GL_LIGHT2, GL11.GL_POSITION,(FloatBuffer)temp.asFloatBuffer().put(lightPosition2).flip()); GL11.glLight(GL11.GL_LIGHT2, GL11.GL_AMBIENT,(FloatBuffer)temp.asFloatBuffer().put(lightAmbient).flip()); GL11.glLight(GL11.GL_LIGHT2, GL11.GL_SPECULAR,(FloatBuffer)temp.asFloatBuffer().put(lightDiffuse2).flip()); GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_CONSTANT_ATTENUATION, 0.1f); GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_LINEAR_ATTENUATION, 0.0f); GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_QUADRATIC_ATTENUATION, 0.0f); GL11.glEnable(GL11.GL_LIGHT2); Could somebody please help me?

    Read the article

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer, I have 2 FBOs one that contains diffuse, normals, instance ID and depth in that order and one that I use store the ssao result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth. For the second FBO I use an R16F texture. My rendering process is to first render to everything I mentioned in the first FBO, then bind depth and normals textures for reading for the ssao pass and write to the second FBO. After that I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing. What I intend to do is to write the blurred ssao value to the alpha component of the Normals texture. Here are where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage ouputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, it seems to write to my diffuse texture which should be in color attachment 0. When I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets resets after running the blur shader which is weird because if I use a float texture and write to it instanceID / nInstances, the texture doesn't get reset after the blur shader has ran. Here is how I prepare my first FBO: bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){ //Create FBO glGenFramebuffers(1, &m_fbo); glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo); //Create gbuffer and Depth Buffer Textures glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]); glGenTextures(1, &m_depthTexture); //prepare gbuffer for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){ glBindTexture(GL_TEXTURE_2D, m_textures[i]); if(i == GBUFF_TEXTURE_TYPE_NORMAL) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_ID) glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL); else{ std::cout << "Error in FBO initialization" << std::endl; return false; } glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); } //prepare depth buffer glBindTexture(GL_TEXTURE_2D, m_depthTexture); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2}; glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers); GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); if(Status != GL_FRAMEBUFFER_COMPLETE){ std::cout << "FB error, status 0x" << std::hex << Status << 
std::endl; return false; } //Restore default framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); return true; } where I use an enum defined as, enum GBUFF_TEXTURE_TYPE{ GBUFF_TEXTURE_TYPE_DIFFUSE, GBUFF_TEXTURE_TYPE_NORMAL, GBUFF_TEXTURE_TYPE_ID, GBUFF_NUM_TEXTURES }; Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow gets reset i.e. I'm using a re-size function which re-sizes the textures of the FBO but should I perhaps call glFramebufferTexture2D again too? EDIT: Here is the shader in question: #version 330 core uniform sampler2D aoSampler; uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y uniform bool use_blur; noperspective in vec2 TexCoord; layout(location = 0) out vec4 out_AO; void main(void){ if(use_blur){ float result = 0.0; for(int i = -1; i < 2; i++){ for(int j = -1; j < 2; j++){ vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j); result += texture(aoSampler, TexCoord + offset).r; // -0.004 because the texture seems to be a bit displaced } } out_AO = vec4(vec3(0.0), result / 9); } else out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r); }

    Read the article

  • Toon/cel shading with variable line width?

    - by Nick Wiggill
    I see a few broad approaches out there to doing cel shading: Duplication & enlargement of model with flipped normals (not an option for me) Sobel filter / fragment shader approaches to edge detection Stencil buffer approaches to edge detection Geometry (or vertex) shader approaches that calculate face and edge normals Am I correct in assuming the geometry-centric approach gives the greatest amount of control over lighting and line thickness, as well, e.g. for terrain where you might see the silhouette line of a hill merging gradually into a plain? What if I didn't need pixel lighting on my terrain surfaces? (And I probably won't, as I plan to use cell-based vertex- or texturemap-based lighting/shadowing.) Would I then be better off sticking with the geometry-type approach, or go for a screen space / fragment approach instead to keep things simpler? If so, how would I get the "inking" of hills within the mesh silhouette, rather than only the outline of the entire mesh (with no "ink" details inside that outline)? Lastly, is it possible to cheaply emulate the flipped-normals approach, using a geometry shader? Is that exactly what the GS approaches do? What I want - varying line thickness with intrusive lines inside the silhouette... What I don't want...

    Read the article

  • Changing input name attribute

    - by Parhs
    Hello! I have some hidden inputs like this <input name="exam.normals[1].blahblah" ..../> I would like somehow to replace the [1] with a number that I want (index). I ain't lazy, but I am trying to find a good way to do this. A solution would be to replace exam.normals[1] with exam.normals[ + index + ], but I would have to substring the whole string first. With a regexp I don't know how to do the replace.
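    The core of it is a single regular-expression replace on the name attribute. A hedged sketch, in C++ to match the other examples on this page; in the browser the same idea is el.name = el.name.replace(/\[\d+\]/, '[' + index + ']');

        #include <regex>
        #include <string>

        // Replace the first bracketed number in `name` with `index`.
        std::string renameInput(const std::string& name, int index)
        {
            return std::regex_replace(name, std::regex(R"(\[\d+\])"),
                                      "[" + std::to_string(index) + "]",
                                      std::regex_constants::format_first_only);
        }
        // renameInput("exam.normals[1].blahblah", 7) -> "exam.normals[7].blahblah"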

    Read the article

  • Why aren't tangent space normal maps completely blue?

    - by seahorse
    Why aren't normal maps just blue? I would think that normal maps should be predominantly blue in color because the Z component of the normal is represented by blue. Normals point out of the surface in the Z direction so we should see blue as the predominant colour since the Z component is dominant. By definition tangent space is perpendicular to the surface. At any point we should have the normal always pointing in the Z (blue direction) with no X (red direction) or Y (green direction). Thus the normal map (since it is a "normal map") should have the colour of the normals which is just blue (R = x = 0, G = y = 0, B = z = 1) with no shades in between. But normal maps are not so, and they have gradients of shades in them. Why is this so?
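    The short answer is that the map stores a different detail normal per texel, remapped channel-by-channel from [-1, 1] to [0, 255]; only a perfectly flat detail surface encodes to the uniform blue (128, 128, 255), and any texel whose normal tilts away from straight-out picks up red/green and loses blue. A hedged sketch of the conventional encoding:

        struct RGB8 { unsigned char r, g, b; };

        // Encode a unit tangent-space normal as a normal-map texel.
        RGB8 encodeNormal(float nx, float ny, float nz)
        {
            auto to8 = [](float c) {
                return (unsigned char)((c * 0.5f + 0.5f) * 255.0f + 0.5f);
            };
            return { to8(nx), to8(ny), to8(nz) };
        }
        // encodeNormal(0, 0, 1)           -> (128, 128, 255), the flat "blue"
        // encodeNormal(0.707f, 0, 0.707f) -> (218, 128, 218), a 45-degree tilt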

    Read the article

  • Is it possible to construct a cube with fewer than 24 vertices

    - by Telanor
    I have a cube-based world like Minecraft and I'm wondering if there's a way to construct a cube with fewer than 24 vertices so I can reduce memory usage. It doesn't seem possible to me for 2 reasons: the normals wouldn't come out right and per-face textures wouldn't work. Is this the case or am I wrong? Maybe there's some fancy new DX11 tech that can help? Edit: Just to clarify, I have 2 requirements: I need surface normals for each cube face in order to do proper lighting, and I need a way to address different indices in a texture array for each cube face.

    Read the article

  • Why don't Normal maps in tangent space have a single blue color?

    - by seahorse
    Normal maps are predominantly blue in color because the z component maps to blue, and since normals point out of the surface in the z direction we see blue as the predominant component. If the above is true, then why aren't normal maps just one color, i.e. blue, with no other shades (not even shades of blue)? Since by definition tangent space is perpendicular to the normal, at any point we should have the normal always pointing in Z (the blue direction) with no X (red component) or Y (green component). Thus the normal map (since it is a "normal map") should have the color of the normals, which is just blue (Z = blue component = 1, R = 0, G = 0), and the normal map should be only blue with no shades in between. But normal maps are not so, and they have gradients of shades in them. Why is this so?

    Read the article
