Search Results

Search found 2515 results on 101 pages for 'opengl es2'.


  • Updating Textures on Runtime in OpenSceneGraph

    - by Abhishek Bansal
    I am working on a project in which I need to capture frames from an external video device and render them on an OpenSceneGraph node. I am also using GLSL shaders, but I don't know how to update textures at runtime. For other uniforms we need to install callbacks, but do we also need callbacks for samplers in GLSL/OpenSceneGraph? My code looks like this; all I am getting right now is a black window.

    osg::ref_ptr<osg::Geometry> pictureQuad = osg::createTexturedQuadGeometry(
        osg::Vec3(0.0f, 0.0f, 0.0f),
        osg::Vec3(_deviceNameToImageFrameMap[deviceName].frame->s(), 0.0f, 0.0f),
        osg::Vec3(0.0f, 0.0f, _deviceNameToImageFrameMap[deviceName].frame->t()),
        0.0f, 1.0f,
        _deviceNameToImageFrameMap[deviceName].frame->s(),
        _deviceNameToImageFrameMap[deviceName].frame->t());

    // creating texture and setting up parameters for video frame
    osg::ref_ptr<osg::TextureRectangle> myTex =
        new osg::TextureRectangle(_deviceNameToImageFrameMap[deviceName].frame.get());
    myTex->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
    myTex->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);
    myTex->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE);
    myTex->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE);

    _videoSourceNameToNodeMap[sourceName].geode = new osg::Geode();
    _videoSourceNameToNodeMap[sourceName].geode->setDataVariance(osg::Object::DYNAMIC);
    _videoSourceNameToNodeMap[sourceName].geode->addDrawable(pictureQuad.get());

    // apply texture to node
    _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->setTextureAttributeAndModes(0, myTex.get(), osg::StateAttribute::ON);
    _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->setMode(GL_DEPTH_TEST, osg::StateAttribute::OFF);
    _videoSourceNameToNodeMap[sourceName].geode->setDataVariance(osg::Object::DYNAMIC);

    // set uniform sampler
    osg::Uniform* srcFrame = new osg::Uniform( osg::Uniform::SAMPLER_2D, "srcFrame" );
    srcFrame->set(0);

    // set uniform alpha
    osg::Uniform* alpha = new osg::Uniform( osg::Uniform::FLOAT, "alpha" );
    alpha->set(.5f);
    alpha->setUpdateCallback(new ExampleCallback());

    // enable blending
    _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->setMode( GL_BLEND, osg::StateAttribute::ON);

    // adding blend function to node
    osg::BlendFunc *bf = new osg::BlendFunc();
    bf->setFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->setAttributeAndModes(bf);

    // apply shader to quad
    _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->setAttributeAndModes(program, osg::StateAttribute::ON);

    // add uniforms to shader
    _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->addUniform( srcFrame );
    _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->addUniform( alpha );
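    A minimal sketch of one common approach (not from the original post): keep a persistent osg::Image, point it at each captured frame, and call dirty() so OSG re-uploads the texture. The sampler uniform itself does not need an update callback as long as it keeps referring to texture unit 0. grabFrame() and the 640x480 BGRA format below are assumptions standing in for the actual capture API.

    // Hedged sketch: update an osg::Image each frame and mark it dirty so the
    // attached TextureRectangle re-uploads it on the next draw.
    #include <osg/Image>
    #include <osg/NodeCallback>

    extern unsigned char* grabFrame();   // hypothetical capture call (assumption)

    class FrameUpdateCallback : public osg::NodeCallback
    {
    public:
        FrameUpdateCallback(osg::Image* image) : _image(image) {}

        virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
        {
            unsigned char* pixels = grabFrame();
            if (pixels)
            {
                // Point the image at the new pixel data; NO_DELETE leaves ownership outside OSG.
                _image->setImage(640, 480, 1, GL_RGBA, GL_BGRA, GL_UNSIGNED_BYTE,
                                 pixels, osg::Image::NO_DELETE);
                _image->dirty();          // tells OSG the texture needs re-uploading
            }
            traverse(node, nv);           // keep traversing the scene graph
        }

    private:
        osg::ref_ptr<osg::Image> _image;
    };

    // usage (assumed): geode->setUpdateCallback(new FrameUpdateCallback(frameImage.get()));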

    Read the article

  • Preserving the glBlendFunc

    - by Michael Minerva
    I need to preserve the current glBlendFunc so I can restore it after I do some work. It seems that this is not one of the attributes that can be saved with glPushAttrib; is there some other similar method I can use to preserve the state?
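    A quick sketch of two options, hedged as a general-GL note rather than a project-specific fix: glPushAttrib(GL_COLOR_BUFFER_BIT) does include the blend enable and blend factors in the state it saves, and the factors can also be queried explicitly and restored by hand.

    // Hedged sketch: two ways to save and restore the blend function in legacy OpenGL.
    #include <GL/gl.h>

    void doWorkWithBlendPreserved()
    {
        // Option 1: the attribute stack; GL_COLOR_BUFFER_BIT covers GL_BLEND and the blend factors.
        glPushAttrib(GL_COLOR_BUFFER_BIT);
        // ... change glBlendFunc / GL_BLEND freely here ...
        glPopAttrib();

        // Option 2: query the factors, do the work, then restore them manually.
        GLint srcFactor = 0, dstFactor = 0;
        glGetIntegerv(GL_BLEND_SRC, &srcFactor);
        glGetIntegerv(GL_BLEND_DST, &dstFactor);
        // ... work that changes the blend function ...
        glBlendFunc(static_cast<GLenum>(srcFactor), static_cast<GLenum>(dstFactor));
    }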

    Read the article

  • Texture mapping an N-gon?

    - by user146780
    I'm not sure how to figure out texture coordinates for a 2D N-gon (N-sided polygon). How can this be done? The effect I'm trying to achieve is for the texture to fit on the polygon and stretch accordingly so the whole texture covers it. Thanks
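    A minimal sketch of one common mapping (an assumption about what "fit" means here): compute the polygon's bounding box and remap each vertex position linearly into [0,1] texture space, so the texture stretches to the box and the polygon cuts its shape out of it.

    // Hedged sketch: bounding-box texture coordinates for an N-gon.
    #include <vector>
    #include <algorithm>

    struct Vec2 { float x, y; };

    std::vector<Vec2> boundingBoxTexCoords(const std::vector<Vec2>& polygon)
    {
        float minX = polygon[0].x, maxX = polygon[0].x;
        float minY = polygon[0].y, maxY = polygon[0].y;
        for (const Vec2& p : polygon)
        {
            minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
            minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
        }
        std::vector<Vec2> uv;
        uv.reserve(polygon.size());
        for (const Vec2& p : polygon)
        {
            // Linear remap of position into [0,1] x [0,1]; degenerate boxes are not handled here.
            uv.push_back({ (p.x - minX) / (maxX - minX),
                           (p.y - minY) / (maxY - minY) });
        }
        return uv;
    }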

    Read the article

  • Implementing "drawing modes" in a graphics library?

    - by banister
    I would like to implement "drawing modes" in my own graphics library, that is, drawing with AND, OR, etc. However, I am storing colors as floats, with each channel between 0.0 and 1.0. Do I have to convert each channel to 0-255 before I can apply the AND, OR, etc. drawing modes, and then convert back to float (0.0-1.0)? Or is there another way of doing it? Thanks
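    A minimal sketch of the straightforward route (converting to 8-bit integers, applying the bitwise operator, and converting back); whether this is acceptable is a design choice for the library, since bitwise logic is only well defined on integer pixel values.

    // Hedged sketch: apply a bitwise drawing mode to one float channel by
    // round-tripping through an 8-bit integer representation.
    #include <cstdint>
    #include <algorithm>

    enum class DrawMode { And, Or, Xor };

    float applyDrawMode(float dst, float src, DrawMode mode)
    {
        auto toByte = [](float c) -> std::uint8_t {
            c = std::min(1.0f, std::max(0.0f, c));              // clamp to [0,1]
            return static_cast<std::uint8_t>(c * 255.0f + 0.5f);
        };
        std::uint8_t d = toByte(dst);
        std::uint8_t s = toByte(src);
        std::uint8_t out = 0;
        switch (mode)
        {
            case DrawMode::And: out = d & s; break;
            case DrawMode::Or:  out = d | s; break;
            case DrawMode::Xor: out = d ^ s; break;
        }
        return out / 255.0f;   // back to the library's 0.0-1.0 representation
    }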

    Read the article

  • Dynamically generate Triangle Lists for a Complex 3D Mesh

    - by Vulcan Eager
    In my application, the shape and dimensions of a complex 3D solid (say a cylinder block) are taken from user input, and I need to construct vertex and index buffers for it. Since the dimensions come from user input, I cannot use Blender or 3ds Max to create the model by hand. What is the textbook method to dynamically generate such a mesh? Edit: I am looking for something that will generate the triangles given the vertices, edges and holes, something like TetGen. As for TetGen itself, I have no way of excluding the triangles which fall on the interior of the solid/mesh.
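    A minimal sketch of the usual procedural approach for a single parameterised primitive (here an open cylinder wall, as an illustration only): walk the parameter space, emit vertices, and emit index triples per quad. More complex solids are typically built by combining such pieces or by running a constrained triangulator over the boundary.

    // Hedged sketch: generate the side wall of a cylinder (no caps) as an indexed triangle list.
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Vertex { float x, y, z; };

    void buildCylinderWall(float radius, float height, int segments,
                           std::vector<Vertex>& vertices, std::vector<std::uint32_t>& indices)
    {
        const float twoPi = 6.28318530718f;
        for (int i = 0; i <= segments; ++i)                   // duplicate the seam vertex at i == segments
        {
            float a  = twoPi * i / segments;
            float cx = radius * std::cos(a);
            float cz = radius * std::sin(a);
            vertices.push_back({ cx, 0.0f,   cz });           // bottom ring
            vertices.push_back({ cx, height, cz });           // top ring
        }
        for (int i = 0; i < segments; ++i)
        {
            std::uint32_t b0 = 2 * i,       t0 = 2 * i + 1;         // this column
            std::uint32_t b1 = 2 * (i + 1), t1 = 2 * (i + 1) + 1;   // next column
            indices.insert(indices.end(), { b0, t0, b1 });          // two triangles per quad
            indices.insert(indices.end(), { t0, t1, b1 });
        }
    }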

    Read the article

  • Error during installation of Mesa on Linux

    - by rodnower
    Hello, I have a problem. I am trying to install Mesa 7.8 on CentOS 3.9 (i386, running in VMware 7.0.1) as described here: http://linux-sxs.org/multimedia/mesa.html When I perform the configuration stage:

    ++++++++++++++++++++++++++++++++++++++++++++++++++
    [root@CentOS Mesa-7.8]# ./configure --prefix=/usr --sysconfdir=/etc
    ++++++++++++++++++++++++++++++++++++++++++++++++++

    (root is actually root) This is what I get:

    ++++++++++++++++++++++++++++++++++++++++++++++++++
    . . .
    checking pkg-config files for X11 are available... no
    checking for X... no
    configure: error: X11 development libraries needed for dri driver
    ++++++++++++++++++++++++++++++++++++++++++++++++++

    (The three dots mean there is some output before.) Note the error. Any ideas? Thank you very much in advance.

    Read the article

  • Transform shape built of contour splines to simple polygons

    - by Cheery
    I've dumped glyphs from a TrueType file so I can play with them. They have shape contours that consist of quadratic Bezier curves and lines. I want to output triangles for such shapes so I can visualize them for the user. Traditionally I might use libfreetype or scan-rasterise these contours, but I want to produce extruded 3D meshes from the fonts and apply other distortions to them. So, how do I polygonise shapes consisting of quadratic Bezier curves and lines? Many contours together form the shape; some contours are additive and others subtractive, and they are never open: each forms a loop. (Actually, I only get contour vertices from the TTF glyphs, and those vertices specify whether they are on the curve or not. Even though it is easy to decompose these into Bezier curves and lines, knowing the data is represented this way may be helpful for polygonising the contours into triangles.)
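    A minimal sketch of the usual first step (flattening each quadratic Bezier into line segments so every contour becomes a plain polygon); the subsequent triangulation with additive/subtractive contours would then go to an ear-clipping or constrained-Delaunay step, which is not shown here.

    // Hedged sketch: evaluate a quadratic Bezier (p0 = start, c = control, p1 = end)
    // at fixed parameter steps and append the resulting polyline points.
    #include <vector>

    struct Point { float x, y; };

    void flattenQuadraticBezier(Point p0, Point c, Point p1, int steps, std::vector<Point>& out)
    {
        for (int i = 1; i <= steps; ++i)
        {
            float t = static_cast<float>(i) / steps;
            float u = 1.0f - t;
            // B(t) = (1-t)^2 * p0 + 2(1-t)t * c + t^2 * p1
            out.push_back({ u * u * p0.x + 2.0f * u * t * c.x + t * t * p1.x,
                            u * u * p0.y + 2.0f * u * t * c.y + t * t * p1.y });
        }
    }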

    Read the article

  • Can I use a vertex shader to display a model's normals?

    - by geowar
    I'm currently using a VBO for the texture coordinates, normals and vertices of a (3DS) model I'm drawing with "glDrawArrays(GL_TRIANGLES, ...);". For debugging I want to (temporarily) show the normals when drawing my model. Do I have to use immediate mode to draw each line from vert to vert+normal, or stuff another VBO with vert and vert+normal to draw all the normals, or is there a way for the vertex shader to use the vertex and normal data already passed in when drawing the model to compute the vert+normal positions used when drawing the normals?
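    A minimal sketch of the second option mentioned above (building one extra buffer of line endpoints on the CPU). A pure vertex-shader approach is awkward here because a vertex shader cannot emit the extra vertex each line needs; that is what geometry shaders were later introduced for.

    // Hedged sketch: for each model vertex, append the pair (v, v + n * scale) so the
    // resulting buffer can be drawn with glDrawArrays(GL_LINES, 0, vertexCount * 2).
    #include <cstddef>
    #include <vector>

    void buildNormalLines(const std::vector<float>& positions,   // xyz per vertex
                          const std::vector<float>& normals,     // xyz per vertex
                          float scale,
                          std::vector<float>& lineVerts)
    {
        lineVerts.clear();
        for (std::size_t i = 0; i + 2 < positions.size(); i += 3)
        {
            // line start: the vertex itself
            lineVerts.push_back(positions[i]);
            lineVerts.push_back(positions[i + 1]);
            lineVerts.push_back(positions[i + 2]);
            // line end: vertex plus scaled normal
            lineVerts.push_back(positions[i]     + normals[i]     * scale);
            lineVerts.push_back(positions[i + 1] + normals[i + 1] * scale);
            lineVerts.push_back(positions[i + 2] + normals[i + 2] * scale);
        }
    }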

    Read the article

  • Edges on polygon outlines not always correct

    - by user146780
    I'm using the algorithm below to generate quads which are then rendered to make an outline like this: http://img810.imageshack.us/img810/8530/uhohz.png The problem, as seen in the image, is that sometimes the lines are too thin when they should always be the same width. My algorithm finds the 4 vertices for the first quad; then the top 2 vertices of each following quad are the bottom 2 of the previous one. This creates connected lines, but it does not always seem to work. How could I fix this? This is my algorithm:

    void OGLENGINEFUNCTIONS::GenerateLinePoly(const std::vector<std::vector<GLdouble>> &input,
                                              std::vector<GLfloat> &output, int width)
    {
        output.clear();
        if(input.size() < 2)
        {
            return;
        }

        int temp;
        float dirlen;
        float perplen;
        POINTFLOAT start, end;
        POINTFLOAT dir, ndir;
        POINTFLOAT perp, nperp;
        POINTFLOAT perpoffset, diroffset;
        POINTFLOAT p0, p1, p2, p3;

        for(unsigned int i = 0; i < input.size() - 1; ++i)
        {
            start.x = static_cast<float>(input[i][0]);
            start.y = static_cast<float>(input[i][1]);
            end.x = static_cast<float>(input[i + 1][0]);
            end.y = static_cast<float>(input[i + 1][1]);

            dir.x = end.x - start.x;
            dir.y = end.y - start.y;
            dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y));
            ndir.x = static_cast<float>(dir.x * 1.0 / dirlen);
            ndir.y = static_cast<float>(dir.y * 1.0 / dirlen);

            perp.x = dir.y;
            perp.y = -dir.x;
            perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y));
            nperp.x = static_cast<float>(perp.x * 1.0 / perplen);
            nperp.y = static_cast<float>(perp.y * 1.0 / perplen);

            perpoffset.x = static_cast<float>(nperp.x * width * 0.5);
            perpoffset.y = static_cast<float>(nperp.y * width * 0.5);
            diroffset.x = static_cast<float>(ndir.x * 0 * 0.5);
            diroffset.y = static_cast<float>(ndir.y * 0 * 0.5);

            // p0 = start + perpoffset - diroffset
            // p1 = start - perpoffset - diroffset
            // p2 = end + perpoffset + diroffset
            // p3 = end - perpoffset + diroffset
            p0.x = start.x + perpoffset.x - diroffset.x;
            p0.y = start.y + perpoffset.y - diroffset.y;
            p1.x = start.x - perpoffset.x - diroffset.x;
            p1.y = start.y - perpoffset.y - diroffset.y;

            if(i > 0)
            {
                temp = (8 * (i - 1));
                p2.x = output[temp + 2];
                p2.y = output[temp + 3];
                p3.x = output[temp + 4];
                p3.y = output[temp + 5];
            }
            else
            {
                p2.x = end.x + perpoffset.x + diroffset.x;
                p2.y = end.y + perpoffset.y + diroffset.y;
                p3.x = end.x - perpoffset.x + diroffset.x;
                p3.y = end.y - perpoffset.y + diroffset.y;
            }

            output.push_back(p2.x);
            output.push_back(p2.y);
            output.push_back(p0.x);
            output.push_back(p0.y);
            output.push_back(p1.x);
            output.push_back(p1.y);
            output.push_back(p3.x);
            output.push_back(p3.y);
        }
    }

    Thanks
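    A hedged sketch of the usual fix, based on an assumption about the cause: reusing the previous quad's edge only keeps the width correct when consecutive segments are nearly parallel. At sharper corners the shared edge has to be pushed out along the corner's miter direction, otherwise the joint ends up thinner than the requested width.

    // Hedged sketch: offset to apply at a corner shared by two segments with unit
    // normals n0 and n1, so the joined outline keeps a constant width.
    #include <cmath>

    struct Vec2f { float x, y; };

    Vec2f miterOffset(Vec2f n0, Vec2f n1, float width)
    {
        Vec2f m = { n0.x + n1.x, n0.y + n1.y };                 // un-normalised miter direction
        float len = std::sqrt(m.x * m.x + m.y * m.y);
        if (len < 1e-6f)                                        // 180-degree turn: fall back to one normal
            return { n0.x * width * 0.5f, n0.y * width * 0.5f };
        m.x /= len; m.y /= len;
        float cosHalf = m.x * n0.x + m.y * n0.y;                // dot(miter, n0)
        float scale = (width * 0.5f) / cosHalf;                 // lengthens the offset at sharp corners
        return { m.x * scale, m.y * scale };
    }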

    Read the article

  • How do games move objects around (in general)? (OpenGL)

    - by user146780
    I'm sure there's not just one answer to this, but do game engines actually change the vertex data in memory, or do they use GL transformations? Pushing and popping the matrix all the time seems inefficient, but if you keep modifying the vertices you can't make use of display lists. So I'm wondering how it's done in general. Thanks
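    A minimal sketch of the common fixed-function pattern (hedged; engines differ): the vertex data stays static in a display list or VBO, and each object's position is applied as a matrix around the draw call, which is cheap compared with rewriting vertices every frame.

    // Hedged sketch: draw the same static geometry (a display list here) at several
    // positions by changing only the modelview matrix, never the vertex data.
    #include <GL/gl.h>

    void drawObjects(GLuint modelDisplayList, const float (*positions)[3], int count)
    {
        for (int i = 0; i < count; ++i)
        {
            glPushMatrix();
            glTranslatef(positions[i][0], positions[i][1], positions[i][2]);
            glCallList(modelDisplayList);   // static vertices, reused for every object
            glPopMatrix();
        }
    }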

    Read the article

  • How to get the location (x,y) of a CCLabel that is a child of a CCScene?

    - by RexOnRoids
    (learning Cocos2D) After creating a CCLabel and adding it to a CCLayer like this:

    // From HelloWorldScene.m
    // create and initialize a label
    CCLabel* label1 = [CCLabel labelWithString:@"Hello" fontName:@"Marker Felt" fontSize:10];
    label1.position = ccp(35, 435);
    // add the label as a child to this layer
    [self addChild: label1];

    How do I determine when a user has touched the label on the screen?

    Read the article

  • Passing a pointer to an array to glGenBuffers

    - by Josh Elsasser
    I'm currently passing an array to a function, then attempting to use glGenBuffers with the array that is passed in. I can't figure out a way to get it to work with the array that I've passed. I have a decent grasp of the basics of pointers, but this is beyond me. This is basically how the render code works. It's a bit more complex (colours using the same array idea are also not working), but the basic idea is as follows:

    void drawFoo(const GLfloat *renderArray, GLuint verticeBuffer)
    {
        glBindBuffer(GL_ARRAY_BUFFER, verticeBuffer);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verticeBuffer)*sizeof(GLfloat), verticeBuffer, GL_STATIC_DRAW);
        glVertexPointer(2, GL_FLOAT, 0, 0);
        glEnableClientState(GL_VERTEX_BUFFER);
        glDrawArrays(GL_TRIANGLE_FAN, 0, 45);
        glDisableClientState(GL_VERTEX_BUFFEr);
    }

    Thanks in advance for the help
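    A hedged sketch of what the upload usually has to look like, under two assumptions: the caller also passes the element count (sizeof on a pointer or on a GLuint handle only yields the size of that handle, not of the array), and the client-state enum is GL_VERTEX_ARRAY.

    // Hedged sketch: upload 'count' floats from renderArray into the buffer object and draw it.
    // Note: glBindBuffer/glBufferData may come from an extension loader (e.g. GLEW) on some platforms.
    #include <GL/gl.h>

    void drawFoo(const GLfloat* renderArray, GLsizei count, GLuint verticeBuffer)
    {
        glBindBuffer(GL_ARRAY_BUFFER, verticeBuffer);
        // size in bytes comes from the element count; the data pointer is the array itself
        glBufferData(GL_ARRAY_BUFFER, count * sizeof(GLfloat), renderArray, GL_STATIC_DRAW);

        glVertexPointer(2, GL_FLOAT, 0, 0);           // offset 0 into the bound buffer
        glEnableClientState(GL_VERTEX_ARRAY);
        glDrawArrays(GL_TRIANGLE_FAN, 0, count / 2);  // 2 floats per vertex
        glDisableClientState(GL_VERTEX_ARRAY);

        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }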

    Read the article

  • How to avoid a line break inside a word (StaticLayout)?

    - by Addev
    I'm trying to make a text as big as I can while still fitting a Rect. Basically I use a StaticLayout to pre-calculate the text size and make it fit the Rect's height:

    // Since the width is fixed for the StaticLayout, it should only fit the height
    while (currentHeight > Rect.getHeight()) {
        size -= 2;
    }
    textPaint.setTextSize(size);

    The problem is that if the Rect is very tall, the exit condition is reached but words get broken (see the capture; the "Goal" and "Actual" screenshots are not reproduced here). Is there a way to avoid this? Current code:

    textSize = MAX_TEXT_SIZE;
    do {
        if (textSize < mMinTextSize) {
            Log.i(TAG, "Min reached");
            textSize = mMinTextSize;
            textPaint.setTextSize(textSize);
            fits = true;
        } else {
            textPaint.setTextSize(textSize);
            StaticLayout layout = new StaticLayout(text, textPaint, targetWidth,
                    Alignment.ALIGN_NORMAL, 1.0, 0, true);
            layout.draw(canvas);
            float heightRatio = (float) layout.getHeight() / (float) targetHeight;
            boolean fitsHeight = heightRatio <= 1f;
            if (fitsHeight) {
                fits = true;
            } else {
                textSize -= 2;
            }
        }
        Log.i(TAG, "textSize=" + textSize + " fits=" + fits);
    } while (!fits);

    Thanks

    Read the article

  • How much market share does OpenGL ES 2.0 have on iPhone OS hardware (iPhone/iPod Touch)?

    - by Eonil
    I'm planning to make a game for the App Store, so I'm studying OpenGL ES. But the ES 1.1 and 2.0 APIs differ in how some features are handled (and in their limitations). I don't have enough time to cover both, so I have to choose one. 2.0 is clearly better from a developer's point of view, but I'm worried about its market share. I hope most users have moved on to the newer SGX-based hardware, but honestly I don't know. Does anybody know where to find data on the hardware ratio among iPhone OS devices (iPhone/iPod touch, per GPU)? Please let me know.

    Read the article

  • GLSL point inside box test

    - by wcochran
    Below is a GLSL fragment shader that outputs a texel if the given texture coordinate is inside a box; otherwise a plain color is output. This just feels silly, and there must be a way to do it without branching?

    uniform sampler2D texUnit;
    varying vec4 color;
    varying vec2 texCoord;

    void main()
    {
        vec4 texel = texture2D(texUnit, texCoord);
        if (any(lessThan(texCoord, vec2(0.0, 0.0))) ||
            any(greaterThan(texCoord, vec2(1.0, 1.0))))
            gl_FragColor = color;
        else
            gl_FragColor = texel;
    }

    Below is a version without branching, but it still feels clumsy. What is the best practice for "texture coord clamping"?

    uniform sampler2D texUnit;
    varying vec4 color;
    varying vec4 labelColor;
    varying vec2 texCoord;

    void main()
    {
        vec4 texel = texture2D(texUnit, texCoord);
        bool outside = any(lessThan(texCoord, vec2(0.0, 0.0))) ||
                       any(greaterThan(texCoord, vec2(1.0, 1.0)));
        gl_FragColor = mix(texel*labelColor, color, vec4(outside, outside, outside, outside));
    }

    I am clamping texels to the region where the label is; the texture s and t coordinates will be between 0 and 1 in that case. Otherwise, I use a brown color where the label isn't. Note that I could also construct a branching version of the code that does not perform a texture lookup when it doesn't need to. Would this be faster than a non-branching version that always performs a texture lookup? Maybe time for some tests...
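    A hedged alternative (a sketch only, written here as a C++ source string so it could be handed to glShaderSource): step() returns 0.0 or 1.0 per component, so the inside test collapses to a couple of multiplies and a single mix, with no boolean vector construction.

    // Hedged sketch: branch-free inside-the-unit-box test using step().
    const char* kFragmentShaderSource =
        "uniform sampler2D texUnit;                                             \n"
        "varying vec4 color;                                                    \n"
        "varying vec2 texCoord;                                                 \n"
        "void main()                                                            \n"
        "{                                                                      \n"
        "    vec2 inBox = step(vec2(0.0), texCoord)                             \n"
        "               * (vec2(1.0) - step(vec2(1.0), texCoord));              \n"
        "    float inside = inBox.x * inBox.y;  // 1.0 inside [0,1]^2, else 0.0 \n"
        "    gl_FragColor = mix(color, texture2D(texUnit, texCoord), inside);   \n"
        "}                                                                      \n";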

    Read the article

  • cocos2d - how to draw a bottle sprite with dynamically changing water level

    - by Oliver
    I am trying to draw a (2D) sprite in cocos2d showing a bottle. The bottle shall have a dynamic water level (i.e. the amount of water in the bottle can change over the lifetime of the sprite), and I am wondering how to do this. I currently have a PNG file of the empty bottle. I adjusted its alpha channel so that when rendering the sprite I can draw a blue rectangle and render the bottle texture over it, which gives the impression of the water being inside the bottle. However, the bottle's shape is not itself a rectangle, so the water can be seen outside the bounds of the bottle. I can change the bottle image so that only the bottle interior is transparent and set the "outside world" to an opaque color and alpha value, but that in turn prevents the world background from being visible in that area. I simply don't have a clue how to do this in a sane manner. Do I really have to read every pixel of the bottle image, identify which pixels are "inside" the bottle, and then draw the water pixel by pixel? There must be an easier way, right? ;) Any best practices for these kinds of tasks? Edit: see the picture below to make a bit clearer what I am talking about ;) http://i47.tinypic.com/10rqww0.png
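    A hedged sketch of one standard technique, under two assumptions: a separate "interior mask" image whose alpha is opaque only where water may appear, and a fixed-function GL/GL ES 1.x pipeline such as cocos2d used at the time. The idea is to write the mask into the stencil buffer, draw the water quad only where the stencil passes, then draw the bottle art on top. drawInteriorMaskSprite(), drawWaterQuad() and drawBottleSprite() are hypothetical helpers standing in for the app's own sprite drawing.

    // Hedged sketch: mask the water quad to the bottle interior using the stencil buffer.
    #include <GL/gl.h>

    void drawInteriorMaskSprite();       // hypothetical helper (assumption)
    void drawWaterQuad(float level);     // hypothetical helper (assumption)
    void drawBottleSprite();             // hypothetical helper (assumption)

    void drawBottle(float waterLevel)
    {
        // 1) Write 1s into the stencil wherever the interior mask is opaque.
        glEnable(GL_STENCIL_TEST);
        glClear(GL_STENCIL_BUFFER_BIT);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // stencil only, no color output
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.5f);                         // keep only opaque mask pixels
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        drawInteriorMaskSprite();
        glDisable(GL_ALPHA_TEST);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

        // 2) Draw the water only where the stencil is set; its height follows waterLevel.
        glStencilFunc(GL_EQUAL, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        drawWaterQuad(waterLevel);
        glDisable(GL_STENCIL_TEST);

        // 3) The bottle art (with its transparent interior) goes on top as before.
        drawBottleSprite();
    }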

    Read the article

  • How to stop rendering invisible faces

    - by TheMorfeus
    I am making a voxel-based game, and for it I am creating a block rendering engine. The point is that I need to generate lots of cubes. Every time I render more than a 16x16x16 chunk of these blocks, my FPS drops hard, because it renders all 6 faces of all of these cubes. That's 24,576 quads, and I don't want that. So, my question is: how do I stop rendering vertices (or quads) that are not visible, and thereby increase the performance of my game? Here is the class method for rendering a block:

    public void renderBlock(int posx, int posy, int posz) {
        try {
            //t.bind();
            glEnable(GL_CULL_FACE);
            glCullFace(GL_BACK); // or even GL_FRONT_AND_BACK
            glPushMatrix();
            GL11.glTranslatef((2*posx+0.5f), (2*posy+0.5f), (2*posz+0.5f)); // Move Right 1.5 Units And Into The Screen 6.0
            GL11.glRotatef(rquad, 1.0f, 1.0f, 1.0f);
            glBegin(GL_QUADS); // Draw A Quad
            GL11.glColor3f(0.5f, 0.4f, 0.4f); // Set The Color To Green
            GL11.glTexCoord2f(0,0); GL11.glVertex3f( 1f, 1f,-1f); // Top Right Of The Quad (Top)
            GL11.glTexCoord2f(1,0); GL11.glVertex3f(-1f, 1f,-1f); // Top Left Of The Quad (Top)
            GL11.glTexCoord2f(1,1); GL11.glVertex3f(-1f, 1f, 1f); // Bottom Left Of The Quad (Top)
            GL11.glTexCoord2f(0,1); GL11.glVertex3f( 1f, 1f, 1f); // Bottom Right Of The Quad (Top)
            //GL11.glColor3f(1.2f,0.5f,0.9f); // Set The Color To Orange
            GL11.glTexCoord2f(0,0); GL11.glVertex3f( 1f,-1f, 1f); // Top Right Of The Quad (Bottom)
            GL11.glTexCoord2f(0,1); GL11.glVertex3f(-1f,-1f, 1f); // Top Left Of The Quad (Bottom)
            GL11.glTexCoord2f(1,1); GL11.glVertex3f(-1f,-1f,-1f); // Bottom Left Of The Quad (Bottom)
            GL11.glTexCoord2f(1,0); GL11.glVertex3f( 1f,-1f,-1f); // Bottom Right Of The Quad (Bottom)
            //GL11.glColor3f(1.0f,0.0f,0.0f); // Set The Color To Red
            GL11.glTexCoord2f(0,0); GL11.glVertex3f( 1f, 1f, 1f); // Top Right Of The Quad (Front)
            GL11.glTexCoord2f(1,0); GL11.glVertex3f(-1f, 1f, 1f); // Top Left Of The Quad (Front)
            GL11.glTexCoord2f(1,1); GL11.glVertex3f(-1f,-1f, 1f); // Bottom Left Of The Quad (Front)
            GL11.glTexCoord2f(0,1); GL11.glVertex3f( 1f,-1f, 1f); // Bottom Right Of The Quad (Front)
            //GL11.glColor3f(1f,0.5f,0.0f); // Set The Color To Yellow
            GL11.glTexCoord2f(0,0); GL11.glVertex3f( 1f,-1f,-1f); // Bottom Left Of The Quad (Back)
            GL11.glTexCoord2f(1,0); GL11.glVertex3f(-1f,-1f,-1f); // Bottom Right Of The Quad (Back)
            GL11.glTexCoord2f(1,1); GL11.glVertex3f(-1f, 1f,-1f); // Top Right Of The Quad (Back)
            GL11.glTexCoord2f(0,1); GL11.glVertex3f( 1f, 1f,-1f); // Top Left Of The Quad (Back)
            //GL11.glColor3f(0.0f,0.0f,0.3f); // Set The Color To Blue
            GL11.glTexCoord2f(0,1); GL11.glVertex3f(-1f, 1f, 1f); // Top Right Of The Quad (Left)
            GL11.glTexCoord2f(1,1); GL11.glVertex3f(-1f, 1f,-1f); // Top Left Of The Quad (Left)
            GL11.glTexCoord2f(1,0); GL11.glVertex3f(-1f,-1f,-1f); // Bottom Left Of The Quad (Left)
            GL11.glTexCoord2f(0,0); GL11.glVertex3f(-1f,-1f, 1f); // Bottom Right Of The Quad (Left)
            //GL11.glColor3f(0.5f,0.0f,0.5f); // Set The Color To Violet
            GL11.glTexCoord2f(0,0); GL11.glVertex3f( 1f, 1f,-1f); // Top Right Of The Quad (Right)
            GL11.glTexCoord2f(1,0); GL11.glVertex3f( 1f, 1f, 1f); // Top Left Of The Quad (Right)
            GL11.glTexCoord2f(1,1); GL11.glVertex3f( 1f,-1f, 1f); // Bottom Left Of The Quad (Right)
            GL11.glTexCoord2f(0,1); GL11.glVertex3f( 1f,-1f,-1f); // Bottom Right Of The Quad (Right)
            //rquad+=0.0001f;
            glEnd();
            glPopMatrix();
        } catch(NullPointerException t) {
            t.printStackTrace();
            System.out.println("rendering block failed");
        }
    }

    Here is the code that renders them:

    private void render() {
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
        for(int y=0; y<32; y++) {
            for(int x=0; x<16; x++) {
                for(int z=0; z<16; z++) {
                    b.renderBlock(x, y, z);
                }
            }
        }
    }
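    A hedged sketch of the usual chunk-level fix (written in C++ here for brevity, but the test translates directly to the Java code above): before emitting a face, check whether the neighbouring cell is solid. A face between two solid blocks can never be seen, which removes the bulk of the 24,576 quads in a filled chunk.

    // Hedged sketch: emit only the faces of a block whose neighbouring cell is empty.
    #include <cstdint>

    const int CHUNK = 16;
    std::uint8_t blocks[CHUNK][CHUNK][CHUNK];   // 0 = air, non-zero = solid (assumed layout)

    bool isSolid(int x, int y, int z)
    {
        if (x < 0 || y < 0 || z < 0 || x >= CHUNK || y >= CHUNK || z >= CHUNK)
            return false;                       // treat out-of-chunk neighbours as air
        return blocks[x][y][z] != 0;
    }

    void emitVisibleFaces(int x, int y, int z)
    {
        if (!isSolid(x, y, z)) return;
        if (!isSolid(x + 1, y, z)) { /* emit +X face quad */ }
        if (!isSolid(x - 1, y, z)) { /* emit -X face quad */ }
        if (!isSolid(x, y + 1, z)) { /* emit +Y face quad */ }
        if (!isSolid(x, y - 1, z)) { /* emit -Y face quad */ }
        if (!isSolid(x, y, z + 1)) { /* emit +Z face quad */ }
        if (!isSolid(x, y, z - 1)) { /* emit -Z face quad */ }
    }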

    Read the article

  • aspectRatio = backingWidth / backingHeight ???

    - by carrots
    What am I doing wrong here? I can't get the result of this division: aspectRatio = backingWidth / backingHeight; I thought I might try casting to (GLfloat), but that didn't do anything. As I step through the code I see that aspectRatio is 0 after the operation, while backingWidth is clearly 768 and backingHeight is 1029. Here are the types:

    GLfloat aspectRatio;

    // The pixel dimensions of the CAEAGLLayer
    GLint backingWidth;
    GLint backingHeight;

    It must be something basic I'm doing wrong here..
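    A hedged note on the likely cause: with two GLint operands the division is performed in integer arithmetic first (768 / 1029 == 0) and only then converted to GLfloat, so at least one operand has to be cast before the divide. A minimal sketch (the iPhone SDK header path is an assumption):

    #include <OpenGLES/ES1/gl.h>   // assumption: iPhone OS GL ES headers providing GLint/GLfloat

    GLint backingWidth  = 768;
    GLint backingHeight = 1029;

    // Cast before dividing so the division happens in floating point, not integer math.
    GLfloat aspectRatio = (GLfloat)backingWidth / (GLfloat)backingHeight;  // ~0.7464, not 0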

    Read the article

  • Space between 2 vertices on a heightmap

    - by Sietse
    First off, I am sorry for the crazy title; I couldn't really think of something else. I am working on a hobby project: an infinite world generated with Perlin noise, Java and LWJGL. But I am having a problem that is kind of hard to explain, so I made a video: http://youtu.be/D_NUBJZ_5Kw Obviously the problem is the black spaces in between all the pieces of ground. I have no idea what is causing it. I already tried making all the values doubles instead of floats, but that didn't fix it. Here is the piece of code I am using:

    float height2, height = (float) getHeight(x, y);

    height2 = (float) ((getHeight(x-1, y+1) + height) / 2);
    vertexhelper.addVertexColorAndTexture(x, height2, y+1, r, g, b, a, 0f, 1f);

    height2 = (float) ((getHeight(x+1, y+1) + height) / 2);
    vertexhelper.addVertexColorAndTexture(x+1, height2, y+1, r, g, b, a, 1f, 1f);

    height2 = (float) ((getHeight(x+1, y-1) + height) / 2);
    vertexhelper.addVertexColorAndTexture(x+1, height2, y, r, g, b, a, 1f, 0f);

    height2 = (float) ((getHeight(x-1, y-1) + height) / 2);
    vertexhelper.addVertexColorAndTexture(x, height2, y, r, g, b, a, 0f, 0f);

    I loop through this at the initialization of a chunk with x < 16 and y < 16. vertexhelper is a class I made that just puts everything into an array. (I am using floats here, but that's after doing the maths, so that shouldn't be a problem.) I highly appreciate you reading this.
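    A hedged guess at the cause, plus a sketch (an assumption, since only this fragment is visible): each corner's height above is computed from a different pair of samples, so the vertex shared by two neighbouring quads can end up with two different heights, which opens a gap. Deriving every corner from the same symmetric neighbourhood makes shared corners agree exactly.

    // Hedged sketch: height of the grid corner at (x, z) as the average of the four
    // surrounding cell samples; adjacent quads reuse the same corner value, so no cracks.
    // getHeight() stands in for the poster's Perlin-noise lookup (assumption).
    double getHeight(int x, int z);

    float cornerHeight(int x, int z)
    {
        return static_cast<float>((getHeight(x - 1, z - 1) + getHeight(x, z - 1) +
                                   getHeight(x - 1, z)     + getHeight(x, z)) * 0.25);
    }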

    Read the article

  • Trying to zoom in on an arbitrary rect within a screen-aligned quad.

    - by mos
    I've got a screen-aligned quad, and I'd like to zoom into an arbitrary rectangle within that quad, but I'm not getting my math right. I think I've got the translate worked out, just not the scaling. Basically, my code is the following:

    // render once zoomed in
    glPushMatrix();
    glTranslatef(offX, offY, 0);
    glScalef(?wtf?, ?wtf?, 1.0f);
    RenderQuad();
    glPopMatrix();

    // render PIP display
    glPushMatrix();
    glTranslatef(0.7f, 0.7f, 0);
    glScalef(0.175f, 0.175f, 1.0f);
    RenderQuad();
    glPopMatrix();

    Anyone have any tips? The user selects a rect area, and then those values are passed to my rendering object as [x, y, w, h], where those values are percentages of the viewport's width and height.
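    A hedged sketch of the mapping, assuming the quad itself spans [0,1] x [0,1] and x, y, w, h are fractions of that range: to make the sub-rectangle fill the quad, scale by the reciprocal of its size and translate its origin back to zero. The scale is issued before the translate in code so the translation is applied to the vertices first, in unscaled quad units.

    // Hedged sketch: zoom so the selected sub-rect [x, y, w, h] fills the whole quad.
    #include <GL/gl.h>

    void RenderQuad();   // the post's existing draw routine

    void renderZoomed(float x, float y, float w, float h)
    {
        glPushMatrix();
        glScalef(1.0f / w, 1.0f / h, 1.0f);   // applied last: stretch the sub-rect to full size
        glTranslatef(-x, -y, 0.0f);           // applied first: move the sub-rect's corner to the origin
        RenderQuad();
        glPopMatrix();
    }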

    Read the article

  • iPhone: Quartz or OpenGL?

    - by user345473
    Hi all, does anyone have an idea what mind-mapping applications use for drawing: Quartz 2D or OpenGL? What would be the best way to implement this kind of application? Any advice is welcome! Thanks!

    Read the article

  • Noob question: Draw a quad parallel to the view.

    - by Jack
    Hi all, what I want to do is draw a quad in the scene that lies on a plane parallel to the view, so it appears flat. More particularly, I think I haven't quite understood how gluLookAt works in comparison with glTranslate and glRotate: if I position the view "manually" using glTranslate and glRotate, then whenever I draw an object its position is relative to the current view, and I understand that this is due to the transformation matrix on the stack. However, when I use gluLookAt, which should set the view automatically, the coordinates of the object I want to draw must be "absolute" for it to show properly. Thanks in advance.
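    A hedged sketch of one way to get a view-parallel quad regardless of how the view was set up (gluLookAt or manual transforms): temporarily reset the modelview matrix so the quad is specified directly in eye space, where the XY plane is always parallel to the screen.

    // Hedged sketch: draw a quad in eye space so it is always parallel to the view plane.
    #include <GL/gl.h>

    void drawViewAlignedQuad(float halfSize, float distance)
    {
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();                       // drop the camera transform: now working in eye space
        glTranslatef(0.0f, 0.0f, -distance);    // push the quad in front of the camera
        glBegin(GL_QUADS);
        glVertex3f(-halfSize, -halfSize, 0.0f);
        glVertex3f( halfSize, -halfSize, 0.0f);
        glVertex3f( halfSize,  halfSize, 0.0f);
        glVertex3f(-halfSize,  halfSize, 0.0f);
        glEnd();
        glPopMatrix();                          // restore the scene's view for later draws
    }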

    Read the article
