Search Results

Search found 2513 results on 101 pages for 'opengl'.

Page 36 of 101

  • Adaptive interface with OpenGL and machine learning in C#

    - by Afnan
    For my semester project I have to come up with an adaptive interface design. My language is C# and I have to use OpenTK (a wrapper for OpenGL). My idea is to show two points and some obstacles, and my subject (user) would drag an object from the start position to the final position while avoiding the obstacles. He or she can also place obstacles randomly. My software should be able to learn some paths from test runs, and after learning it should be able to predict the shortest path. I do not know how stupid this idea sounds, but it is just an idea. I need help with ideas for possible small adaptive-interface projects, or, if my idea is OK, can you tell me what should be used to implement it? I mean, along with OpenGL for the graphics, what can I use for machine learning?

    Read the article

  • How can I fix these errors with Panda3D's sample projects?

    - by lhk
    I just installed the latest Panda3D packages on a Mint 12 32-bit virtual machine. Then I downloaded and configured Eclipse and tried to run the Asteroids sample project. The window is created properly, but after rendering the scene once the game freezes. This happens with the other sample apps, too. Here's the error log:

        DirectStart: Starting the game.
        Known pipe types:
          glxGraphicsPipe
        (all display modules loaded.)
        :display:gsg:glgsg(warning): Occlusion queries advertised as supported by OpenGL runtime, but could not get pointers to extension functions.
        OpenGL Warning: glXChooseFBConfig returning NULL, due to attrib=0x6, next=0xffffffff
        :display:glxdisplay(warning): No suitable FBConfig contexts available; using XVisual only.
        depth_bits=16 color_bits=24 alpha_bits=8 stencil_bits=8 accum_bits=64 back_buffers=1 stereo=1 force_hardware=1
        AL lib: pulseaudio.c:331: PulseAudio returned minreq > tlength/2; expect break up
        :display:gsg:glgsg(error): at 4765 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid enumerant
        :display:gsg:glgsg(error): at 5703 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 4654 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 4654 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 4765 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid enumerant
        :display:gsg:glgsg(error): at 5703 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 3057 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 3057 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 4765 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid enumerant
        OpenGL Warning: No pincher, please call crStateSetCurrentPointers() in your SPU
        :display:gsg:glgsg(error): at 4765 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid enumerant
        :display:gsg:glgsg(error): at 5703 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 4765 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid enumerant
        :display:gsg:glgsg(error): at 4765 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 4765 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 5703 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 3661 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 3661 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display:gsg:glgsg(error): at 4765 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid enumerant
        :display:gsg:glgsg(error): at 4765 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid enumerant
        :display:gsg:glgsg(error): at 5703 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
        :display(error): Deactivating glxGraphicsStateGuardian.

    What can I do to fix the problem?

    Read the article

  • Looking for literature about graphics pipeline optimization

    - by zacharmarz
    I am looking for some books, articles or tutorials about graphics architecture and graphics pipeline optimizations. They shouldn't be too old (2008 or newer); the newer, the better. I have found something in [Optimising the Graphics Pipeline, NVIDIA, Koji Ashida] (too old), [Real-Time Rendering, Akenine-Möller], [OpenGL Bindless Extensions, NVIDIA, Jeff Bolz], [Efficient multifragment effects on graphics processing units, Louis Frederic Bavoil] and some internet discussions. But there is not much information there and I want to read more. It should cover communication and data transfers between the application, the driver, memory and the shader units. About vertices and attributes. Also the pre- and post-T&L caches (if they still exist in today's architectures), etc. I don't need anything about textures, framebuffers and rasterization. It can also be about OpenGL (not DirectX) and optimization extensions (not old extensions like VBOs, but newer ones like vertex_buffer_unified_memory).

    Read the article

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to map out a 2D array of depth elements for the fragment shader to check depth against when creating shadows. I want to be able to copy a float array into the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap); This doesn't work because it stores depth components (0.0 - 1.0), which I don't want, because I have no idea how to calculate them from the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?
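
    One commonly suggested workaround (a hedged sketch, not taken from the question) is to upload the array as a single-channel floating-point texture. GL_R32F stores unclamped 32-bit floats (core since OpenGL 3.0, or via ARB_texture_rg/ARB_texture_float), so the texture can hold whatever depth metric was computed from the light's MVP matrix. Here smap is the 512x512 float array from the question; tex is a made-up name.

        // Upload a 512x512 float array "smap" as an unclamped single-channel texture.
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, smap);

        // In the fragment shader the value comes back through the red channel:
        //   uniform sampler2D shadowMap;
        //   float storedDepth = texture(shadowMap, uv).r;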

    Read the article

  • View matrix question (rotate by 180 degrees)

    - by King Snail
    I am using a third-party rendering API on top of OpenGL code and I cannot get my matrices correct. The API states this: "We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default." In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third-party rendering API? I don't understand what they mean by rotating 180 degrees around Z: does that go into the view matrix itself, or into the camera before building the view matrix? Any code would be helpful, thanks.
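
    For what it's worth, a hedged sketch of the first interpretation (bake the rotation into the view matrix itself), written with GLM since the question doesn't say which math library is available. A 180-degree rotation about Z flips the X and Y axes of view space, which is usually what a top-left-origin convention calls for; myView stands in for the existing view matrix.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/constants.hpp>

        // Pre-multiply so the flip is applied after the world-to-view transform.
        glm::mat4 flipZ   = glm::rotate(glm::mat4(1.0f), glm::pi<float>(),   // angle in radians
                                        glm::vec3(0.0f, 0.0f, 1.0f));
        glm::mat4 apiView = flipZ * myView;   // hand apiView to the third-party API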

    Read the article

  • What is the easiest and shortest way to draw a 2D line in C/C++?

    - by Mike
    I am fairly new to C/C++, but I do have experience with DirectX and OpenGL in Java and C#. My goal is to create a 2D game in C with under 2 pages of code. Most of what I have seen requires 3 pages of code just to get a window running. I would like to know the shortest code to get a window running where I can draw lines. I believe this can be done in fewer lines with OpenGL than with DirectX. Is there maybe an API or framework I can use to shorten it further? Also, it would be nice if the solution were cross-platform.
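
    For scale, a minimal sketch (assuming freeglut/GLUT is installed) that opens a window and draws one 2D line with legacy immediate-mode OpenGL fits comfortably on half a page; GLFW or SDL would be similarly short and are also cross-platform.

        #include <GL/glut.h>

        void display() {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_LINES);              // one line from bottom-left to top-right
                glVertex2f(-0.5f, -0.5f);
                glVertex2f( 0.5f,  0.5f);
            glEnd();
            glutSwapBuffers();
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutInitWindowSize(640, 480);
            glutCreateWindow("line");
            glutDisplayFunc(display);
            glutMainLoop();
            return 0;
        }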

    Read the article

  • How do I set quad buffering with JOGL 2.0?

    - by tony danza
    I'm trying to create a 3D renderer for stereo vision with quad buffering in Processing/Java. The hardware I'm using is ready for this, so that's not the problem. I had a stereo.jar library for JOGL 1.0 working with Processing 1.5, but now I have to use Processing 2.0 and JOGL 2.0, so I have to adapt the library. Some things have changed in the source code of JOGL and Processing, and I'm having a hard time figuring out how to tell Processing I want to use quad buffering. Here's the previous code:

        public class Theatre extends PGraphicsOpenGL {
            protected void allocate() {
                if (context == null) {
                    // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them
                    GLCapabilities capabilities = new GLCapabilities();
                    // Starting in release 0158, OpenGL smoothing is always enabled
                    if (!hints[DISABLE_OPENGL_2X_SMOOTH]) {
                        capabilities.setSampleBuffers(true);
                        capabilities.setNumSamples(2);
                    } else if (hints[ENABLE_OPENGL_4X_SMOOTH]) {
                        capabilities.setSampleBuffers(true);
                        capabilities.setNumSamples(4);
                    }
                    capabilities.setStereo(true);

                    // get a rendering surface and a context for this canvas
                    GLDrawableFactory factory = GLDrawableFactory.getFactory();
                    drawable = factory.getGLDrawable(parent, capabilities, null);
                    context = drawable.createContext(null);

                    // need to get proper opengl context since will be needed below
                    gl = context.getGL();

                    // Flag defaults to be reset on the next trip into beginDraw().
                    settingsInited = false;
                } else {
                    // The following three lines are a fix for Bug #1176
                    // http://dev.processing.org/bugs/show_bug.cgi?id=1176
                    context.destroy();
                    context = drawable.createContext(null);
                    gl = context.getGL();
                    reapplySettings();
                }
            }
        }

    This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying:

        PGraphicsOpenGL pg = ((PGraphicsOpenGL) g);
        pgl = pg.beginPGL();
        gl = pgl.gl;
        glu = pg.pgl.glu;
        gl2 = pgl.gl.getGL2();

        GLProfile profile = GLProfile.get(GLProfile.GL2);
        GLCapabilities capabilities = new GLCapabilities(profile);
        capabilities.setSampleBuffers(true);
        capabilities.setNumSamples(4);
        capabilities.setStereo(true);
        GLDrawableFactory factory = GLDrawableFactory.getFactory(profile);

    If I go on, I should do something like drawable = factory.getGLDrawable(parent, capabilities, null), but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try gl2.glDrawBuffer(GL.GL_BACK_RIGHT); it obviously doesn't work. Thanks.

    Read the article

  • Shadow map first pass and shaders

    - by KaiserJohaan
    I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadow map. This is my vertex shader:

        in vec3 position;
        uniform mat4 lightWVP;

        void main()
        {
            gl_Position = lightWVP * vec4(position, 1.0);
        }

    Now, do I even need a fragment shader in this pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, by default gl_FragCoord.z is written to the currently attached depth component (to which my cube map texture is bound). Thus I shouldn't even need a fragment shader for this pass, and from what I understand, there is no other work to do in the fragment shader other than writing this value. Is this correct?
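
    For reference, a hedged sketch of how such a depth-only first pass is often driven from the C++ side (shadowFBO and depthCubeMap are assumed to exist already; nothing here is taken from the question). Because depth writes come from the rasterizer rather than the fragment shader, an empty fragment shader, or none at all, is sufficient.

        glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
        glDrawBuffer(GL_NONE);                     // no color output in this pass
        glReadBuffer(GL_NONE);
        for (int face = 0; face < 6; ++face) {
            // Attach one face of the GL_DEPTH_COMPONENT cube map as the depth target.
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                   GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                                   depthCubeMap, 0);
            glClear(GL_DEPTH_BUFFER_BIT);
            // update lightWVP for this face, then draw the scene with the vertex shader above
        }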

    Read the article

  • iPhone 3D model format: .h file, .obj, or something else?

    - by T Reddy
    I'm beginning to write an iPhone game using OpenGL ES and I've come across a problem deciding what format my 3D models should be in. I've read (the link escapes me at the moment) that some developers prefer the models compiled into Objective-C .h files. Still, others prefer .obj, as it is more portable (i.e., for deployment on non-iPhone platforms). Various 3D game engines seem to support many(?) formats, but I'm not going to use any of these engines, as I would like to actually learn OpenGL ES. Am I putting myself at a disadvantage here by not using a packaged engine? Thanks!
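
    To illustrate what the "compiled .h" option usually means (a hedged, made-up example, not from any particular exporter): the mesh is dumped as plain C arrays that can be handed directly to glVertexPointer/glDrawArrays, so there is no parsing at load time, at the cost of recompiling whenever the model changes.

        // cube_model.h (hypothetical, generated by an exporter)
        static const GLfloat cubeVertices[] = {
            -1.0f, -1.0f,  1.0f,
             1.0f, -1.0f,  1.0f,
             1.0f,  1.0f,  1.0f,
            // ... one x,y,z triple per vertex
        };
        static const GLsizei cubeVertexCount =
            sizeof(cubeVertices) / (3 * sizeof(GLfloat));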

    Read the article

  • High CPU usage on Pong clone

    - by max
    I just made my first game, a clone of Pong, using OpenGL and C++. But it's using ~50% of the CPU, which I guess is very high for a game like this. How can I improve that? Can you please look over my code and tell me which things I am doing wrong? Any feedback is welcome. http://pastebin.com/L5zE3axh Also, it would be extremely helpful if you could give some general pointers on how to develop games efficiently in OpenGL. Thanks in advance!
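
    Without reproducing the pastebin here, one very common cause of this symptom is a render loop that spins as fast as it can instead of waiting for the display. A hedged sketch of a frame cap follows (runFrame is a placeholder for one update + draw + buffer swap); enabling vsync through the windowing library, for example glfwSwapInterval(1) in GLFW, achieves the same thing with less code.

        #include <chrono>
        #include <thread>

        void runFrame();   // assumed: update, render, swap buffers

        void gameLoop(bool& running) {
            const std::chrono::milliseconds frameBudget(16);        // roughly 60 FPS
            while (running) {
                auto start = std::chrono::steady_clock::now();
                runFrame();
                auto elapsed = std::chrono::steady_clock::now() - start;
                if (elapsed < frameBudget)                           // sleep off the leftover time
                    std::this_thread::sleep_for(frameBudget - elapsed);
            }
        }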

    Read the article

  • How to perform a detailed and quick 3D performance test

    - by gsedej
    I am wondering how to quickly test the performance of my 3D graphics. Since glxgears is not a benchmark, what should I use? glxgears also sometimes sticks at 60 FPS, so you cannot even compare before/after a driver update (e.g. adding the xorg-edgers PPA); it doesn't really work out of the box. One possibility is screensavers, but you can't see the FPS. I am also not willing to install the 600 MB Nexuiz, especially if I am running from a live CD, and other 3D games are also very big... The Unigine tests are too demanding for open-source drivers (problems with too low an OpenGL version and probably texture compression (S3TC...)). I would also like to test OpenGL 2.x extensions. How can I quickly test my 3D performance?

    Read the article

  • Image loaded from TGA texture isn't displayed correctly

    - by Ramy Al Zuhouri
    I have a TGA texture containing this image: The texture is 256x256. I'm trying to load it and map it to a cube:

        #import <OpenGL/OpenGL.h>
        #import <GLUT/GLUT.h>
        #import <stdlib.h>
        #import <stdio.h>
        #import <assert.h>

        GLuint width = 640, height = 480;
        GLuint texture;
        const char* const filename = "/Users/ramy/Documents/C/OpenGL/Test/Test/texture.tga";

        void init()
        {
            // Initialization
            glEnable(GL_DEPTH_TEST);
            glViewport(-500, -500, 1000, 1000);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45, width/(float)height, 1, 1000);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluLookAt(0, 0, -100, 0, 0, 0, 0, 1, 0);

            // Texture
            char bitmap[256][256][3];
            FILE* fp = fopen(filename, "r");
            assert(fp);
            assert(fread(bitmap, 3*sizeof(char), 256*256, fp) == 256*256);
            fclose(fp);
            glGenTextures(1, &texture);
            glBindTexture(GL_TEXTURE_2D, texture);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, bitmap);
        }

        void display()
        {
            glClearColor(0, 0, 0, 0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, texture);
            glColor3ub(255, 255, 255);
            glBegin(GL_QUADS);
            glVertex3f(0, 0, 0);
            glTexCoord2f(0.0, 0.0);
            glVertex3f(40, 0, 0);
            glTexCoord2f(0.0, 1.0);
            glVertex3f(40, 40, 0);
            glTexCoord2f(1.0, 1.0);
            glVertex3f(0, 40, 0);
            glTexCoord2f(1.0, 0.0);
            glEnd();
            glDisable(GL_TEXTURE_2D);
            glutSwapBuffers();
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE);
            glutInitWindowPosition(100, 100);
            glutInitWindowSize(width, height);
            glutCreateWindow(argv[0]);
            glutDisplayFunc(display);
            init();
            glutMainLoop();
            return 0;
        }

    But this is what I get when the window loads: only half of the image is correctly displayed, and with different colors. Then, if I resize the window, the image magically seems to fix itself, even though the colors are still wrong. Why?
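
    A hedged guess at the likely culprits (assuming an uncompressed 24-bit TGA; this is not taken from the question): a TGA file starts with an 18-byte header that the code reads as if it were pixel data, TGA stores pixels as BGR rather than RGB, the file should be opened in binary mode, and glTexCoord2f() has to be called before the glVertex3f() it applies to, whereas the quad above assigns each coordinate to the following vertex. The loading part could become something like:

        unsigned char header[18];
        unsigned char bitmap[256][256][3];
        FILE* fp = fopen(filename, "rb");                                // "rb": binary mode
        assert(fp);
        assert(fread(header, 1, sizeof(header), fp) == sizeof(header));  // skip the TGA header
        assert(fread(bitmap, 3, 256*256, fp) == 256*256);
        fclose(fp);
        // glGenTextures / glBindTexture / glTexParameteri as before, then:
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
                     GL_BGR, GL_UNSIGNED_BYTE, bitmap);                  // GL_BGR matches TGA's channel order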

    Read the article

  • Why would you use textures that are not a power of 2?

    - by Will
    In the early days of OpenGL and DirectX, it was required that textures were powers of two. This meant that interpolation of float values could be done very quickly using shifting and such. Since OpenGL 2.0, and before that via an extension, non-power-of-two texture dimensions have been supported. Are there performance advantages to sticking to power-of-two textures on modern integrated and discrete GPUs? What advantages do non-power-of-two textures have, if any? Are there large populations of desktop users whose cards don't support non-power-of-two textures?

    Read the article

  • GLSL, all in one or many shader programs?

    - by stjepano
    I am doing some 3D demos using OpenGL and I noticed that GLSL is somewhat "limited" (or is it just me?). Anyway, I have many different types of materials. Some materials have an ambient and diffuse color, some have an ambient occlusion map, some have a specular map and a bump map, etc. Is it better to support everything in one vertex/fragment shader pair, or is it better to create many vertex/fragment shaders and select them based on the currently selected material? What is the usual shader strategy in OpenGL or D3D?
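
    One widely used middle ground (a hedged sketch, not an answer taken from the question) is the "uber-shader": a single source file full of #ifdef blocks, from which one variant per material is compiled by prepending #defines at compile time. glShaderSource accepts several strings, so the defines never have to be pasted into the file; fragmentSourceText below is an assumed string holding the shader source without its own #version line.

        const char* defines =
            "#version 150\n"
            "#define HAS_SPECULAR_MAP\n"
            "#define HAS_BUMP_MAP\n";
        const char* sources[2] = { defines, fragmentSourceText };

        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(shader, 2, sources, NULL);   // the strings are concatenated in order
        glCompileShader(shader);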

    Read the article

  • Selection of a mesh with arbitrary region

    - by Tigran
    Consider this example: I have a mesh (or meshes) on the OpenGL screen and would like to select part of it (say, for deletion). There is a clear way to do the selection via ray tracing, or via the selection mode provided by OpenGL itself. But, for my users, considering that meshes can have wire-like surfaces, I need to implement selection via an arbitrary closed region, so that all triangles that appear inside that region are selected. To be clearer, here is a screenshot: I want all triangles inside the black polygon to be selected, identified, whatever, in some way. How can I achieve that?
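
    A hedged sketch of one straightforward approach (names are made up, occlusion is ignored): project each triangle's vertices to window coordinates with gluProject and keep the triangles whose projected vertices all lie inside the lasso polygon, using a standard even-odd point-in-polygon test that you supply yourself.

        #include <GL/glu.h>
        #include <array>
        #include <vector>

        bool pointInPolygon(double x, double y,
                            const std::vector<std::array<double, 2>>& polygon);  // assumed helper

        bool triangleInRegion(const double* v0, const double* v1, const double* v2,
                              const std::vector<std::array<double, 2>>& lasso,
                              const double model[16], const double proj[16], const int view[4])
        {
            const double* verts[3] = { v0, v1, v2 };
            for (int i = 0; i < 3; ++i) {
                double wx, wy, wz;
                gluProject(verts[i][0], verts[i][1], verts[i][2],
                           model, proj, view, &wx, &wy, &wz);
                if (!pointInPolygon(wx, wy, lasso))
                    return false;      // one vertex outside the region: skip this triangle
            }
            return true;
        }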

    Read the article

  • Parsing glGetShaderInfoLog() to get error info. Is this reliable, or is there a better way?

    - by m4ttbush
    I want to get a list of errors and their line numbers so I can display the error information differently from how it's formatted in the error string, and also show the offending line. It looks easy enough to just parse the result of glGetShaderInfoLog(): look for "ERROR:", then read the next number up to ":", and then the next, and then the error description up to the next newline. But the OpenGL docs say "Application developers should not expect different OpenGL implementations to produce identical information logs." Which makes me worry that my code may behave incorrectly on different systems. I don't need them to be identical, I just need them to follow the same format. So is there a better way to get a list of errors with the line number separate, is it safe to assume that they'll always follow the "ERROR: 0:123:" format, or is there simply no reliable way to do this? Thanks!
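
    For concreteness, a hedged sketch of the parse being described; it hard-codes the common "ERROR: <file>:<line>: <message>" layout, which, as the question notes, the spec does not guarantee across implementations.

        #include <cstdio>
        #include <string>
        #include <vector>

        struct ShaderError { int file; int line; std::string message; };

        std::vector<ShaderError> parseInfoLog(const std::string& log)
        {
            std::vector<ShaderError> errors;
            size_t pos = 0;
            while ((pos = log.find("ERROR: ", pos)) != std::string::npos) {
                int file = 0, line = 0, consumed = 0;
                if (std::sscanf(log.c_str() + pos, "ERROR: %d:%d:%n", &file, &line, &consumed) == 2) {
                    size_t end = log.find('\n', pos);                  // message runs to end of line
                    size_t msgStart = pos + consumed;
                    errors.push_back({ file, line,
                                       log.substr(msgStart, end == std::string::npos
                                                            ? std::string::npos
                                                            : end - msgStart) });
                }
                pos += 7;   // skip past "ERROR: " so the search always advances
            }
            return errors;
        }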

    Read the article

  • Is it possible to update my graphics card drivers?

    - by madmaxpt
    This is what the lspci -v command shows in the VGA compatible controller section:

        00:02.0 VGA compatible controller: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx Integrated Graphics Controller (prog-if 00 [VGA controller])
                Subsystem: Dell Device 048e
                Flags: bus master, fast devsel, latency 0, IRQ 44
                Memory at f0200000 (32-bit, non-prefetchable) [size=512K]
                I/O ports at 18d0 [size=8]
                Memory at d0000000 (32-bit, prefetchable) [size=256M]
                Memory at f0000000 (32-bit, non-prefetchable) [size=1M]
                Expansion ROM at <unassigned> [disabled]
                Capabilities: <access denied>
                Kernel driver in use: i915
                Kernel modules: i915

    i915 is only compatible with OpenGL 1.4 and cannot use shaders, and I'd like to use some OpenGL 2.0 functionality. Is it possible to update the drivers, or is my hardware limited to OpenGL 1.4?

    Read the article

  • How are vertex shader outs sent as inputs to the fragment shader?

    - by Jeffrey
    I'm learning the OpenGL 3.2 way of doing things and I think it's quite great; I'm actually understanding more about shaders and the non-fixed pipeline in 1 week than in the 2 years I spent trying to learn the OpenGL fixed-function pipeline. But here's my question: from what I think I've understood, the vertex shader is run for each vertex in the VBO. But the fragment shader is run for each pixel (is that right?), which is a huge number compared to, say, the 3 vertices of a triangle. Now it seems that in the vertex shader the out variables (like colors and stuff) are passed 1 to 1 to the fragment shader. But let's say that I pass the position of the vertex from the vertex shader to the fragment shader. How is it all executed? Which vertex (A, B or C of the hypothetical triangle) is passed to each fragment, and why?
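
    The short answer is "none of the three": by default the rasterizer hands each fragment a perspective-corrected weighted blend of the three vertices' outputs. A hedged, purely illustrative sketch of that blend (the GPU does this in hardware; a, b and c are the fragment's barycentric weights inside the triangle, with a + b + c = 1):

        struct Vec3 { float x, y, z; };

        // Blend the three per-vertex values using the fragment's barycentric weights.
        Vec3 interpolate(Vec3 va, Vec3 vb, Vec3 vc, float a, float b, float c)
        {
            return { a * va.x + b * vb.x + c * vc.x,
                     a * va.y + b * vb.y + c * vc.y,
                     a * va.z + b * vb.z + c * vc.z };
        }

    Declaring the fragment-shader input with the flat qualifier instead makes every fragment take the value from the provoking vertex, with no blending.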

    Read the article

  • Two-component color model

    - by Cyan
    RGB is the natural color model for OpenGL, but a lot of other color models exist: CMY(K) for printers, YUV for JPEG, its little cousins YCbCr and YCoCg, HSL and HSV from the 70s, and so on. All these models tend to share a common property: they are based on 3 components. Therefore my question is: does a 2-component color model exist? I'm surprised not to find any. I was expecting something along the lines of hue + lightness to exist. I guess it cannot be as "complete" as a true 3-component color model, but a fine-enough approximation would be good for my use case. The end objective is to store the 2 components in a single BC5 texture (GL_COMPRESSED_RED_GREEN_RGTC2 in OpenGL). The 3rd component would require a second fetch into a second texture, which hurts performance.

    Read the article

  • Java and Eclipse setup properly, how do I install JOGL or LWJGL?

    - by shadowprotocol
    I have my Java environment installed alongside Eclipse, and I was able to create and run a new project (a simple System.out.println("Yay I work!")). I have the OpenGL SuperBible, and I primarily want to code 3D things (I'll take my time using the book to learn how to draw shapes in 3D space, etc.). Can you help me get set up with OpenGL in Java? I don't really need LWJGL, although I WILL make games eventually. With all of these terrible (and old) tutorials floating around the net, I just can't figure out how to install either JOGL or LWJGL. If you can give me a hand with that, I'd appreciate it. I'd like to feel I contributed by having this page show the answer to the question, so that other poor souls googling for this same information can benefit.

    Read the article

  • Is this a reliable method of parsing glGetShaderInfoLog()?

    - by m4ttbush
    I want to get a list of errors and their line numbers so I can display the error information differently from how it's formatted in the error string and also to show the line in the output. It looks easy enough to just parse the result of glGetShaderInfoLog(), look for ERROR:, then read the next number up to :, and then the next, and finally the error description up to the next newline. However, the OpenGL docs say: Application developers should not expect different OpenGL implementations to produce identical information logs. This makes me worry that my code may behave incorrectly on different systems. I don't need them to be identical, I just need them to follow the same format. So is there a better way to get a list of errors with the line number separate, is it safe to assume that they'll always follow the "ERROR: 0:123:" format, or is there simply no reliable way to do this?

    Read the article

  • OpenGL in Qt5: an overview of what's new in Qt5's OpenGL integration, a blog post by Guillaume Belz

    Hello, while waiting for a more detailed article on Qt5, here is an overview of the classes and modules that use OpenGL in Qt5. I also recall how to enable hardware acceleration in Qt4 (QGLWidget, QGraphicsView, QDeclarativeView) and explain how to install the Qt5 binaries on Ubuntu using the ppa repositories. The blog post: OpenGL dans Qt5. Coming soon, I will cover in detail on this blog the new Qt3D module of Qt5, with example projects. What do you think of this reorg...

    Read the article

  • 2D screen filters with OpenGL in the Blender Game Engine, a translation by Guillaume Belz

    Hello everyone, Blender was originally a free 3D modelling program, but it offers more and more advanced animation features. In particular, Blender includes a game engine called the Blender Game Engine (BGE), which lets users write their own shaders using Python and the OpenGL Shading Language (GLSL). In this article, the author presents the basics of writing your own shaders and configuring them in Blender, starting from a simple example: a blur filter. Filtres d'écran en 2D avec OpenGL dans le Blender Game Engine A...

    Read the article

  • Which language is best for OpenGL?

    - by Neurofluxation
    Well, this is silly: I've had to post a question about programming on a site other than Stack Overflow... worst thing they have ever done. I have the following list of languages: C++, C#, VB.Net, D, Delphi, Euphoria, GLUT, Java, Power Basic, Python, REALbasic, Visual Basic. I would like to know what people think is: a) the best (most efficient) language to work with OpenGL, and b) the easiest language to work with OpenGL. Thanks in advance, guys and gals!

    Read the article
