Search Results

Search found 3627 results on 146 pages for 'opengl es 2 0'.


  • The KSH data warehouse: built on Oracle Essbase and Oracle Database

    - by Fekete Zoltán
    The metadata-driven data warehouse of the Hungarian Central Statistical Office (KSH) rests on three important Oracle products. The data is available on the internet from the KSH Dissemination Database (Tájékoztatási adatbázis); data from KSH is also available in English. As I write these lines, at 21:36 on a Friday evening, 81 online users are querying the data. :) - Oracle Essbase multidimensional OLAP server (technical info) - Hyperion Interactive Reporting query tool (technical info) - Oracle Database Enterprise Edition. The English-language customer snapshot: Hungarian Central Statistical Office Provides 200,000 External Users with Secure Online Access to Data. The Hungarian success story: 60 percent of KSH's statistical data is available, browser- and platform-independently, to roughly 200,000 internet users a year. DSS Consulting Kft. and Oracle Consulting played a major role in product selection and in designing and implementing the project. The most important results of the project: - data warehouse: 150-200 concurrent users, which amounts to 200,000 users a year - Essbase's memory-based storage structure: near-real-time access - the system is platform- and browser-independent, so a wide range of users can reach the statistical data - a custom maintenance application built on the native Java API and XMLA support - the statisticians build and maintain the multidimensional databases without any special IT background - Oracle Hyperion Interactive Reporting: columnar, cross-tab, sectioned, charted, and web-based queries. The following KSH talk from the 2009 HOUG conference can be downloaded: Hyperalea iacta est - a KSH Essbase alapú adattárház rendszere (the KSH's Essbase-based data warehouse system). The newly published success story is available in English and in Hungarian.

    Read the article

  • Complete Math Library for use in OpenGL ES 2.0 Games?

    - by Bunkai.Satori
    Are you aware of a complete (or almost complete) cross-platform math library for use in OpenGL ES 2.0 games? The library should contain: Matrix2x2, Matrix3x3, Matrix4x4 classes; quaternions; Vector2, Vector3, Vector4 classes; a Euler angle class; operations among the above classes and conversions between them; and the math operations commonly used in 3D graphics (dot product, cross product, SLERP, etc.). Is such a math API available, either standalone or as part of a larger package? Programming language: Visual C++, but the code is planned to be ported to OS X and Android.
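
    One library that covers essentially every bullet above is GLM (http://glm.g-truc.net): header-only C++, no dependencies, and as far as I know it builds unchanged under MSVC, on OS X, and with the Android NDK. A quick sketch of the requested operations; the headers assume a 0.9.x-era GLM:

        // Sketch of GLM covering the list above.
        #include <glm/glm.hpp>                   // vec2/3/4, mat2/3/4, dot, cross
        #include <glm/gtc/quaternion.hpp>        // quat, slerp, mat4_cast
        #include <glm/gtc/matrix_transform.hpp>  // translate, rotate, perspective
        #include <glm/gtc/type_ptr.hpp>          // value_ptr, for glUniformMatrix4fv

        int main() {
            glm::vec3 a(1, 0, 0), b(0, 1, 0);
            float d      = glm::dot(a, b);                         // dot product
            glm::vec3 n  = glm::cross(a, b);                       // cross product
            glm::quat q0 = glm::quat(glm::vec3(0.0f, 0.0f, 0.0f)); // from Euler angles
            glm::quat q1 = glm::angleAxis(glm::radians(90.0f), n); // radians in recent GLM
            glm::quat q  = glm::slerp(q0, q1, 0.5f);               // SLERP
            glm::mat4 mv = glm::translate(glm::mat4(1.0f), glm::vec3(0, 0, -5));
            mv *= glm::mat4_cast(q);                               // quat -> matrix conversion
            // glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mv));
            (void)d;  // silence unused-variable warning in this sketch
            return 0;
        }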

    Read the article

  • How are the effects found in "Autodesk Fluid FX" implemented using OpenGL ES?

    - by afds
    How are these kinds of effects technically implemented using OpenGL ES? Do they perform the simulation on the GPU (using shaders) or on the CPU, while using some smart vertex positioning and texturing? Why does it appear so fast (in terms of performance)? You can check the video of the app here: http://www.youtube.com/watch?v=F4KOk6QP6kQ Edit: here is the presentation for the app: http://www.futuregameon.com/FGO2010_JosStam.pdf

    Read the article

  • What OpenGL functions are not GPU accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki): "glTranslate, glRotate, glScale. Are these hardware accelerated? No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU. All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho. This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, and upload your matrix to the shader." For a very, very long time I thought most OpenGL functions used the GPU to do computation. I'm not sure if this is a common misconception, but after some thought it makes sense: old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: which OpenGL function calls don't use the GPU? I believe knowing the answer would make me a better OpenGL programmer. Please do share some of your insights.
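
    For the quoted advice, this is roughly what "build your own matrix, upload it to the shader" boils down to in practice. A minimal sketch, assuming a GL 2.0+ function loader such as GLEW and a shader with a mat4 uniform named "uMVP" (the name is illustrative):

        #include <GL/glew.h>  // provides glGetUniformLocation/glUniformMatrix4fv

        // Same column-major matrix glOrtho used to build internally, computed here
        // on the CPU -- exactly what the driver did for you anyway.
        static void ortho(float l, float r, float b, float t, float n, float f,
                          float m[16]) {
            for (int i = 0; i < 16; ++i) m[i] = 0.0f;
            m[0]  = 2.0f / (r - l);
            m[5]  = 2.0f / (t - b);
            m[10] = -2.0f / (f - n);
            m[12] = -(r + l) / (r - l);
            m[13] = -(t + b) / (t - b);
            m[14] = -(f + n) / (f - n);
            m[15] = 1.0f;
        }

        void uploadProjection(GLuint program) {
            float mvp[16];
            ortho(0.0f, 800.0f, 0.0f, 600.0f, -1.0f, 1.0f, mvp);
            GLint loc = glGetUniformLocation(program, "uMVP"); // assumed name
            glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);         // one CPU->GPU transfer
        }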

    Read the article

  • How can I resolve a naming conflict between two precompiled libraries?

    - by asm
    I'm linking two different libraries that have functions with exactly the same names (opengl32.lib and libgles_cm.lib - OpenGL ES emulation for the Win32 platform), and I want to be able to specify which version I'm calling. I'm porting a game to OpenGL ES, and what I want to achieve is split-screen rendering, where the left side is the OpenGL version and the right side is the ES version. To produce the same result, they will receive slightly different calls, and I'll be able to compare them visually, effectively finding visual artifacts. This worked perfectly with OpenGL/DirectX in the same window, but now the problem is that both versions import functions with the same name, like glDrawArrays, and only one version gets imported. Unfortunately, I don't have the sources of either library. Is there a way to... I don't know, wrap one library in an additional namespace before linking (with calls like ES::glDrawArrays), somehow rename some of the functions, or anything else? I'm using the Microsoft compiler now, but if there is a solution with another one (GCC/ICC), I'll switch to it.
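
    One workaround sketch for MSVC: stop linking libgles_cm.lib and pull the ES entry points out of the emulator's DLL at runtime, so every statically linked name resolves to opengl32.lib. The DLL name below is an assumption - check which DLL actually ships with your ES emulator:

        #include <windows.h>
        #include <GL/gl.h>   // GLenum, GLint, GLsizei

        namespace ES {
            typedef void (APIENTRY *PFNGLDRAWARRAYS)(GLenum mode, GLint first, GLsizei count);
            PFNGLDRAWARRAYS glDrawArrays = NULL;

            bool load() {
                HMODULE dll = LoadLibraryA("libGLES_CM.dll");  // assumed DLL name
                if (!dll) return false;
                glDrawArrays = reinterpret_cast<PFNGLDRAWARRAYS>(
                    GetProcAddress(dll, "glDrawArrays"));
                return glDrawArrays != NULL;
            }
        }

        // Usage: glDrawArrays(...) keeps hitting opengl32.lib (left half of the
        // screen), ES::glDrawArrays(...) hits the emulator (right half). Repeat
        // the typedef/GetProcAddress pair for every ES function you call.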

    Read the article

  • iPhone OpenGL ES - How to Pick

    - by Ali Nadalizadeh
    I'm working on an OpenGL ES 1 app which displays a 2D grid and allows the user to navigate and scale/rotate it. I need to know the exact mapping from view touch coordinates into my OpenGL world and grid cells. Are there any helpers to reverse the last few transforms I apply for navigation, or should I do the matrix math by hand?
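
    GLU (and thus gluUnProject) isn't part of OpenGL ES, but for a pure pan/zoom/rotate view the inverse is short enough to do by hand. A sketch, where the forward transform order (translate, then rotate, then scale) is an assumption you would match to your own navigation code, and the touch point is assumed already flipped from UIKit's top-left origin to GL's bottom-left:

        #include <cmath>

        struct Vec2 { float x, y; };

        Vec2 screenToWorld(Vec2 touch, Vec2 pan, float theta, float s) {
            // Undo each transform in reverse order: translation, rotation, scale.
            float px = touch.x - pan.x;
            float py = touch.y - pan.y;
            float c  = std::cos(-theta), sn = std::sin(-theta);  // inverse rotation
            float rx = px * c - py * sn;
            float ry = px * sn + py * c;
            return Vec2{ rx / s, ry / s };                       // inverse scale
        }

        // The grid cell is then just integer division by the cell size:
        //   int col = (int)std::floor(world.x / cellSize);
        //   int row = (int)std::floor(world.y / cellSize);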

    Read the article

  • Asynchronous readback from the OpenGL front buffer using multiple PBOs

    - by KillianDS
    I am developing an application that needs to read back the whole frame from the front buffer of an OpenGL application. I can hijack the application's OpenGL library and insert my code on swapbuffers. At the moment I am successfully using a simple but excruciatingly slow glReadPixels call without PBOs. Now I have read about using multiple PBOs to speed things up. While I think I've found enough resources to actually program that (it isn't that hard), I have some operational questions left. I would do something like this:

    1. Create a series (e.g. 3) of PBOs.
    2. Use glReadPixels in my swapBuffers override to read data from the front buffer into a PBO (this should be fast and non-blocking, right?).
    3. Create a separate thread to call glMapBufferARB, once per PBO after a glReadPixels, because this will block until the pixels are in client memory.
    4. Process the data from step 3.

    Now my main concern is with steps 2 and 3. I read that glReadPixels into a PBO is non-blocking; will it be an issue if I issue new OpenGL commands very soon after it? Will those OpenGL commands block? Or will they continue (my guess), and if so, I guess only swapbuffers can be a problem: will it stall, or will glReadPixels from the front buffer be many times faster than a swap (roughly every 15-30 ms), or, worst case, will swapbuffers execute while glReadPixels is still reading data into the PBO? My current guess is that the logic does something like this: copy FRONT_BUFFER to a generic place in VRAM, then copy VRAM to RAM. But I have no idea which of those two is the real bottleneck, and even less what the influence on the normal OpenGL command stream is. Then in step 3: is it wise to do this asynchronously, in a thread separated from the normal OpenGL logic? At the moment I think not: it seems you have to restore buffer operations to normal after doing this, and I can't install synchronization objects in the original code to temporarily block those. So I think my best option is to define a certain swapbuffer delay before reading the buffers out, e.g. calling glReadPixels on PBO i%3 and glMapBufferARB on PBO (i+2)%3 in the same thread, resulting in a delay of 2 frames. Also, when I call glMapBufferARB to use the data in client memory, will that be the bottleneck, or will the (asynchronous) glReadPixels be? And finally, if you have better ideas to speed up frame readback from the GPU in OpenGL, please tell me, because this is a painful bottleneck in my current system. I hope my question is clear enough. I know the answer is probably also somewhere on the internet, but I mostly found results that used PBOs to keep buffers in video memory and do processing there. I really need to read the front buffer back to RAM, and I cannot find any clear explanation of performance in that case (which I need: I cannot rely on "it's faster", I need to explain why). Thank you.
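
    For reference, a minimal round-robin sketch of steps 1-3 above, assuming GL 2.1+ (or ARB_pixel_buffer_object) with entry points loaded via GLEW. PBO i is filled this frame; the PBO filled two frames ago should map without stalling:

        #include <GL/glew.h>

        static const int kNumPBOs = 3;
        static GLuint pbo[kNumPBOs];

        void initPBOs(int w, int h) {
            glGenBuffers(kNumPBOs, pbo);
            for (int i = 0; i < kNumPBOs; ++i) {
                glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
                glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, 0, GL_STREAM_READ);
            }
            glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        }

        void onSwapBuffers(int w, int h) {
            static int frame = 0;
            int write = frame % kNumPBOs;        // fill this one now
            int read  = (frame + 1) % kNumPBOs;  // filled two frames ago

            glReadBuffer(GL_FRONT);
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[write]);
            // With a bound pack PBO the last argument is an offset, not a pointer;
            // the call queues a GPU->PBO copy and returns quickly.
            glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, 0);

            if (frame >= kNumPBOs - 1) {
                glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[read]);
                void* px = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY); // may sync
                if (px) {
                    // copy/process the w*h*4 bytes here, or hand them to a thread
                    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
                }
            }
            glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
            ++frame;
        }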

    Read the article

  • Nonblocking texture upload on iPhone and other OpenGL ES platforms

    - by spurserh
    Hello, I am doing some work which involves drawing video frames in real time in OpenGL ES. Right now I am using glTexImage2D to transfer the data, in the absence of pixel buffer objects and the like. An answer below suggests that glTexImage2D always blocks, even if the referenced texture object is not used for any drawing. Is there a way to do a non-blocking texture upload with OpenGL ES (any version)? Thank you very much, Sean
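
    ES 1.x/2.0 has no pixel-unpack PBOs, so a truly non-blocking path generally means a platform mechanism (e.g. uploading from a second thread with a share-group context, or Apple's texture-cache APIs in later SDKs). Short of that, a common partial mitigation is to allocate storage once and ping-pong glTexSubImage2D uploads between two textures, so the driver never has to respecify storage or wait on the texture the GPU is still reading. A sketch, with power-of-two sizes assumed:

        #include <GLES2/gl2.h>   // <OpenGLES/ES2/gl.h> on iOS; ES 1.x works the same way

        static GLuint tex[2];
        static int front = 0;

        void initVideoTextures(int w, int h) {
            glGenTextures(2, tex);
            for (int i = 0; i < 2; ++i) {
                glBindTexture(GL_TEXTURE_2D, tex[i]);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                // Allocate storage once; later frames only overwrite it.
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, 0);
            }
        }

        void uploadFrame(const void* pixels, int w, int h) {
            int back = 1 - front;                  // the texture not drawn last frame
            glBindTexture(GL_TEXTURE_2D, tex[back]);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            front = back;                          // draw tex[front] from now on
        }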

    Read the article

  • OpenGL ES 2.0 and glPushMatrix, glPopMatrix

    - by MrDatabase
    Does OpenGL ES 2.0 still support glPushMatrix and glPopMatrix? I'm currently using these in the following way: glPushMatrix(); glTranslatef(xLoc, yLoc, 0); [myTexturePointer drawAtPoint:CGPointZero]; glPopMatrix(); I'm asking because I've read a few things about 2.0 "removing the matrix stack from the spec". Since I'm relatively new to OpenGL I'm not sure where to find a definitive answer.
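    ES 2.0 removed the fixed-function matrix stack entirely (glPushMatrix, glPopMatrix, and glTranslatef do not exist there), so the usual replacement is to keep your own stack of 4x4 matrices and upload the top to a shader uniform. A minimal sketch; the "uModelView" uniform name and the Mat4 helpers are illustrative, not part of any API:

        #include <stack>
        #include <array>
        #include <GLES2/gl2.h>   // <OpenGLES/ES2/gl.h> on iOS

        typedef std::array<float, 16> Mat4;       // column-major, as in ES 1.x

        static Mat4 identity() {
            Mat4 m = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
            return m;
        }

        static Mat4 translation(float x, float y, float z) {
            Mat4 m = identity();
            m[12] = x; m[13] = y; m[14] = z;
            return m;
        }

        static Mat4 multiply(const Mat4& a, const Mat4& b) {  // r = a * b
            Mat4 r = {};
            for (int c = 0; c < 4; ++c)
                for (int row = 0; row < 4; ++row)
                    for (int k = 0; k < 4; ++k)
                        r[c*4 + row] += a[k*4 + row] * b[c*4 + k];
            return r;
        }

        static std::stack<Mat4> g_stack;          // seed once with identity()

        // ES 2.0 version of the glPushMatrix/glTranslatef/glPopMatrix snippet:
        void drawTranslated(GLuint prog, float xLoc, float yLoc) {
            g_stack.push(multiply(g_stack.top(), translation(xLoc, yLoc, 0)));
            glUniformMatrix4fv(glGetUniformLocation(prog, "uModelView"),
                               1, GL_FALSE, g_stack.top().data());
            // ... issue the texture's draw call here ...
            g_stack.pop();                        // the old glPopMatrix()
        }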

    Read the article

  • Fastest possible way to render 480 x 320 background as iPhone OpenGL ES textures

    - by unknownthreat
    I need to display a 480 x 320 background image in OpenGL ES. The thing is, I experienced a bit of a slowdown on the iPhone when I used a 512 x 512 texture. So I am looking for the optimal way to render an iPhone-resolution background in OpenGL ES. How should I slice the background in this case to obtain the best possible performance? My main concern is speed. Should I go for 256 x 256 or other texture sizes here?
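
    For what it's worth, slicing isn't the only option: ES 1.1 on the early iPhones wants power-of-two textures, but you can pad the image once into a 512 x 512 texture and crop with texture coordinates, trading some wasted texture memory for a single draw call. A hedged sketch, assuming client-side arrays and a pixel-space projection such as glOrthof(0, 480, 0, 320, -1, 1):

        #include <GLES/gl.h>   // <OpenGLES/ES1/gl.h> on iOS

        static const GLfloat quad[] = {      // triangle strip, pixel coordinates
              0,   0,   480,   0,
              0, 320,   480, 320,
        };
        static const GLfloat uv[] = {        // sample only the used 480x320 region
            0.0f,          0.0f,
            480.0f/512,    0.0f,
            0.0f,          320.0f/512,
            480.0f/512,    320.0f/512,       // flip V if your loader stores rows top-down
        };

        void drawBackground(GLuint tex) {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, tex);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, quad);
            glTexCoordPointer(2, GL_FLOAT, 0, uv);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // whole background, one call
        }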

    Read the article

  • OpenGL ES iPhone Textures

    - by techy
    For one of my new games I'm using OpenGL ES, since it has multiple enemies and bullets, etc. How do you draw images on the screen with OpenGL ES? I have a player.png image that is a 48x48 pixel image; how would I draw that on the screen?
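
    A minimal sprite sketch using the GL_OES_draw_texture extension, which the iPhone's ES 1.1 driver exposes; it blits a texture rectangle straight into window coordinates, which suits 2D sprites like this. It assumes player.png was padded at load time into a 64x64 power-of-two texture, with the sprite in its lower-left corner:

        #include <OpenGLES/ES1/gl.h>
        #include <OpenGLES/ES1/glext.h>

        void drawPlayer(GLuint tex, GLint x, GLint y) {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, tex);
            GLint crop[4] = { 0, 0, 48, 48 };  // use only the 48x48 sprite region
            glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);
            glDrawTexiOES(x, y, 0, 48, 48);    // x, y in window coords; z; width; height
        }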

    Read the article

  • Unable to use OpenGL or install nVidia driver on openSUSE 12.2

    - by djechelon
    I have an ASUS N76VZ laptop with openSUSE 12.2 and a GeForce GT650M card. I found that KDE doesn't allow me to use OpenGL rendering. I tried to install nVidia's driver from the script, but once it writes the xorg.conf file I'm unable to boot the desktop. I have the following errors in the system log:

    Oct 30 08:28:13 RAYNOR kdm[2727]: X server died during startup
    Oct 30 08:28:13 RAYNOR kdm[2727]: X server for display :0 cannot be started, session disabled

    I noticed that the /etc/X11/xorg.conf backup file was empty, so I renamed the new xorg.conf and left none in place: the desktop booted!!! How can I fix OpenGL rendering, with or without driver installation? [Update]: Xorg.0.log says

    [ 1434.207] compiled for 4.0.2, module version = 1.0.0
    [ 1434.207] Module class: X.Org Server Extension
    [ 1434.207] (II) NVIDIA GLX Module 304.60 Sun Oct 14 20:44:54 PDT 2012
    [ 1434.207] (II) Loading extension GLX
    [ 1434.207] (II) LoadModule: "record"
    [ 1434.207] (II) Loading /usr/lib64/xorg/modules/extensions/librecord.so
    [ 1434.207] (II) Module record: vendor="X.Org Foundation"
    [ 1434.207] compiled for 1.12.3, module version = 1.13.0
    [ 1434.207] Module class: X.Org Server Extension
    [ 1434.207] ABI class: X.Org Server Extension, version 6.0
    [ 1434.207] (II) Loading extension RECORD
    [ 1434.207] (II) LoadModule: "dri"
    [ 1434.207] (II) Loading /usr/lib64/xorg/modules/extensions/libdri.so
    [ 1434.207] (II) Module dri: vendor="X.Org Foundation"
    [ 1434.207] compiled for 1.12.3, module version = 1.0.0
    [ 1434.207] ABI class: X.Org Server Extension, version 6.0
    [ 1434.207] (II) Loading extension XFree86-DRI
    [ 1434.207] (II) LoadModule: "nvidia"
    [ 1434.208] (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so
    [ 1434.208] (II) Module nvidia: vendor="NVIDIA Corporation"
    [ 1434.208] compiled for 4.0.2, module version = 1.0.0
    [ 1434.208] Module class: X.Org Video Driver
    [ 1434.208] (II) NVIDIA dlloader X Driver 304.60 Sun Oct 14 20:24:42 PDT 2012
    [ 1434.208] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
    [ 1434.208] (++) using VT number 8
    [ 1434.320] (EE) No devices detected.
    [ 1434.320] Fatal server error:
    [ 1434.320] no screens found
    [ 1434.320] Please consult the The X.Org Foundation support at http://wiki.x.org for help.

    Read the article

  • How do I convert my matrix from OpenGL to Marmalade?

    - by King Snail
    I am using a third-party rendering API, Marmalade, on top of OpenGL code, and I cannot get my matrices correct. One of the API's authors states this: "We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default." In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third-party rendering API? I don't understand what they mean by a 180 degree rotation around Z - does that go into the view matrix itself, or into the camera before building the view matrix? Any code would be helpful, thanks.
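
    For the 180 degree part specifically, here is a hedged sketch of the matrix fix-up (the function name is mine, not a Marmalade API). Rz(180 deg) is diag(-1, -1, 1, 1), so right-multiplying a column-major OpenGL view matrix by it just negates the first two columns (the X and Y axes); whether it belongs on the left or the right depends on how IwGx applies the view matrix, so try both orders:

        // gl[16] as returned by glGetFloatv(GL_MODELVIEW_MATRIX), column-major.
        void glViewToMarmalade(const float gl[16], float out[16]) {
            for (int i = 0; i < 16; ++i) out[i] = gl[i];
            for (int row = 0; row < 4; ++row) {
                out[0 + row] = -gl[0 + row];   // column 0 (X axis) negated
                out[4 + row] = -gl[4 + row];   // column 1 (Y axis) negated
            }
        }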

    Read the article

  • Getting Started with 2D Game Dev (C++): DirectX or OpenGL?

    - by Dfowj
    So, I'm a student looking to get my foot in the door of game development, and I'm looking to do something 2D, maybe a Tetris/Space Invaders/something-with-a-little-mouse-interaction clone. I pointed my searches in the direction of C++ and 2D and was eventually led to DirectX/OpenGL. Now, as I understand it, all these packages will do for me is draw stuff on a screen. And that's all I really care about at this point. Sound isn't necessary. Input can probably be handled with the standard library. So, for a beginner trying to create a basic game in C++, would you recommend DirectX or OpenGL? Why? What are some key feature differences between the two? Which is more usable?

    Read the article

  • How can I record an OpenGL game in Ubuntu?

    - by fish
    I would like to create a short clip of me playing Minecraft, an OpenGL game. The usual screencast recorders do not properly record OpenGL. What kind of software is available for this purpose? My experience with the software in the similar (but no longer duplicate) question:

    - Kazam: very low framerate despite setting it to 60 FPS, no sound, Unity menu bar constantly flashing through the fullscreen window.
    - RecordMyDesktop: max framerate setting is 50 FPS, but the video becomes extremely fast if not using the default 15 FPS.
    - xvidcap: not available on 12.04.
    - tibesti: not available on 12.04.
    - wink: does not run.
    - ffmpeg: very low quality video and no sound with the recommended settings; might be tunable, though (no GUI, unfortunately).
    - kdenlive: uses RecordMyDesktop, and the recorded clip becomes corrupted.
    - avconv: video sped up, often broken image, no sound.

    Read the article

  • For 2D games, is there any reason NOT to use a 3D API like Direct3D or OpenGL?

    - by Eric Palakovich Carr
    I've been out of hobby game development for quite a while now. Back when I did it, most people used DirectDraw to create 2D games. By the time I stopped, people were saying OpenGL or Direct3D with an orthographic projection was just the way to go. I'm thinking about getting back into creating 2D games, in particular on mobile phones, but maybe on the XNA platform as well. To make something using OpenGL I'd have a (hopefully) small learning curve to acclimate myself to 3D development. Is there any reason to skip that and instead work with a 2D framework where I just have a width x height frame buffer I need to fill with pixels?

    Read the article

  • OpenGL ES 2: How do I Create a Basic Fading Streak Effect?

    - by dugla
    For the iPad app I am writing using OpenGL ES 2, I have a single quad - shaded using GLSL - that is dragged around the screen. Very basic. This works fine, but it is rather boring. I want to increase the coolness a bit in the following way: when the user drags the quad, it leaves behind a streak that fades over time. Continuous dragging would look a bit like a comet streaking across the night sky. What is the simplest way to implement this? Thanks.
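
    One simple approach is a feedback ("ping-pong") buffer: each frame, redraw the previous accumulation texture slightly dimmed, draw the quad on top, then present the result. A per-frame sketch; fbo/accum/fadeProg/quadProg, the "uFade" uniform, and the two draw helpers are all assumed to be set up elsewhere - none of these names come from any API:

        #include <GLES2/gl2.h>   // <OpenGLES/ES2/gl.h> on iOS

        extern GLuint fbo[2], accum[2];    // two FBOs, each with a color texture
        extern GLuint defaultFBO;          // the renderbuffer-backed screen FBO
        extern GLuint fadeProg, quadProg;  // "textured quad * uFade" shader, and the quad's shader
        void drawFullscreenQuad();         // assumed helpers
        void drawDraggedQuad();

        void drawFrame() {
            static int frame = 0;
            int src = frame & 1, dst = 1 - src;

            // 1. Into the destination buffer: last frame's image, dimmed a bit.
            glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
            glUseProgram(fadeProg);
            glBindTexture(GL_TEXTURE_2D, accum[src]);
            glUniform1f(glGetUniformLocation(fadeProg, "uFade"), 0.92f); // decay per frame
            drawFullscreenQuad();

            // 2. The quad at its current dragged position, at full strength.
            glUseProgram(quadProg);
            drawDraggedQuad();

            // 3. Present the accumulation buffer to the screen.
            glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
            glUseProgram(fadeProg);
            glBindTexture(GL_TEXTURE_2D, accum[dst]);
            glUniform1f(glGetUniformLocation(fadeProg, "uFade"), 1.0f);
            drawFullscreenQuad();
            ++frame;
        }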

    Read the article

  • How do I calculate vertex normals for a mesh in a Java OpenGL ES application?

    - by alan mc
    Can someone point me to Java code (in Java, not C or C++) that calculates all the normals for all the vertices of a mesh, for an OpenGL ES application? I need this for lighting. Let's say I have a cube with the following vertices and indices:

        float vertices[] = {
            -width, -height, -depth, // 0
             width, -height, -depth, // 1
             width,  height, -depth, // 2
            -width,  height, -depth, // 3
            -width, -height,  depth, // 4
             width, -height,  depth, // 5
             width,  height,  depth, // 6
            -width,  height,  depth  // 7
        };

        short indices[] = {
            0, 2, 1,   0, 3, 2,
            1, 2, 6,   6, 5, 1,
            4, 5, 6,   6, 7, 4,
            2, 3, 6,   6, 3, 7,
            0, 7, 3,   0, 4, 7,
            0, 1, 5,   0, 5, 4
        };

    In the above specific example, how many normals do we need?
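
    The standard algorithm is accumulate-and-normalize: compute one face normal per triangle, add it to each of the triangle's three vertices, then normalize per vertex. Sketched in C++ to match the other snippets on this page; the Java port is one-to-one (float[] and short[] in place of the pointers):

        #include <cmath>

        void computeNormals(const float* v, int nVerts,
                            const short* idx, int nIdx,
                            float* normals /* 3 * nVerts floats */) {
            for (int i = 0; i < 3 * nVerts; ++i) normals[i] = 0.0f;

            for (int t = 0; t < nIdx; t += 3) {                // face normal per triangle
                int a = idx[t] * 3, b = idx[t + 1] * 3, c = idx[t + 2] * 3;
                float e1x = v[b]-v[a], e1y = v[b+1]-v[a+1], e1z = v[b+2]-v[a+2];
                float e2x = v[c]-v[a], e2y = v[c+1]-v[a+1], e2z = v[c+2]-v[a+2];
                float nx = e1y*e2z - e1z*e2y;                  // cross(e1, e2)
                float ny = e1z*e2x - e1x*e2z;
                float nz = e1x*e2y - e1y*e2x;
                int corners[3] = { a, b, c };
                for (int j = 0; j < 3; ++j) {                  // accumulate on each corner
                    normals[corners[j]]     += nx;
                    normals[corners[j] + 1] += ny;
                    normals[corners[j] + 2] += nz;
                }
            }
            for (int i = 0; i < nVerts; ++i) {                 // normalize per vertex
                float* n = &normals[i * 3];
                float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
                if (len > 0) { n[0] /= len; n[1] /= len; n[2] /= len; }
            }
        }

    With the shared-vertex layout above this produces one averaged normal per vertex, i.e. 8 normals, which shades the cube with soft edges; for flat-shaded faces you would duplicate the corners instead (24 vertices and 24 normals).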

    Read the article

  • In OpenGL ES 2, should I allocate multiple transformation matrices?

    - by thm4ter
    In OpenGL ES 2, should I declare just one transformation matrix and share it across all objects, or should I declare a transformation matrix in each object that needs it? For clarification, something like this:

        public class someclass {
            public static float[] transMatrix = new float[16];
            ...
            public static void translate(int x, int y) {
                // do translation here
            }
        }

        public class someotherclass {
            ...
            void draw(GL10 unused) {
                someclass.translate(10, 10);
                // draw
            }
        }

    versus something like this:

        public class obj1 {
            public static float[] transMatrix = new float[16];
            ...
            void draw(GL10 unused) {
                // translate
                // draw
            }
        }

        public class obj2 {
            public static float[] transMatrix = new float[16];
            ...
            void draw(GL10 unused) {
                // translate
                // draw
            }
        }

    Read the article

  • How can I read from multiple textures in an OpenGL ES 2 shader?

    - by Peyman Tahghighi
    How can I enable more than one texture in OpenGL ES 2 so that I can sample from all of them in my shader? For example, I'm trying to read from two different textures in my shader for the player's car. This is how I'm currently dealing with the texture for my car:

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, this->texture2DObj);
        glUniform1i(1, 0);
        glBindBuffer(GL_ARRAY_BUFFER, this->vertexBuffer);
        glEnableVertexAttribArray(0);
        int offset = 0;
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, this->vertexBufferSize, (const void *)offset);
        offset += 3 * sizeof(GLfloat);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, this->vertexBufferSize, (const void *)offset);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, this->indexBuffer);
        glDrawElements(GL_TRIANGLES, this->indexBufferSize, GL_UNSIGNED_SHORT, 0);
        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
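
    To sample a second texture, bind it to a second texture unit and point each sampler uniform at its unit. A hedged sketch extending the code above - the "uDiffuse"/"uDetail" sampler names and the program/detailTexObj members are assumptions, and looking the locations up is safer than the hard-coded glUniform1i(1, 0):

        GLint diffuseLoc = glGetUniformLocation(this->program, "uDiffuse"); // assumed member
        GLint detailLoc  = glGetUniformLocation(this->program, "uDetail");

        glActiveTexture(GL_TEXTURE0);                      // unit 0: first texture
        glBindTexture(GL_TEXTURE_2D, this->texture2DObj);
        glUniform1i(diffuseLoc, 0);                        // uDiffuse samples unit 0

        glActiveTexture(GL_TEXTURE1);                      // unit 1: second texture
        glBindTexture(GL_TEXTURE_2D, this->detailTexObj);  // assumed second texture
        glUniform1i(detailLoc, 1);                         // uDetail samples unit 1

        // Fragment shader side, for reference:
        //   gl_FragColor = texture2D(uDiffuse, vUV) * texture2D(uDetail, vUV);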

    Read the article

  • iPod/iPhone OpenGL ES UIView flashes when updating

    - by Dave Viner
    I have a simple iPhone application which uses OpenGL ES (v1) to draw a line based on the touches of the user. In the Xcode Simulator, the code works perfectly. However, when I install the app onto an iPod or iPhone, the OpenGL ES view "flashes" when drawing the line. If I disable the line drawing, the flash disappears. By "flash", I mean that the background image (which is an OpenGL texture) disappears momentarily and then reappears. It looks as if the entire scene is completely erased and redrawn. The code which handles the line drawing is the following:

        - (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end {
            static GLfloat* vertexBuffer = NULL;
            static NSUInteger vertexMax = 64;
            NSUInteger vertexCount = 0, count, i;

            // Allocate vertex array buffer
            if (vertexBuffer == NULL)
                vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));

            // Add points to the buffer so there are drawing points every X pixels
            count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) +
                                    (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
            for (i = 0; i < count; ++i) {
                if (vertexCount == vertexMax) {
                    vertexMax = 2 * vertexMax;
                    vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
                }
                vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
                vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
                vertexCount += 1;
            }

            // Render the vertex array
            glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
            glDrawArrays(GL_POINTS, 0, vertexCount);

            // Display the buffer
            [context presentRenderbuffer:GL_RENDERBUFFER_OES];
        }

    (This function is based on the function of the same name from the GLPaint sample application.) For the life of me, I cannot figure out why this causes the screen to flash. The line is drawn properly (both in the Simulator and on the iPod), but the flash makes it unusable. Does anyone have ideas on how to prevent the "flash"?

    Read the article
