Search Results

Search found 2292 results on 92 pages for 'ian scho es'.

Page 6/92 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • problem loading texture with transparency with OpenGL ES and Android

    - by Evan Kimia
    I'm trying to load an image with background transparency that will be layered over another texture. When I try to load it, all I get is a white screen. The texture is 512 by 512, and it's saved in Photoshop as a 24-bit PNG (standard PNG specs in the Photoshop Save for Web and Devices config window). Any idea why it's not showing? The texture without transparency shows without a problem. Here is my loadTextures method:

        public void loadGLTexture(GL10 gl, Context context) {
            // Get the textures from the Android resource directory
            Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1);
            Bitmap normalScheduleLines = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1n);

            // Generate texture pointers...
            gl.glGenTextures(3, textures, 0);
            // ...and bind the first to our array
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]);
            // Create nearest-filtered texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            bitmap.recycle();

            // Bind our normal-schedule bus map lines
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
            // Create nearest-filtered texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, normalScheduleLines, 0);
            normalScheduleLines.recycle();
        }
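    One hedged guess at the cause, since the code above only uploads the textures: alpha has no visible effect unless blending is enabled when the overlay is drawn. A minimal sketch of the missing draw-time state, in C-style GL ES 1.x calls (the texture name is hypothetical):

        /* Android's GLUtils.texImage2D uploads bitmaps with premultiplied
           alpha, so the matching blend function is ONE / ONE_MINUS_SRC_ALPHA. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        glBindTexture(GL_TEXTURE_2D, overlayTexture); /* hypothetical name */
        /* ... draw the overlay quad here ... */
        glDisable(GL_BLEND);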

    Read the article

  • Create a bubble effect on a grid OpenGL-ES

    - by Charles Michael
    Hi there. I have created a grid of 40 x 40 Vertex3Ds (small but useful). I can pick a single vertex out of that grid by simply calling a function with the position array[X][Y], and can therefore get its neighbors too. How can I raise the neighboring vertices' Z values so they look like a bubble or sphere kind of thingy? My first thought was to use:

        Neighbor_vertex.Z = sin(PI/4 * 1 - (1 / distance_between_Neighbor_and_Pivot)) * desired_Max_Height

    But all I got is something like a wave... and I would like a bubble or sphere-like shape. Thanks, dudes and dudettes!
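    For reference, a spherical bump has the height profile of a circle rather than a sine wave. A minimal sketch in C, assuming d is the grid distance from the picked (pivot) vertex, R is the bubble radius, and H is the desired max height (all names hypothetical):

        #include <math.h>

        /* Spherical-cap profile: z = H at the pivot, 0 at distance R and beyond. */
        float bubble_height(float d, float R, float H) {
            if (d >= R) return 0.0f;
            /* circle equation sqrt(R^2 - d^2), rescaled so bubble_height(0) == H */
            return H * sqrtf(1.0f - (d / R) * (d / R));
        }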

    Read the article

  • openGL ES retina support

    - by Bryan
    I'm trying to get the avTouch sample code app to run on the retina display. Has anyone done this? In the CALevelMeter class, I've tried the following:

        - (id)initWithCoder:(NSCoder *)coder {
            if (self = [super initWithCoder:coder]) {
                CGFloat f = self.contentScaleFactor;
                if ([self respondsToSelector:@selector(contentScaleFactor)]) {
                    self.contentScaleFactor = [[UIScreen mainScreen] scale];
                }
                f = self.contentScaleFactor;
                _showsPeaks = YES;
                _channelNumbers = [[NSArray alloc] initWithObjects:[NSNumber numberWithInt:0], nil];
                _vertical = NO;
                _useGL = YES;
                _meterTable = new MeterTable(kMinDBvalue);
                [self layoutSubLevelMeters];
                [self registerForBackgroundNotifications];
            }
            return self;
        }

    and it sets the contentScaleFactor to 2. Great, that was expected. But then in layoutSubviews, the CALevelMeter frame is still 1/2 of what it should be. Any ideas?

    Read the article

  • displaying a saved buffer in OpenGL ES

    - by Adam
    Hi everyone, so basically I have a screenshot that I've saved and want to display on the screen later. I've saved it with:

        glReadPixels(0, 0, self.bounds.size.width, self.bounds.size.height, GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer);

    And later I'm trying to write it to the screen with:

        GLuint RenderedTex;
        glGenTextures(1, &RenderedTex);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, RenderedTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, self.bounds.size.width, self.bounds.size.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer);
        glDisable(GL_TEXTURE_2D);

    I'm pretty new to OpenGL, so I don't know if I'm doing things right... actually I know I'm not, because nothing shows up. I'm also not sure how to dispose of the texture when I'm done with it. Anyone know the correct way to do this? Edit: I think I might be having a problem loading the texture because it's not a power of 2 (it's 320x480). Also, I think this code just loads the texture but doesn't draw it; I'd need a call to glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) in there somewhere...
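    For what it's worth, a minimal sketch of the missing draw step (GL ES 1.1, C). It assumes the pixel data was padded into a power-of-two texture first, since core ES 1.1 rejects NPOT sizes like 320x480, and it sets a non-mipmap min filter, because the default filter leaves a mipmap-less texture incomplete (so nothing renders):

        static const GLfloat verts[]     = { -1,-1,  1,-1,  -1,1,  1,1 };
        static const GLfloat texcoords[] = {  0,0,   1,0,   0,1,  1,1 };

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, RenderedTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts);          /* full-screen quad in NDC */
        glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        glDeleteTextures(1, &RenderedTex);               /* dispose when done */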

    Read the article

  • OpenGL ES functions not accepting values originating outside of its view

    - by Josh Elsasser
    I've been unable to figure this out on my own. I currently have an OpenGL ES setup where a view controller updates a game world (with a dt), fetches the data I need to render, passes it off to an EAGLView through two structures (built off Apple's ES1Renderer), and draws the scene. Whenever a value originates outside of the OpenGL view, it can't be used either to translate objects using glTranslatef or to set up the scene using glOrthof. If I assign a new value to something, it will work, even if it is the exact same number. The two structures each contain a variety of floating-point numbers and booleans, along with two arrays. I can log the values from within my renderer (they make it there), but I receive errors from OpenGL if I try to do anything with them. No crashes result, but the glOrthof call doesn't work if I don't set the camera values to something different. Code used to set up the scene:

        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
        // clears the color buffer bit
        glClear(GL_COLOR_BUFFER_BIT);
        glMatrixMode(GL_PROJECTION);
        // sets up the scene with an ortho projection
        glViewport(0, 0, 320, 480);
        glLoadIdentity();
        glOrthof(320, 0, dynamicData.cam_x2, dynamicData.cam_x1, 1.0, -1.0);
        glClearColor(1.0, 1.0, 1.0, 1.0);
        /* error checking code here */

    "dynamicData" (which is replaced every frame) is created within my game simulation. From within my controller, I call a method (within my simulation) that returns it, and pass the result on to the EAGLView, which passes it on to the renderer. I haven't been able to come up with a better solution for this; suggestions in this regard would be greatly appreciated as well. Also, this function doesn't work either (the values originate in the same place):

        glTranslatef(dynamicData.ship_x, dynamicData.ship_y, 0.0);

    Thanks in advance. Additional definitions follow. The structure (declared in a separate header):

        typedef struct {
            float ship_x, ship_y;
            float cam_x1, cam_x2;
        } dynamicRenderData;

    The render-data getter (and builder), called every frame:

        - (dynamicData)getDynRenderData {
            // d_rd is an ivar, zeroed on initialization
            d_rd.ship_x = mainShip.position.x;
            d_rd.ship_y = mainShip.position.y;
            d_rd.cam_x1 = d_rd.ship_x - 30.0f;
            d_rd.cam_x2 = d_rd.cam_x1 + 480.0f;
            return d_rd;
        }

    Zeroed at start (d_rd.ship_x = 0;, etc.). Setting up the view:

        // prototype (GLView)
        - (void)draw:(dynamicRenderData)dynamicData
        // prototype (renderer)
        - (void)drawView:(dynamicRenderData)dynamicData

    How it's called within the controller:

        // controller
        [glview draw:[world getDynRenderData]];
        // glview (within draw)
        [renderer drawView:dynamicData];

    Read the article

  • another question about OpenGL ES rendering to texture

    - by ensoreus
    Hello, pros and gurus! Here is another question about rendering to texture. It's all about preserving a texture between passes of an image through different filters. Many iPhone developers probably know Apple's OpenGL image-processing sample code, where the GL filters (functions) are each fed the same source image. I need to edit an image by passing it through filters sequentially, saving the state of the image being edited between passes. I am very much a noob in OpenGL, so I have spent an incredible amount of time trying to solve this. So I decided to create two FBOs and attach the source image and a temporary image as textures to render into. Here is my init routine:

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnable(GL_TEXTURE_2D);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, (GLint *)&SystemFBO);

        glImage = [self loadTexture:preparedImage]; // source image
        for (int i = 0; i < 4; i++) {
            fullquad[i].s *= glImage->s;
            fullquad[i].t *= glImage->t;
            flipquad[i].s *= glImage->s;
            flipquad[i].t *= glImage->t;
        }

        tmpImage = [self loadEmptyTexture]; // editing image
        glGenFramebuffersOES(1, &tmpImageFBO);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, tmpImageFBO);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, tmpImage->texID, 0);
        GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
        if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
            NSLog(@"failed to make complete tmp framebuffer object %x", status);
        }
        glBindTexture(GL_TEXTURE_2D, 0);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);

        glGenRenderbuffersOES(1, &glImageFBO);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, glImageFBO);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, glImage->texID, 0);
        status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
        if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
            NSLog(@"failed to make complete cur framebuffer object %x", status);
        }
        glBindTexture(GL_TEXTURE_2D, 0);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);

    When the user drags the slider, this routine is invoked to apply the changes:

        -(void)setContrast:(CGFloat)value {
            contrast = value;
            if (flag != mfContrast) {
                NSLog(@"contrast: dumped");
                flag = mfContrast;
                glBindFramebufferOES(GL_FRAMEBUFFER_OES, glImageFBO);
                glClearColor(1, 1, 1, 1);
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                glOrthof(0, 512, 0, 512, -1, 1);
                glMatrixMode(GL_MODELVIEW);
                glLoadIdentity();
                glScalef(512, 512, 1);
                glBindTexture(GL_TEXTURE_2D, tmpImage->texID);
                glViewport(0, 0, 512, 512);
                glVertexPointer(2, GL_FLOAT, sizeof(V2fT2f), &fullquad[0].x);
                glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &fullquad[0].s);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
                glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
            }
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, tmpImageFBO);
            glClearColor(0, 0, 0, 1);
            glClear(GL_COLOR_BUFFER_BIT);
            glEnable(GL_TEXTURE_2D);
            glActiveTexture(GL_TEXTURE0);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(0, 512, 0, 512, -1, 1);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glScalef(512, 512, 1);
            glBindTexture(GL_TEXTURE_2D, glImage->texID);
            glViewport(0, 0, 512, 512);
            [self contrastProc:fullquad value:contrast];
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
            [self redraw];
        }

    There are two cases: if it is the same filter (edit mode), I bind tmpImageFBO to draw into the tmpImage texture and edit the glImage texture. contrastProc is a routine taken straight from Apple's sample. If it is another mode, I save the edited image by drawing the tmpImage texture into the source texture glImage, bound via glImageFBO. After that I call redraw:

        glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrthof(0, kTexWidth, 0, kTexHeight, -1, 1);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glScalef(kTexWidth, kTexHeight, 1);
        glBindTexture(GL_TEXTURE_2D, glImage->texID);
        glViewport(0, 0, kTexWidth, kTexHeight);
        glVertexPointer(2, GL_FLOAT, sizeof(V2fT2f), &flipquad[0].x);
        glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &flipquad[0].s);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);

    Here it binds the visible framebuffer and displays the glImage texture. The result is VERY aggressive filtering: increasing the contrast by just 0.2 brings the image to a state comparable with a 0.9 contrast setting in Apple's sample project. I'm missing something obvious, I guess. Interestingly, if I disable the line glBindTexture(GL_TEXTURE_2D, glImage->texID); in the setContrast routine, it makes no difference. At all. If I replace tmpImageFBO with SystemFBO to draw glImage directly to the display (and disable the line that invokes redraw), everything works fine. Please, HELP ME!!! :(
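    For comparison, the usual shape of this technique is a ping-pong between the two FBOs, so a pass never samples the texture it is currently rendering into. A minimal sketch, with hypothetical names, of what each filter pass then reduces to (GL ES 1.1, C):

        /* Read from srcTex, write into the texture attached to dstFBO. */
        void apply_filter_pass(GLuint srcTex, GLuint dstFBO, void (*filter)(void)) {
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, dstFBO);
            glBindTexture(GL_TEXTURE_2D, srcTex);
            filter();                              /* draws one full-screen quad */
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
        }
        /* usage: apply_filter_pass(texA, fboB, contrastPass); then swap A and B */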

    Read the article

  • iPhone OpenGL ES

    - by Chaoz
    Hello! Since I'm not familiar with iPhone development, I'd like to know whether it is possible to use OpenGL ES 1.0 on the iPhone 3GS rather than 2.0. I'd like to share a code base across different mobile platforms, and not having to deal with the programmable pipeline from OpenGL ES 2.0 could speed up an initial build. Thanks

    Read the article

  • OpenGL ES: render but don't show on display

    - by Sponge
    I have written an object-selection algorithm that picks objects by their color. I give every object a unique color, and then I just use the glReadPixels method to check which object was selected. This works fine and is really fast, but the problem is that the frame is displayed on the screen with all the picking colors, so the screen flashes every time you select something. So my question is: how do I render everything into the correct buffer without displaying it on the screen, to avoid these flashes?
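    A minimal sketch of the standard answer, assuming the OES_framebuffer_object extension is available (C, GL ES 1.1): render the picking pass into an offscreen FBO, read the pixel back from there, and never present that frame. Sizes and names here are hypothetical:

        GLuint pickFBO, pickTex;
        glGenTextures(1, &pickTex);
        glBindTexture(GL_TEXTURE_2D, pickTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);     /* POT size assumed */
        glGenFramebuffersOES(1, &pickFBO);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, pickFBO);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, pickTex, 0);

        /* ... draw the scene with the unique per-object colors ... */
        GLubyte pixel[4];
        glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);

        glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);   /* back to the visible buffer */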

    Read the article

  • Drawing with element array in OpenGL ES

    - by FatalMojo
    Hello! I am trying to use OpenGL ES to draw an x-by-y matrix of squares about an arbitrary point. I have an array sideVertice[] that holds a series of vertex structs defined as such:

        typedef struct {
            GLfloat x;
            GLfloat y;
            GLfloat z;
        } Vertex3D;

    and an element array defined as such:

        GLubyte elementArray[];

    My draw loop is as such:

        glLoadIdentity();
        glVertexPointer(3, GL_FLOAT, 0, cube.sideVertice);
        for (int i = 0; i < ((cube.cubeSize + 1) * (cube.cubeSize + 1)); i++) {
            for (int j = 0; j <= 3; j++) {
                elementArray[j] = j + i * 4;
                glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, elementArray);
            }
        }
        for (int i = 0; i <= 3; i++)
            elementArray[i] = i;

    However, the visual output is corrupted and I cannot figure out what the problem is. Here is an output of the vertices held in the array (log timestamps trimmed):

        vertex[0][0]  x:-26.000000 y:1.000000
        vertex[1][1]  x:-26.000000 y:26.000000
        vertex[2][2]  x:-1.000000  y:1.000000
        vertex[3][3]  x:-1.000000  y:26.000000
        Next Face
        vertex[0][4]  x:1.000000   y:1.000000
        vertex[1][5]  x:1.000000   y:26.000000
        vertex[2][6]  x:26.000000  y:1.000000
        vertex[3][7]  x:26.000000  y:26.000000
        Next Face
        vertex[0][8]  x:-26.000000 y:-26.000000
        vertex[1][9]  x:-26.000000 y:-1.000000
        vertex[2][10] x:-1.000000  y:-26.000000
        vertex[3][11] x:-1.000000  y:-1.000000
        Next Face
        vertex[0][12] x:1.000000   y:-26.000000
        vertex[1][13] x:1.000000   y:-1.000000
        vertex[2][14] x:26.000000  y:-26.000000
        vertex[3][15] x:26.000000  y:-1.000000

    Any ideas?
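    One hedged observation about the draw loop above: glDrawElements sits inside the j-loop, so each quad is drawn four times, and on the first three of those iterations elementArray is only partially filled with the new indices. A minimal sketch of the same loop with the draw call hoisted out (names follow the post):

        for (int i = 0; i < ((cube.cubeSize + 1) * (cube.cubeSize + 1)); i++) {
            for (int j = 0; j <= 3; j++)
                elementArray[j] = j + i * 4;       /* fill all 4 indices first */
            glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, elementArray);
        }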

    Read the article

  • Android OpenGL es "glDrawTexfOES" draws upside down

    - by Alle
    I'm using OpenGL for Android to draw my 2D images. Whenever I draw something using the code:

        gl.glViewport(aspectRatioOffset, 0, screenWidth, screenHeight);
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        GLU.gluOrtho2D(gl, aspectRatioOffset, screenWidth + aspectRatioOffset, screenHeight, 0);
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();
        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, myScene.neededGraphics.get(ID).get(animationID).get(animationIndex));
        crop[0] = 0;
        crop[1] = 0;
        crop[2] = width;
        crop[3] = height;
        ((GL11Ext)gl).glDrawTexfOES(x, y, z, width, height);

    I get an upside-down result. I've seen people solve this by doing:

        crop[0] = 0;
        crop[1] = height;
        crop[2] = width;
        crop[3] = -height;

    However, this hurts the logic in my application, so I would like the result not to be flipped upside down. Does anyone know why it happens, and any way of avoiding or solving it?
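    A hedged suggestion rather than a definitive fix: the flip happens because Android decodes bitmaps top-down while glDrawTexOES works in a bottom-left-origin coordinate space. Keeping the inverted crop rect inside a single helper means the rest of the application's logic never sees the flipped values (C-style GL ES 1.1, names hypothetical):

        void draw_sprite(GLuint tex, GLfloat x, GLfloat y, GLfloat z, GLint w, GLint h) {
            GLint crop[4] = { 0, h, w, -h };   /* flip the source rows once, here */
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);
            glDrawTexfOES(x, y, z, (GLfloat)w, (GLfloat)h);
        }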

    Read the article

  • Overlay an image over video using OpenGL ES shaders

    - by BlueVoodoo
    I am trying to understand the basic concepts of OpenGL. A week into it, I am still far from there. Once I am in GLSL I know what to do, but I find getting there is the tricky bit. I am currently able to pass in video pixels, which I manipulate and present. I have then been trying to add a still image as an overlay. This is where I get lost. My end goal is to end up in the same fragment shader with pixel data from both my video and my still image. I imagine this means I need two textures and have to pass on two pixel buffers. I am currently passing the video pixels like this:

        glGenTextures(1, &textures[0]);
        // target, texture
        glBindTexture(GL_TEXTURE_2D, textures[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, buffer);

    Would I then repeat this process on textures[1] with the second buffer from the image? If so, do I then bind both GL_TEXTURE0 and GL_TEXTURE1? And would my shader look something like this once I am in it?

        uniform sampler2D videoData;
        uniform sampler2D imageData;

    It seems that no matter what combination I try, image and video always end up being just video data in both of these. Sorry for the many questions merged in here; I just want to clear up my many assumptions and move on. To clarify the question a bit: what do I need to do to add pixels from a still image to the process described? ("Easy to understand" sample code or any type of hint would be appreciated.)
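    The step the question seems to be missing: each sampler uniform has to be pointed at a texture unit, and each texture must be bound while that unit is active; otherwise both samplers read unit 0 and everything looks like video. A minimal sketch (GL ES 2.0, C; the texture and program names are hypothetical):

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, videoTexture);
        glUniform1i(glGetUniformLocation(program, "videoData"), 0);

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, imageTexture);
        glUniform1i(glGetUniformLocation(program, "imageData"), 1);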

    Read the article

  • OpenGL ES perspective projection

    - by TimeManx
    I'm having a hard time understanding how glFrustum and gluPerspective work. I understand the concept of perspective projection, but the functions aren't behaving how I expect them to. For example, if I set the frustum this way:

        glFrustumf(0, 10, 0, 10, 1, 100);

    and have a rectangle at the points (0, 0, 1), (0, 10, 1), (10, 10, 1), (10, 0, 1), then the rectangle is drawn with its left edge at -5 and its right edge at 5, so the left half of the rectangle isn't visible. And if x is translated, I'd expect y to be too, but that doesn't happen either. In whatever examples I've seen, the coordinates for the projection matrix are given as glFrustumf(-10, 10, -10, 10, 1, 100), but either way, which part is shown should depend on the rectangle's coordinates, right?
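    A worked sketch of the mapping may help here. For glFrustumf(l, r, b, t, n, f), the [l, r] range is pinned to the screen edges only at the near plane; the visible x-range widens linearly with depth, which is why a rectangle can overhang the edge of the screen. (Note too that with the default view the camera looks down -z, so geometry needs a negative eye-space z to be in front of the eye at all.) A small C program under those textbook assumptions:

        #include <stdio.h>

        int main(void) {
            float l = 0, r = 10, n = 1;
            for (float depth = 1; depth <= 4; depth *= 2) {    /* eye-space z = -depth */
                float xl = l * depth / n;                      /* visible x-range edges */
                float xr = r * depth / n;
                printf("at z = -%g the visible x-range is [%g, %g]\n", depth, xl, xr);
            }
            return 0;
        }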

    Read the article

  • How to properly use glDiscardFramebufferEXT

    - by Rafael Spring
    This question relates to the OpenGL ES 2.0 extension EXT_discard_framebuffer. It is unclear to me which cases justify the use of this extension. If I call glDiscardFramebufferEXT() and it puts the specified attachable images into an undefined state, this means that either:

    - I don't care about the content anymore, since it has already been used with glReadPixels(),
    - I don't care about the content anymore, since it has already been used with glCopyTexSubImage(), or
    - I shouldn't have made the render in the first place.

    Clearly, only the first two cases make sense. Or are there other cases in which glDiscardFramebufferEXT() is useful? If yes, which are they?
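    There is at least one more common case, worth a hedged mention: a depth (or stencil) attachment that is never read back at all. On tile-based GPUs such as the iPhone's, discarding it after drawing tells the driver it need not write the tile's depth values out to memory. A minimal sketch (GL ES 2.0 with the extension, C):

        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
        /* ... draw the frame ... */
        const GLenum discards[] = { GL_DEPTH_ATTACHMENT };
        glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);
        /* then resolve / present the color buffer as usual */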

    Read the article

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes consisting of hundreds of thousands of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts, and the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture-map sharing: for example, all the chairs in the scene will share a common map. There is also some multitexturing, with up to three textures overlaid in a material. I've been doing a little experimentation and reading, and I gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will show significant performance differences depending on whether there are 10 or 1000 objects, assuming that a new material is set up each time an object is displayed. So it seems that if performance is important, the scene should be sorted by material so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of various state changes, and where I get the biggest bang for the buck. For example:

    - What are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
    - Should I try to write ubershaders to minimize shader switching?
    - Should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware. I'm also not looking for heroic measures, just some guidelines from people who already have experience making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.
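    As a purely conceptual sketch of the sorting idea (plain C rather than WebGL, all structures hypothetical): sort the draw list by the most expensive state first, then walk it, issuing a state change only when the key actually changes. The cost ordering commonly quoted, most to least expensive, is program switch, then texture bind, then uniform upload, then the draw call itself.

        #include <stdlib.h>

        typedef struct { int program, texture; /* plus a geometry handle */ } DrawItem;

        static int by_state(const void *a, const void *b) {
            const DrawItem *x = a, *y = b;
            if (x->program != y->program) return x->program - y->program;
            return x->texture - y->texture;
        }

        void draw_sorted(DrawItem *items, size_t n) {
            qsort(items, n, sizeof *items, by_state);
            int prog = -1, tex = -1;
            for (size_t i = 0; i < n; i++) {
                if (items[i].program != prog) { prog = items[i].program; /* useProgram   */ }
                if (items[i].texture != tex)  { tex  = items[i].texture; /* bindTexture  */ }
                /* drawElements(...) for items[i] */
            }
        }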

    Read the article

  • Oracle business intelligence and MySQL as a data source

    - by Fekete Zoltán
    Yesterday's Oracle press release announced that a new version of MySQL Cluster, 7.1, has been released. This too signals Oracle's commitment to the development of MySQL and to open source. HWSW recently wrote about the topic in the article "Oracle offers a glimpse into the future of MySQL". Quotes from the article (translated): "At the O'Reilly MySQL Conference and Expo in Santa Clara, Oracle's chief engineer, Edward Screven, spoke in person about the future they envision for MySQL." "Screven sought to reaffirm Oracle's earlier commitments. 'We will continue to develop and improve and support MySQL,' the chief engineer stated..." Why is this interesting? Because MySQL is one of the supported data sources of the Oracle Business Intelligence suites. Because the heart of the Oracle BI suites, the Oracle BI Server, is uniquely good at integrating heterogeneous data sources, all through a single, shared business metadata layer! Other vendors, among other things, cannot do this.

    Read the article

  • Another marriage plan on the database market: Sybase and SAP

    - by Fekete Zoltán
    Over the past year, Oracle has acquired more than 50 companies on the technology and applications side, most recently Sun, a company innovative in hardware, operating systems, Java, IDM, virtualization, and many other areas. Moreover, Oracle strengthens its portfolio with best-of-breed, industry-leading companies and solutions. For long years now, Oracle has also been in the leaders' Magic Quadrant of Gartner in the data warehouse area. The leading solution in this field is running Oracle Database on optimized hardware: Exadata / Database Machine. Oracle Database is optimized for transaction processing, for data warehouse processing, and for running both workloads in one environment. SAP used to be rather dismissive of Oracle's best-of-breed acquisition strategy, saying it would lead nowhere. :) Now, from among the remaining independent companies, it has set its sights on Sybase. The BBC news story. Does 5.8 billion dollars seem a bit much? Interestingly, according to the article, SAP's share price fell 40 cents on news of the acquisition plan.

    Read the article

  • My favorite BI and DW blogs, part 7: Oracle Data Warehousing

    - by Fekete Zoltán
    I recommend the following substantial blog to the gentle reader's attention: The Data Warehouse Insider: http://blogs.oracle.com/datawarehousing/ It ranges from general data warehouse concepts to "best practice" lessons learned from implementations and design. Topics: star schemas, partitioning, OLAP, 3NF, parallel processing, data loading, ETL-ELT, data models, events, Exadata, Database Machine, compression, data mining, customer stories, ...

    Read the article

  • how to export bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons, and actions from Blender into an engine of sorts that I'm working on, but I'm getting the animations wrong. I can tell the basic motion paths are being followed, but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or in the way I am exporting the appropriate joint matrices from Blender in my exporter script. I'll explain the theory, the engine animation system, and my Blender export script, hoping someone might catch the error in either or all of these.

    The theory (I'm using column-major ordering, since that's what I use in the engine because it's OpenGL-based): Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I were to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix: a transform from j's local space to its parent space, which I'll denote Bj. If j were part of a joint hierarchy in the skeleton, Bj would take j space to j-1 space (that is, to its parent's space). However, in this example j is the only joint, so Bj takes j space to world space, like M does for v. Now further assume I have a set of frames, each with a second transform Cj, which works the same as Bj, only for a different, arbitrary spatial configuration of joint j. Cj still takes vertices from j space to world space, but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n, I need to:

    1. take v from world space to joint j space,
    2. modify j (while v stays fixed in j space and is thus carried along in the transformation),
    3. take v back from the modified j space to world space.

    So the mathematical implementation of the above would be v' = Cj * Bj^-1 * v. Actually, I have one doubt here. I said the mesh to which v belongs has a transform M which takes it from model space to world space, and I've also read in a couple of textbooks that it needs to be transformed from model space to joint space. But I also said in step 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiplies v' by M and not v, but I've tried changing this and it just screws things up in a different way, because there's something else wrong. Finally, if we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But since skinning is defined as v' = Cj * Bj^-1 * v, Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v.

    Now on to the implementation (Blender side). Assume the following mesh, made up of one cube whose vertices are bound to a single joint in a single-joint skeleton. Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is:

    - keyframe 0: the joint is in bind / rest pose (the way you see it in the image).
    - keyframe 30: the joint translates up (+z in Blender) some amount and at the same time rotates pi/4 rad clockwise.
    - keyframe 59: the joint goes back to the same configuration it was in at keyframe 0.

    My first source of confusion on the Blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the Python API. Right now, this is what my export script does about translating Blender's coordinate system to OpenGL's standard system:

        # World transform: Blender -> OpenGL
        worldTransform = Matrix().Identity(4)
        worldTransform *= Matrix.Scale(-1, 4, (0,0,1))
        worldTransform *= Matrix.Rotation(radians(90), 4, "X")

        # Mesh (local) transform matrix
        file.write('Mesh Transform:\n')
        localTransform = mesh.matrix_local.copy()
        localTransform = worldTransform * localTransform
        for col in localTransform.col:
            file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3]))
        file.write('\n')

    So if you will, my "world" matrix is basically the act of changing Blender's coordinate system to the default GL one, with +y up, +x right, and -z into the viewing volume. Then I also premultiply the mesh matrix M (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following. For joint bind poses:

        def DFSJointTraversal(file, skeleton, jointList):
            for joint in jointList:
                bindPoseJoint = skeleton.data.bones[joint.name]
                bindPoseTransform = bindPoseJoint.matrix_local.inverted()
                file.write('Joint ' + joint.name + ' Transform {\n')
                translationV = bindPoseTransform.to_translation()
                rotationQ = bindPoseTransform.to_3x3().to_quaternion()
                scaleV = bindPoseTransform.to_scale()
                file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
                file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
                file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
                DFSJointTraversal(file, skeleton, joint.children)
            file.write('}\n')

    Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj, so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same, only not homogeneous. For joint current / keyframe poses:

        for kfIndex in keyframes:
            bpy.context.scene.frame_set(kfIndex)
            file.write('keyframe: {:d}\n'.format(int(kfIndex)))
            for i in range(0, len(skeleton.data.bones)):
                file.write('joint: {:d}\n'.format(i))
                currentPoseJoint = skeleton.pose.bones[i]
                currentPoseTransform = currentPoseJoint.matrix
                translationV = currentPoseTransform.to_translation()
                rotationQ = currentPoseTransform.to_3x3().to_quaternion()
                scaleV = currentPoseTransform.to_scale()
                file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
                file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
                file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
            file.write('\n')

    Note that here I go for skeleton.pose.bones instead of data.bones, and that I have a choice of three matrices: matrix, matrix_basis, and matrix_channel. From the descriptions in the Python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case.

    The implementation (engine / OpenGL side): my animation subsystem does the following on each update (I'm omitting the parts of the update loop where it figures out which objects need updating, and time is hardcoded here for simplicity):

        static double time = 0;
        time = fmod((time + elapsedTime), 1.);
        uint16_t LERPKeyframeNumber = 60 * time;
        uint16_t lkeyframeNumber = 0;
        uint16_t lkeyframeIndex = 0;
        uint16_t rkeyframeNumber = 0;
        uint16_t rkeyframeIndex = 0;
        for (int i = 0; i < aClip.keyframesCount; i++) {
            uint16_t keyframeNumber = aClip.keyframes[i].number;
            if (keyframeNumber <= LERPKeyframeNumber) {
                lkeyframeIndex = i;
                lkeyframeNumber = keyframeNumber;
            } else {
                rkeyframeIndex = i;
                rkeyframeNumber = keyframeNumber;
                break;
            }
        }
        double lTime = lkeyframeNumber / 60.;
        double rTime = rkeyframeNumber / 60.;
        double blendFactor = (time - lTime) / (rTime - lTime);
        GLKMatrix4 bindPosePalette[aSkeleton.jointsCount];
        GLKMatrix4 currentPosePalette[aSkeleton.jointsCount];
        for (int i = 0; i < aSkeleton.jointsCount; i++) {
            F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i];
            F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i];
            GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor);
            GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor);
            GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor);
            GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation);
            currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation);
            currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling);
            GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q);
            inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t);
            inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s);
            if (aSkeleton.joints[i].parentIndex == -1) {
                bindPosePalette[i] = inverseBindTransform;
                currentPosePalette[i] = currentTransform;
            } else {
                bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]);
                currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform);
            }
            aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]);
        }

    Finally, this is my vertex shader:

        #version 100
        uniform mat4 modelMatrix;
        uniform mat3 normalMatrix;
        uniform mat4 projectionMatrix;
        uniform mat4 skinningPalette[6];
        uniform lowp float skinningEnabled;

        attribute vec4 position;
        attribute vec3 normal;
        attribute vec2 tCoordinates;
        attribute vec4 jointsWeights;
        attribute vec4 jointsIndices;

        varying highp vec2 tCoordinatesVarying;
        varying highp float lIntensity;

        void main() {
            tCoordinatesVarying = tCoordinates;
            vec4 skinnedVertexPosition = vec4(0.);
            for (int i = 0; i < 4; i++) {
                skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position;
            }
            vec4 skinnedNormal = vec4(0.);
            for (int i = 0; i < 4; i++) {
                skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.);
            }
            vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled);
            vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, skinningEnabled);
            vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz);
            vec3 lightPosition = vec3(0., 0., 2.);
            lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition)));
            gl_Position = projectionMatrix * modelMatrix * finalPosition;
        }

    The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down, it bobs in and out (along what I think is the Z axis according to my transform in the export script). And the rotation angle is counterclockwise instead of clockwise. If I try with more than one joint, then it's almost as if the second joint rotates in its own different coordinate space and does not follow its parent's transform 100%, which I assume it should, given my animation subsystem, which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?

    Read the article

  • Smart View and the Office versions

    - by Fekete Zoltán
    Smart View is, among other things, the query, analysis, controlling, and data-entry front end of Oracle Essbase (Hyperion). Smart View is available as an MS Excel add-in. It fully supports planning, budgeting, controlling, and analysis work. Essbase is the OLAP server closest to controllers' hearts and hands, and it is also the foundation of Hyperion Planning. Which MS Office versions does Smart View support? MS Office 2000 (XP), 2003, and 2007. This information is described in the documents listed on the Oracle Enterprise Performance Management Products - Supported Platforms Matrices page. The complete documentation of the current 11.1.1.3 version of Oracle Enterprise Performance Management can be found here.

    Read the article

  • Database Machine, 11gR2, and Tivoli Data Protection work together

    - by Fekete Zoltán
    The question came up whether backups in a Database Machine environment can be performed with the Tivoli Data Protection for Oracle software. The answer: YES. IBM Tivoli Data Protection for Oracle V5.5.2 on Linux x86_64 has now been certified with Oracle 11gR2 RAC. The current support information can be found here. Clicking on V5.5.2, you will find the following details:

        Oracle Enterprise Linux 5 with any of the following Oracle releases:
        * 64-bit Oracle Standard or Enterprise Server 10gR2, 11g, or 11gR2
        * 64-bit Real Application Clusters (RAC) 10gR2, 11g, or 11gR2

    This information is sufficient and excellent :), since the Database Machine components run on the Oracle Enterprise Linux operating system in a 64-bit architecture, and TDP can use the Oracle RMAN (Recovery Manager) component.

    Read the article

  • Data integration event for HOUG members, DW/BI and Database sections

    - by user645740
    The Oracle data integration event of the HOUG organization will take place on Wednesday, June 29, 2011. Condition of participation: HOUG membership! If you are a HOUG member, come along; if you are an Oracle user and not yet a HOUG member, join the HOUG association quickly! Topics: data integration, ELT-ETL, OWB, ODI, GoldenGate, RAC, Active Data Guard, etc. Oracle Warehouse Builder has many users in Hungary, and not only in data warehouse / BI environments. In the ELT area, Oracle focuses on Oracle Data Integrator, which works excellently in heterogeneous environments, that is, not only with Oracle databases. Click here: the HOUG organization's Oracle data integration event.

    Read the article

  • OpenGL ES 2.0 example for JOGL

    - by fjdutoit
    I've scoured the internet for the last few hours looking for an example of how to run even the most basic OpenGL ES 2 example using JOGL, but, "by Jupiter!", it has been a total fail. I tried converting the Android example from the OpenGL ES 2.0 Programming Guide examples (while at the same time looking at the WebGL example, which worked fine), yet without any success. Are there any examples out there? If anyone else wants some extra help regarding this question, see this thread on the official JogAmp forum.

    Read the article

  • OpenGL ES, orthographic projection and viewport

    - by DarkDeny
    I want to make a simple 2D game on iOS to familiarize myself with OpenGL ES. I started with Ray Wenderlich's tutorial (How To Create A Simple 2D iPhone Game with OpenGL ES 2.0 and GLKit). That tutorial is quite good, but I'm missing some pieces of the puzzle. Ray creates the orthographic projection using some magic numbers like 480 and 320. It is not clear to me why he took these numbers, and as far as I can see, the sprite is not mapped one-to-one to the iPad simulator screen's pixels. I tried to play with the parameters the ortho matrix is created with, but I cannot figure out the math here. How can I calculate the numbers (bottom, top, left, right, near, far) that will be the parameters for creating the orthographic projection matrix, so that the sprite is shown on the screen in its original size?
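    For a one-to-one pixel mapping, the ortho box should simply match the viewport, which is likely where the tutorial's 480 and 320 come from: the iPhone screen size in points. A minimal sketch, written against the ES 1.x fixed-function C API for brevity (GLKMatrix4MakeOrtho takes the same six arguments):

        glViewport(0, 0, screenWidth, screenHeight);   /* in pixels */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* left, right, bottom, top, near, far */
        glOrthof(0.0f, (GLfloat)screenWidth, 0.0f, (GLfloat)screenHeight, -1.0f, 1.0f);

    With this projection, a quad that is w x h in world units covers exactly w x h pixels of the viewport.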

    Read the article

  • OpenGL ES 1.1 vs 2.0 for 2D Graphics, with rotated sprites?

    - by Lee Olayvar
    I am having trouble finding information on whether I should choose OpenGL ES 1.1 or 2.0 for 2D graphics. OpenGL ES 1.1 on Android is a bit limited to my knowledge, and based purely on sprite count the only useful renderer is draw_texture() (as far as I know). However, that does not support rotation, and rotation is very important to me. Now that the NDK is adding support for OpenGL ES 2.0, I am trying to figure out whether there is anything that performs as well as draw_texture() but can handle rotation. Does anyone have any information on whether 2.0 can help me in this area?
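    It may be worth noting, hedged accordingly, that draw_texture() is not the only ES 1.1 path: an ordinary textured quad goes through the matrix stack, so it rotates fine under glRotatef, at some cost relative to the draw_texture fast path. A minimal sketch (GL ES 1.1, C; the sprite variables are hypothetical):

        static const GLfloat quad[] = { -0.5f,-0.5f,  0.5f,-0.5f,  -0.5f,0.5f,  0.5f,0.5f };
        static const GLfloat uv[]   = { 0,1,  1,1,  0,0,  1,0 };

        glPushMatrix();
        glTranslatef(spriteX, spriteY, 0.0f);
        glRotatef(angleDegrees, 0.0f, 0.0f, 1.0f);   /* spin about the sprite center */
        glScalef(spriteW, spriteH, 1.0f);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, quad);
        glTexCoordPointer(2, GL_FLOAT, 0, uv);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glPopMatrix();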

    Read the article
