Search Results

Search found 1704 results on 69 pages for 'es'.


  • iPhone openGLES performance tuning

    - by genesys
    Hey there! I've been trying for quite a while now to optimize the framerate of my game without really making progress. I'm running on the newest iPhone SDK and have an iPhone 3G (OS 3.1.2) device. I issue around 150 draw calls, rendering about 1900 triangles in total (all objects are textured with two texture layers using multitexturing; most textures come from the same texture atlas, stored as a PVRTC 2bpp compressed texture). This renders on my phone at around 30 fps, which seems way too low to me for only 1900 triangles. I tried many things to optimize performance, including batching the objects together, transforming the vertices on the CPU and rendering them in a single draw call. This yields 8 draw calls (as opposed to 150), but performance is about the same (fps drops to around 26). I'm using 32-byte vertices stored in an interleaved array (12 bytes position, 12 bytes normals, 8 bytes UV). I'm rendering triangle lists, and the vertices are ordered in tri-strip order. I did some profiling, but I don't really know how to interpret it.
    Instruments with Sampling yields this result: http://neo.cycovery.com/instruments_sampling.gif telling me that a lot of time is spent in "mach_msg_trap". I googled it, and it seems this function is called in order to wait for something else. But wait for what?
    Instruments with the OpenGL ES module yields this result: http://neo.cycovery.com/intstruments_openglES_debug.gif but here I really have no idea what those numbers are telling me.
    Profiling with Shark didn't tell me much either: http://neo.cycovery.com/shark_profile_release.gif The largest number is 10%, spent in DrawTriangles, and the whole rest is spread across functions with very small percentages.
    Can anyone tell me what else I could do to figure out the bottleneck, and help me interpret this profiling information? Thanks a lot!
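
    For reference, a 32-byte interleaved vertex like the one described maps naturally onto a single C struct bound with one stride per attribute pointer. A minimal sketch (the struct and function names are illustrative, not taken from the poster's code):

        #include <OpenGLES/ES1/gl.h>

        /* 32-byte interleaved vertex: 12 bytes position, 12 bytes normal, 8 bytes UV. */
        typedef struct {
            GLfloat position[3];
            GLfloat normal[3];
            GLfloat uv[2];
        } Vertex;                       /* sizeof(Vertex) == 32 */

        void bindInterleavedArray(const Vertex *v)
        {
            const GLsizei stride = sizeof(Vertex);
            /* One interleaved block keeps all three attributes of a vertex
               close together in memory, which is the point of the layout. */
            glVertexPointer(3, GL_FLOAT, stride, v->position);
            glNormalPointer(GL_FLOAT, stride, v->normal);
            glTexCoordPointer(2, GL_FLOAT, stride, v->uv);
        }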

    Read the article

  • How to stop OpenGL from applying blending to certain content? (cocos2d/iPhone/OpenGL)

    - by RexOnRoids
    Supporting info: I use cocos2d to draw a sprite (a graph background) on the screen (z: -1). I then use cocos2d to draw lines/points (z: 0) on top of the background, and I make some calls to OpenGL blending functions before the drawing to smooth out the lines.
    The problem: aside from producing smooth lines/points, calling these OpenGL blending functions seems to degrade the underlying sprite (the graph background). So there is a trade-off: I can either have (Case 1) a nice background and choppy lines/points, or (Case 2) nice smooth lines/points and a degraded background. But obviously I need both.
    The code: below is the draw() method of the CCLayer. The two cases differ only in the two OpenGL blending lines at the top of the loop: Case 1 comments them out (intact background, but choppy lines/points), Case 2 enables them (smooth lines/points, but a degraded background).
    MainScene.h (CCLayer):

        -(void)draw {
            int lastPointX = 0;
            int lastPointY = 0;
            GLfloat colorMAX = 255.0f;
            GLfloat valR;
            GLfloat valG;
            GLfloat valB;
            if ([self.myGraphManager ready]) {
                valR = (255.0f/colorMAX)*1.0f;
                valG = (255.0f/colorMAX)*1.0f;
                valB = (255.0f/colorMAX)*1.0f;
                NSEnumerator *enumerator = [[self.myGraphManager.currentCanvas graphPoints] objectEnumerator];
                GraphPoint* object;
                while ((object = [enumerator nextObject])) {
                    if (object.filled) {
                        /* Case 1 comments out the following two lines: the background
                           sprite stays intact, but the lines/points are not smooth.
                           Case 2 enables them: the lines/points are smooth, but the
                           background sprite is degraded. */
                        glEnable(GL_BLEND);
                        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
                        glHint(GL_LINE_SMOOTH_HINT, GL_DONT_CARE);
                        glEnable(GL_LINE_SMOOTH);
                        glLineWidth(1.5f);
                        glColor4f(valR, valG, valB, 1.0);
                        ccDrawLine(ccp(lastPointX, lastPointY), ccp(object.position.x, object.position.y));
                        lastPointX = object.position.x;
                        lastPointY = object.position.y;
                        glPointSize(3.0f);
                        glEnable(GL_POINT_SMOOTH);
                        glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
                        ccDrawPoint(ccp(lastPointX, lastPointY));
                    }
                }
            }
        }
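
    One experiment worth trying, sketched below under the assumption that the sprite pass depends on cocos2d's default blend state, which the loop above changes and never restores: bracket the smoothed drawing so every GL state it touches is put back afterwards.

        #include <OpenGLES/ES1/gl.h>

        void drawSmoothedGraphSegment(void)
        {
            GLboolean blendWasOn = glIsEnabled(GL_BLEND);

            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glEnable(GL_LINE_SMOOTH);
            glEnable(GL_POINT_SMOOTH);
            /* ... ccDrawLine / ccDrawPoint calls here ... */

            /* Put things back for the sprite pass. Cocos2d commonly renders
               premultiplied-alpha textures with GL_ONE / GL_ONE_MINUS_SRC_ALPHA;
               that default is an assumption, so check CC_BLEND_SRC and
               CC_BLEND_DST in your cocos2d version. */
            glDisable(GL_LINE_SMOOTH);
            glDisable(GL_POINT_SMOOTH);
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
            if (!blendWasOn) glDisable(GL_BLEND);
        }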

    Read the article

  • Will OpenGL give me any FPS improvement over CoreAnimation for scrolling a large image?

    - by Ben Roberts
    Hi, I'm considering rewriting the menu system of my iPhone app to use OpenGL, just to improve the smoothness of scrolling a big image (480x1900 px) across the screen. I'm looking at doing this as a way to improve on the method/solution described here: http://stackoverflow.com/questions/1443140/smoother-uiview. That solution was a big improvement over the previous implementation, but it's still not perfect, and as this is the first thing the user will see, I'd like it to be as flawless as possible. Will switching to OpenGL give me the sort of smooth scrolling I'm looking for? I've steered clear of OpenGL until now, as this is my first app and Core Animation has handled everything else I've thrown at it well enough. It would be good to know whether this alternative implementation is likely to work. Thanks!

    Read the article

  • Android: 2D. OpenGl or android.graphics?

    - by DroidIn.net
    I'm working with my friend on our first Android game. The basic idea is that every frame the whole surface is redrawn (one large bitmap), which is then sprinkled all over with a large number of particles to produce an effect of soapy bubbles: there's a pool of about 20 bitmaps that are picked at random, to create the illusion that all the bubbles (between 200 and 300) are different. The math engine is in C (JNI), and currently all drawing is done using the android.graphics package, very similarly to Lunar Lander (since that was the example I was working from). It works, but the animation is somewhat jerky, and I can tell by the temperature of my phone that it is very busy. Will we benefit from switching to OpenGL? And as a bonus question: what would be a good way to optimize the (Lunar Lander-like) drawing mechanism we have now?

    Read the article

  • Difference between GL10 and GLES10 on Android

    - by kayahr
    The GLSurfaceView.Renderer interface of the Android SDK gives me a GL interface as a parameter, which has the type GL10. This interface is implemented by some private internal JNI wrapper class. But there is also the class GLES10, where all the GL methods are available as static methods. Is there an important difference between them? What if I ignore the gl parameter of onDrawFrame and use the static methods of GLES10 everywhere instead? Here is an example. Instead of doing this:

        void onDrawFrame(GL10 gl) {
            drawSomething(gl);
        }
        void drawSomething(GL10 gl) {
            gl.glLoadIdentity();
            ...
        }

    I could do this:

        void onDrawFrame(GL10 gl) {
            drawSomething();
        }
        void drawSomething() {
            GLES10.glLoadIdentity();
            ...
        }

    The advantage is that I don't have to pass the GL context to all called methods. But even if it works (and it does work; I tried it), I wonder if there are any disadvantages, or reasons NOT to do it like that.

    Read the article

  • Draw a line over UIViewController

    - by ghiboz
    Hi all, I have an iPhone app with a UIViewController with some stuff inside (image, textbox, etc.). Is there a way to draw a line, or something like it, using OpenGL directly inside the UIViewController? Thanks in advance.
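
    On the GL side, a single line is just a two-vertex array; the harder part is hosting an OpenGL-backed view (e.g. an EAGLView subview with its own context and framebuffer) inside the view controller, which is omitted here. A minimal ES 1.1 sketch of the drawing itself:

        #include <OpenGLES/ES1/gl.h>

        /* Assumes a GL context is current and the projection is already set up
           in view coordinates (both are setup work outside this sketch). */
        void drawLine(float x0, float y0, float x1, float y1)
        {
            const GLfloat verts[4] = { x0, y0, x1, y1 };
            glDisable(GL_TEXTURE_2D);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, verts);
            glLineWidth(2.0f);
            glDrawArrays(GL_LINES, 0, 2);
        }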

    Read the article

  • AndEngine VS Android's Canvas VS OpenGLES - For rendering a 2D indoor vector map

    - by Orchestrator
    This is a big issue I've been trying to figure out for a long time. I'm working on an application that should include a 2D vector indoor map. The map will be drawn from an .svg file that specifies all the data for the lines, curved lines (paths) and rectangles to be drawn. My main requirements for the map are:
    1. Support for touch events, to detect exactly where the finger is touching.
    2. Great image quality, especially for drawing curved and diagonal lines (anti-aliasing).
    3. Optional, but very nice to have: a built-in ability to zoom, pan and rotate.
    So far I have tried AndEngine and Android's Canvas. With AndEngine I had trouble implementing anti-aliasing for rendering smooth diagonal lines or drawing curved lines, and as far as I understand, this is not an easy thing to implement in AndEngine. Though I have to mention that AndEngine's ability to zoom in and pan with the camera, instead of modifying the objects on the screen, was really nice to have. I also have a little experience with the built-in Android Canvas, mainly viewing simple bitmaps, but I'm not sure whether it supports all of these things, and especially whether it would produce smooth results. Last but not least, there's the option of plain OpenGL ES 1 or 2, which, as far as I understand, should be able to support all the features I require with enough work. However, it seems like something that would be hard to implement, and I've never programmed in OpenGL or anything like it, but I'm very willing to learn. To sum it all up, I need a platform that gives me the ability to do the 3 things I mentioned, but also, very importantly, lets me implement this feature as fast as possible. Any kind of answer or suggestion would be very much welcomed, as I'm very eager to solve this problem. Thanks!

    Read the article

  • How to overlay GLSurfaceView over a MapView in Android?

    - by Rajapandian
    Hi, I want to create a simple map-based application in Android where I can display my current position. Instead of overlaying a simple image on the MapView to represent the position, I want to overlay a GLSurfaceView on the MapView, but I don't know how to achieve this. Is there any way to do that? If anybody knows the solution, please help me.

    Read the article

  • iPad + OpenGL ES2. Why the Puzzling Virtual Memory Spike During Device Reorientation?

    - by dugla
    I've been spending the afternoon staring at the Xcode Instruments memory monitor, trying to decipher the following memory issue. I have a fullscreen OpenGL ES2 app running on iPad. I am fanatical about memory issues, so my retains/releases are all nicely balanced and I closely monitor memory leaks. My app is basically squeaky clean, except occasionally when I reorient the device. Portrait to landscape, back and forth, I rock the device, stress-testing my discarding and rebuilding of the OpenGL framebuffer. The ambient memory footprint of my app is about 70 MB real memory and 180 MB virtual memory. Real memory hardly varies at all during device rotations. However, the virtual memory reading sometimes briefly spikes up to 250 MB and then recedes back to 180 MB. There is no real pattern, but it is clearly related to discarding/rebuilding the framebuffer. I see random memory warnings in my NSLogs, but the app just hums along, no worries.
    1) Since iPhone OS devices don't have VM, could someone explain to me what the VM reading actually means?
    2) My app is totally leak-free and generally bulletproof despite the VM spikes. It never crashes; rock solid. Should I be concerned about this?
    3) There is clearly something happening in OpenGL framebuffer land that is causing this, but I am using the API in the proper way. Paraphrasing:
    Discarding the framebuffer:

        glDeleteRenderbuffers(1, &m_colorbuffer);
        glDeleteFramebuffers(1, &m_framebuffer);

    Rebuilding the framebuffer:

        glGenFramebuffers(1, &m_framebuffer);
        glGenRenderbuffers(1, &m_colorbuffer);

    Is there some other memory-flushing trick I have missed? Thanks for any insight. Cheers, Doug
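
    Since the spike tracks the discard/rebuild cycle, one hedged experiment is to drain the GPU before deleting, so the driver is not forced to keep the old storage alive alongside the new one while frames are still in flight. Everything here except glFinish() paraphrases the poster's own sequence; whether it flattens the VM spike is an open question:

        #include <OpenGLES/ES2/gl.h>

        void rebuildFramebuffer(GLuint *framebuffer, GLuint *colorbuffer)
        {
            /* Let all pending GPU work complete so the driver can actually
               release the old storage instead of shadowing it. */
            glFinish();

            glDeleteRenderbuffers(1, colorbuffer);
            glDeleteFramebuffers(1, framebuffer);

            glGenFramebuffers(1, framebuffer);
            glBindFramebuffer(GL_FRAMEBUFFER, *framebuffer);
            glGenRenderbuffers(1, colorbuffer);
            glBindRenderbuffer(GL_RENDERBUFFER, *colorbuffer);
            /* ... allocate storage (on iOS via -renderbufferStorage:fromDrawable:),
               then attach: */
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                      GL_RENDERBUFFER, *colorbuffer);
        }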

    Read the article

  • How to use onSensorChanged sensor data in combination with OpenGL

    - by Sponge
    I have written a TestSuite to find out how to calculate the rotation angles from the data you get in SensorEventListener.onSensorChanged(). I really hope you can complete my solution, to help people who run into the same problems as me. Here is the code; I think you will understand it after reading it. Feel free to change it: the main idea was to implement several methods for sending the orientation angles to the OpenGL view, or to any other target that needs them. Methods 1 to 4 are working; they send the rotation matrix directly to the OpenGL view. All the other methods are not working, or are buggy, and I hope someone knows how to get them working. I think the best method would be method 5, if it worked, because it would be the easiest to understand, but I'm not sure how efficient it is. The complete code isn't optimized, so I recommend not using it as-is in your project. Here it is:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        import javax.microedition.khronos.egl.EGL10;
        import javax.microedition.khronos.egl.EGLConfig;
        import javax.microedition.khronos.opengles.GL10;
        import static javax.microedition.khronos.opengles.GL10.*;

        import android.app.Activity;
        import android.content.Context;
        import android.content.pm.ActivityInfo;
        import android.hardware.Sensor;
        import android.hardware.SensorEvent;
        import android.hardware.SensorEventListener;
        import android.hardware.SensorManager;
        import android.opengl.GLSurfaceView;
        import android.opengl.GLSurfaceView.Renderer;
        import android.os.Bundle;
        import android.util.Log;
        import android.view.WindowManager;

        /**
         * This class provides a basic demonstration of how to use the
         * {@link android.hardware.SensorManager SensorManager} API to draw a 3D
         * compass.
         */
        public class SensorToOpenGlTests extends Activity implements Renderer, SensorEventListener {

            private static final boolean TRY_TRANSPOSED_VERSION = false;

            /*
             * MODUS overview:
             *
             * 1 - unbuffered data directly transferred from the rotation matrix to
             * the modelview matrix
             *
             * 2 - buffered version of 1 where both acceleration and magnetometer
             * are buffered
             *
             * 3 - buffered version of 1 where only the magnetometer is buffered
             *
             * 4 - buffered version of 1 where only the acceleration is buffered
             *
             * 5 - uses the orientation sensor and sets the angles for rotating the
             * camera with glRotatef()
             *
             * 6 - uses the rotation matrix to calculate the angles
             *
             * 7 to 12 - every possibility of how the rotation matrix could be
             * constructed in SensorManager.getRotationMatrix (see
             * http://www.songho.ca/opengl/gl_anglestoaxes.html#anglestoaxes for all
             * possibilities)
             */
            private static int MODUS = 2;

            private GLSurfaceView openglView;
            private FloatBuffer vertexBuffer;
            private ByteBuffer indexBuffer;
            private FloatBuffer colorBuffer;
            private SensorManager mSensorManager;
            private float[] rotationMatrix = new float[16];
            private float[] accelGData = new float[3];
            private float[] bufferedAccelGData = new float[3];
            private float[] magnetData = new float[3];
            private float[] bufferedMagnetData = new float[3];
            private float[] orientationData = new float[3];
            // private float[] mI = new float[16];
            private float[] resultingAngles = new float[3];
            private int mCount;
            final static float rad2deg = (float) (180.0f / Math.PI);
            private boolean mirrorOnBlueAxis = false;
            private boolean landscape;

            public SensorToOpenGlTests() {
            }

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                mSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
                openglView = new GLSurfaceView(this);
                openglView.setRenderer(this);
                setContentView(openglView);
            }

            @Override
            protected void onResume() {
                // Ideally a game should implement onResume() and onPause()
                // to take appropriate action when the activity loses focus
                super.onResume();
                openglView.onResume();
                if (((WindowManager) getSystemService(WINDOW_SERVICE))
                        .getDefaultDisplay().getOrientation() == 1) {
                    landscape = true;
                } else {
                    landscape = false;
                }
                mSensorManager.registerListener(this, mSensorManager
                        .getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                        SensorManager.SENSOR_DELAY_GAME);
                mSensorManager.registerListener(this, mSensorManager
                        .getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                        SensorManager.SENSOR_DELAY_GAME);
                mSensorManager.registerListener(this, mSensorManager
                        .getDefaultSensor(Sensor.TYPE_ORIENTATION),
                        SensorManager.SENSOR_DELAY_GAME);
            }

            @Override
            protected void onPause() {
                super.onPause();
                openglView.onPause();
                mSensorManager.unregisterListener(this);
            }

            public int[] getConfigSpec() {
                // We want a depth buffer, don't care about the
                // details of the color buffer.
                int[] configSpec = { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
                return configSpec;
            }

            public void onDrawFrame(GL10 gl) {
                // clear screen and color buffer:
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                // set target matrix to modelview matrix:
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                // init modelview matrix:
                gl.glLoadIdentity();
                // move camera away a little bit:
                if ((MODUS == 1) || (MODUS == 2) || (MODUS == 3) || (MODUS == 4)) {
                    if (landscape) {
                        // in landscape mode first remap the rotationMatrix before
                        // using it with glMultMatrixf:
                        float[] result = new float[16];
                        SensorManager.remapCoordinateSystem(rotationMatrix,
                                SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X,
                                result);
                        gl.glMultMatrixf(result, 0);
                    } else {
                        gl.glMultMatrixf(rotationMatrix, 0);
                    }
                } else {
                    // in all other modes do the rotation by hand:
                    gl.glRotatef(resultingAngles[1], 1, 0, 0);
                    gl.glRotatef(resultingAngles[2], 0, 1, 0);
                    gl.glRotatef(resultingAngles[0], 0, 0, 1);
                    if (mirrorOnBlueAxis) {
                        // this is needed for mode 6 to work
                        gl.glScalef(1, 1, -1);
                    }
                }
                // move the axes to simulate augmented behaviour:
                gl.glTranslatef(0, 2, 0);
                // draw the 3 axes on the screen:
                gl.glVertexPointer(3, GL_FLOAT, 0, vertexBuffer);
                gl.glColorPointer(4, GL_FLOAT, 0, colorBuffer);
                gl.glDrawElements(GL_LINES, 6, GL_UNSIGNED_BYTE, indexBuffer);
            }

            public void onSurfaceChanged(GL10 gl, int width, int height) {
                gl.glViewport(0, 0, width, height);
                float r = (float) width / height;
                gl.glMatrixMode(GL10.GL_PROJECTION);
                gl.glLoadIdentity();
                gl.glFrustumf(-r, r, -1, 1, 1, 10);
                gl.glMatrixMode(GL10.GL_MODELVIEW);
            }

            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                gl.glDisable(GL10.GL_DITHER);
                gl.glClearColor(1, 1, 1, 1);
                gl.glEnable(GL10.GL_CULL_FACE);
                gl.glShadeModel(GL10.GL_SMOOTH);
                gl.glEnable(GL10.GL_DEPTH_TEST);
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
                // load the 3 axes and their colors:
                float vertices[] = { 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1 };
                float colors[] = { 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1 };
                byte indices[] = { 0, 1, 0, 2, 0, 3 };
                ByteBuffer vbb;
                vbb = ByteBuffer.allocateDirect(vertices.length * 4);
                vbb.order(ByteOrder.nativeOrder());
                vertexBuffer = vbb.asFloatBuffer();
                vertexBuffer.put(vertices);
                vertexBuffer.position(0);
                vbb = ByteBuffer.allocateDirect(colors.length * 4);
                vbb.order(ByteOrder.nativeOrder());
                colorBuffer = vbb.asFloatBuffer();
                colorBuffer.put(colors);
                colorBuffer.position(0);
                indexBuffer = ByteBuffer.allocateDirect(indices.length);
                indexBuffer.put(indices);
                indexBuffer.position(0);
            }

            public void onAccuracyChanged(Sensor sensor, int accuracy) {
            }

            public void onSensorChanged(SensorEvent event) {
                // load the new values:
                loadNewSensorData(event);
                if (MODUS == 1) {
                    SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                }
                if (MODUS == 2) {
                    rootMeanSquareBuffer(bufferedAccelGData, accelGData);
                    rootMeanSquareBuffer(bufferedMagnetData, magnetData);
                    SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, bufferedMagnetData);
                }
                if (MODUS == 3) {
                    rootMeanSquareBuffer(bufferedMagnetData, magnetData);
                    SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, bufferedMagnetData);
                }
                if (MODUS == 4) {
                    rootMeanSquareBuffer(bufferedAccelGData, accelGData);
                    SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, magnetData);
                }
                if (MODUS == 5) {
                    // this mode uses the sensor data received from the orientation
                    // sensor
                    resultingAngles = orientationData.clone();
                    if ((-90 > resultingAngles[1]) || (resultingAngles[1] > 90)) {
                        resultingAngles[1] = orientationData[0];
                        resultingAngles[2] = orientationData[1];
                        resultingAngles[0] = orientationData[2];
                    }
                }
                if (MODUS == 6) {
                    SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                    final float[] anglesInRadians = new float[3];
                    SensorManager.getOrientation(rotationMatrix, anglesInRadians);
                    if ((-90 < anglesInRadians[2] * rad2deg)
                            && (anglesInRadians[2] * rad2deg < 90)) {
                        // device camera is looking at the floor;
                        // this hemisphere is working fine
                        mirrorOnBlueAxis = false;
                        resultingAngles[0] = anglesInRadians[0] * rad2deg;
                        resultingAngles[1] = anglesInRadians[1] * rad2deg;
                        resultingAngles[2] = anglesInRadians[2] * -rad2deg;
                    } else {
                        mirrorOnBlueAxis = true;
                        // device camera is looking at the sky;
                        // this hemisphere is mirrored at the blue axis
                        resultingAngles[0] = (anglesInRadians[0] * rad2deg);
                        resultingAngles[1] = (anglesInRadians[1] * rad2deg);
                        resultingAngles[2] = (anglesInRadians[2] * rad2deg);
                    }
                }
                if (MODUS == 7) {
                    SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                    rotationMatrix = transpose(rotationMatrix);
                    /*
                     * this assumes that the rotation matrices are multiplied in
                     * x y z order: Rx*Ry*Rz
                     */
                    resultingAngles[2] = (float) (Math.asin(rotationMatrix[2]));
                    final float cosB = (float) Math.cos(resultingAngles[2]);
                    resultingAngles[2] = resultingAngles[2] * rad2deg;
                    resultingAngles[0] = -(float) (Math.acos(rotationMatrix[0] / cosB)) * rad2deg;
                    resultingAngles[1] = (float) (Math.acos(rotationMatrix[10] / cosB)) * rad2deg;
                }
                if (MODUS == 8) {
                    SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                    rotationMatrix = transpose(rotationMatrix);
                    /*
                     * this assumes that the rotation matrices are multiplied in
                     * z y x order
                     */
                    resultingAngles[2] = (float) (Math.asin(-rotationMatrix[8]));
                    final float cosB = (float) Math.cos(resultingAngles[2]);
                    resultingAngles[2] = resultingAngles[2] * rad2deg;
                    resultingAngles[1] = (float) (Math.acos(rotationMatrix[9] / cosB)) * rad2deg;
                    resultingAngles[0] = (float) (Math.asin(rotationMatrix[4] / cosB)) * rad2deg;
                }
                if (MODUS == 9) {
                    SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                    rotationMatrix = transpose(rotationMatrix);
                    /*
                     * this assumes that the rotation matrices are multiplied in
                     * z x y order
                     *
                     * note: the z axis looks good on this one
                     */
                    resultingAngles[1] = (float) (Math.asin(rotationMatrix[9]));
                    final float minusCosA = -(float) Math.cos(resultingAngles[1]);
                    resultingAngles[1] = resultingAngles[1] * rad2deg;
                    resultingAngles[2] = (float) (Math.asin(rotationMatrix[8] / minusCosA)) * rad2deg;
                    resultingAngles[0] = (float) (Math.asin(rotationMatrix[1] / minusCosA)) * rad2deg;
                }
                if (MODUS == 10) {
                    SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                    rotationMatrix = transpose(rotationMatrix);
                    /*
                     * this assumes that the rotation matrices are multiplied in
                     * y x z order
                     */
                    resultingAngles[1] = (float) (Math.asin(-rotationMatrix[6]));
                    final float cosA = (float) Math.cos(resultingAngles[1]);
                    resultingAngles[1] = resultingAngles[1] * rad2deg;
                    resultingAngles[2] = (float) (Math.asin(rotationMatrix[2] / cosA)) * rad2deg;
                    resultingAngles[0] = (float) (Math.acos(rotationMatrix[5] / cosA)) * rad2deg;
                }
                if (MODUS == 11) {
                    SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                    rotationMatrix = transpose(rotationMatrix);
                    /*
                     * this assumes that the rotation matrices are multiplied in
                     * y z x order
                     */
                    resultingAngles[0] = (float) (Math.asin(rotationMatrix[4]));
                    final float cosC = (float) Math.cos(resultingAngles[0]);
                    resultingAngles[0] = resultingAngles[0] * rad2deg;
                    resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC)) * rad2deg;
                    resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC)) * rad2deg;
                }
                if (MODUS == 12) {
                    SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                    rotationMatrix = transpose(rotationMatrix);
                    /*
                     * this assumes that the rotation matrices are multiplied in
                     * x z y order
                     */
                    resultingAngles[0] = (float) (Math.asin(-rotationMatrix[1]));
                    final float cosC = (float) Math.cos(resultingAngles[0]);
                    resultingAngles[0] = resultingAngles[0] * rad2deg;
                    resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC)) * rad2deg;
                    resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC)) * rad2deg;
                }
                logOutput();
            }

            /**
             * transposes the matrix because it was transposed (inverted, but here
             * it's the same, because it's a rotation matrix) to be used for OpenGL
             *
             * @param source
             * @return
             */
            private float[] transpose(float[] source) {
                final float[] result = source.clone();
                if (TRY_TRANSPOSED_VERSION) {
                    result[1] = source[4];
                    result[2] = source[8];
                    result[4] = source[1];
                    result[6] = source[9];
                    result[8] = source[2];
                    result[9] = source[6];
                }
                // the other values in the matrix are not relevant for rotations
                return result;
            }

            private void rootMeanSquareBuffer(float[] target, float[] values) {
                final float amplification = 200.0f;
                float buffer = 20.0f;
                target[0] += amplification;
                target[1] += amplification;
                target[2] += amplification;
                values[0] += amplification;
                values[1] += amplification;
                values[2] += amplification;
                target[0] = (float) (Math
                        .sqrt((target[0] * target[0] * buffer + values[0] * values[0]) / (1 + buffer)));
                target[1] = (float) (Math
                        .sqrt((target[1] * target[1] * buffer + values[1] * values[1]) / (1 + buffer)));
                target[2] = (float) (Math
                        .sqrt((target[2] * target[2] * buffer + values[2] * values[2]) / (1 + buffer)));
                target[0] -= amplification;
                target[1] -= amplification;
                target[2] -= amplification;
                values[0] -= amplification;
                values[1] -= amplification;
                values[2] -= amplification;
            }

            private void loadNewSensorData(SensorEvent event) {
                final int type = event.sensor.getType();
                if (type == Sensor.TYPE_ACCELEROMETER) {
                    accelGData = event.values.clone();
                }
                if (type == Sensor.TYPE_MAGNETIC_FIELD) {
                    magnetData = event.values.clone();
                }
                if (type == Sensor.TYPE_ORIENTATION) {
                    orientationData = event.values.clone();
                }
            }

            private void logOutput() {
                if (mCount++ > 30) {
                    mCount = 0;
                    Log.d("Compass", "yaw0: " + (int) (resultingAngles[0])
                            + " pitch1: " + (int) (resultingAngles[1])
                            + " roll2: " + (int) (resultingAngles[2]));
                }
            }
        }

    Read the article

  • Texture2D problem

    - by Anders Karlsson
    I have a problem that is driving me crazy: I want to write a number of texts on the screen using Texture2D, but I only seem to be able to write the first one. If I write one of the labels individually it works, but not if I write all of them; only the first label is displayed. Let me show some code:

        -(void)drawText:(NSString*)theString AtX:(float)X Y:(float)Y withFont:(UIFont*)aFont {
            // set color
            glColor4f(1, 0, 0, 1.0);
            // Enable modes needed for drawing
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            Texture2D* textTexture = [[Texture2D alloc] initWithString:theString
                                                            dimensions:viewSize // 320x480
                                                             alignment:UITextAlignmentLeft
                                                                  font:aFont];
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            [textTexture drawInRect:CGRectMake(X, Y, 1, 1)];
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            [textTexture release];
        }

    When I call drawText once, it seems to display the text properly, but if I call it a second time nothing is displayed. Does somebody have an idea what it could be? The states GL_BLEND and GL_TEXTURE_2D have been enabled in the view setup function. In Texture2D the dimensions are 512x512, as I pass the whole screen to the function; if I don't pass that, the text gets enlarged and fuzzy. I am a bit uncertain about that parameter. TIA for any help.
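
    One thing worth ruling out (a guess, not a confirmed diagnosis): state left behind by the first draw, or by whatever Texture2D's drawInRect: changes internally. A sketch that fully brackets each label draw so every call starts from the same state; the helper name and the idea of caching the texture's GL name are assumptions, not part of the posted code:

        #include <OpenGLES/ES1/gl.h>

        /* Hypothetical helper: draw one already-created label texture.
           'textureName' would be the GL texture behind the Texture2D object. */
        void drawLabelTexture(GLuint textureName)
        {
            glEnable(GL_TEXTURE_2D);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            glBindTexture(GL_TEXTURE_2D, textureName);
            glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
            /* ... the drawInRect: call happens here ... */

            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        }

    Separately, allocating and releasing a Texture2D per label per frame is expensive; caching the label textures and recreating them only when the string changes is a common pattern here.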

    Read the article

  • Why does calling glMatrixMode(GL_PROJECTION) give me EXC_BAD_ACCESS in an iPhone app?

    - by MrDatabase
    I have an iPhone app where I call these three functions in appDidFinishLaunching:

        glMatrixMode(GL_PROJECTION);
        glOrthof(0, rect.size.width, 0, rect.size.height, -1, 1);
        glMatrixMode(GL_MODELVIEW);

    When stepping through with the debugger, I get EXC_BAD_ACCESS when I execute the first line. Any ideas why this is happening? By the way, I have another application where I do the same thing and it works fine, so I've tried to duplicate everything from that app (#imports, adding the OpenGLES framework, etc.), but now I'm just stuck.
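
    The usual cause of a crash on the very first gl* call is that no GL context is current on the calling thread; every gl* function dereferences the current context. A sketch of the ordering that has to hold before any GL call runs (the function and variable names are placeholders):

        #include <OpenGLES/ES1/gl.h>

        void setupProjection(float width, float height)
        {
            /* Precondition: an EAGLContext must already be current on this
               thread (on iPhone: [EAGLContext setCurrentContext:context])
               and the framebuffer must exist. Without that, the first gl*
               call walks into a NULL context and raises EXC_BAD_ACCESS. */
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(0.0f, width, 0.0f, height, -1.0f, 1.0f);
            glMatrixMode(GL_MODELVIEW);
        }

    In the app that works, the view has likely already created its context and made it current by the time appDidFinishLaunching runs; comparing when the EAGLView is initialized in both apps is a reasonable first check.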

    Read the article

  • iPad GLSL. From within a fragment shader how do I get the surface - not vertex - normal

    - by dugla
    Is it possible to access the surface normal, the normal associated with the plane of a fragment, from within a fragment shader? Or perhaps this can be done in the vertex shader? Is all knowledge of the associated geometry lost when we go down the shader pipeline, or is there some clever way of recovering that information in either the vertex or fragment shader? Thanks in advance. Cheers, Doug twitter: @dugla
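
    If the GL_OES_standard_derivatives extension is available, a fragment shader can reconstruct the face normal from screen-space derivatives of an interpolated position varying (normalize the cross product of dFdx and dFdy of the position). Where it is not available, a common fallback is to compute per-face normals on the CPU and duplicate them across each triangle's three vertices, so the interpolated normal is constant over the face. A sketch of that fallback:

        #include <math.h>

        /* Face normal of triangle (a, b, c): normalize((b - a) x (c - a)).
           Writing the result into all three vertices makes the interpolated
           normal constant across the face, i.e. the surface normal. */
        void faceNormal(const float a[3], const float b[3], const float c[3],
                        float out[3])
        {
            float u[3] = { b[0]-a[0], b[1]-a[1], b[2]-a[2] };
            float v[3] = { c[0]-a[0], c[1]-a[1], c[2]-a[2] };
            out[0] = u[1]*v[2] - u[2]*v[1];
            out[1] = u[2]*v[0] - u[0]*v[2];
            out[2] = u[0]*v[1] - u[1]*v[0];
            float len = sqrtf(out[0]*out[0] + out[1]*out[1] + out[2]*out[2]);
            if (len > 0.0f) {
                out[0] /= len;
                out[1] /= len;
                out[2] /= len;
            }
        }

    The cost of the fallback is that shared vertices must be duplicated per face, so the vertex count grows; that trade-off is inherent to flat shading.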

    Read the article

  • iPhone cocos2d CCTMXTiledMap tile transitions

    - by Jeff Johnson
    I am using cocos2d on the iPhone and am wondering if it is possible to use a texture mask to create tile transitions (a fringe layer). For example, with a grass tile and a dirt tile, I would want a tile that has both grass and dirt in it. Has anyone done this, or is the only way to create one tile for every possible transition?
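
    The mask idea can be expressed with ES 1.1 texture combiners, which cocos2d's GL context supports: unit 0 carries the dirt tile, unit 1 carries a grass tile whose alpha channel acts as the transition mask, and GL_INTERPOLATE blends the two per pixel. A sketch, assuming both textures are already bound to their units:

        #include <OpenGLES/ES1/gl.h>

        void setupTileTransitionCombiner(void)
        {
            /* Unit 0: base (dirt) tile. */
            glActiveTexture(GL_TEXTURE0);
            glEnable(GL_TEXTURE_2D);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

            /* Unit 1: overlay (grass) tile; its alpha selects grass vs. dirt.
               GL_INTERPOLATE computes Arg0*Arg2 + Arg1*(1 - Arg2). */
            glActiveTexture(GL_TEXTURE1);
            glEnable(GL_TEXTURE_2D);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
            glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
            glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);   /* grass */
            glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
            glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS);  /* dirt  */
            glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
            glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_TEXTURE);   /* mask  */
            glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);
        }

    With this set up, a single quad drawn once samples both tiles, so one grass tile plus one dirt tile plus a handful of alpha masks can replace a hand-drawn tile for every transition shape.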

    Read the article

  • Blending transparent textures with depth

    - by l.thee.a
    I am trying to blend textures which have transparent areas:

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, ...);
        glVertexPointer(2, GL_FLOAT, 0, ...);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    Unless I add glDisable(GL_DEPTH_TEST), the transparent parts of the top textures overwrite everything beneath them (instead of blending). Is there any way to do this without disabling depth? I have tried various blending functions, but none of them helped.
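
    Two standard options, sketched below. The first keeps the depth test but stops transparent fragments from writing depth (draw opaque geometry first, then transparent geometry roughly back to front); the second discards fully transparent texels with the alpha test so they never touch the depth buffer. The threshold value is a tunable assumption.

        #include <OpenGLES/ES1/gl.h>

        void drawTransparentPass(void)
        {
            /* Option 1: test against depth, but don't write it. */
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_FALSE);          /* read-only depth for this pass */
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
            /* ... glDrawArrays for the transparent quads, back to front ... */
            glDepthMask(GL_TRUE);
        }

        void enableCutoutRejection(void)
        {
            /* Option 2 (hard-edged cutouts): reject transparent texels entirely,
               so they neither blend nor write depth. */
            glEnable(GL_ALPHA_TEST);
            glAlphaFunc(GL_GREATER, 0.1f);
        }

    The depth-write trick is the usual answer when the transparent parts should still be hidden behind opaque geometry; the alpha test is simpler but only works for all-or-nothing transparency.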

    Read the article

  • How does glClear() improve performance?

    - by Jon-Eric
    Apple's Technical Q&A on addressing flickering (QA1650) includes the following paragraph. (Emphasis mine.) You must provide a color to every pixel on the screen. At the beginning of your drawing code, it is a good idea to use glClear() to initialize the color buffer. A full-screen clear of each of your color, depth, and stencil buffers (if you're using them) at the start of a frame can also generally improve your application's performance. On other platforms, I've always found it to be an optimization to not clear the color buffer if you're going to draw to every pixel. (Why waste time filling the color buffer if you're just going to overwrite that clear color?) How can a call to glClear() improve performance?

    Read the article

  • Android source code not working, reading frame buffer through glReadPixels

    - by Muhammad Ali Rajput
    Hi, I am new to Android development and have an assignment to read frame-buffer data after a specified interval of time. I have come up with the following code:

        public class mainActivity extends Activity {
            Bitmap mSavedBM;
            private EGL10 egl;
            private EGLDisplay display;
            private EGLConfig config;
            private EGLSurface surface;
            private EGLContext eglContext;
            private GL11 gl;
            protected int width, height;

            // Called when the activity is first created.
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // get the screen width and height
                DisplayMetrics dm = new DisplayMetrics();
                getWindowManager().getDefaultDisplay().getMetrics(dm);
                int screenWidth = dm.widthPixels;
                int screenHeight = dm.heightPixels;
                String SCREENSHOT_DIR = "/screenshots";
                initGLFr(); // GL view initialized.
                // this gets the screen into mSavedBM:
                savePixels(0, 10, screenWidth, screenHeight, gl);
                // now save the bitmap (the screen capture) to some location:
                saveBitmap(mSavedBM, SCREENSHOT_DIR, "capturedImage");
                // this displays the content on the screen:
                setContentView(R.layout.main);
            }

            private void initGLFr() {
                egl = (EGL10) EGLContext.getEGL();
                display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
                int[] ver = new int[2];
                egl.eglInitialize(display, ver);
                int[] configSpec = { EGL10.EGL_NONE };
                EGLConfig[] configOut = new EGLConfig[1];
                int[] nConfig = new int[1];
                egl.eglChooseConfig(display, configSpec, configOut, 1, nConfig);
                config = configOut[0];
                eglContext = egl.eglCreateContext(display, config, EGL10.EGL_NO_CONTEXT, null);
                surface = egl.eglCreateWindowSurface(display, config, SurfaceHolder.SURFACE_TYPE_GPU, null);
                egl.eglMakeCurrent(display, surface, surface, eglContext);
                gl = (GL11) eglContext.getGL();
            }

            public void savePixels(int x, int y, int w, int h, GL10 gl) {
                if (gl == null) return;
                synchronized (this) {
                    if (mSavedBM != null) {
                        mSavedBM.recycle();
                        mSavedBM = null;
                    }
                }
                int b[] = new int[w * (y + h)];
                int bt[] = new int[w * h];
                IntBuffer ib = IntBuffer.wrap(b);
                ib.position(0);
                gl.glReadPixels(x, 0, w, y + h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
                for (int i = 0, k = 0; i < h; i++, k++) {
                    // The OpenGL bitmap layout is incompatible with the Android
                    // bitmap layout, so some corrections need to be done.
                    for (int j = 0; j < w; j++) {
                        int pix = b[i * w + j];
                        int pb = (pix >> 16) & 0xff;
                        int pr = (pix << 16) & 0x00ff0000;
                        int pix1 = (pix & 0xff00ff00) | pr | pb;
                        bt[(h - k - 1) * w + j] = pix1;
                    }
                }
                Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
                synchronized (this) {
                    mSavedBM = sb;
                }
            }

            static String saveBitmap(Bitmap bitmap, String dir, String baseName) {
                try {
                    File sdcard = Environment.getExternalStorageDirectory();
                    File pictureDir = new File(sdcard, dir);
                    pictureDir.mkdirs();
                    File f = null;
                    for (int i = 1; i < 200; ++i) {
                        String name = baseName + i + ".png";
                        f = new File(pictureDir, name);
                        if (!f.exists()) {
                            break;
                        }
                    }
                    if (!f.exists()) {
                        String name = f.getAbsolutePath();
                        FileOutputStream fos = new FileOutputStream(name);
                        bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
                        fos.flush();
                        fos.close();
                        return name;
                    }
                } catch (Exception e) {
                } finally {
                    // if (fos != null) {
                    //     fos.close();
                    // }
                }
                return null;
            }
        }

    Also, if someone can direct me to a better way to read the framebuffer, that would be great. I am using Android 2.2 and a virtual device of API level 8. I have gone through many previous discussions and have found that we cannot read the frame buffer directly through "/dev/graphics/fb0". Thanks, Muhammad Ali

    Read the article

  • iPhone + OpenGL. How Do I Correctly Switch From Landscape to Portrait?

    - by dugla
    Because of the additional complexity of drawing via an EAGLView vs. a UIView, I was wondering if someone has found a robust way to handle changing the device orientation from landscape to portrait. One approach is to tear down the framebuffer and rebuild it from scratch, which would require saving/restoring scene state. The other would be far simpler: just rotate and resize the view. What is the best practice for this? Thanks, Doug
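
    If the rotate-and-resize route is taken, the GL side mostly reduces to updating the viewport and projection for the swapped dimensions; a minimal ES 1.1 sketch (function and parameter names are placeholders). Note the caveat in the comments: on iOS, if the CAEAGLLayer's bounds actually change, the color renderbuffer's storage has to be reallocated as well, which starts to look like the teardown approach.

        #include <OpenGLES/ES1/gl.h>

        void handleOrientationChange(int newWidthPx, int newHeightPx)
        {
            /* Works as-is when the drawable size is unchanged (e.g. when the
               rotation is applied as a 90-degree transform on the view). If
               the layer is resized, re-run renderbufferStorage first. */
            glViewport(0, 0, newWidthPx, newHeightPx);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(0.0f, (GLfloat)newWidthPx, 0.0f, (GLfloat)newHeightPx, -1.0f, 1.0f);
            glMatrixMode(GL_MODELVIEW);
        }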

    Read the article

  • Drawing a texture with an alpha channel doesn't work -- draws black

    - by DevDevDev
    I am modifying GLPaint to use a different background, which in this case is white. The existing stamp they use assumes the background is black, so I made a new brush image with an alpha channel. But when I draw on the canvas, the stroke is still black. What gives? When I actually draw, I just bind the texture and it works, so something must be wrong in this initialization. Here is the code:

        - (id)initWithCoder:(NSCoder*)coder {
            CGImageRef brushImage;
            CGContextRef brushContext;
            GLubyte *brushData;
            size_t width, height;
            if (self = [super initWithCoder:coder]) {
                CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
                eaglLayer.opaque = YES;
                // In this application, we want to retain the EAGLDrawable
                // contents after a call to presentRenderbuffer.
                eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
                    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
                context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
                if (!context || ![EAGLContext setCurrentContext:context]) {
                    [self release];
                    return nil;
                }
                // Create a texture from an image.
                // First create a UIImage object from the data in an image file,
                // and then extract the Core Graphics image.
                brushImage = [UIImage imageNamed:@"test.png"].CGImage;
                // Get the width and height of the image
                width = CGImageGetWidth(brushImage);
                height = CGImageGetHeight(brushImage);
                // Texture dimensions must be a power of 2. If you write an
                // application that allows users to supply an image, you'll want
                // to add code that checks the dimensions and takes appropriate
                // action if they are not a power of 2.
                // Make sure the image exists
                if (brushImage) {
                    brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
                    brushContext = CGBitmapContextCreate(brushData, width, width, 8,
                        width * 4, CGImageGetColorSpace(brushImage),
                        kCGImageAlphaPremultipliedLast);
                    CGContextDrawImage(brushContext,
                        CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
                    CGContextRelease(brushContext);
                    glGenTextures(1, &brushTexture);
                    glBindTexture(GL_TEXTURE_2D, brushTexture);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                        GL_RGBA, GL_UNSIGNED_BYTE, brushData);
                    free(brushData);
                }
                // Set up OpenGL states
                glMatrixMode(GL_PROJECTION);
                CGRect frame = self.bounds;
                glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1);
                glViewport(0, 0, frame.size.width, frame.size.height);
                glMatrixMode(GL_MODELVIEW);
                glDisable(GL_DITHER);
                glEnable(GL_TEXTURE_2D);
                glEnable(GL_BLEND);
                glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
                glEnable(GL_POINT_SPRITE_OES);
                glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
                glPointSize(width / kBrushScale);
            }
            return self;
        }
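
    One detail worth checking (an observation about the posted code, not a confirmed fix): CGBitmapContextCreate is called with kCGImageAlphaPremultipliedLast, so the texel colors are already multiplied by alpha, and the blend function uses GL_ONE_MINUS_DST_ALPHA as the destination factor, which goes to zero wherever the destination alpha is already 1. A sketch of the blend state that matches premultiplied texels:

        #include <OpenGLES/ES1/gl.h>

        void setupBrushBlending(void)
        {
            /* Premultiplied source (from kCGImageAlphaPremultipliedLast) pairs
               with a GL_ONE source factor. GL_ONE_MINUS_DST_ALPHA as the
               destination factor zeroes the background wherever dst alpha is
               already 1 (an opaque backing), which shows up as black exactly
               where the brush is transparent. */
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        }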

    Read the article

  • Tracking finger move in order to rotate a triangle : tracking is not perfect

    - by Laurent BERNABE
    I've written a custom view using OpenGL ES 1, to let the user rotate a red triangle just by dragging it along the x axis (which gives a rotation around the Y axis). It works, but there is a bit of latency when dragging from one direction to the other (without releasing the mouse/finger). So it seems my code is not yet "goal perfect" (I am convinced that no code is perfect in itself). I thought of using a quaternion, but maybe it won't be so useful: must I really use a quaternion (or some kind of matrix)? I've designed the application for Android 4.0.3, but it could fit Android API 3 (Android 1.5) as well (at least, I think it could). So here is my main layout:
    activity_main.xml

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:orientation="vertical" >

            <com.laurent_bernabe.android.triangletournant3d.MyOpenGLView
                android:layout_width="match_parent"
                android:layout_height="match_parent" />

        </LinearLayout>

    Here is my main activity:
    MainActivity.java

        package com.laurent_bernabe.android.triangletournant3d;

        import android.app.Activity;
        import android.os.Bundle;
        import android.support.v4.app.NavUtils;
        import android.view.Menu;
        import android.view.MenuItem;

        public class MainActivity extends Activity {

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.activity_main, menu);
                return true;
            }

            @Override
            public boolean onOptionsItemSelected(MenuItem item) {
                switch (item.getItemId()) {
                case android.R.id.home:
                    NavUtils.navigateUpFromSameTask(this);
                    return true;
                }
                return super.onOptionsItemSelected(item);
            }
        }

    And finally, my OpenGL view:
    MyOpenGLView.java

        package com.laurent_bernabe.android.triangletournant3d;

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        import javax.microedition.khronos.egl.EGLConfig;
        import javax.microedition.khronos.opengles.GL10;

        import android.content.Context;
        import android.graphics.Point;
        import android.opengl.GLSurfaceView;
        import android.opengl.GLSurfaceView.Renderer;
        import android.opengl.GLU;
        import android.util.AttributeSet;
        import android.view.MotionEvent;

        public class MyOpenGLView extends GLSurfaceView implements Renderer {

            public MyOpenGLView(Context context, AttributeSet attrs) {
                super(context, attrs);
                setRenderer(this);
            }

            public MyOpenGLView(Context context) {
                this(context, null);
            }

            @Override
            public boolean onTouchEvent(MotionEvent event) {
                int actionMasked = event.getActionMasked();
                switch (actionMasked) {
                case MotionEvent.ACTION_DOWN:
                    savedClickLocation = new Point((int) event.getX(), (int) event.getY());
                    break;
                case MotionEvent.ACTION_UP:
                    savedClickLocation = null;
                    break;
                case MotionEvent.ACTION_MOVE:
                    Point newClickLocation = new Point((int) event.getX(), (int) event.getY());
                    int dx = newClickLocation.x - savedClickLocation.x;
                    angle += Math.toRadians(dx);
                    break;
                }
                return true;
            }

            @Override
            public void onDrawFrame(GL10 gl) {
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                gl.glLoadIdentity();
                GLU.gluLookAt(gl, 0f, 0f, 5f, 0f, 0f, 0f, 0f, 1f, 0f);
                gl.glRotatef(angle, 0f, 1f, 0f);
                gl.glColor4f(1f, 0f, 0f, 0f);
                gl.glVertexPointer(2, GL10.GL_FLOAT, 0, triangleCoordsBuff);
                gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
            }

            @Override
            public void onSurfaceChanged(GL10 gl, int width, int height) {
                gl.glViewport(0, 0, width, height);
                gl.glMatrixMode(GL10.GL_PROJECTION);
                gl.glLoadIdentity();
                GLU.gluPerspective(gl, 60f, (float) width / height, 0.1f, 10f);
                gl.glMatrixMode(GL10.GL_MODELVIEW);
            }

            @Override
            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                gl.glEnable(GL10.GL_DEPTH_TEST);
                gl.glClearDepthf(1.0f);
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                buildTriangleCoordsBuffer();
            }

            private void buildTriangleCoordsBuffer() {
                ByteBuffer buffer = ByteBuffer.allocateDirect(4 * triangleCoords.length);
                buffer.order(ByteOrder.nativeOrder());
                triangleCoordsBuff = buffer.asFloatBuffer();
                triangleCoordsBuff.put(triangleCoords);
                triangleCoordsBuff.rewind();
            }

            private float[] triangleCoords = { -1f, -1f, +1f, -1f, +1f, +1f };
            private FloatBuffer triangleCoordsBuff;
            private float angle = 0f;
            private Point savedClickLocation;
        }

    I don't think I really have to give you my manifest file, but I can if you think it is necessary. I've only tested on the emulator, not on a real device. So, how can I improve the reactivity? Thanks in advance.
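
    Independent of quaternions, two details in the posted onTouchEvent are worth a second look (observations, not confirmed as the whole story): the drag anchor is never updated during ACTION_MOVE, so dx grows relative to the original touch-down point rather than the previous event, which makes direction reversals lag; and Math.toRadians(dx) feeds radians into glRotatef, which expects degrees. A language-neutral sketch of the incremental version in C:

        /* Incremental drag-to-rotate: consume the delta on every event so a
           direction change responds immediately; keep the angle in degrees,
           since glRotatef takes degrees. */
        static float angleDeg = 0.0f;
        static float lastX = 0.0f;

        void onTouchDown(float x) { lastX = x; }

        void onTouchMove(float x)
        {
            float dx = x - lastX;   /* pixels since the LAST event, not the first */
            lastX = x;              /* re-anchor so reversals track instantly */
            angleDeg += dx * 0.5f;  /* 0.5 deg per pixel is an assumed sensitivity */
        }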

    Read the article

  • OpenGLES mix and match blending options complicated question

    - by DevDevDev
    So I have a background line drawing, black and white. I want to be able to draw over this, keeping the black lines and drawing only where it is white. I can do this by using

        glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);

    and the black stays black while the white gets whatever color I want. However, in the white areas I want to be able to draw some color, then pick another color and blend; this doesn't work, because the blend function is messed up. So what I was thinking is having a line-drawing framebuffer and a user-drawing framebuffer. The user draws into the user-drawing framebuffer with one blending mode, and then I switch blending options and draw into the line-drawing framebuffer. But I don't know enough OpenGL to say whether this will work, or whether it is a stupid idea. Thanks a lot.
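
    The two-framebuffer idea is workable on ES 1.1 via the OES framebuffer-object extension: accumulate the user's strokes in an offscreen texture with one blend mode, then composite that texture over the line art with another. A rough sketch of the pass structure; all the names are hypothetical and the setup (texture-backed FBO creation) is omitted:

        #include <OpenGLES/ES1/gl.h>
        #include <OpenGLES/ES1/glext.h>

        void drawFrame(GLuint strokeFBO, GLuint strokeTexture, GLuint screenFBO)
        {
            /* Pass 1: strokes blend against each other offscreen, with
               whatever function the painting tool wants. */
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, strokeFBO);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            /* ... draw the new stroke geometry ... */

            /* Pass 2: composite the accumulated strokes over the line art,
               so black lines stay black and color lands only on white. */
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, screenFBO);
            /* ... draw the line-art background ... */
            glBindTexture(GL_TEXTURE_2D, strokeTexture);
            glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
            /* ... draw a full-screen textured quad ... */
        }

    The key point is that each framebuffer pass gets its own blend state, so the stroke-on-stroke blending and the stroke-over-lines blending no longer have to share one function.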

    Read the article

  • OpenGL on the iPad

    - by Joe Cannatti
    I have an OpenGL project that is a universal project for iPhone/iPad. Basically it just renders simple models to the screen. It works great, with no OpenGL errors, on the iPhone device and the iPad simulator. On the iPad device, however, I get an OpenGL error and nothing renders: 1282 (GL_INVALID_OPERATION) immediately after calling

        [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];

    I can't seem to come up with any reason for this. Any help?
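
    GL_INVALID_OPERATION from that call often means no renderbuffer is bound at the moment -renderbufferStorage:fromDrawable: runs, since the method allocates storage for the currently bound renderbuffer; a zero-sized layer on the iPad (e.g. before layout has run) is another common culprit. A minimal ordering sketch; whether it matches the poster's setup is an assumption:

        #include <OpenGLES/ES1/gl.h>
        #include <OpenGLES/ES1/glext.h>

        void createBuffers(GLuint *framebuffer, GLuint *colorRenderbuffer)
        {
            glGenFramebuffersOES(1, framebuffer);
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, *framebuffer);
            glGenRenderbuffersOES(1, colorRenderbuffer);
            glBindRenderbufferOES(GL_RENDERBUFFER_OES, *colorRenderbuffer);
            /* The bind above must come immediately before the Objective-C call
               [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];
               also verify that layer.bounds is non-empty on the iPad at this
               point in the launch sequence. */
            glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                         GL_RENDERBUFFER_OES, *colorRenderbuffer);
        }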

    Read the article

  • iPhone Game Developers - What does your toolchain look like?

    - by slf
    For example:
    source control: git + Adobe Drive
    3d: Google SketchUp - *.dae - Blender - *.obj
    2d: Photoshop/Illustrator - *.png
    audio: Audacity - *.caf
    code: ArgoUML, Xcode, TextMate
    test: OCUnit
    build: rake, Xcode
    Feel free to mention any other tools that you think are awesome :) Changed to Community Wiki

    Read the article
