Search Results

Search found 2589 results on 104 pages for 'ef es'.

Page 6/104 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • What's the minimum iOS version that supports OpenGL ES 2.0?

    - by Shireesh Agrawal
    Hi, I am not sure if the question even makes sense. I am writing an iPhone game which uses OpenGL ES 2.0. I know that OpenGL ES 2.0 is supported on the 3GS and higher. Is there a minimum requirement for the iOS version too, like the device needs to have iOS 3.1.3 or higher? Or does it solely depend on the hardware? Thanks! -shireesh p.s. I tried to search on the net but haven't found much; perhaps I am not using the right keywords
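
    For reference, a minimal sketch (not part of the original question) of the usual runtime approach: rather than pinning a hardware or OS cutoff, try to create an ES 2.0 context and fall back if the device cannot provide one.

        // Sketch: probe for OpenGL ES 2.0 support at runtime and fall back to ES 1.1.
        EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        if (context == nil) {
            // The device (or OS) cannot create an ES 2.0 context -- use the ES 1.1 path instead.
            context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
        }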

    Read the article

  • Oracle Database 12c is available and can be downloaded

    - by user645740
    The NEW version of Oracle Database has been released: Oracle Database 12c, with numerous innovations, new capabilities, and new features. One of the most important is the Multitenant feature, built on the container database and pluggable database architecture, which primarily supports database consolidation and database cloud deployments. Automatic Data Optimization, together with the Heat Map, enables automatic compression and tiered placement of data (tiering). There are also new features in security, availability, and many other areas. The new version can be downloaded for Linux x86-64, Solaris SPARC64, and Solaris (x86-64) from the Oracle Technology Network. You can register for the launch webcast: here.

    Read the article

  • How does OpenGL ES 2 assemble primitives?

    - by stephelton
    Two things I'm quite confused about. 1) OpenGL ES 2.0 creates primitives before the vertex shader is invoked. Why, then, does it not automatically provide the vertex shader the position of the vertex? 2) OpenGL ES 2.0 supports glDrawElements(), but it does not support glEnableClientState() or GL_VERTEX_ARRAY, so how can this call possibly be used to construct primitives? NOTE: this is OpenGL ES 2.0, NOT normal OpenGL! Thanks!
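
    For context, a minimal sketch (attribute and variable names assumed) of how vertex data reaches glDrawElements in ES 2.0: there is no client state; positions are bound to a generic attribute that the vertex shader declares.

        // Sketch: ES 2.0 feeds vertices through generic attributes, not GL_VERTEX_ARRAY.
        GLint positionLoc = glGetAttribLocation(program, "a_position"); // "a_position" is an assumed shader attribute
        glEnableVertexAttribArray(positionLoc);
        glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, 0, vertices);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_BYTE, indices);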

    Read the article

  • [EF + Oracle] Inserting Data (Sequences) (2/2)

    - by JTorrecilla
    Prologue: In the previous chapter we saw how to create DB records with EF; now we are going to look at some questions specific to Oracle. One characteristic of SQL Server that differs from Oracle is "Identity". For anyone who has not worked with SQL Server: this property, which applies to integer columns, marks a column as auto-incrementing, so it is filled automatically without being written in the insert statement. In EF with SQL Server, the properties that map to Identity columns are filled after invoking the SaveChanges method. In Oracle there is no Identity property, but there is something similar: sequences. Sequences are DB objects that provide auto-incrementing values, but they are not related directly to a table. The syntax is as follows (name, increment, start value, min/max values and options):
        CREATE SEQUENCE nombre_secuencia
        INCREMENT BY numero_incremento
        START WITH numero_por_el_que_empezara
        MAXVALUE valor_maximo | NOMAXVALUE
        MINVALUE valor_minimo | NOMINVALUE
        CYCLE | NOCYCLE
        ORDER | NOORDER
    How do you get the sequence value? To obtain the next value from the sequence:
        SELECT nb_secuencia.Nextval FROM Dual
    Because there is no direct way to indicate that a column is related to a sequence, there are several ways to imitate the behavior: use a trigger (in the DB), use stored procedures or functions, or my particular option. The EF model only imports table objects, stored procedures, and functions, but not sequences. Because of that, I decided to create my own extension method to invoke the next value of a sequence:
        public static class EFSequence
        {
            public static int GetNextValue(this ObjectContext contexto, string SequenceName)
            {
                string Connection = ConfigurationManager.ConnectionStrings["JTorrecillaEntities2"].ConnectionString;
                Connection = Connection.Substring(Connection.IndexOf(@"connection string=") + 19);
                Connection = Connection.Remove(Connection.Length - 1, 1);
                using (IDbConnection con = new Oracle.DataAccess.Client.OracleConnection(Connection))
                {
                    using (IDbCommand cmd = con.CreateCommand())
                    {
                        con.Open();
                        cmd.CommandText = String.Format("Select {0}.nextval from DUAL", SequenceName);
                        return Convert.ToInt32(cmd.ExecuteScalar());
                    }
                }
            }
        }
    This ObjectContext extension method runs a query for the sequence indicated by the parameter. It takes the connection string from the app settings, removing the metadata that Visual Studio created when it generated the EF model, and then returns the next value of the sequence. The next value of a sequence is unique, so concurrent users creating records in the DB through the sequence will not get duplicates. This is my own implementation; I know there may be other and better ways to do it. If I find another way, I promise to post it. To use the example you need to add a reference to the Oracle (ODP.NET) DLL.

    Read the article

  • EF Doesn't Like Same Named Tables

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2013/07/02/153327.aspx It's another week and another restriction imposed by the Entity Framework (EF). Don't get me wrong. I like EF, but I don't like how it restricts you in different ways. At this point you may be asking yourself the question: how can you have more than one table with the same name? The answer is to have tables in different schemas. I do this to partition the data based on the area of concern. It allows security to be assigned conveniently. A lot of people don't use schemas. I love them. But this article isn't about schemas. In this situation I have two tables: Contact.Person and Employee.Person. The first contains the basic, more public information such as the name. The second contains mostly HR-specific information. I then mapped these tables to two classes. I stuck to a Table per Class (TPC) mapping, because of problems I've had in the past implementing inheritance with EF. The following code gives you the basic contents of the classes:
        [Table("Person", Schema = "Employee")]
        public class Employee {
            ...
            public int PersonId { get; set; }
            [ForeignKey("PersonId")]
            public virtual Person Person { get; set; }
        }
        [Table("Person", Schema = "Contact")]
        public class Person {
            [Key]
            public int Id { get; set; }
            ...
        }
    This seemingly simple scenario just doesn't work. The problem occurs when you try to add a Person to the DbContext. You get an InvalidOperationException with the following text: "The entity types 'Employee' and 'Person' cannot share table 'People' because they are not in the same type hierarchy or do not have a valid one to one foreign key relationship with matching primary keys between them." This is interesting for a couple of reasons. First, there is no People table in my database. Second, I have used the SetInitializer method to stop a database from being created, so it shouldn't be thinking about new tables. The solution to my problem was to change the name of my Employee.Person table. I decided to name it Employee.Employee. It's not ideal, but it gets me past the EF limitation. I hope that this article will help someone else that has the same problem.

    Read the article

  • Lighting and OpenGL ES

    - by FX
    Hi all, I'm working on getting a simple lighting right on my OpenGL ES iPhone scene. I'm displaying a simple object centered on the origin, and using an arcball to rotate it by touching the screen. All this works nicely, except I try to add one fixed light (fixed w.r.t. eye position) and it is badly screwed: the whole object (an icosahedron in this example) is lit uniformly, i.e. it all appears in the same color. I have simplified my code as much as possible so it's standalone and still reproduces what I experience: glClearColor (0.25, 0.25, 0.25, 1.); glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable (GL_DEPTH_TEST); glEnable(GL_LIGHTING); glMatrixMode (GL_PROJECTION); glLoadIdentity (); glOrthof(-1, 1, -(float)backingWidth/backingHeight, (float)backingWidth/backingHeight, -10, 10); glMatrixMode (GL_MODELVIEW); glLoadIdentity (); GLfloat ambientLight[] = { 0.2f, 0.2f, 0.2f, 1.0f }; GLfloat diffuseLight[] = { 0.8f, 0.8f, 0.8, 1.0f }; GLfloat specularLight[] = { 0.5f, 0.5f, 0.5f, 1.0f }; GLfloat position[] = { -1.5f, 1.0f, -400.0f, 0.0f }; glEnable(GL_LIGHT0); glLightfv(GL_LIGHT0, GL_AMBIENT, ambientLight); glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseLight); glLightfv(GL_LIGHT0, GL_SPECULAR, specularLight); glLightfv(GL_LIGHT0, GL_POSITION, position); glShadeModel(GL_SMOOTH); glEnable(GL_NORMALIZE); float currRot[4]; [arcball getCurrentRotation:currRot]; glRotatef (currRot[0], currRot[1], currRot[2], currRot[3]); float f[4]; f[0] = 0.5; f[1] = 0; f[2] = 0; f[3] = 1; glMaterialfv (GL_FRONT_AND_BACK, GL_AMBIENT, f); glMaterialfv (GL_FRONT_AND_BACK, GL_DIFFUSE, f); f[0] = 0.2; f[1] = 0.2; f[2] = 0.2; f[3] = 1; glMaterialfv (GL_FRONT_AND_BACK, GL_SPECULAR, f); glEnableClientState (GL_VERTEX_ARRAY); drawSphere(0, 0, 0, 1); where the drawSphere function actually draws an icosahedron: static void drawSphere (float x, float y, float z, float rad) { glPushMatrix (); glTranslatef (x, y, z); glScalef (rad, rad, rad); // Icosahedron const float vertices[] = { 0., 0., -1., 0., 0., 1., -0.894427, 0., -0.447214, 0.894427, 0., 0.447214, 0.723607, -0.525731, -0.447214, 0.723607, 0.525731, -0.447214, -0.723607, -0.525731, 0.447214, -0.723607, 0.525731, 0.447214, -0.276393, -0.850651, -0.447214, -0.276393, 0.850651, -0.447214, 0.276393, -0.850651, 0.447214, 0.276393, 0.850651, 0.447214 }; const GLubyte indices[] = { 1, 11, 7, 1, 7, 6, 1, 6, 10, 1, 10, 3, 1, 3, 11, 4, 8, 0, 5, 4, 0, 9, 5, 0, 2, 9, 0, 8, 2, 0, 11, 9, 7, 7, 2, 6, 6, 8, 10, 10, 4, 3, 3, 5, 11, 4, 10, 8, 5, 3, 4, 9, 11, 5, 2, 7, 9, 8, 6, 2 }; glVertexPointer (3, GL_FLOAT, 0, vertices); glDrawElements (GL_TRIANGLES, sizeof(indices)/sizeof(indices[0]), GL_UNSIGNED_BYTE, indices); glPopMatrix (); } A movie of what I see as the result is here. Thanks to anyone who can shed some light into this (no kidding!). I'm sure it will look embarassingly trivial to someone, but I swear I have looked at many lighting tutorials before this and am stuck.
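
    One detail worth noting (an observation, not part of the original post): the code above enables only GL_VERTEX_ARRAY, and ES 1.1 fixed-function lighting needs a per-vertex normal to vary the shading, which would be consistent with the uniform color. A minimal sketch of supplying normals, assuming a normals array with one xyz entry per vertex:

        // Sketch: give the fixed-function pipeline per-vertex normals to light against.
        glEnableClientState(GL_NORMAL_ARRAY);
        glNormalPointer(GL_FLOAT, 0, normals);   // 'normals' is an assumed array, 3 floats per vertex
        // ... then draw the icosahedron with glDrawElements() as before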

    Read the article

  • [OpenGL ES - Android] Better way to generate tiles

    - by Inoe
    Hi ! I'll start by saying that i'm REALLY new to OpenGL ES (I started yesterday =), but I do have some Java and other languages experience. I've looked a lot of tutorials, of course Nehe's ones and my work is mainly based on that. As a test, I started creating a "tile generator" in order to create a small Zelda-like game (just moving a dude in a textured square would be awsome :p). So far, I have achieved a working tile generator, I define a char map[][] array to store wich tile is on : private char[][] map = { {0, 0, 20, 11, 11, 11, 11, 4, 0, 0}, {0, 20, 16, 12, 12, 12, 12, 7, 4, 0}, {20, 16, 17, 13, 13, 13, 13, 9, 7, 4}, {21, 24, 18, 14, 14, 14, 14, 8, 5, 1}, {21, 22, 25, 15, 15, 15, 15, 6, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {26, 0, 0, 0, 0, 0, 0, 3, 2, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1} }; It's working but I'm no happy with it, I'm sure there is a beter way to do those things : 1) Loading Textures : I create an ugly looking array containing the tiles I want to use on that map : private int[] textures = { R.drawable.herbe, //0 R.drawable.murdroite_haut, //1 R.drawable.murdroite_milieu, //2 R.drawable.murdroite_bas, //3 R.drawable.angledroitehaut_haut, //4 R.drawable.angledroitehaut_milieu, //5 }; (I cutted this on purpose, I currently load 27 tiles) All of theses are stored in the drawable folder, each one is a 16*16 tile. I then use this array to generate the textures and store them in a HashMap for a later use : int[] tmp_tex = new int[textures.length]; gl.glGenTextures(textures.length, tmp_tex, 0); texturesgen = tmp_tex; //Store the generated names in texturesgen for(int i=0; i < textures.length; i++) { //Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), textures[i]); InputStream is = context.getResources().openRawResource(textures[i]); Bitmap bitmap = null; try { //BitmapFactory is an Android graphics utility for images bitmap = BitmapFactory.decodeStream(is); } finally { //Always clear and close try { is.close(); is = null; } catch (IOException e) { } } // Get a new texture name // Load it up this.textureMap.put(new Integer(textures[i]),new Integer(i)); int tex = tmp_tex[i]; gl.glBindTexture(GL10.GL_TEXTURE_2D, tex); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); bitmap.recycle(); } I'm quite sure there is a better way to handle that... I just was unable to figure it. If someone has an idea, i'm all ears. 
2) Drawing the tiles What I did was create a single square and a single texture map : /** The initial vertex definition */ private float vertices[] = { -1.0f, -1.0f, 0.0f, //Bottom Left 1.0f, -1.0f, 0.0f, //Bottom Right -1.0f, 1.0f, 0.0f, //Top Left 1.0f, 1.0f, 0.0f //Top Right }; private float texture[] = { //Mapping coordinates for the vertices 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; Then, in my draw function, I loop through the map to define the texture to use (after pointing to and enabling the buffers) : for(int y = 0; y < Y; y++){ for(int x = 0; x < X; x++){ tile = map[y][x]; try { //Get the texture from the HashMap int textureid = ((Integer) this.textureMap.get(new Integer(textures[tile]))).intValue(); gl.glBindTexture(GL10.GL_TEXTURE_2D, this.texturesgen[textureid]); } catch(Exception e) { return; } //Draw the vertices as triangle strip gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); gl.glTranslatef(2.0f, 0.0f, 0.0f); //A square takes 2x so I move +2x before drawing the next tile } gl.glTranslatef(-(float)(2*X), -2.0f, 0.0f); //Go back to the begining of the map X-wise and move 2y down before drawing the next line } This works great by I really think that on a 1000*1000 or more map, it will be lagging as hell (as a reminder, this is a typical Zelda world map : http://vgmaps.com/Atlas/SuperNES/LegendOfZelda-ALinkToThePast-LightWorld.png ). I've read things about Vertex Buffer Object and DisplayList but I couldn't find a good tutorial and nodoby seems to be OK on wich one is the best / has the better support (T1 and Nexus One are ages away). I think that's it, I've putted a lot of code but I think it helps. Thanks in advance !

    Read the article

  • iPhone OpenGL ES freezes for no reason

    - by KJ
    Hi, I'm quite new to iPhone OpenGL ES, and I'm really stuck. I was trying to implement shadow mapping on iPhone, and I allocated two 512*1024*32bit textures for the shadow map and the diffuse map respectively. The problem is that my application started to freeze and reboot the device after I added the shadow map allocation part to the code (so I guess the shadow map allocation is causing all this mess). It happens randomly, but mostly within 10 minutes. (sometimes within a few secs) And it only happens on the real iPhone device, not on the virtual device. I backtracked the problem by removing irrelevant code lines by lines and now my code is really simple, but it's still crashing (I mean, freezing). Could anybody please download my xcode project linked below and see what on earth is wrong? The code is really simple: http://www.tempfiles.net/download/201004/95922/CrashTest.html I would really appreciate if someone can help me. My iPhone is a 3GS and running on the OS version 3.1. Again, run the code and it'll take about 5 mins in average for the device to freeze and reboot. (Don't worry, it does no harm) It'll just display cyan screen before it freezes, but you'll be able to notice when it happens because the device will reboot soon, so please be patient. Just in case you can't reproduce the problem, please let me know. (That could possibly mean it's specifically my device that something's wrong with) Observation: The problem goes away when I change the size of the shadow map to 512*512. (but with the diffuse map still 512*1024) I'm desperate for help, thanks in advance! Just for the people's information who can't download the link, here is the OpenGL code: #import "GLView.h" #import <OpenGLES/ES2/glext.h> #import <QuartzCore/QuartzCore.h> @implementation GLView + (Class)layerClass { return [CAEAGLLayer class]; } - (id)initWithCoder: (NSCoder*)coder { if ((self = [super initWithCoder:coder])) { CAEAGLLayer* layer = (CAEAGLLayer*)self.layer; layer.opaque = YES; layer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool: NO], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil]; displayLink_ = nil; context_ = [[EAGLContext alloc] initWithAPI: kEAGLRenderingAPIOpenGLES2]; if (!context_ || ![EAGLContext setCurrentContext: context_]) { [self release]; return nil; } glGenFramebuffers(1, &framebuffer_); glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_); glViewport(0, 0, self.bounds.size.width, self.bounds.size.height); glGenRenderbuffers(1, &defaultColorBuffer_); glBindRenderbuffer(GL_RENDERBUFFER, defaultColorBuffer_); [context_ renderbufferStorage: GL_RENDERBUFFER fromDrawable: layer]; glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, defaultColorBuffer_); glGenTextures(1, &shadowColorBuffer_); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, shadowColorBuffer_); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); glGenTextures(1, &texture_); glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, texture_); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, 
GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); } return self; } - (void)startAnimation { displayLink_ = [CADisplayLink displayLinkWithTarget: self selector: @selector(drawView:)]; [displayLink_ setFrameInterval: 1]; [displayLink_ addToRunLoop: [NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; } - (void)useDefaultBuffers { glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, defaultColorBuffer_); glClearColor(0.0, 0.8, 0.8, 1); glClear(GL_COLOR_BUFFER_BIT); } - (void)useShadowBuffers { glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, shadowColorBuffer_, 0); glClearColor(0, 0, 0, 0); glClear(GL_COLOR_BUFFER_BIT); } - (void)drawView: (id)sender { NSTimeInterval startTime = [NSDate timeIntervalSinceReferenceDate]; [EAGLContext setCurrentContext: context_]; [self useShadowBuffers]; [self useDefaultBuffers]; glBindRenderbuffer(GL_RENDERBUFFER, defaultColorBuffer_); [context_ presentRenderbuffer: GL_RENDERBUFFER]; NSTimeInterval endTime = [NSDate timeIntervalSinceReferenceDate]; NSLog(@"FPS : %.1f", 1 / (endTime - startTime)); } - (void)stopAnimation { [displayLink_ invalidate]; displayLink_ = nil; } - (void)dealloc { if (framebuffer_) glDeleteFramebuffers(1, &framebuffer_); if (defaultColorBuffer_) glDeleteRenderbuffers(1, &defaultColorBuffer_); if (shadowColorBuffer_) glDeleteTextures(1, &shadowColorBuffer_); glDeleteTextures(1, &texture_); if ([EAGLContext currentContext] == context_) [EAGLContext setCurrentContext: nil]; [context_ release]; context_ = nil; [super dealloc]; } @end

    Read the article

  • iPhone OpenGL ES - How to Pick

    - by Ali Nadalizadeh
    I'm working on an OpenGL ES 1 app which displays a 2D grid and allows the user to navigate and scale/rotate it. I need to know the exact translation of view touch coordinates into my OpenGL world and grid cell. Are there any helpers to do the reverse of the last few transforms which I do for navigation? Or should I calculate and do the matrix stuff by hand?
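
    A minimal sketch of the reverse mapping (names and transform order assumed, not from the original post), for a 2D ortho scene that is panned, zoomed, and rotated:

        // Sketch: undo pan, zoom, and rotation to map a touch point into grid space.
        float wx = (touchX - panX) / zoom;
        float wy = (touchY - panY) / zoom;
        float gx = cosf(-angle) * wx - sinf(-angle) * wy;   // undo the rotation
        float gy = sinf(-angle) * wx + cosf(-angle) * wy;
        int cellX = (int)floorf(gx / cellSize);             // grid cell indices
        int cellY = (int)floorf(gy / cellSize);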

    Read the article

  • Marrying Core Animation with OpenGL ES

    - by Ole Begemann
    Edit: I suppose instead of the long explanation below I might also ask: Sending -setNeedsDisplay to an instance of CAEAGLLayer does not cause the layer to redraw (i.e., -drawInContext: is not called). Instead, I get this console message: <GLLayer: 0x4d500b0>: calling -display has no effect. Is there a way around this issue? Can I invoke -drawInContext: when -setNeedsDisplay is called? Long explanation below: I have an OpenGL scene that I would like to animate using Core Animation animations. Following the standard approach to animate custom properties in a CALayer, I created a subclass of CAEAGLLayer and defined a property sceneCenterPoint in it whose value should be animated. My layer also holds a reference to the OpenGL renderer: #import <UIKit/UIKit.h> #import <QuartzCore/QuartzCore.h> #import "ES2Renderer.h" @interface GLLayer : CAEAGLLayer { ES2Renderer *renderer; } @property (nonatomic, retain) ES2Renderer *renderer; @property (nonatomic, assign) CGPoint sceneCenterPoint; I then declare the property @dynamic to let CA create the accessors, override +needsDisplayForKey: and implement -drawInContext: to pass the current value of the sceneCenterPoint property to the renderer and ask it to render the scene: #import "GLLayer.h" @implementation GLLayer @synthesize renderer; @dynamic sceneCenterPoint; + (BOOL) needsDisplayForKey:(NSString *)key { if ([key isEqualToString:@"sceneCenterPoint"]) { return YES; } else { return [super needsDisplayForKey:key]; } } - (void) drawInContext:(CGContextRef)ctx { self.renderer.centerPoint = self.sceneCenterPoint; [self.renderer render]; } ... (If you have access to the WWDC 2009 session videos, you can review this technique in session 303 ("Animated Drawing")). Now, when I create an explicit animation for the layer on the keyPath @"sceneCenterPoint", Core Animation should calculate the interpolated values for the custom properties and call -drawInContext: for each step of the animation: - (IBAction)animateButtonTapped:(id)sender { CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"sceneCenterPoint"]; animation.duration = 1.0; animation.fromValue = [NSValue valueWithCGPoint:CGPointZero]; animation.toValue = [NSValue valueWithCGPoint:CGPointMake(1.0f, 1.0f)]; [self.glView.layer addAnimation:animation forKey:nil]; } At least that is what would happen for a normal CALayer subclass. When I subclass CAEAGLLayer, I get this output on the console for each step of the animation: 2010-12-21 13:59:22.180 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect. 2010-12-21 13:59:22.198 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect. 2010-12-21 13:59:22.216 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect. 2010-12-21 13:59:22.233 CoreAnimationOpenGL[7496:207] <GLLayer: 0x4e0be20>: calling -display has no effect. ... So it seems that, possibly for performance reasons, for OpenGL layers, -drawInContext: is not getting called because these layers do not use the standard -display method to draw themselves. Can anybody confirm that? Is there a way around it? Or can I not use the technique I laid out above? This would mean I would have to implement the animations manually in the OpenGL renderer (which is possible but not as elegant IMO).

    Read the article

  • Parallax backgrounds in OpenGL ES on the iPhone

    - by Scott
    I've got basically a 2d game on the iPhone and I'm trying to set up multiple backgrounds that scroll at different speeds (known as parallax backgrounds). So my thought was to just stick the backgrounds BEHIND the foreground using different z-coordinate planes, and just make them bigger than the foreground (in size) to accommodate, so that the whole thing can be scrolled (just at a different speed). And (as far as I know) I basically implemented that. The only problem is that it seems to entirely ignore whatever z-value I give it, or rather it just zeroes all of them. I see the background (I've only tested ONE background so far, to keep it simple...so for now I just have a foreground and I want one background scrolling at a different speed), but it scrolls 1:1 with my foreground, so it obviously doesn't look right, and most of it is cut off (cause it's bigger). And I've tried various z-values for the background and various near/far clipping planes...it's always the same. I'm probably just doing one simple thing wrong, but I can't figure it out. I'm wondering if it has to do with me using only 2 coordinates in glVertexPointer for the foreground? (Of course for the background I AM passing in 3) I'll post some code: This is some initial setup: glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 10.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glEnableClientState(GL_VERTEX_ARRAY); //glEnableClientState(GL_COLOR_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); //transparency glEnable (GL_BLEND); glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA); A little bit about my foreground's float array....it's interleaved. For my foreground it goes vertex x, vertex y, texture x, texture y, repeat. This all works just fine. This is my FOREGROUND rendering: glVertexPointer(2, GL_FLOAT, 4*sizeof(GLfloat), texes); glTexCoordPointer(2, GL_FLOAT, 4*sizeof(GLfloat), (GLvoid*)texes + 2*sizeof(GLfloat)); glDrawArrays(GL_TRIANGLES, 0, indexCount / 4); BACKGROUND rendering: Same drill here except this time it goes vertex x, vertex y, vertex z, texture x, texture y, repeat. Note the z value this time. I did make sure the data in this array was correct while debugging (getting the right z values). And again, it shows up...it's just not going far back in the distance like it should. glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), b1Texes); glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), (GLvoid*)b1Texes + 3*sizeof(GLfloat)); glDrawArrays(GL_TRIANGLES, 0, b1IndexCount / 5); And to move my camera, I just do a simple glTranslatef(x, y, 0.0f); I'm not understanding what I'm doing wrong cause this seems like the most basic 3D function imaginable...things further away are smaller and don't move as fast when the camera moves. Not the case for me. Seems like it should be pretty basic and not even really be affected by my projection and all that (though I've even tried doing glFrustum just for fun, no success). Please help, I feel like it's just one dumb thing. I will post more code if necessary.
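
    For context (not from the original post): with an orthographic projection like the glOrthof call above, depth produces no foreshortening at all, so z cannot create parallax on its own; the usual approach is to translate each background layer by its own fraction of the camera movement. A minimal sketch (factor and names assumed):

        // Sketch: scroll a background layer at half the camera speed instead of relying on z.
        glPushMatrix();
        glTranslatef(cameraX * 0.5f, cameraY * 0.5f, 0.0f);   // 0.5f is an assumed parallax factor
        // ... draw the background layer ...
        glPopMatrix();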

    Read the article

  • GPGPU programming with OpenGL ES 2.0

    - by Albus Dumbledore
    I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1. I am able to write the GLSL fragment shader for processing the pixels, as I've been trying out different things in an effect designer app. I am not sure, however, how I should do the other part of the task. That is, I'd like to work on the image in image coordinates and then output the result to a texture. I am aware of the gl_FragCoord variable. As far as I understand it, it goes like this: I need to set up a view (an orthographic one maybe?) and a quad in such a way that the fragment shader is applied once to each pixel in the image, and render that to a texture. But how can I achieve that, considering there's depth that may make things somewhat awkward... I'd be very grateful if anyone could help me with this rather simple task, as I am really frustrated with myself. UPDATE: It seems I'll have to use an FBO, obtained with something like this: glBindFramebuffer(...)
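
    A minimal sketch of the render-to-texture setup the UPDATE refers to (sizes and names assumed), using standard ES 2.0 calls:

        // Sketch: attach a texture to a framebuffer object and render the processing pass into it.
        GLuint fbo, outputTex;
        glGenTextures(1, &outputTex);
        glBindTexture(GL_TEXTURE_2D, outputTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTex, 0);
        glViewport(0, 0, width, height);
        // ... draw a full-screen quad with the image-processing fragment shader; the result lands in outputTex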

    Read the article

  • Replace these OpenGL functions with OpenGL ES?

    - by Constantin
    I am looking for a way to merge my PC OpenGL application and an iPhone app into one Xcode project (for convenience), so that if I make changes to the source files I can apply them to both platforms and compile for both platforms from one project. How could I accomplish this? Is there a way to do so in Xcode 4 or 3.2.5? Any help would be highly appreciated. Edit: Okay, I got this far - all in all, it seems to work with Xcode 4. My only problems are these OpenGL/GLUT functions that aren't available on the iPhone: glPushAttrib( GL_DEPTH_BUFFER_BIT | GL_LIGHTING_BIT ); glPopAttrib(); glutGet(GLUT_ELAPSED_TIME); glutSwapBuffers(); Any ideas how to fix these issues?
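
    A sketch of typical iPhone-side stand-ins (context and variable names assumed, not from the original post):

        // glutSwapBuffers() -> present the EAGL renderbuffer instead
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];
        // glutGet(GLUT_ELAPSED_TIME) -> use a monotonic clock such as CACurrentMediaTime()
        double elapsedSeconds = CACurrentMediaTime() - startTime;
        // glPushAttrib()/glPopAttrib() do not exist in OpenGL ES; save and restore the few
        // states you touch by hand, e.g.
        GLboolean lightingWasOn = glIsEnabled(GL_LIGHTING);
        // ... change state, draw ...
        if (lightingWasOn) glEnable(GL_LIGHTING); else glDisable(GL_LIGHTING);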

    Read the article

  • OpenGL ES Eraser Tool

    - by Erika
    Hi everyone, I am trying to implement an OpenGL eraser tool and I am struggling with it. I was thinking of somehow painting over the previous changes to "clear" them out. I can't just paint with the background color, because the background is a pattern, not one solid color. Can you point me in the right direction on how to implement an eraser tool? This is for iPhone OS, but that shouldn't matter. Thanks
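
    One common approach (a sketch under assumptions, not from the original question): paint into an offscreen texture layered over the patterned background, and make the eraser a brush drawn with a blend function that removes destination alpha.

        // Sketch: erase by scaling the destination by (1 - brush alpha).
        glEnable(GL_BLEND);
        glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
        // ... draw the brush geometry here; wherever the brush alpha is 1,
        // the paint layer becomes fully transparent and the background shows through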

    Read the article

  • OpenGL ES: draw an .obj file, but how?

    - by lacas
    I'd like to parse an .obj file. My parser is working well, but my rendering is not. The obj file is here; my code is: public ObjModelParser parse() { long startTime = System.currentTimeMillis(); InputStream fileIn = resources.openRawResource(resourceID); BufferedReader buffer = new BufferedReader(new InputStreamReader(fileIn)); String line=""; Log.e("model loader", "Start parsing object " + resourceID); try { while ((line = buffer.readLine()) != null) { StringTokenizer parts = new StringTokenizer(line, " "); int numTokens = parts.countTokens(); if (numTokens == 0) continue; String part = parts.nextToken(); if (part.equals(VERTEX)) { Log.e("v ", line); vertices.add(Float.parseFloat(parts.nextToken())); vertices.add(Float.parseFloat(parts.nextToken())); vertices.add(Float.parseFloat(parts.nextToken())); .... and my drawing code draws that model with TRIANGLE_STRIP and gl.glDrawArrays(rendermode, 0, coords.length/dimension); What is the mistake here? Edited: file here to show the coords from my program for a cube, which display correctly, and the coords from the .obj file, which never show. Thanks, Leslie

    Read the article

  • How do I implement AABB ray cast hit checking for OpenGL ES on the iPhone?

    - by Big Fizzy
    Basically, I draw a 3D cube, I can spin it around but I want to be able to touch it and know where on my cube's surface the user touched. I'm using for setting up, generating and spinning. Its based on the Molecules code and NeHe tutorial #5. Any help, links, tutorials and code would be greatly appreciated. I have lots of development experience but nothing much in the way of openGL and 3d. // // GLViewController.h // NeHe Lesson 05 // // Created by Jeff LaMarche on 12/12/08. // Copyright Jeff LaMarche Consulting 2008. All rights reserved. // #import "GLViewController.h" #import "GLView.h" @implementation GLViewController - (void)drawBox { static const GLfloat cubeVertices[] = { -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,-1.0f, 1.0f, -1.0f,-1.0f, 1.0f, -1.0f, 1.0f,-1.0f, 1.0f, 1.0f,-1.0f, 1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f }; static const GLubyte cubeNumberOfIndices = 36; const GLubyte cubeVertexFaces[] = { 0, 1, 5, // Half of top face 0, 5, 4, // Other half of top face 4, 6, 5, // Half of front face 4, 6, 7, // Other half of front face 0, 1, 2, // Half of back face 0, 3, 2, // Other half of back face 1, 2, 5, // Half of right face 2, 5, 6, // Other half of right face 0, 3, 4, // Half of left face 7, 4, 3, // Other half of left face 3, 6, 2, // Half of bottom face 6, 7, 3, // Other half of bottom face }; const GLubyte cubeFaceColors[] = { 0, 255, 0, 255, 255, 125, 0, 255, 255, 0, 0, 255, 255, 255, 0, 255, 0, 0, 255, 255, 255, 0, 255, 255 }; glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_FLOAT, 0, cubeVertices); int colorIndex = 0; for(int i = 0; i < cubeNumberOfIndices; i += 3) { glColor4ub(cubeFaceColors[colorIndex], cubeFaceColors[colorIndex+1], cubeFaceColors[colorIndex+2], cubeFaceColors[colorIndex+3]); int face = (i / 3.0); if (face%2 != 0.0) colorIndex+=4; glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, &cubeVertexFaces[i]); } glDisableClientState(GL_VERTEX_ARRAY); } //move this to a data model later! 
- (GLfixed)floatToFixed:(GLfloat)aValue; { return (GLfixed) (aValue * 65536.0f); } - (void)drawViewByRotatingAroundX:(float)xRotation rotatingAroundY:(float)yRotation scaling:(float)scaleFactor translationInX:(float)xTranslation translationInY:(float)yTranslation view:(GLView*)view; { glMatrixMode(GL_MODELVIEW); GLfixed currentModelViewMatrix[16] = { 45146, 47441, 2485, 0, -25149, 26775,-54274, 0, -40303, 36435, 36650, 0, 0, 0, 0, 65536 }; /* GLfixed currentModelViewMatrix[16] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 65536 }; */ //glLoadIdentity(); //glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 4.0f); // Reset rotation system if (isFirstDrawing) { //glLoadIdentity(); glMultMatrixx(currentModelViewMatrix); [self configureLighting]; isFirstDrawing = NO; } // Scale the view to fit current multitouch scaling GLfixed fixedPointScaleFactor = [self floatToFixed:scaleFactor]; glScalex(fixedPointScaleFactor, fixedPointScaleFactor, fixedPointScaleFactor); // Perform incremental rotation based on current angles in X and Y glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix); GLfloat totalRotation = sqrt(xRotation*xRotation + yRotation*yRotation); glRotatex([self floatToFixed:totalRotation], (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[1] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[0]), (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[5] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[4]), (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[9] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[8]) ); // Translate the model by the accumulated amount glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix); float currentScaleFactor = sqrt(pow((GLfloat)currentModelViewMatrix[0] / 65536.0f, 2.0f) + pow((GLfloat)currentModelViewMatrix[1] / 65536.0f, 2.0f) + pow((GLfloat)currentModelViewMatrix[2] / 65536.0f, 2.0f)); xTranslation = xTranslation / (currentScaleFactor * currentScaleFactor); yTranslation = yTranslation / (currentScaleFactor * currentScaleFactor); // Grab the current model matrix, and use the (0,4,8) components to figure the eye's X axis in the model coordinate system, translate along that glTranslatef(xTranslation * (GLfloat)currentModelViewMatrix[0] / 65536.0f, xTranslation * (GLfloat)currentModelViewMatrix[4] / 65536.0f, xTranslation * (GLfloat)currentModelViewMatrix[8] / 65536.0f); // Grab the current model matrix, and use the (1,5,9) components to figure the eye's Y axis in the model coordinate system, translate along that glTranslatef(yTranslation * (GLfloat)currentModelViewMatrix[1] / 65536.0f, yTranslation * (GLfloat)currentModelViewMatrix[5] / 65536.0f, yTranslation * (GLfloat)currentModelViewMatrix[9] / 65536.0f); // Black background, with depth buffer enabled glClearColor(0.0f, 0.0f, 0.0f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); [self drawBox]; } - (void)configureLighting; { const GLfixed lightAmbient[] = {13107, 13107, 13107, 65535}; const GLfixed lightDiffuse[] = {65535, 65535, 65535, 65535}; const GLfixed matAmbient[] = {65535, 65535, 65535, 65535}; const GLfixed matDiffuse[] = {65535, 65535, 65535, 65535}; const GLfixed lightPosition[] = {30535, -30535, 0, 0}; const GLfixed lightShininess = 20; glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); glEnable(GL_COLOR_MATERIAL); glMaterialxv(GL_FRONT_AND_BACK, GL_AMBIENT, matAmbient); glMaterialxv(GL_FRONT_AND_BACK, GL_DIFFUSE, matDiffuse); glMaterialx(GL_FRONT_AND_BACK, GL_SHININESS, lightShininess); 
glLightxv(GL_LIGHT0, GL_AMBIENT, lightAmbient); glLightxv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse); glLightxv(GL_LIGHT0, GL_POSITION, lightPosition); glEnable(GL_DEPTH_TEST); glShadeModel(GL_SMOOTH); glEnable(GL_NORMALIZE); } -(void)setupView:(GLView*)view { const GLfloat zNear = 0.1, zFar = 1000.0, fieldOfView = 60.0; GLfloat size; glMatrixMode(GL_PROJECTION); glEnable(GL_DEPTH_TEST); size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0); CGRect rect = view.bounds; glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar); glViewport(0, 0, rect.size.width, rect.size.height); glScissor(0, 0, rect.size.width, rect.size.height); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glClearColor(0.0f, 0.0f, 0.0f, 1.0f); glTranslatef(0.0f, 0.0f, -6.0f); isFirstDrawing = YES; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; } - (void)dealloc { [super dealloc]; } @end
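
    For the hit test itself, the usual building block is a ray/AABB slab intersection. A minimal C sketch (types and names assumed, not from the original post):

        // Sketch: slab-method test of a ray against an axis-aligned box.
        // ro = ray origin, rd = ray direction, bmin/bmax = box corners, tHit = entry distance if hit.
        int rayHitsAABB(const float ro[3], const float rd[3],
                        const float bmin[3], const float bmax[3], float *tHit) {
            float tmin = -1e30f, tmax = 1e30f;
            for (int i = 0; i < 3; i++) {
                float inv = 1.0f / rd[i];                  // assumes rd[i] != 0
                float t0 = (bmin[i] - ro[i]) * inv;
                float t1 = (bmax[i] - ro[i]) * inv;
                if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
                if (t0 > tmin) tmin = t0;
                if (t1 < tmax) tmax = t1;
                if (tmin > tmax) return 0;                 // slabs do not overlap: no hit
            }
            if (tHit) *tHit = tmin;
            return 1;
        }

    The ray itself comes from unprojecting the touched screen point through the inverse of the projection and model-view matrices used above.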

    Read the article

  • Accuracy of OpenGL ES Instrument

    - by Rob Jones
    I'm developing a game for the iPhone. I've decided that 30 FPS is plenty, so I've written some code that only allows the app to present the render buffer every 1/30 of a second. When I tried to verify this with Instruments I got varying information. On an iPod Touch (2009 edition, 32G) it reports 30 FPS for Core Animation Frames Per Second. On an iPhone 3G I get wildly varying results, and not just less than 30 FPS: I see more than 30 FPS on a regular basis. It actually seems to hang closer to 36-39. To investigate this anomaly I added my own FPS counter to the app, updated once per second. It stays right at 29 FPS on both devices. So, does anyone have any suggestions as to what might be going on? I expect Instruments to be accurate, so it really concerns me that it appears inaccurate. It makes me think I have a bug somewhere, but I sure can't find it.

    Read the article

  • Texture allocations being doubled in iPhone OpenGL ES

    - by Kyle
    The couple of lines below are called 15 times during initialization. The tx-size is reported as 512 every time, so this will allocate a 1 MB image in memory 15 times, for a total of 15 MB used. However, I noticed Instruments is reporting a total of 31 allocations! (15*2)+1: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tx-size, tx-size, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData); free(spriteData); Likewise, in another area of my program that allocates 6 256x256x4 (256 kB) textures, I see 13 sitting there: (6*2)+1. Anyone know what's going on here? It seems like awful memory management, and I really hope it's my fault. Just to let everyone know, I'm on the simulator.

    Read the article

  • Custom view transition in OpenGL ES

    - by melfar
    I'm trying to create a custom transition, to serve as a replacement for a default transition you would get here, for example: [self.navigationController pushViewController:someController animated:YES]; I have prepared an OpenGL-based view that performs an effect on some static texture mapped to a plane (let's say it's a copy of the flip effect in Core Animation). What I don't know how to do is: grab current view content and make a texture out of it (I remember seeing a function that does just that, but can't find it) how to do the same for the view that is currently offscreen and is going to replace current view are there some APIs I can hook to in order to make my transition class as native as possible (make it a kind of Core Animation effect)? Any thoughts or links are greatly appreciated! UPDATE Jeffrey Forbes's answer works great as a solution to capture the content of a view. What I haven't figured out yet is how to capture the content of the view I want to transition to, which should be invisible until the transition is done. Also, which method should I use to present the OpenGL view? For demonstration purposes I used pushViewController. That affects the navbar, though, which I actually want to go one item back, with animation, check this vid for explanation: http://vimeo.com/4649397. Another option would be to go with presentViewController, but that shows fullscreen. Do you think maybe creating another window (or view?) could be useful?
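
    The half-remembered function for grabbing a view's content is most likely CALayer's renderInContext:. A minimal sketch (view name assumed, not from the original post) of turning the current view into an image that can then be uploaded as a texture:

        // Sketch: snapshot a UIView into a UIImage, then upload it with the usual texture path.
        UIGraphicsBeginImageContext(view.bounds.size);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

    The same snapshot trick can be applied to the destination controller's view, provided its view hierarchy has been loaded and laid out before the transition starts.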

    Read the article

  • problem loading texture with transparency with OpenGL ES and Android

    - by Evan Kimia
    Im trying to load an image that has background transparency that will be layered over another texture. When i try and load it, all i get is a white screen. The texture is 512 by 512, and its saved in photoshop as a 24 bit PNG (standard PNG specs in the Photoshop Save for Web and Devices config window). Any idea why its not showing? The texture without transparency shows without a problem. Here is my loadTextures method: public void loadGLTexture(GL10 gl, Context context) { //Get the texture from the Android resource directory Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1); Bitmap normalScheduleLines = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1n); //Generate texture pointers... gl.glGenTextures(3, textures, 0); //...and bind it to our array gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE); GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); bitmap.recycle(); //Bind our normal schedule bus map lines gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE); GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, normalScheduleLines, 0); normalScheduleLines.recycle(); }

    Read the article

  • Create a bubble effect on a grid OpenGL-ES

    - by Charles Michael
    Hi there. I have created a grid of 40 x 40 Vertex3D (small but useful). I can pick a single vertex out of that grid by simply calling a function with the position array[X][Y], and therefore its neighbors too. How can I raise the neighbor vertices' Z values so they look like a bubble or sphere? My first thought was to use: Neighbor_vertex.Z = sin(PI/4 * 1 - ( 1/ distance_between_Neighbor_and_Pivot) ) * desired_Max_Height But all I got is something like a wave... and I would like a bubble or sphere-like shape. THX dudes and dudettes
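
    A hedged alternative (not from the original post): treat the bump as a spherical cap, so the height of each neighbor follows the circle equation rather than a sine of the distance.

        #include <math.h>

        // Sketch: height of a spherical cap at distance d from the picked vertex,
        // with bump radius R and peak height H (all assumed parameters).
        float bubbleHeight(float d, float R, float H) {
            if (d >= R) return 0.0f;                        // outside the bump, leave the grid flat
            return H * sqrtf(1.0f - (d / R) * (d / R));     // circular profile, equal to H at d = 0
        }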

    Read the article

  • OpenGL ES Retina support

    - by Bryan
    I'm trying to get the avTouch sample code app to run on the retina display. Has anyone done this? In the CALevelMeter class, I've tried the following: - (id)initWithCoder:(NSCoder *)coder { if (self = [super initWithCoder:coder]) { CGFloat f = self.contentScaleFactor; if ([self respondsToSelector:@selector(contentScaleFactor)]) { self.contentScaleFactor = [[UIScreen mainScreen] scale]; } f = self.contentScaleFactor; _showsPeaks = YES; _channelNumbers = [[NSArray alloc] initWithObjects:[NSNumber numberWithInt:0], nil]; _vertical = NO; _useGL = YES; _meterTable = new MeterTable(kMinDBvalue); [self layoutSubLevelMeters]; [self registerForBackgroundNotifications]; } return self; } and it sets the contentScaleFactor to "2". Great, that was expected. But then in the layoutSubviews, CALevelMeter frame is still 1/2 of what it should be. Any ideas?

    Read the article

  • displaying a saved buffer in OpenGL ES

    - by Adam
    Hi everyone, So basically I have a screenshot that I've saved that I want to later display on the screen. I've saved it with: glReadPixels(0, 0, self.bounds.size.width, self.bounds.size.height, GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer); And later I'm trying to write it to the screen with: GLuint RenderedTex; glGenTextures(1, &RenderedTex); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, RenderedTex); glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, self.bounds.size.width, self.bounds.size.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer); glDisable(GL_TEXTURE_2D); I'm pretty new to OpenGL so I don't know if I'm doing things right... actually I know I'm not, because nothing shows up. Also not sure how to dispose of the texture when I'm done with it. Anyone know the correct way to do this? Edit: I think I might be having a problem loading the texture because it's not a power of 2, it's 320x480... also, I think this code is just loading the texture, but not drawing it, I'd need a call to glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) in there somewhere...
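
    For the drawing step mentioned at the end, a minimal ES 1.1 sketch (coordinates assumed, not from the original post) of putting the saved texture on a full-screen quad:

        // Sketch: draw the captured texture on a quad covering the viewport.
        static const GLfloat quadVerts[]     = { -1,-1,   1,-1,   -1,1,   1,1 };
        static const GLfloat quadTexCoords[] = {  0, 0,   1, 0,    0,1,   1,1 };
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, RenderedTex);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, quadVerts);
        glTexCoordPointer(2, GL_FLOAT, 0, quadTexCoords);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        // When finished with the texture: glDeleteTextures(1, &RenderedTex);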

    Read the article

  • OpenGL ES functions not accepting values originating outside of its view

    - by Josh Elsasser
    I've been unable to figure this out on my own. I currently have an Open GLES setup where a view controller both updates a game world (with a dt), fetches the data I need to render, passes it off to an EAGLView through two structures (built of Apple's ES1Renderer), and draws the scene. Whenever a value originates outside of the Open GL view, it can't be used to either translate objects using glTranslatef, or set up the scene using glOrthof. If I assign a new value to something, it will work - even if it is the exact same number. The two structures I have each contain a variety of floating-point numbers and booleans, along with two arrays. I can log the values from within my renderer - they make it there - but I receive errors from OpenGL if I try to do anything with them. No crashes result, but the glOrthof call doesn't work if I don't set the camera values to anything different. Code used to set up scene: [EAGLContext setCurrentContext:context]; glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer); //clears the color buffer bit glClear(GL_COLOR_BUFFER_BIT); glMatrixMode(GL_PROJECTION); //sets up the scene w/ ortho projection glViewport(0, 0, 320, 480); glLoadIdentity(); glOrthof(320, 0, dynamicData.cam_x2, dynamicData.cam_x1, 1.0, -1.0); glClearColor(1.0, 1.0, 1.0, 1.0); /*error checking code here*/ "dynamicData" (which is replaced every frame) is created within my game simulation. From within my controller, I call a method (w/in my simulation) that returns it, and pass the result on to the EAGLView, which passes it on to the renderer. I haven't been able to come up with a better solution for this - suggestions in this regard would be greatly appreciated as well. Also, this function doesn't work as well (values originate in the same place): glTranslatef(dynamicData.ship_x, dynamicData.ship_y, 0.0); Thanks in advance. Additional Definitions: Structure (declared in a separate header): typedef struct { float ship_x, ship_y; float cam_x1, cam_x2; } dynamicRenderData; Render data getter (and builder) (every frame) - (dynamicData)getDynRenderData { //d_rd is an ivar, zeroed on initialization d_rd.ship_x = mainShip.position.x; d_rd.ship_y = mainShip.position.y; d_rd.cam_x1 = d_rd.ship_x - 30.0f; d_rd.cam_x2 = d_rd.cam_x1 + 480.0f; return d_rd; } Zeroed at start. (d_rd.ship_x = 0;, etc…) Setting up the view. Prototype (GLView): - (void)draw: (dynamicRenderData)dynamicData Prototype (Renderer): - (void)drawView: (dynamicRenderData)dynamicData How it's called w/in the controller: //controller [glview draw: [world getDynRenderData]]; //glview (within draw) [renderer drawView: dynamicData];

    Read the article

  • another question about OpenGL ES rendering to texture

    - by ensoreus
    Hello, pros and gurus! Here is another question about rendering to texture. The whole stuff is all about saving texture between passing image into different filters. Maybe all iPhone developers knows about Apple's sample code with OpenGL processing where they used GL filters(functions), but pass into them the same source image. I need to edit an image by passing it sequentelly with saving the state of the image to edit. I am very noob in OpenGL, so I spent increadibly a lot of to solve the issue. So, I desided to create 2 FBO's and attach source image and temporary image as a textures to render in. Here is my init routine: glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glEnable(GL_TEXTURE_2D); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, (GLint *)&SystemFBO); glImage = [self loadTexture:preparedImage]; //source image for (int i = 0; i < 4; i++) { fullquad[i].s *= glImage->s; fullquad[i].t *= glImage->t; flipquad[i].s *= glImage->s; flipquad[i].t *= glImage->t; } tmpImage = [self loadEmptyTexture]; //editing image glGenFramebuffersOES(1, &tmpImageFBO); glBindFramebufferOES(GL_FRAMEBUFFER_OES, tmpImageFBO); glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, tmpImage->texID, 0); GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES); if(status != GL_FRAMEBUFFER_COMPLETE_OES) { NSLog(@"failed to make complete tmp framebuffer object %x", status); } glBindTexture(GL_TEXTURE_2D, 0); glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); glGenRenderbuffersOES(1, &glImageFBO); glBindFramebufferOES(GL_FRAMEBUFFER_OES, glImageFBO); glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, glImage->texID, 0); status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) ; if(status != GL_FRAMEBUFFER_COMPLETE_OES) { NSLog(@"failed to make complete cur framebuffer object %x", status); } glBindTexture(GL_TEXTURE_2D, 0); glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); When user drag the slider, this routine invokes to apply changes -(void)setContrast:(CGFloat)value{ contrast = value; if(flag!=mfContrast){ NSLog(@"contrast: dumped"); flag = mfContrast; glBindFramebufferOES(GL_FRAMEBUFFER_OES, glImageFBO); glClearColor(1,1,1,1); glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(0, 512, 0, 512, -1, 1); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glScalef(512, 512, 1); glBindTexture(GL_TEXTURE_2D, tmpImage->texID); glViewport(0, 0, 512, 512); glVertexPointer(2, GL_FLOAT, sizeof(V2fT2f), &fullquad[0].x); glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &fullquad[0].s); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); } glBindFramebufferOES(GL_FRAMEBUFFER_OES,tmpImageFBO); glClearColor(0,0,0,1); glClear(GL_COLOR_BUFFER_BIT); glEnable(GL_TEXTURE_2D); glActiveTexture(GL_TEXTURE0); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(0, 512, 0, 512, -1, 1); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glScalef(512, 512, 1); glBindTexture(GL_TEXTURE_2D, glImage->texID); glViewport(0, 0, 512, 512); [self contrastProc:fullquad value:contrast]; glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); [self redraw]; } Here are two cases: if it is the same filter(edit mode) to use, I bind tmpFBO to draw into tmpImage texture and edit glImage texture. contrastProc is a pure routine from Apples's sample. 
If it is another mode, than I save edited image by drawing tmpImage texture in source texture glImage, binded with glImageFBO. After that I call redraw: glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(0, kTexWidth, 0, kTexHeight, -1, 1); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glScalef(kTexWidth, kTexHeight, 1); glBindTexture(GL_TEXTURE_2D, glImage->texID); glViewport(0, 0, kTexWidth, kTexHeight); glVertexPointer(2, GL_FLOAT, sizeof(V2fT2f), &flipquad[0].x); glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &flipquad[0].s); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); And here it binds visual framebuffer and dispose glImage texture. So, the result is VERY aggresive filtering. Increasing contrast volume by just 0.2 brings image to state that comparable with 0.9 contrast volume in Apple's sample code project. I miss something obvious, I guess. Interesting, if I disabple line glBindTexture(GL_TEXTURE_2D, glImage->texID); in setContrast routine it brings no effect. At all. If I replace tmpImageFBO with SystemFBO to draw glImage directly on display(and disabling redraw invoking line), all works fine. Please, HELP ME!!! :(

    Read the article
