Search Results

Search found 3627 results on 146 pages for 'opengl es 2 0'.


  • OpenGL ES 2.0 equivalent of glOrtho()?

    - by Zippo
    In my iPhone app, I need to project the 3D scene into the 2D coordinates of the screen for some calculations. My objects go through various rotations, translations and scalings, so I figured I need to multiply the vertices with the ModelView matrix first, then multiply that with the orthographic projection matrix. First of all, am I on the right track? I have the ModelView matrix, but I need the projection matrix. Is there a glOrtho() equivalent in ES 2.0?
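    There is no glOrtho() in ES 2.0; the fixed-function matrix stack is gone, so the matrix has to be built by hand and uploaded as a shader uniform. A minimal sketch, using the same column-major layout and conventions glOrtho used:

        /* Build an orthographic projection matrix equivalent to
           glOrtho(left, right, bottom, top, near, far). */
        #include <string.h>

        static void ortho_matrix(float l, float r, float b, float t,
                                 float n, float f, float m[16])
        {
            memset(m, 0, 16 * sizeof(float));
            m[0]  =  2.0f / (r - l);
            m[5]  =  2.0f / (t - b);
            m[10] = -2.0f / (f - n);
            m[12] = -(r + l) / (r - l);
            m[13] = -(t + b) / (t - b);
            m[14] = -(f + n) / (f - n);
            m[15] =  1.0f;
        }

        /* Upload to the shader with: glUniformMatrix4fv(projLoc, 1, GL_FALSE, m); */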

    Read the article

  • Graphic Setup tune-up checklist

    - by Click Ok
    I was trying to play the game Warzone 2100. The game runs fine, at a decent speed, but the screen shows flickering horizontal lines... My PC has an integrated GeForce Go 6100 VGA. OK, not a powerful card, but it shouldn't be the end of the world to run a "simple" game like this (compared with other games that ask you to send in your eyes to buy an expensive card). So I think the problem may be in my machine's configuration. I use it primarily for programming jobs, so I've paid little attention to the video setup. I would like a checklist to know whether my PC is "ready" for games. For example, I know that I need: latest VGA drivers, updated DirectX and OpenGL. What do you suggest? Are there also good programs to test performance and suggest improvements to the system? Thank you! PS: I'm using Windows 7

    Read the article

  • Transform OpenGL coordinates to lower UIView coordinates

    - by John Qualis
    Hi, I am new to OpenGL on the iPhone. I am developing an iPhone app similar to a barcode reader but with an extra OpenGL layer. The bottommost layer is a UIImagePickerController; on top of it I use a UIView and draw a rectangle at certain coordinates on the iPhone screen. So far everything is OK. Then I try to draw an OpenGL 3-D model inside that rectangle. I am able to load a 3-D model on the iPhone based on the code here: http://iphonedevelopment.blogspot.com/2008/12/start-of-wavefront-obj-file-loader.html What I am not able to do is transform the coordinates of the rectangle into OpenGL coordinates. Appreciate any help. Do I need to use a matrix to translate the current position of the 3-D model so it is drawn within myRect? The code is given below. Appreciate any help/pointers in this regard. John

        - (void)drawView:(GLView*)view {
            static GLfloat rotation = 0.0;
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            glColor4f(0.0, 0.5, 1.0, 1.0);
            // The coordinates of the rectangle are myRect.x,
            // myRect.y, myRect.width, myRect.height
            // Do I need a transform matrix here?
            //glOrthof(-160.0f, 160.0f, -240.0f, 240.0f, -1.0f, 1.0f);
            [plane drawSelf];
            ....
        }

        - (void)setupView:(GLView*)view {
            const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0;
            GLfloat size;
            glEnable(GL_DEPTH_TEST);
            glMatrixMode(GL_PROJECTION);
            //glMatrixMode(GL_MODELVIEW);
            size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
            CGRect rect = view.bounds;
            glFrustumf(-size, size,
                       -size / (rect.size.width / rect.size.height),
                       size / (rect.size.width / rect.size.height),
                       zNear, zFar);
            glViewport(0, 0, rect.size.width, rect.size.height);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        }
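    A hedged starting point for the coordinate question: convert the rectangle's corners from UIKit window coordinates (origin top-left, y pointing down) into OpenGL normalized device coordinates (origin at the centre, y pointing up), then use the frustum parameters from setupView to decide where to translate the model so it lands inside myRect at the chosen depth. The helper below is hypothetical:

        /* Hypothetical helper: map a UIKit point (origin top-left, y down)
           to OpenGL normalized device coordinates (origin centre, y up). */
        static void window_to_ndc(float wx, float wy,
                                  float viewW, float viewH,
                                  float *nx, float *ny)
        {
            *nx = 2.0f * wx / viewW - 1.0f;
            *ny = 1.0f - 2.0f * wy / viewH;   /* flip the y axis */
        }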

    Read the article

  • Erratic DNS name resolution

    - by alex
    Hi all, We host a website for a client (blog.foobar.es). We do not manage foobar.es's DNS setup; we just told them to point blog.foobar.es to our web server's IP. We have noticed that sometimes we cannot browse to blog.foobar.es, although we can browse to other sites on the same server. Troubleshooting a bit using host(1) yields something funny:

        $ host blog.foobar.es 8.8.8.8
        Using domain server:
        Name: 8.8.8.8
        Address: 8.8.8.8#53
        Aliases:

        Host blog.foobar.es not found: 3(NXDOMAIN)

    8.8.8.8 being one of Google's public DNS servers. However, sometimes the same server resolves the name correctly (!). Another funny thing is that our ISP's DNS servers sometimes say:

        $ host blog.foobar.es 80.58.61.250
        Using domain server:
        Name: 80.58.61.250
        Address: 80.58.61.250#53
        Aliases:

        blog.foobar.es has address x.x.x.x
        Host blog.foobar.es not found: 3(NXDOMAIN)

    which I don't really understand. I've dug around using dig(1), and have noticed they've set up an SOA record for foobar.es:

        $ dig foobar.es

        ; <<>> DiG 9.7.0-P1 <<>> foobar.es
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59824
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;foobar.es.              IN      A

        ;; AUTHORITY SECTION:
        foobar.es.    86400  IN  SOA  dns1.provider.es. root.dns1.provider.es. 2011030301 86400 7200 2592000 172800

        ;; Query time: 78 msec
        ;; SERVER: 80.58.61.250#53(80.58.61.250)
        ;; WHEN: Thu Mar 3 16:16:19 2011
        ;; MSG SIZE rcvd: 78

    ... which I'm completely unfamiliar with. Ideas? We can't really do much as we do not control the DNS, but we'd like to point our clients in the right direction...
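    A reasonable next step (a sketch; dns1.provider.es is the primary name server taken from the SOA above): query the zone's authoritative servers directly, bypassing all caches, and compare their answers with what the recursive resolvers return. Intermittent NXDOMAIN on a name that also resolves usually points at inconsistent data between the zone's authoritative servers:

        $ dig foobar.es NS                                    # list all authoritative servers
        $ dig @dns1.provider.es blog.foobar.es A +norecurse   # then ask each one directly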

    Read the article

  • OWB and heterogeneous data sources, Oracle Magazine, May-June 2010

    - by Fekete Zoltán
    The current issue of Oracle Magazine has been published (naturally, that is what a current issue is for): Oracle Magazine, May-June 2010. This issue lets us choose from many interesting articles: cloud computing, Java, .NET, new-generation backup, parallelism and PL/SQL, OWB, ... I recommend the article Business Intelligence - Oracle Warehouse Builder 11g Release 2 and Heterogeneous Databases, in which we learn how to use heterogeneous data sources with the Oracle Warehouse Builder ETL-ELT tool, how to connect to SQL Server, for example, and how to extract data from it with high performance. See also Oracle's data integration web page. This rich heterogeneity comes to OWB from its sibling product, Oracle Data Integrator. The data integration statement of direction (SOD) says that these two Java-based products, OWB and ODI, will be united in a single product.

    Read the article

  • Live from the Qt DevDays 2012: a report on the Modern OpenGL with Qt 5 training

    Hello everyone, I am currently in Berlin, at the Cafe Moskau, attending the Qt DevDays 2012. As every year, the first day is reserved for training sessions. I am attending the training called "Modern OpenGL with Qt5", given by Sean Harmer of KDAB. We spent the two hours of the morning on creating and initializing an OpenGL window in Qt 5 (there are a few minor changes compared to Qt 4) and on displaying a nice triangle in modern OpenGL.

    Read the article

  • Change the scale-policy of OpenGL ES in Android?

    - by wanting252
    I am currently developing a game for Android with OpenGL ES 1.0, using the libgdx library. I target the 720x480 screen size; for example, I design only one art pack, for 720x480. What will happen on Android phones with a screen size smaller or bigger than that, 480x320 for instance? Could you please tell me how to change the scaling policy of OpenGL ES on Android, or in libgdx specifically? Is there anything like Photoshop's "Resample Image" options (Nearest Neighbor, Bilinear, Bicubic, etc.) for libgdx? Edit: I found some tutorials about texture filters in OpenGL and tested with Linear and Nearest. Linear is good for scaling but slows the game down, and Nearest is the opposite. What should I do to get a balance between the two?
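    For what it's worth, the filter is chosen per texture, and a common middle ground between the two is mipmapped minification (cheap, because shrinking reads small prefiltered levels) with linear magnification. A sketch in raw GL ES 1.x calls, assuming the texture is bound and its mipmap chain exists; libgdx wraps the same choice in Texture.setFilter:

        /* Per-texture filter choice: mipmaps when shrinking, linear when enlarging. */
        glBindTexture(GL_TEXTURE_2D, texId);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                        GL_LINEAR_MIPMAP_NEAREST);   /* cheap when scaling down */
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
                        GL_LINEAR);                  /* smooth when scaling up */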

    Read the article

  • 3 index buffers

    - by bobobobo
    So, both D3D and OpenGL have the ability to draw from an index buffer. The OBJ file format, however, does something weird. It specifies a bunch of vertices like:

        v -21.499660 6.424470 4.069845
        v -25.117170 6.418100 4.068025
        v -21.663851 8.282170 4.069585
        v -21.651890 6.420180 4.068675
        v -25.128481 8.281520 4.069585

    Then it specifies a bunch of normals like:

        vn 0.196004 0.558984 0.805680
        vn -0.009523 0.210194 -0.977613
        vn -0.147787 0.380832 -0.912757
        vn 0.822108 0.567581 0.044617
        vn 0.597037 0.057507 -0.800150
        vn 0.809312 -0.045432 0.585619

    Then it specifies a bunch of tex coords like:

        vt 0.1225 0.5636
        vt 0.6221 0.1111
        vt 0.4865 0.8888
        vt 0.2862 0.2586
        vt 0.5865 0.2568
        vt 0.1862 0.2166

    THEN it specifies "faces" on the model like:

        f 1/2/5 2/3/7 8/2/6
        f 5/9/7 6/3/8 5/2/1

    So, in trying to render this with vertex buffers: in OpenGL I can use glVertexPointer, glNormalPointer and glTexCoordPointer to set pointers to the vertex, normal and texture coordinate arrays respectively, but when it comes down to drawing with glDrawElements, I can only specify ONE set of indices, namely the indices it should use when visiting the vertices. OK, then what? I still have two more sets of indices to apply. In D3D it's much the same: I can set up 3 streams, one for vertices, one for texcoords, and one for normals, but when it comes to using IDirect3DDevice9::DrawIndexedPrimitive, I can still only specify ONE index buffer, which will index into the vertex array. So, is it possible to draw from vertex buffers using different index arrays for each of the vertex, texcoord and normal buffers (in EITHER D3D or OpenGL)?
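    Neither API supports per-attribute indices, so the usual answer is to "weld" the OBJ data: every distinct v/vt/vn triple that appears in an f statement becomes one output vertex, and a single index buffer then points at those. A minimal sketch (naive linear search for clarity; a hash map is the usual choice):

        #include <string.h>

        typedef struct { int v, vt, vn; } Corner;   /* one face token, e.g. 1/2/5 */

        /* Return the index of corner c in unique[], appending it if new. */
        static unsigned weld(Corner *unique, unsigned *count, Corner c)
        {
            for (unsigned i = 0; i < *count; ++i)
                if (memcmp(&unique[i], &c, sizeof c) == 0)
                    return i;
            unique[*count] = c;
            return (*count)++;
        }

        /* For each face corner: indices[k++] = weld(unique, &n, corner);
           then build one interleaved vertex per unique[] entry by copying
           positions[unique[i].v], texcoords[unique[i].vt], normals[unique[i].vn],
           and draw the lot with a single glDrawElements call. */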

    Read the article

  • Draw LINE_STRIP with Unity

    - by Boozzz
    For a new project I am deciding whether to use OpenGL or Unity3D. I have a bit of experience with OpenGL, but I am completely new to Unity. I have already read through the Unity documentation and the tutorials on the Unity website. However, I could not find a way to draw a simple line strip with Unity. In the following example (C#, OpenGL/SharpGL) I draw a round trajectory from a predefined point to an obstacle, which can be imagined as a divided circle with midpoint [cx, cy] and radius r. The position (x-y coordinates) of the obstacle is given by obst_x and obst_y. Question 1: How could I do the same with Unity? Question 2: In my new project I will have to draw quite a lot of such geometric primitives. Does it make any sense to use Unity for those things?

        void drawCircle(float cx, float cy, float r,
                        const float obst_x, const float obst_y)
        {
            float theta = 0.0f, pos_x, pos_y, dist;
            const float delta = 0.1f;
            glBegin(GL_LINE_STRIP);
            while (theta < 180) {
                theta += delta;                 // advance the current angle
                float x = r * cosf(theta);      // x component (note: cosf expects radians)
                float y = r * sinf(theta);      // y component
                pos_x = x + cx;                 // current x position
                pos_y = y + cy;                 // current y position
                // distance from the current vertex to the obstacle
                dist = sqrtf((pos_x - obst_x) * (pos_x - obst_x) +
                             (pos_y - obst_y) * (pos_y - obst_y));
                // check if the current vertex intersects the obstacle
                if (dist <= 0) {
                    break;                      // stop drawing the circle
                } else {
                    glVertex2f(pos_x, pos_y);   // draw the vertex
                }
            }
            glEnd();
        }

    Read the article

  • Android depth buffer issue: Advice for anyone experiencing problem

    - by Andrew Smith
    I've wasted around 30 hours this week writing and re-writing code, believing that I had misunderstood how the OpenGL depth buffer works. Everything I tried failed. I have now resolved my problem by finding what may be an error in the Android implementation of OpenGL. See this API entry: http://www.opengl.org/sdk/docs/man/xhtml/glClearDepth.xml

        void glClearDepth(GLclampd depth);

    Specifies the depth value used when the depth buffer is cleared. The initial value is 1. Android's implementation has two versions of this command: glClearDepthx, which takes a fixed-point (GL_FIXED) value, and glClearDepthf, which takes a floating-point value, both clamped to [0, 1]. If you use glClearDepthf(1) then you get the results you would expect. If you use glClearDepthx(1), as I was doing, then you get different results. (Note that 1 is the default value, but calling the command with the argument 1 produces different results than not calling it at all.) Quite what is happening I do not know, but the depth buffer was being cleared to a value different from what I had specified.
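    For anyone hitting this: the behaviour is most likely not a bug but a units mismatch. The x suffix in OpenGL ES marks a GL_FIXED argument, a 16.16 fixed-point format in which 1.0 is written as 65536 (0x10000), so glClearDepthx(1) actually requests a clear depth of 1/65536, i.e. nearly zero. These two calls should be equivalent:

        glClearDepthf(1.0f);      /* float variant: clear depth = 1.0        */
        glClearDepthx(0x10000);   /* fixed variant: 1.0 in 16.16 is 0x10000  */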

    Read the article

  • How can you transform a set of numbers into mostly whole ones?

    - by Alice
    Small amount of background: I am working on a converter that bridges between a map maker (Tiled) that outputs XML and an engine (Angel2D) that inputs Lua tables. Most of this is straightforward. However, Tiled outputs pixel offsets (integers of absolute values), while Angel2D inputs OpenGL units (floats of relative values); a conversion factor between these two is needed (for example, 32px = 1gu). Since OpenGL units are abstract, and the camera can zoom in or out if the objects are too small or big, the actual conversion factor isn't important; I could use a random number, and the user would merely have to zoom in or out. But it would be best if the conversion factor was selected such that most numbers outputted were small and whole (or fractions of small whole numbers), because that makes them easier to work with (and the whole point of the OpenGL units is that they are easy to work with). How would I find such a conversion factor reliably? My first attempt was to use the smallest number given; this resulted in no fractions below 1, but often led to lots of decimal places where the factors didn't line up. Then I tried the mode of the sequence, which led to the largest number of 1's possible, but often led to very long floats for background images. My current approach gets the GCD of the whole sequence, which, when it works, works great, but can easily be thrown off course by a single bad apple. Note that while I could easily just pass the numbers I am given along, or pick some fixed factor, or use one of the conversions I specified above, I am looking for a method to reliably scale this list of integers to small, whole numbers or simple fractions, because this would most likely be unsurprising to the end user; this is not a one-off conversion. The end users tend to use 1.0 as their "base" for manipulations (because it's simple and obvious), so it would make more sense for the sizes of entities to cluster around this.
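    A sketch of one way to make the GCD approach outlier-tolerant (hypothetical helper, assuming the offsets are positive integers): instead of requiring the factor to divide every value, accept the largest candidate that divides, say, 90% of them, and let the stragglers come out as simple fractions:

        /* Largest divisor (up to a sane tile-size cap) that evenly divides
           at least 90% of the inputs; falls back to 1 if nothing qualifies. */
        static int robust_factor(const int *xs, int n)
        {
            int best = 1;
            for (int d = 2; d <= 256; ++d) {
                int hits = 0;
                for (int i = 0; i < n; ++i)
                    if (xs[i] % d == 0)
                        ++hits;
                if (hits * 10 >= n * 9)   /* at least 90% divisible */
                    best = d;             /* d is ascending, so this keeps the largest */
            }
            return best;
        }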

    Read the article

  • Maximized MFC window has dead region at the top

    - by John Calsbeek
    I'm trying to make an MFC window fullscreen whenever it is maximized. This window is being used to draw OpenGL content. So far it works fine—it fills the entire screen with the exception of the taskbar—but there's a dead black region at the top of the screen, 62 pixels in height. That's pretty close to the height of the Windows 7 taskbar, but it pretty much stays the same regardless of whether the taskbar is on autohide or on a different side of the screen. When I get a CWnd::OnSize callback, the height that is given is 988, which is 62 pixels short of the actual screen height (1050). I've tried to manually set the window height to 1050 with SetWindowPos, I've tried to give Windows the screen dimensions in CWnd::OnGetMinMaxInfo, and I've tried to give the screen dimensions to glViewport instead of the 988 pixels that I'm being given. None of these seem to work. I'm accomplishing the fullscreening with a call to ModifyStyle(0, WS_MINIMIZEBOX | WS_MAXIMIZEBOX | WS_SYSMENU | WS_CAPTION | WS_POPUP, 0); in the SIZE_MAXIMIZED CWnd::OnSize callback, which works fine except for this dead region. I don't know if it's an OpenGL thing or a Win32 thing or an MFC thing. The GetClientRect function for my window reports the false 988 height. The same OpenGL rendering code works fine in my Mac OS X build. Curiously enough, I have gotten the dead region to move around a bit when I play with the taskbar (autohiding it, moving it around the screen, etc.). I've gotten the dead area to shrink to about half—not sure if the other half went to the bottom of the window or not.
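    One hedged suggestion, since OnGetMinMaxInfo was already tried: in that handler both ptMaxSize and ptMaxPosition need to be filled in, and the size has to come from the monitor rectangle rather than the work area, or Windows may still reserve a taskbar-height strip. A sketch of the whole handler (hypothetical class name; requires ON_WM_GETMINMAXINFO() in the message map):

        void CMyWnd::OnGetMinMaxInfo(MINMAXINFO* lpMMI)
        {
            MONITORINFO mi = { sizeof(mi) };
            GetMonitorInfo(MonitorFromWindow(m_hWnd, MONITOR_DEFAULTTONEAREST), &mi);
            lpMMI->ptMaxPosition.x = 0;                                    // pin at monitor origin
            lpMMI->ptMaxPosition.y = 0;
            lpMMI->ptMaxSize.x = mi.rcMonitor.right  - mi.rcMonitor.left;  // full monitor,
            lpMMI->ptMaxSize.y = mi.rcMonitor.bottom - mi.rcMonitor.top;   // not the work area
            CWnd::OnGetMinMaxInfo(lpMMI);
        }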

    Read the article

  • Blit SDL_Surface onto another SDL_Surface and apply a colorkey

    - by NordCoder
    I want to load an SDL_Surface into an OpenGL texture with padding (so that NPOT becomes POT) and apply a color key to the surface afterwards. I either end up colorkeying all pixels, regardless of their color, or not colorkeying anything at all. I have tried a lot of different things, but none of them seem to work. Here's the relevant snippet of my code. I use a custom color class for the colorkey (range [0-1]):

        // Create an empty surface with the same settings as the original image
        SDL_Surface* paddedImage = SDL_CreateRGBSurface(image->flags, width, height,
            image->format->BitsPerPixel,
        #if SDL_BYTEORDER == SDL_BIG_ENDIAN
            0xff000000, 0x00ff0000, 0x0000ff00, 0x000000ff
        #else
            0x000000ff, 0x0000ff00, 0x00ff0000, 0xff000000
        #endif
        );

        // Map RGBA color to pixel format value
        Uint32 colorKeyPixelFormat = SDL_MapRGBA(paddedImage->format,
            static_cast<Uint8>(colorKey.R * 255),
            static_cast<Uint8>(colorKey.G * 255),
            static_cast<Uint8>(colorKey.B * 255),
            static_cast<Uint8>(colorKey.A * 255));

        SDL_FillRect(paddedImage, NULL, colorKeyPixelFormat);

        // Blit the image onto the padded image
        SDL_BlitSurface(image, NULL, paddedImage, NULL);
        SDL_SetColorKey(paddedImage, SDL_SRCCOLORKEY, colorKeyPixelFormat);

    Afterwards, I generate an OpenGL texture from paddedImage using code similar to the SDL+OpenGL texture loading code found online (I'll post it if necessary). That code works if I just want the texture with or without padding, so it is likely not the problem. I realize that I set all pixels in paddedImage to have alpha zero, which causes the first problem I mentioned, but I can't seem to figure out how to fix this. Should I just loop over the pixels and set the appropriate colors to have alpha zero?
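    A sketch of one likely culprit (SDL 1.2 semantics): if image has SDL_SRCALPHA set, SDL_BlitSurface alpha-blends into paddedImage instead of copying raw pixel values, so either everything or nothing ends up matching the key afterwards. Disabling the flag before the blit makes it a straight copy:

        SDL_SetAlpha(image, 0, SDL_ALPHA_OPAQUE);   /* flags = 0: no alpha blending,
                                                       pixels are copied verbatim */
        SDL_BlitSurface(image, NULL, paddedImage, NULL);
        SDL_SetColorKey(paddedImage, SDL_SRCCOLORKEY, colorKeyPixelFormat);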

    Read the article

  • Adobe premiere CS5 problem with the display driver

    - by user30179
    This error is really hindering our project. It started showing up on June 16th, 2010. There are no Windows updates from the same date as the error, other than Windows Defender. It seems to happen when working with image overlays. ERROR: "The NVIDIA OpenGL driver detected a problem with the display driver and is unable to continue. The application must close." We opened the side of the case in case there is an overheating problem. Nvidia driver ver 8.16.11.9175 (nVidia Quadro FX 1700). I am running: Windows 7 x64, Adobe Premiere CS5 Production, nVidia Quadro FX 1700 (MRGA14L), 4 GB RAM, RAID 10 with two 750 GB drives, dual core 3.0 GHz with 6 MB L2 cache. At least three other people have come across this error: NVidia Forum, EVGA Forum, NVidia Forum. UPDATE: Having the case open did not help. I also installed new Nvidia drivers; now I get a different error: ERROR: "Your hardware configuration does not meet minimum specifications needed to run the application. The application must close." I ran Windows Update and installed all four updates, so now I am waiting to see if the error occurs again. Beyond this I am out of options.

    Read the article

  • How can I make the XAnalogTV xscreensaver fill my screen?

    - by Breakthrough
    I recently installed xscreensaver, as well as the additional/extra screensavers. Many of the OpenGL ones function correctly, going fullscreen as expected. However, for some reason, the XAnalogTV screensaver leaves two "blank" spots on the edges of my screen. If I manually launch XAnalogTV, it displays a window, which it fills correctly. When I maximize the window, the same effect occurs: the window maximizes, but the two edges of the screen are literally "transparent". This effect also occurs when the screensaver is set to fullscreen. For these reasons, I believe the problem may be related to the aspect ratio of the screen. The edges of the screen are literally "ignored", with nothing being drawn there. Specifically, note the transition between the maximized and full-screen screenshots (with the un-drawn whitespace shrinking as the vertical height has been increased). For reference, I am running Xubuntu 12.04 on a Dell Vostro 1520 (Intel P8600, Nvidia 9300M) with a 1440 x 900 display (16:10). I have also set the GetViewPortIsFullOfLies preference to true. Is there any way to force XAnalogTV to fill my entire screen? Alternatively, as I believe the problem is aspect-ratio related, is there any way I can get the screensaver to render larger than my display, and simply discard the extra pixels? Relevant screenshots (windowed, maximized, and full-screen, respectively): You can see in the last two that the scrollbar from Firefox is clearly visible, even though this is a full-screen screensaver.

    Read the article

  • Understanding the memory consumption on iPhone

    - by zoul
    Hello! I am working on a 2D iPhone game using OpenGL ES and I keep hitting the 24 MB memory limit – my application keeps crashing with the error code 101. I tried really hard to find where the memory goes, but the numbers in Instruments are still much bigger than what I would expect. I ran the application with the Memory Monitor, Object Alloc, Leaks and OpenGL ES instruments. When the application gets loaded, free physical memory drops from 37 MB to 23 MB, the Object Alloc settles around 7 MB, Leaks show two or three leaks a few bytes in size, the Gart Object Size is about 5 MB and Memory Monitor says the application takes up about 14 MB of real memory. I am perplexed as to where the memory went – when I dig into the Object Allocations, most of the memory is in the textures, exactly as I would expect. But both my own texture allocation counter and the Gart Object Size agree that the textures should take up somewhere around 5 MB. I am not aware of allocating anything else that would be worth mentioning, and the Object Alloc agrees. Where does the memory go? (I would be glad to supply more details if this is not enough.)

    Update: I really tried to find where I could allocate so much memory, but with no results. What drives me wild is the difference between the Object Allocations (~7 MB) and real memory usage as shown by Memory Monitor (~14 MB). Even if there were huge leaks or huge chunks of memory I forgot about, they should still show up in the Object Allocations, shouldn't they? I've already tried the usual suspects, i.e. the UIImage with its caching, but that did not help. Is there a way to track memory usage "debugger-style", line by line, watching each statement's impact on memory usage?

    What I have found so far: I really am using that much memory. It is not easy to measure the real memory consumption, but after a lot of counting I think the memory consumption is really that high. My fault. I found no easy way to measure the memory used. The Memory Monitor numbers are accurate (these are the numbers that really matter), but the Memory Monitor can't tell you where exactly the memory goes. The Object Alloc tool is almost useless for tracking the real memory usage. When I create a texture, the allocated memory counter goes up for a while (reading the texture into memory), then drops (passing the texture data to OpenGL, freeing). This is OK, but does not always happen – sometimes the memory usage stays high even after the texture has been passed on to OpenGL and freed from "my" memory. This means that the total amount of memory allocated as shown by the Object Alloc tool is smaller than the real total memory consumption, but bigger than the real consumption minus textures (real – textures < object alloc < real). Go figure. I misread the Programming Guide: the memory limit of 24 MB applies to textures and surfaces, not the whole application. The actual red line lies a bit further out, but I could not find any hard numbers; the consensus is that 25–30 MB is the ceiling. When the system gets short on memory, it starts sending the memory warning. I have almost nothing to free, but other applications do release some memory back to the system, especially Safari (which seems to be caching websites). When the free memory as shown in the Memory Monitor reaches zero, the system starts killing. I had to bite the bullet and rewrite some parts of the code to be more efficient on memory, but I am probably still pushing it.

    Read the article

  • Stuffed Animal In OpenGL

    - by anon
    I have seen metal/plastic/water/fire/... shaders for OpenGL. However, is it possible to render something fur-like, say a stuffed animal / teddy bear, in OpenGL? (I know this is possible with RenderMan / ray tracers, but I want to do it in OpenGL.) If you have pointers to GLSL shaders for this, please point me in the right direction. Thanks! [I'm guessing the answer is no, since fur requires more than just shaders – it almost requires creating geometry on the fly – but I'd love to be proven wrong.]
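    Real-time fur is usually faked rather than grown: the classic GL-friendly trick is "shells", drawing the same mesh several times, each pass pushed a little further out along the vertex normals and masked by an alpha-tested noise texture, so the stacked slices read as strands. A rough sketch (the per-pass offset is done on the CPU here; offsetAlongNormals and drawMesh are hypothetical helpers):

        /* Shell fur: NUM_SHELLS alpha-tested passes, inflated along normals. */
        glEnable(GL_ALPHA_TEST);
        for (int s = 0; s < NUM_SHELLS; ++s) {
            float h = s * SHELL_SPACING;                      /* distance from the skin   */
            glAlphaFunc(GL_GREATER, (float)s / NUM_SHELLS);   /* thin out the strand tips */
            offsetAlongNormals(mesh, h);                      /* hypothetical: v += n * h */
            drawMesh(mesh);                                   /* hypothetical draw call   */
        }
        glDisable(GL_ALPHA_TEST);

    With shaders, the same offset moves into the vertex shader and the noise mask into the fragment shader, so no geometry needs to be created on the fly after all.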

    Read the article

  • access violation in wglMakeCurrent

    - by Stefan
    Sometimes in my OpenGL application I get an access violation in the following API call: wglMakeCurrent(NULL, NULL); The application only has one single thread, and I've checked that before that call, both the DC and HGLRC that are currently used are correct and valid. There are three different windows with OpenGL content, and they're all redrawn on WM_PAINT messages and if a refresh is required due to user interaction (e.g., picking an object). Also this access violation happens on different machines with different graphic cards, so I don't think it's a driver issue. What could make this API call crash? What should I investigate in the app code to find out where/why this happens? I'm really lost here since I've checked everything I could think of already. I hope someone can give me hints/ideas on what more to check.

    Read the article

  • Tracking finger movement in order to rotate a triangle: tracking is not perfect

    - by Laurent BERNABE
    I've written a custom view using OpenGL ES 1 in order to let the user rotate a red triangle by dragging it along the x axis (which gives a rotation around the Y axis). It works, but there is a bit of latency when dragging from one direction to the other (without releasing the mouse/finger). So it seems that my code is not yet perfect. (I am convinced that no code is perfect in itself.) I thought of using a quaternion, but maybe it won't be so useful: must I really use a quaternion (or some kind of matrix)? I've designed the application for Android 4.0.3, but it should fit Android API 3 (Android 1.5) as well (at least, I think so). So here is my main layout:

    activity_main.xml

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:orientation="vertical" >

            <com.laurent_bernabe.android.triangletournant3d.MyOpenGLView
                android:layout_width="match_parent"
                android:layout_height="match_parent" />

        </LinearLayout>

    Here is my main activity:

    MainActivity.java

        package com.laurent_bernabe.android.triangletournant3d;

        import android.app.Activity;
        import android.os.Bundle;
        import android.support.v4.app.NavUtils;
        import android.view.Menu;
        import android.view.MenuItem;

        public class MainActivity extends Activity {

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.activity_main, menu);
                return true;
            }

            @Override
            public boolean onOptionsItemSelected(MenuItem item) {
                switch (item.getItemId()) {
                case android.R.id.home:
                    NavUtils.navigateUpFromSameTask(this);
                    return true;
                }
                return super.onOptionsItemSelected(item);
            }
        }

    And finally, my OpenGL view:

    MyOpenGLView.java

        package com.laurent_bernabe.android.triangletournant3d;

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        import javax.microedition.khronos.egl.EGLConfig;
        import javax.microedition.khronos.opengles.GL10;

        import android.content.Context;
        import android.graphics.Point;
        import android.opengl.GLSurfaceView;
        import android.opengl.GLSurfaceView.Renderer;
        import android.opengl.GLU;
        import android.util.AttributeSet;
        import android.view.MotionEvent;

        public class MyOpenGLView extends GLSurfaceView implements Renderer {

            public MyOpenGLView(Context context, AttributeSet attrs) {
                super(context, attrs);
                setRenderer(this);
            }

            public MyOpenGLView(Context context) {
                this(context, null);
            }

            @Override
            public boolean onTouchEvent(MotionEvent event) {
                int actionMasked = event.getActionMasked();
                switch (actionMasked) {
                case MotionEvent.ACTION_DOWN:
                    savedClickLocation = new Point((int) event.getX(), (int) event.getY());
                    break;
                case MotionEvent.ACTION_UP:
                    savedClickLocation = null;
                    break;
                case MotionEvent.ACTION_MOVE:
                    Point newClickLocation = new Point((int) event.getX(), (int) event.getY());
                    // Note: dx is always measured from the original ACTION_DOWN point,
                    // because savedClickLocation is never updated during the drag, so a
                    // direction change only registers once the finger crosses that point
                    // again -- one likely source of the perceived latency.
                    int dx = newClickLocation.x - savedClickLocation.x;
                    // Note: Math.toRadians() yields radians, but glRotatef() below
                    // interprets its angle argument in degrees.
                    angle += Math.toRadians(dx);
                    break;
                }
                return true;
            }

            @Override
            public void onDrawFrame(GL10 gl) {
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                gl.glLoadIdentity();
                GLU.gluLookAt(gl, 0f, 0f, 5f, 0f, 0f, 0f, 0f, 1f, 0f);
                gl.glRotatef(angle, 0f, 1f, 0f);
                gl.glColor4f(1f, 0f, 0f, 0f);
                gl.glVertexPointer(2, GL10.GL_FLOAT, 0, triangleCoordsBuff);
                gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
            }

            @Override
            public void onSurfaceChanged(GL10 gl, int width, int height) {
                gl.glViewport(0, 0, width, height);
                gl.glMatrixMode(GL10.GL_PROJECTION);
                gl.glLoadIdentity();
                GLU.gluPerspective(gl, 60f, (float) width / height, 0.1f, 10f);
                gl.glMatrixMode(GL10.GL_MODELVIEW);
            }

            @Override
            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                gl.glEnable(GL10.GL_DEPTH_TEST);
                gl.glClearDepthf(1.0f);
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                buildTriangleCoordsBuffer();
            }

            private void buildTriangleCoordsBuffer() {
                ByteBuffer buffer = ByteBuffer.allocateDirect(4 * triangleCoords.length);
                buffer.order(ByteOrder.nativeOrder());
                triangleCoordsBuff = buffer.asFloatBuffer();
                triangleCoordsBuff.put(triangleCoords);
                triangleCoordsBuff.rewind();
            }

            private float[] triangleCoords = {-1f, -1f, +1f, -1f, +1f, +1f};
            private FloatBuffer triangleCoordsBuff;
            private float angle = 0f;
            private Point savedClickLocation;
        }

    I don't think my manifest file is really necessary, but I can post it if you think it is. I've only tested on the emulator, not on a real device. So, how can I improve the reactivity? Thanks in advance.

    Read the article

  • how to show semi-transparent object over playing video

    - by Aditya.Sen
    Hi, I want to have an alpha-blended window over a video playing in the background. Considering the fact that a window cannot be made alpha-blended over another window unless you take a snapshot of the background, I had to resort to OpenGL. I have a few options for this: (1) Show an object on the WinCE window without showing the OpenGL ES window. Is it possible to do this? (2) Make the OpenGL ES window transparent and then show the object. Any suggestions will be greatly appreciated. Please can someone help me by giving some input? Thanks for any help. Aditya

    Read the article

  • How do I prevent jagged edges alongside the surfaces of my 3d model?

    - by badcodenotreat
    Let's say I've implemented in OpenGL a crude model viewer with shading which renders a series of blocks, such that I have something that looks like this: http://i.imgur.com/TsF7K.jpg Whenever I rotate my model to the side, an unwanted jagged effect appears along any surface with a steep viewing angle: http://i.imgur.com/Bgl9o.jpg I'm pretty sure this is due to the polygon offset I used to prevent z-fighting between the model and the wireframe; however, I'm not able to find the factor/unit parameters for glPolygonOffset which prevent this unwanted effect. What are the best values of factor and unit for glPolygonOffset to prevent this? Would implementing anti-aliasing alleviate the problem? Is the trade-off in performance trivial or significant? Is this perhaps a shading issue? Should I try a solution along that line of thought?
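    For reference, a common setup (a sketch; the values are a conventional starting point, not a guarantee): offset the filled pass rather than the wireframe, and rely on the factor term, which scales with the polygon's depth slope and is exactly what matters on steep surfaces:

        /* Push the filled geometry slightly away from the eye, then draw
           the wireframe at the unmodified depth on top of it. */
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(1.0f, 1.0f);              /* factor, units */
        drawModel();                              /* hypothetical draw call */
        glDisable(GL_POLYGON_OFFSET_FILL);

        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
        drawModel();                              /* same geometry, as lines */
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);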

    Read the article

  • What are some techniques to create scrollable areas?

    - by Omega
    I'm getting started with OpenGL ES on Android and I'm looking to learn some techniques for having a game map larger than the visible area. I'm assuming I've somehow got to ensure that the system isn't rendering the entire scene, including what's outside of the visible area. I'm just not sure how I'd go about designing this! This is for simple 2D top-down tile-based rendering. No real 3D except what's inherent in OpenGL ES itself. Would anyone be able to get me started on the right path? Are there options that might scale nicely when I decide to start tilting my perspective and doing 3D?
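    For the 2D case the standard technique is tile culling: from the camera rectangle, compute the range of tile rows and columns it overlaps and submit only those. A minimal sketch (hypothetical names; clamping to the map bounds omitted for brevity):

        /* Draw only the tiles that overlap the camera rectangle. */
        int firstCol = (int)(camX / TILE_SIZE);
        int firstRow = (int)(camY / TILE_SIZE);
        int lastCol  = (int)((camX + viewW) / TILE_SIZE);
        int lastRow  = (int)((camY + viewH) / TILE_SIZE);

        for (int row = firstRow; row <= lastRow; ++row)
            for (int col = firstCol; col <= lastCol; ++col)
                drawTile(map, row, col);   /* one quad per visible tile */

    Batching the visible quads into a single vertex buffer per frame keeps the draw-call count down, and the same camera-rectangle test generalizes to frustum culling once the view tilts into 3D.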

    Read the article
