Search Results

Search found 2515 results on 101 pages for 'opengl es2'.


  • Design pattern for mouse interaction

    - by mike
    I need some opinions on the "ideal" design pattern for general mouse interaction. Here is the simplified problem: I have a small 3D program (Qt and OpenGL) and I use the mouse for interaction. An interaction is normally not a single function call; it is mostly performed by up to 3 function calls (initiate, perform, finalize). For example, for camera rotation the initiating call delivers the first mouse position, whereas the performing calls update the camera, etc. For only a couple of interactions, hardcoding these inside mousePressEvent, mouseReleaseEvent, mouseMoveEvent or wheelEvent is not a big deal, but for a more advanced program (e.g. 20 or more interactions) a proper design is needed. So how would you design such interactions in Qt? I hope I made my problem clear enough; otherwise don't hesitate to complain :-) Thanks
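
    One way to keep the event handlers small is to wrap each interaction's initiate/perform/finalize steps in its own object and let the widget forward events to whichever interaction is currently active. A minimal sketch (class and method names are just illustrative, not from any framework):

      #include <QMouseEvent>
      #include <QPoint>

      // Each interaction bundles its initiate/perform/finalize steps.
      class MouseInteraction {
      public:
          virtual ~MouseInteraction() = default;
          virtual void begin(const QPoint& pos) = 0;   // from mousePressEvent
          virtual void update(const QPoint& pos) = 0;  // from mouseMoveEvent
          virtual void end(const QPoint& pos) = 0;     // from mouseReleaseEvent
      };

      class CameraRotateInteraction : public MouseInteraction {
      public:
          void begin(const QPoint& pos) override  { last = pos; }
          void update(const QPoint& pos) override {
              QPoint delta = pos - last;   // feed the delta into the camera here
              last = pos;
              (void)delta;
          }
          void end(const QPoint&) override {}
      private:
          QPoint last;
      };

      // The GL widget's handlers then shrink to a fixed dispatch, e.g.:
      //   void mousePressEvent(QMouseEvent* e)   { if (active) active->begin(e->pos()); }
      //   void mouseMoveEvent(QMouseEvent* e)    { if (active) active->update(e->pos()); }
      //   void mouseReleaseEvent(QMouseEvent* e) { if (active) active->end(e->pos()); }
      // where `active` points at the interaction for the currently selected tool
      // (rotate, pan, zoom, pick, ...), so adding a 21st interaction means adding
      // one class, not another branch in the event handlers.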

    Read the article

  • How to make some simple GUI controls?

    - by daniels
    I need to make a DirectX or OpenGL app and I will need a custom GUI for it. I think a button, an input text box, a list box (which will need a scroll bar, as there will be more items than can fit on the screen) and a slider control will be enough. I know about the CEGUI framework, but I just don't like it; way too many XML files for my taste. My question is: where should I start in learning how to build these custom GUI controls? Are there any tutorials or other material that could get me started? I haven't written a GUI control myself before.
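
    For a sense of scale, the core of a home-grown control is usually just a rectangle with a hit test, a bit of state, and a draw call into whatever renderer you already have. A minimal sketch of a button (all names are illustrative):

      // Minimal immediate-style button: hit test + state + draw hook.
      struct Rect { float x, y, w, h; };

      struct Button {
          Rect bounds;
          bool hovered = false;
          bool pressed = false;

          bool contains(float mx, float my) const {
              return mx >= bounds.x && mx <= bounds.x + bounds.w &&
                     my >= bounds.y && my <= bounds.y + bounds.h;
          }

          // Call once per frame with the current mouse state;
          // returns true on the frame the button is clicked (released over it).
          bool onMouse(float mx, float my, bool mouseDown) {
              hovered = contains(mx, my);
              bool clicked = pressed && hovered && !mouseDown;
              pressed = mouseDown && hovered;
              return clicked;
          }

          void draw() const {
              // Render `bounds` as a quad with your DirectX/OpenGL code,
              // picking a different color or texture when hovered/pressed,
              // then draw the caption text on top.
          }
      };

    A list box is roughly the same idea plus a clipping rectangle and a scroll offset driven by a slider; the text box is the control that needs real extra work (caret, selection, keyboard focus).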

    Read the article

  • Developing and deploying games for Windows, Mac (& Linux)

    - by nornagon
    I want to write games that run on all the major platforms. I also want people to be able to play them by downloading a file and double-clicking it. That means a single .exe/.app file. I'm happy to use OpenGL directly for graphics. What I don't know how to do is show a window, handle mouse/keyboard input and play sounds in a cross-platform manner. I don't really mind what the underlying language is, as long as it isn't C++ or Java. C#, Ruby or Python would be preferable, in that order :) Please, SO, save me from having to write Flash games!

    Read the article

  • How to color a mesh with values at the vertices in WPF 3D?

    - by Christo
    We've got a sphere which we want to display in 3D and color according to a function that depends on spherical coordinates. The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface. The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D. With the second triangulation method it isn't apparent how to go about generating the texture and setting the texture coordinates, since the vertices aren't aligned to a grid anymore. Maybe it would be possible to create a linear texture containing a color for each vertex, but then how will that affect the coloring? Will it still render smoothly over the triangle surfaces, as one would expect from per-vertex coloring?
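
    For reference, the per-vertex coloring the question alludes to looks like this in legacy fixed-function OpenGL: each vertex carries its own color and the rasterizer interpolates it across the triangle (a sketch, not WPF code):

      #include <GL/gl.h>

      // One triangle whose corners carry different colors; the hardware
      // interpolates the colors smoothly across the face (Gouraud shading).
      void drawColoredTriangle() {
          glBegin(GL_TRIANGLES);
              glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
              glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
              glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
          glEnd();
      }

    Texture coordinates are interpolated per fragment in the same way, so a lookup texture addressed by a per-vertex texture coordinate should still shade smoothly across each triangle.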

    Read the article

  • UITextField showing trash instead of characters

    - by krasnyk
    There's quite a strange thing happening to the text fields in the application I'm developing (see image below).

      [1]: http://img7.imageshack.us/img7/1449/zrzutekranu20100506godz.png

    At some point in the application I'm using

      - (void)viewWillAppear:(BOOL)animated {
          [super viewWillAppear:animated];
          [[(TextFieldView*)[self view] usernameTextField] becomeFirstResponder];
      }

    which results in the above image. If it's called in viewDidAppear, everything is fine. The funny thing about this error is that it breaks ALL text fields throughout the application. Has anyone ever encountered such an error? Might it be related to OpenGL use?

    Read the article

  • UITableView cellForRowAtIndexPath occasionally not called?

    - by Tobster
    I'm developing a graphing application whose main navigation/tab view displays a UIView that renders a graph using OpenGL. Beneath that view is a UITableView that displays the list of elements on the graph, and an add button. Occasionally, when the user clicks on another tab and then returns to the graph view tab, the table view does not get redrawn. I have [tableView reloadData] being called in the navigation controller's (also the table view's delegate and data source) viewDidAppear method. numberOfSectionsInTableView and numberOfRowsInSection get called, but cellForRowAtIndexPath does not, even though both of the latter methods return positive values. This is an intermittent problem, only happening some of the time, but it's not clear what (if anything) influences it. Does anyone have any ideas?

    Read the article

  • How to "flick" a UIImageView?

    - by Meltemi
    I've got some UIViews that I'd like the user to be able to "flick" across the screen. They're not scroll views; they simply contain a raster image (PNG). Can anyone point me to some sample code, etc. to help get me started? Something a little more heavyweight than "MoveMe" that helps detect a "flick" (vs. a "nudge" or a drag and drop) and then carries the view off in the direction of the "flick"? OpenGL is probably overkill. If possible I'd like to stay within the realm of Core Graphics/Animation.

    Read the article

  • DirectX text at (x,y,z)

    - by bobobobo
    In OpenGL, you can actually draw text with an XYZ position, and it will appear at that location, but at a fixed size. If anyone's played MechWarrior 2, they used it there for nav points. The text had a 3D position, but it always appeared at a fixed size. The nav point was actually a bit of text at that exact point in space. Other than that, the ability to place 3D text was pretty much useless... you'd always want text to be 2D, right? I'm finally in a position where I want this feature. I have these points in space that I need to attach text information to, i.e. I need to draw text at a fixed size but with a 3D position. Can this be done in DirectX?
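
    The usual approach is to project the 3D anchor point to screen coordinates yourself and then draw ordinary 2D text there; in D3D9 the D3DX helper D3DXVec3Project performs this transform, and ID3DXFont can draw the label. An API-neutral sketch of the math (the matrix layout and names are assumptions):

      // Project a 3D point through a combined world*view*projection matrix
      // (row-major, row-vector convention as in Direct3D) to 2D screen
      // coordinates, so fixed-size 2D text can be drawn at that spot.
      struct Vec3 { float x, y, z; };

      bool projectToScreen(const Vec3& p, const float m[16],
                           float viewportW, float viewportH,
                           float& screenX, float& screenY)
      {
          float cx = p.x * m[0] + p.y * m[4] + p.z * m[8]  + m[12];
          float cy = p.x * m[1] + p.y * m[5] + p.z * m[9]  + m[13];
          float cw = p.x * m[3] + p.y * m[7] + p.z * m[11] + m[15];
          if (cw <= 0.0f) return false;      // behind the camera; skip the label

          float ndcX = cx / cw;              // perspective divide
          float ndcY = cy / cw;
          screenX = (ndcX + 1.0f) * 0.5f * viewportW;   // viewport transform
          screenY = (1.0f - ndcY) * 0.5f * viewportH;   // y is flipped on screen
          return true;
      }

    Because the text itself is rendered in screen space, it keeps a constant pixel size no matter how far away the point is, which is the MechWarrior-style nav-point effect.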

    Read the article

  • What would be a good starting point for development of a 3D application for representation of struct

    - by Lela Dax
    I was thinking Qt on OpenGL. Being multiplatform and having the ability to close the source (at no cost) at a later point would be important. But I'm very interested in finding a way that is not only viable but also involves the least reinvention of the wheel, e.g. "Why not Ogre? A ready-made, powerful 3D engine without reinventing that part." I'm very uncertain about what the optimal collection of tools for the job is.

    Read the article

  • Compile Qt Project To Run On A Linux System

    - by ForgiveMeI'mAN00b
    I have a Qt project. It uses the cross-platform libraries SDL, OpenGL and FLTK. I want to be able to compile the project so that it can run on a Linux computer. Looking at a bunch of articles, I have seen two ways to do this so far: use a cross compiler, which seems to me a rather complicated thing to set up and compile with, or simply compile the project on a Linux computer with the Linux version of Qt Creator/SDK. My question is: if I have a Qt project that uses only cross-platform libraries, is creating a Windows version as easy as compiling it in Qt on Windows, and creating the Linux version as easy as doing it in Qt on Linux? PS: Please don't ask/complain about why I didn't just try it myself; I don't have any Linux OS installed on my computer right now, and I don't want to go to the trouble of installing a whole new OS just to have it not work in the end.

    Read the article

  • Is there a way to make an executable from an Xcode project?

    - by Questor
    Hello, all! I'm afraid this is quite a naive question; excuse me if it's foolish. I have made an iPhone game using Cocos2d, Box2d and OpenGL. I want to show the game to a potential employer for demonstration purposes, without giving him the source code. How can I make a .exe or .app file from the Xcode project? I've searched online a lot but couldn't find a relevant answer. Thank you very much in advance.

    Read the article

  • Android: Touch seriously slowing my application

    - by Jason Rogers
    Hi all, I've been racking my brains on this one for a while. When I'm running my application (an OpenGL game) everything goes fine, but when I touch the screen my application slows down quite seriously (not noticeable on powerful phones like the Nexus One, but on the HTC Magic it gets quite annoying). I did a trace and found out that the touch events seem to be handled in a different thread, and even if it doesn't take much processing time, I think Android's ability to switch between threads is not so good... What is the best way to handle touch when speed is an issue? Currently I'm using, in the GLSurfaceView:

      @Override
      public boolean onTouchEvent(MotionEvent event) {
          GameHandler.onTouchEvent(event);
          return true;
      }

    Any ideas are welcome.

    Read the article

  • Erase part of RenderTarget when drawing primitives?

    - by user1495173
    I'm creating a paint-like application using XNA. I have a render target which acts as a canvas. When the user draws something, I draw the corresponding triangles using DrawUserPrimitives and triangle strips to make lines and other curves. I want to implement an eraser in the application, so that the user can erase the triangles from the texture. I've used OpenGL in the past, and there I would just use a blend function like so: glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA); How would I do this in XNA? I tried setting the GraphicsDevice blend mode to AlphaBlend, Additive, etc., but it did not work. Any ideas? Thanks!

    Read the article

  • Designing entire webpages as SVG files

    - by user1311390
    Disclaimer: I realize that, given the absurdity of the title, this sounds like a troll. However, it's a genuine question. My background involves OpenGL / x86 assembly. I've recently started learning web programming. I really like SVG + CSS, and was wondering: why do people not design entire webpages in SVG?

    Context:
      - SVG provides beautiful primitives: quadratic + cubic Bezier curves, lines + filling -- all as vector graphics
      - SVG provides text
      - SVG provides affine transformations

    Questions:
      - Are there examples of people designing entire websites as a giant SVG file?
      - If not, what are the limitations?
      - Are there performance hits when using SVG primitives as opposed to divs/tables?

    Read the article

  • When does code bloat start having a noticeable effect on performance?

    - by Kyle
    I am looking to make a hefty shift towards templates in one of my OpenGL projects, mainly for fun and the learning experience. I plan on watching the size of the executable carefully as I do this, to see just how much of the notorious bloat happens. Currently, the size of my Release build is around 580 KB when I favor speed and 440 KB when I favor size. Yes, it's a tiny project, and in fact even if my executable bloats to 10x its size, it's still going to be 5 MB or so, which hardly seems large by today's standards... or is it? This brings me to my question. Is speed proportional to size, or are there leaps and plateaus at certain thresholds, thresholds which I should be aiming to stay below? (And if so, what are the thresholds specifically?)

    Read the article

  • Tips for beginning 3D Application Development

    - by Moon .
    Hey guys, I want to create an application that has a 3D display. I want all the planets in it. The next step is that I want to have satellites in it as well, and I want to provide an interface for adding satellites, etc. Of course this will involve 3D design, and it will be more like a game. I want to know what things I need to know to make this... you know, give me a list: 3D modeling, OpenGL, etc.

    Read the article

  • Drawing with GDI+ under IIS

    - by Zac
    I'm running a web application under IIS that we use to draw graphs that are sent to the clients. We were previously running under IIS 6; while migrating to Server 2008 (IIS 7) we have encountered some very weird issues with the graphing. I stumbled across the MSDN docs for GDI+ stating that "GDI+ functions and classes are not supported for use within a Windows service." I suspect that my issues are probably related to further isolation of services: http://msdn.microsoft.com/en-us/library/ms533798%28VS.85%29.aspx My question is: how the heck are we supposed to draw graphics? Raw GDI? OpenGL - but doesn't that still require a DC?

    Read the article

  • Looking for a mobile platform to view vector data and use it like a simple map

    - by Orchestrator
    I would like to develop or use an existing platform that will allow me to view custom vector data and use it as a map on mobile phones such as Android/iPhone (maybe even WP7). I'm hoping that there's already a good infrastructure for what I need, so I would not need to develop a whole infrastructure by myself. In conclusion: is there any existing platform that may answer my needs? If not, how would you guys suggest I should begin? How should I save my vector data? How could I read it? Should I view it with a graphics engine like OpenGL? Is there any chance this solution could be cross-platform? I know it's possible since it has already been done by apps like Waze, and it works the same on iOS and Android. Thanks!

    Read the article

  • Normalized Device Coordinates to window coordinates

    - by okoman
    I just read some stuff about the theory behind 3D graphics. As I understand it, normalized device coordinates (NDC) describe a point in the interval from -1 to 1 on both the horizontal and vertical axes. Window coordinates, on the other hand, describe a point somewhere between (0,0) and (width,height) of the window. So my formula to convert a point from NDC to window coordinates would be

      xwin = width + xndc * 0.5 * width
      ywin = height + yndc * 0.5 * height

    The problem is that in the OpenGL documentation for glViewport there is another formula:

      xwin = ( xndc + 1 ) * width * 0.5 + x
      ywin = ( yndc + 1 ) * height * 0.5 + y

    Now I'm wondering what I am getting wrong. In particular, I'm wondering what the additional "x" and "y" mean. I hope the question isn't too "not programming related", but I thought it is somehow related to graphics programming.
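
    The x and y in the documentation's formula are the viewport origin, i.e. the first two arguments of glViewport(x, y, width, height) (the lower-left corner of the viewport in window coordinates), so for a viewport covering the whole window they are simply 0. A small sketch of that transform (names are illustrative):

      // NDC -> window coordinates exactly as specified for glViewport(x, y, w, h).
      struct WindowCoord { float x, y; };

      WindowCoord ndcToWindow(float ndcX, float ndcY,
                              float vpX, float vpY, float vpW, float vpH)
      {
          WindowCoord out;
          out.x = (ndcX + 1.0f) * 0.5f * vpW + vpX;
          out.y = (ndcY + 1.0f) * 0.5f * vpH + vpY;
          return out;
      }
      // e.g. ndcToWindow(-1.0f, -1.0f, 0, 0, 800, 600) gives (0, 0)
      //      ndcToWindow( 1.0f,  1.0f, 0, 0, 800, 600) gives (800, 600)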

    Read the article

  • iPhone: Primitives getters and setters

    - by Burf2000
    I feel a bit miffed at the moment. I've done a few iPhone projects that use floats and ints, etc., and all was fine. I'm now using OpenGL and GLfloat[] C arrays, and it seems that unless I write methods to set/get them it crashes on the device (not the simulator). Now, as these are not set up as properties (I don't think C arrays can be), that kind of makes sense. However, the project had been working for months without them. It seems something in the code is wiping out any floats/ints, to the point that the debugger can see an assigned value but accessing it crashes the phone. As soon as I think I know something about this platform, something changes my mind, lol.

    Read the article

  • Android: simulating a 1-bit display

    - by user1681805
    I'm new to Android, trying to build a simple game which uses a 1-bit black-and-white display. The screen dimension is 160 * 80, that is 12800 pixels. I created a byte array for the "VRAM", so each time it draws, it first checks the array. The thing is that I am not drawing a point or rectangle for each pixel; I'm using 2 bitmaps (ARGB_4444, I have to use the alpha channel because of a shadow effect), 1 for positive and 1 for negative. So I call drawBitmap() 12800 times in the SurfaceView's draw method. I know that's silly... But even with OpenGL, 12800 quads won't be that fast, right? Sorry, I cannot post images; the link to the screenshot: http://i1014.photobucket.com/albums/af267/baininja/Screenshot_2013-10-22-01-05-36_zps91dbcdef.png Should I totally give up on this and draw points on a 160*80 bitmap, then scale it to the intended size? But that loses the visual effects.

    Read the article

  • Why do we need a normalized coordinate system?

    - by jcyang
    Hi, I have a problem understanding the following sentences in my textbook, Computer Graphics with OpenGL: "To make the viewing process independent of the requirements of any output device, graphics systems convert object descriptions to normalized coordinates and apply the clipping routines." Why do normalized coordinates make the viewing process independent of the requirements of any output device? Aren't the projection coordinates already independent of the output device? We only need to first scale and then translate the projection coordinates to get device coordinates. So why do we need to convert the projection coordinates to normalized coordinates first? "Clipping is usually performed in normalized coordinates. This allows us to reduce computations by first concatenating the various transformation matrices." Why is clipping usually performed in normalized coordinates? What kind of transformations are concatenated? Thanks, jcyang.
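
    One way to see the textbook's point: the clipping test is written once against the fixed [-1, 1] box and never changes, while only the final mapping to pixels depends on the device. A tiny sketch (the helper names are made up):

      // Clip against the fixed NDC box once; only the last step knows the device.
      struct Point2 { float x, y; };

      bool insideNdc(const Point2& p) {                // identical for every device
          return p.x >= -1.0f && p.x <= 1.0f &&
                 p.y >= -1.0f && p.y <= 1.0f;
      }

      Point2 ndcToDevice(const Point2& p, float width, float height) {
          return { (p.x + 1.0f) * 0.5f * width,        // only this mapping depends
                   (p.y + 1.0f) * 0.5f * height };     // on the output device
      }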

    Read the article

  • Creating a Linux Desktop Environment

    - by Alon
    Suppose I want to create my own desktop environment for Linux, without X, like Google did with Android. Where do I start? Is it actually a normal application that just draws stuff and starts after the kernel boots? And how does it draw? Using OpenGL, or is there something more generic? And what about graphics drivers: do you have to develop custom graphics drivers for your desktop, or do they come with the Linux kernel? Note: it's for normal PCs, not embedded devices. Thanks.
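
    To make the "normal application that just draws stuff" idea concrete, one of the simplest routes on a stock kernel is the legacy framebuffer device; modern compositors instead talk to the GPU through DRM/KMS (with OpenGL or Vulkan on top), but the shape is the same: open a device the kernel provides and draw into it. A rough sketch (error handling omitted):

      // Paint the whole screen gray by writing straight to /dev/fb0 (no X).
      #include <fcntl.h>
      #include <linux/fb.h>
      #include <stdint.h>
      #include <sys/ioctl.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main() {
          int fd = open("/dev/fb0", O_RDWR);
          struct fb_fix_screeninfo finfo;
          ioctl(fd, FBIOGET_FSCREENINFO, &finfo);       // size of framebuffer memory

          uint8_t* fb = (uint8_t*)mmap(0, finfo.smem_len,
                                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          for (unsigned int i = 0; i < finfo.smem_len; ++i)
              fb[i] = 0x80;                             // solid mid-gray fill

          munmap(fb, finfo.smem_len);
          close(fd);
          return 0;
      }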

    Read the article

  • Iterators over a Linked List in a Game in Java

    - by Matthew
    I am using OpenGL on Android, and it has a callback method called draw that gets called outside my control (as fast as the device can handle, if I am not mistaken). I have a list of "GameObjects" that have a .draw method and a .update method, and two different threads that handle each of those. So, the question is: can I declare two different iterators in two different methods in two different threads that iterate over the same linked list? If so, do I simply declare ListIterator<GameObject> l = objets.listIterator() each time I want a new iterator, and it won't interfere with other iterators?

    Read the article

  • Pixel-Perfect Collision Detection in Cocos2d-x

    - by Happybirthday
    I am trying to port the pixel-perfect collision detection to Cocos2d-x. The original version was made for Cocos2D and can be found here: http://www.cocos2d-iphone.org/forums/topic/pixel-perfect-collision-detection-using-color-blending/ Here is my code for the Cocos2d-x version:

      bool CollisionDetection::areTheSpritesColliding(cocos2d::CCSprite *spr1, cocos2d::CCSprite *spr2, bool pp, CCRenderTexture* _rt)
      {
          bool isColliding = false;
          CCRect intersection;
          CCRect r1 = spr1->boundingBox();
          CCRect r2 = spr2->boundingBox();
          intersection = CCRectMake(fmax(r1.getMinX(), r2.getMinX()), fmax(r1.getMinY(), r2.getMinY()), 0, 0);
          intersection.size.width = fmin(r1.getMaxX(), r2.getMaxX() - intersection.getMinX());
          intersection.size.height = fmin(r1.getMaxY(), r2.getMaxY() - intersection.getMinY());

          // Look for simple bounding box collision
          if ((intersection.size.width > 0) && (intersection.size.height > 0))
          {
              // If we're not checking for pixel perfect collisions, return true
              if (!pp) { return true; }

              unsigned int x = intersection.origin.x;
              unsigned int y = intersection.origin.y;
              unsigned int w = intersection.size.width;
              unsigned int h = intersection.size.height;
              unsigned int numPixels = w * h;
              //CCLog("Intersection X and Y %d, %d", x, y);
              //CCLog("Number of pixels %d", numPixels);

              // Draw into the RenderTexture
              _rt->beginWithClear(0, 0, 0, 0);

              // Render both sprites: first one in RED and second one in GREEN
              glColorMask(1, 0, 0, 1);
              spr1->visit();
              glColorMask(0, 1, 0, 1);
              spr2->visit();
              glColorMask(1, 1, 1, 1);

              // Get color values of intersection area
              ccColor4B *buffer = (ccColor4B *)malloc(sizeof(ccColor4B) * numPixels);
              glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

              _rt->end();

              // Read buffer
              unsigned int step = 1;
              for (unsigned int i = 0; i < numPixels; i += step)
              {
                  ccColor4B color = buffer[i];
                  if (color.r > 0 && color.g > 0)
                  {
                      isColliding = true;
                      break;
                  }
              }

              // Free buffer memory
              free(buffer);
          }
          return isColliding;
      }

    My code works perfectly if I pass the "pp" parameter as false, that is, if I do only a bounding box collision, but I am not able to get it working correctly for the case when I need pixel-perfect collision. I think the OpenGL masking code is not working as I intended. Here is the code for "_rt":

      _rt = CCRenderTexture::create(visibleSize.width, visibleSize.height);
      _rt->setPosition(ccp(origin.x + visibleSize.width * 0.5f, origin.y + visibleSize.height * 0.5f));
      this->addChild(_rt, 1000000);
      _rt->setVisible(true); //For testing

    I think I am making a mistake with the implementation of this CCRenderTexture. Can anyone guide me as to what I am doing wrong? Thank you for your time :)

    Read the article
