Search Results

Search found 16 results on 1 pages for 'quixoto'.

Page 1/1 | 1 

  • Popover with embedded navigation controller doesn't respect size on back nav

    - by quixoto
    I have a UIPopoverController hosting a UINavigationController, which contains a small hierarchy of view controllers. I followed the docs and for each view controller, I set the view's popover-context size like so: [self setContentSizeForViewInPopover:CGSizeMake(320, 500)]; (size different for each controller) This works as expected as I navigate forward in the hierarchy-- the popover automatically animates size changes to correspond to the pushed controller. However, when I navigate "Back" through the view stack via the navigation bar's Back button, the popover doesn't change size-- it remains as large as the deepest view reached. This seems broken to me; I'd expect the popover to respect the sizes that are set up as it pops through the view stack. Am I missing something? Thanks.

    Read the article

  • Smoothing touch-based animation in iPhone OpenGL?

    - by quixoto
    I know this is vague, but looking for general tips/help on this, as it's not an area of significant expertise for me. I have some iPhone code that's basically an EAGL view handling a single touch. The app draws (using GL) a circle via triangle fan at the touch point, and moves it when the user moves the touch point, and re-renders the view then. When dragging a finger slowly, the circle keeps up and consistent with the finger as it moves. If I scribble my finger quickly back and forth across the screen, the rendering doesn't keep up with the touch motion, so you see an optical illusion of "multiple" discrete circles on the screen "at once". (Normal persistence of vision illusion). This optical illusion is jarring. How can I make this look more natural? Can I blur the motion of the circle somehow? Is this result the evidence of some bad frame rate issue? I see this artifact even when nothing else is being rendered, so I think this might just be as fast as we can go. Any hints or suggestions? Much appreciated. Thanks.

    Read the article

  • Rendering UIImage/CGImage into CGPDFContext results in... blankness!

    - by quixoto
    Hi all, I'm trying to take an image that I have in a image object and render into a Core Graphics PDF context-- happens to be on an iPhone but this question surely applies equally to desktop Quartz. This UIImage is a simple color-on-white image at about 600x800 resolution. If I (say) turn it into a PNG file, that file looks exactly as expected-- so the data is OK. Here's what I'm doing to generate the PDF: NSMutableData * outputData = [[NSMutableData alloc] init]; CGDataConsumerRef dataConsumer = CGDataConsumerCreateWithCFData((CFMutableDataRef)outputData); CFMutableDictionaryRef attrDictionary = NULL; attrDictionary = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); CFDictionarySetValue(attrDictionary, kCGPDFContextTitle, @"My Awesome Document"); CGContextRef pdfContext = CGPDFContextCreate(dataConsumer, NULL, attrDictionary); CFRelease(dataConsumer); CFRelease(attrDictionary); CGImageRef pageImage = [myUIImage CGImage]; CGPDFContextBeginPage(pdfContext, NULL); CGContextDrawImage(pdfContext, CGRectMake(0, 0, [myUIImage size].width, [myUIImage size].height), pageImage); CGPDFContextEndPage(pdfContext); CGContextRelease(pdfContext); Resulting PDF, which ends up in outputData, seems like a valid PDF file (opens correctly, document title is present in metadata), but it consists of precisely one blank page. What am I doing wrong? Thanks.

    Read the article

  • iPhone OpenGL ES missing functions should be there - glBlendFuncSeparate etc

    - by quixoto
    I'm using OpenGL ES 1.1 on the iPhone, and I'd like to use the following functions: glBlendFuncSeparate glBlendColor With their related constants. These didn't exist in early iPhone GL implementations, but according to this page: http://developer.apple.com/iphone/library/releasenotes/General/iPhone30APIDiffs/index.html they should be there in 3.0+, which I'm building for. But I'm getting "implicit definition" warnings. What do I need to do to get those functions? Thanks!

    Read the article

  • iPad modal form sheet takes up the whole screen anyways

    - by quixoto
    I'm trying to create a form sheet modal on iPad, which should be a 540x620 modal view. I've created a view controller with a NIB file whose view is a 540x620 sized UIView (with stuff on it). I set the modal presentation style to UIModalPresentationFormSheet, and call presentModalViewController:animated: on the current view controller. My view slides in from the bottom, but instead of being a form sheet, it takes up the whole screen (my view elements are all anchored in the top left of the screen). Even stranger, when I dismiss it, all the UI that was "underneath" it, is all re-layed out to be in the center, in approximately a form sheet sized area in the center of the screen. Bizarro! Anyone have any suggestions as to what could cause this behavior? Thanks.

    Read the article

  • iPhone: understanding field crash reports: unrecognized selector?

    - by quixoto
    Hi all, A user of my app out in the field seems to having bad crash-at-app-start issues. I got him to send me the .crash files from his PC. After "symbolicating" them according to this article, I get what looks from the stack like a unrecognized selector fail. But the top line of code corresponding to my process is an unambiguous message send that gets executed hundreds of times without issue in my app normally. Needless to say, I never repro this issue myself. Can the crash report lie? Could this stack indicate anything besides unrecognized selector? Thanks for any insight. Exception Type: EXC_CRASH (SIGABRT) Exception Codes: 0x00000000, 0x00000000 Crashed Thread: 0 Thread 0 Crashed: 0 libSystem.B.dylib 0x000790a0 __kill + 8 1 libSystem.B.dylib 0x00079090 kill + 4 2 libSystem.B.dylib 0x00079082 raise + 10 3 libSystem.B.dylib 0x0008d20a abort + 50 4 libstdc++.6.dylib 0x00044a1c __gnu_cxx::__verbose_terminate_handler() + 376 5 libobjc.A.dylib 0x000057c4 _objc_terminate + 104 6 libstdc++.6.dylib 0x00042dee __cxxabiv1::__terminate(void (*)()) + 46 7 libstdc++.6.dylib 0x00042e42 std::terminate() + 10 8 libstdc++.6.dylib 0x00042f12 __cxa_throw + 78 9 libobjc.A.dylib 0x000046a4 objc_exception_throw + 64 10 CoreFoundation 0x00094174 -[NSObject doesNotRecognizeSelector:] + 108 11 CoreFoundation 0x00093afa ___forwarding___ + 482 12 CoreFoundation 0x000306c8 _CF_forwarding_prep_0 + 40 13 MyAppProcess 0x000147c6 -[ImageLoader imageSmallForColor:style:] (ImageLoader.m:180) .... /* many more frames... */

    Read the article

  • UIView transparency shows how the sausages are made!

    - by quixoto
    I have a UIView container that has two UIImageViews inside it, one partially obscuring the other (they're being composed like this to allow for occasional animation of one "layer" or another. Sometimes I want to make this container 50% alpha, so what the users sees fades. Here's the problem: setting my container view to 50% alpha makes all my subviews inherit this as well, and now you can see through the first subview into the second, which in my application has a weird X-Ray effect that I'm not looking for. What I'm after, of course, is for what the user currently sees to become 50% transparent-- the equivalent of flattening the visible view into one bitmap, and then making that 50% alpha. What are my best bets for accomplishing this? Ideally would like to avoid actually, dynamically flattening the views if I can help it, but best practices on that welcome as well. Am I missing something obvious? Since most views have subviews and would run into this issue, I feel like there's some obvious solution here. Thanks!

    Read the article

  • EAGLContext, EAGLSharegroups, RenderBuffers, FrameBuffers, oh my!

    - by quixoto
    Hi all, I'm trying to wrap my head around the OpenGL object model on iPhone OS. I'm currently rendering into a few different UIViews (build on CAEAGLayers) on the screen. I currently have each of these as using separate EAGLContext, each of which has a color renderbuffer and a framebuffer. I'm rendering similar things in them, and I'd like to share textures between these instances to save memory overhead. My current understanding is that I could use the same setup (some number of contexts, each with a FBO/RBO) but if I spawn the later ones using the EAGLShareGroup of the first one, then I can simply use the texture names (GLuints) from the first one in the later ones. Is this accurate? If this is the case, I guess the followup question is: what's the benefit to having it be a "sharegroup"? Could I just reuse the same context, and attach multiple FBOs/RBOs to that context? I think I'm struggling with the abstraction layer of a sharegroup, which seems to share "objects" (textures and other named things) but not "state" (matrices, enabled/disabled states) which are owned by the context. What's the best way to think of this? Thanks for any enlightenment!

    Read the article

  • iPhone platform: endianness (detection & swapping)

    - by quixoto
    Hi all, I'm doing some endian-sensitive file manipulation on iPhone. Are there standard macros or #defines in that environment that indicate native endianness and offer swapping if necessary? I know I can check in advance and just do the right thing for this particular architecture, but wondering if there are cleaner ways of doing the right thing. (The file format is little endian; if it were big-endian, I'd probably just use the htons/htonl family.) Thanks.

    Read the article

  • How to handle alpha in a manual "Overlay" blend operation?

    - by quixoto
    I'm playing with some manual (walk-the-pixels) image processing, and I'm recreating the standard "overlay" blend. I'm looking at the "Photoshop math" macros here: http://www.nathanm.com/photoshop-blending-math/ (See also here for more readable version of Overlay) Both source images are in fairly standard RGBA (8 bits each) format, as is the destination. When both images are fully opaque (alpha is 1.0), the result is blended correctly as expected: But if my "blend" layer (the top image) has transparency in it, I'm a little flummoxed as to how to factor that alpha into the blending equation correctly. I expect it to work such that transparent pixels in the blend layer have no effect on the result, opaque pixels in the blend layer do the overlay blend as normal, and semitransparent blend layer pixels have some scaled effect on the result. Can someone explain to me the blend equations or the concept behind doing this? Bonus points if you can help me do it such that the resulting image has correctly premultiplied alpha (which only comes into play for pixels that are not opaque in both layers, I think.) Thanks! // factor in blendLayerA, (1-blendLayerA) somehow? resultR = ChannelBlend_Overlay(baseLayerR, blendLayerR); resultG = ChannelBlend_Overlay(baseLayerG, blendLayerG); resultB = ChannelBlend_Overlay(baseLayerB, blendLayerB); resultA = 1.0; // also, what should this be??

    Read the article

  • What does setting the GL color before doing a texture mapping operation do?

    - by quixoto
    I am looking at some sample code in a book that creates a jittered antialiasing effect by repeatedly rendering a scene (at different offsets) onto a offscreen texture, then using that texture to repeatedly draw a quad in the main view with some blend stuff set up. To accumulate the color "correctly", the code is setting the color like so: glColor4f(f, f, f, 1); where f is 1.0/number_of_samples, and then binding the offscreen texture and rendering it. Since textures come with their own color and alpha data, what is the effect (mathematically and intuitively) that setting the overall "color" in advance achieves? Thanks.

    Read the article

  • Best way to do something when a runloop event is done processing?

    - by quixoto
    I have some processing in my Cocoa app that sometimes ends up calling through a hierarchy of data to do a bunch of work as the result of an event. Each small piece creates and destroys some resources. I don't want those resources around most of the time, but I would like to find a smart way of creating them before all the work and killing them at the end. Short of creating the resources up front and then passing them entirely down through the call hierarchy when work is done, is there a way to know locally in some code when an event loop run has ended? Then I could create them if they're not there, and keep them until the run loop ends, reusing them for any subsequent calls before that time. Thanks.

    Read the article

  • Creating a "permanent" Cocoa object

    - by quixoto
    I have an object factory that hands out instances of certain "constant" objects. I'd like these objects to be protected against bad memory management by clients. This is how I've overridden the class's key methods. Am I missing anything (code or other considerations)? - (id)retain { return self; } - (NSUInteger)retainCount { return UINT_MAX; } - (void)release { // nothing. }

    Read the article

  • How can I do something when a runloop event is done processing?

    - by quixoto
    I have some processing in my Cocoa app that sometimes ends up calling through a hierarchy of data to do a bunch of work as the result of an event. Each small piece creates and destroys some resources. I don't want those resources around most of the time, but I would like to find a smart way of creating them before all the work and killing them at the end. Short of making those buffers etc available globally from the "parent" or elsewhere, is there a way to know locally in some code when an event loop run has ended? Then I could create them if they're not there, and keep them until the run loop ends, reusing them for any subsequent calls before that time. EDIT: I'm not looking for suggestions on how to restructure my code, which I may do anyways. This issue just brought up the question for me of how to know when the runloop is done. If I were writing in, I dunno, Javascript, I'd use a setTimeout with zero to accomplish end-event cleanup. I suppose an NSTimer with an interval of zero might accomplish this too, but wondering if there's something cleaner. Thanks.

    Read the article

1