Search Results

Search found 991 results on 40 pages for 'quartz scheduler'.

  • Rendering UIImage/CGImage into CGPDFContext results in... blankness!

    - by quixoto
    Hi all, I'm trying to take an image that I have in an image object and render it into a Core Graphics PDF context -- this happens to be on an iPhone, but the question surely applies equally to desktop Quartz. The UIImage is a simple color-on-white image at about 600x800 resolution. If I (say) turn it into a PNG file, that file looks exactly as expected -- so the data is OK. Here's what I'm doing to generate the PDF:

        NSMutableData *outputData = [[NSMutableData alloc] init];
        CGDataConsumerRef dataConsumer = CGDataConsumerCreateWithCFData((CFMutableDataRef)outputData);
        CFMutableDictionaryRef attrDictionary = NULL;
        attrDictionary = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(attrDictionary, kCGPDFContextTitle, @"My Awesome Document");
        CGContextRef pdfContext = CGPDFContextCreate(dataConsumer, NULL, attrDictionary);
        CFRelease(dataConsumer);
        CFRelease(attrDictionary);

        CGImageRef pageImage = [myUIImage CGImage];
        CGPDFContextBeginPage(pdfContext, NULL);
        CGContextDrawImage(pdfContext, CGRectMake(0, 0, [myUIImage size].width, [myUIImage size].height), pageImage);
        CGPDFContextEndPage(pdfContext);
        CGContextRelease(pdfContext);

    The resulting PDF, which ends up in outputData, seems like a valid PDF file (it opens correctly, and the document title is present in the metadata), but it consists of precisely one blank page. What am I doing wrong? Thanks.
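
    One thing worth noting (an observation on the snippet, not a confirmed fix): the code never calls CGPDFContextClose before releasing the context, and Apple's documentation says pending PDF data is written out when the context is closed. A minimal finalization step would be:

        CGPDFContextEndPage(pdfContext);
        CGPDFContextClose(pdfContext);   // flush any pending PDF data to the consumer
        CGContextRelease(pdfContext);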

  • How to set up a user Quartz2D coordinate system with scaling that avoids fuzzy drawing?

    - by jdmuys
    This topic has been scratched once or twice, but I am still puzzled, and Google was not friendly either. Since Quartz allows for arbitrary coordinate systems using affine transforms, I want to be able to draw things such as floor plans using real-life coordinates, e.g. feet. So basically, for the sake of an example, I want to scale the view so that when I draw a 10x10 rectangle (think a 1-inch box, for example), I get a 60x60-pixel rectangle. It works, except the rectangle I get is quite fuzzy. Another question here got an answer that explains why; however, I'm not sure I understood that reason, and moreover, I don't know how to fix it. Here is my code. I set up my coordinate system in my custom view's awakeFromNib method:

        - (void)awakeFromNib
        {
            CGAffineTransform scale = CGAffineTransformMakeScale(6.0, 6.0);
            self.transform = scale;
        }

    And here is my draw routine:

        - (void)drawRect:(CGRect)rect
        {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGRect r = CGRectMake(10., 10., 11., 11.);
            CGFloat lineWidth = 1.0;
            CGContextStrokeRectWithWidth(context, r, lineWidth);
        }

    The square I get is scaled just fine, but totally fuzzy. Playing with lineWidth doesn't help: when lineWidth is set smaller, it gets lighter, but not crisper. So is there a way to set up a view to have a scaled coordinate system, so that I can use my domain coordinates? Or should I go back to implementing scaling in my drawing routines? Note that this issue doesn't occur for translation or rotation. Thanks
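
    A likely culprit (an assumption, not a confirmed answer): setting self.transform scales the view's already-rasterized backing store, which blurs it. Applying the scale to the context inside drawRect: instead transforms the geometry before rasterization, so strokes stay crisp. A minimal sketch:

        - (void)drawRect:(CGRect)rect
        {
            CGContextRef context = UIGraphicsGetCurrentContext();
            // Scale the coordinate system, not the finished bitmap.
            CGContextScaleCTM(context, 6.0, 6.0);
            // Line width is in user-space units, so 1/6 gives a one-pixel stroke.
            CGContextStrokeRectWithWidth(context, CGRectMake(10., 10., 11., 11.), 1.0 / 6.0);
        }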

  • Can't block capslock with CGEventTap

    - by Thor Frølich
    I'm using a Quartz CGEventTap in an attempt to globally intercept capslock presses and block them (to have them do something useful instead). I successfully detect capslock presses but have so far been unable to block them. My code (originating from this stackoverflow answer) is something like this:

        eventTap = CGEventTapCreate(kCGHIDEventTap, kCGTailAppendEventTap, kCGEventTapOptionDefault,
                                    eventMask, myCGEventCallback, &oldFlags);
        runLoopSource = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, eventTap, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource, kCFRunLoopCommonModes);
        CGEventTapEnable(eventTap, true);

        CGEventRef myCGEventCallback(CGEventTapProxy proxy, CGEventType type, CGEventRef theEvent, void *refcon)
        {
            CGEventFlags *oldFlags = (CGEventFlags *)refcon;
            switch (type) {
                case kCGEventFlagsChanged: {
                    CGEventFlags newFlags = CGEventGetFlags(theEvent);
                    CGEventFlags changedFlags = *oldFlags ^ newFlags;
                    *oldFlags = newFlags;
                    if (changedFlags == 65536) {
                        NSLog(@"Capslock pressed. Let's not return the event");
                        return NULL;
                    }
                    break;
                }
                default:
                    break;
            }
            NSLog(@"Different modifier than capslock. Returning the event");
            return theEvent;
        }

    If I understand correctly, returning NULL should effectively block the keypress from propagating. Indeed it does for "normal" key-up and key-down events; however, capslock toggles regardless. Any ideas why that is? Am I making incorrect assumptions? And/or how can I do things differently to achieve my goal? Thanks, Thor
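
    A small aside on the magic number (this doesn't by itself explain the toggling): 65536 is 0x00010000, which is the named constant kCGEventFlagMaskAlphaShift. Testing the bit explicitly also copes with several flags changing at once:

        // kCGEventFlagMaskAlphaShift is the caps-lock modifier bit (0x00010000).
        if (changedFlags & kCGEventFlagMaskAlphaShift) {
            return NULL; // swallow the event
        }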

  • What is the best approach to 2D collision detection on the iPhone?

    - by Magic Bullet Dave
    I've been working on this problem of collision detection, and there appear to be three main approaches I could take:

    1. Sprite and mask approach: AND the overlap of the sprites and check for a non-zero number in the resulting sprite pixel data.
    2. Bounding circles, rectangles or polygons: create one or more shapes that enclose the sprites and do the basic maths to check for overlaps.
    3. Use an existing sprite library.

    The first approach, even though it would have been the way I would have done it in the old days of 16x16 sprite blocks, seems impractical because there just isn't an easy way of getting at the individual image pixel data and/or alpha channel within Quartz (or OpenGL, for that matter). Detecting the overlap of the bounding box is easy, but creating a third image from the overlap and then testing it for pixels is complicated, and my gut feeling is that even if we could get it to work it would be slow. Am I missing something neat here?

    The second approach involves dividing up our sprites into several polygons and testing them for overlaps. The more polygons, the more accurate the collision detection. The benefit is that it is fast and can be accurate. The downside is that it makes sprite creation more complicated, i.e. we have to create the polygons for each sprite. For speed the best approach is to create a tree of polygons.

    The third approach I'm not sure about, as it involves buying code (or using an open source licence). I am not sure what the best library to use is, or whether this would make life easier or give us a problem integrating it into our app.

    So in short, I am favouring the polygon and tree approach, and would appreciate your views on this before I go and write lots of code. Best regards, Dave
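
    For reference, a minimal sketch of the bounding-circle test from option 2 (names are illustrative). Comparing squared distances avoids a square root per check:

        // Two circles overlap when the squared distance between their centers
        // is less than the square of the sum of their radii.
        static BOOL circlesOverlap(CGPoint c1, CGFloat r1, CGPoint c2, CGFloat r2)
        {
            CGFloat dx = c2.x - c1.x;
            CGFloat dy = c2.y - c1.y;
            CGFloat r = r1 + r2;
            return (dx * dx + dy * dy) < (r * r);
        }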

  • Why do my iPhone device and simulator have different screen resolutions?

    - by happyzone8
    I use an iPod touch 4G as my device, and the 4.2 simulator. As an example, I will just draw a rectangle using Quartz 2D:

        - (void)drawRect:(CGRect)rect
        {
            // Get a graphics context, saving its state
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSaveGState(context);

            // Reset the transformation
            CGAffineTransform t0 = CGContextGetCTM(context);
            t0 = CGAffineTransformInvert(t0);
            CGContextConcatCTM(context, t0);

            // Draw a green rectangle
            CGContextBeginPath(context);
            CGContextSetRGBFillColor(context, 0, 1, 0, 1);
            CGContextAddRect(context, CGRectMake(0, 0, 320, 480));
            CGContextClosePath(context);
            CGContextDrawPath(context, kCGPathFill);

            CGContextRestoreGState(context);
        }

    When I run it in the simulator, the whole screen becomes green. When I run it on my device, only a quarter of the screen becomes green; to make the whole screen green on the device I have to draw a larger rectangle:

        CGContextAddRect(context, CGRectMake(0, 0, 640, 960));

    It seems like my device has twice the resolution of the simulator. How can I fix this?
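
    The numbers strongly suggest a Retina display effect (an explanation, not taken from the thread): the iPod touch 4G has a 640x960-pixel screen but a 320x480-point coordinate system, and UIKit supplies a context whose CTM already contains the 2x point-to-pixel scale. Inverting the CTM throws that scale away, so the drawing lands in raw pixels. A sketch that draws in points and leaves the CTM alone:

        - (void)drawRect:(CGRect)rect
        {
            CGContextRef context = UIGraphicsGetCurrentContext();
            // Work in points; the view's contentScaleFactor maps them to pixels.
            CGContextSetRGBFillColor(context, 0, 1, 0, 1);
            CGContextFillRect(context, self.bounds);
        }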

  • Troubleshooting High-CPU Utilization for SQL Server

    - by Susantha Bathige
    The objective of this FAQ is to outline the basic steps in troubleshooting high CPU utilization on a server hosting a SQL Server instance.

    The first and most common step, if you suspect high CPU utilization (or are alerted about it), is to log in to the physical server and check the Windows Task Manager. The Performance tab will show the high utilization. Next, we need to determine which process is responsible for the high CPU consumption; the Processes tab of the Task Manager will show this information. Note that to see all processes you should select "Show processes from all users". In this case, SQL Server (sqlservr.exe) is consuming 99% of the CPU (a normal benchmark for maximum CPU utilization is about 50-60%).

    Next we examine the scheduler data. The scheduler is a component of SQLOS which evenly distributes load amongst CPUs. The query below returns the important columns for CPU troubleshooting. (Note: if your server is under severe stress and you are unable to log in with SSMS, you can use another machine's SSMS to connect to the server through the DAC - Dedicated Administrator Connection. See http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using the DAC.)

        SELECT scheduler_id
              ,cpu_id
              ,status
              ,runnable_tasks_count
              ,active_workers_count
              ,current_tasks_count
              ,load_factor
              ,yield_count
        FROM sys.dm_os_schedulers
        WHERE scheduler_id < 1048576   -- regular (user) schedulers only

    See below for the BOL definitions of the above columns.

    scheduler_id - ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Schedulers with IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler.
    cpu_id - ID of the CPU with which this scheduler is associated.
    status - Indicates the status of the scheduler.
    runnable_tasks_count - Number of workers, with tasks assigned to them, that are waiting to be scheduled on the runnable queue.
    active_workers_count - Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended.
    current_tasks_count - Number of current tasks that are associated with this scheduler.
    load_factor - Internal value that indicates the perceived load on this scheduler.
    yield_count - Internal value that is used to indicate progress on this scheduler.

    Now to interpret the above data. There are four schedulers, each assigned to a different CPU. All the CPUs are ready to accept user queries, as they are all ONLINE. There are 294 active tasks in the output as per the current_tasks_count column; this count indicates how many activities are currently associated with the schedulers, and when a task is complete the number is decremented. 294 is quite a high figure and indicates that all four schedulers are extremely busy. When a task is enqueued, the load_factor value is incremented. This value is used to determine whether a new task should be put on this scheduler or another scheduler; SQLOS allocates the new task to the less loaded scheduler. The very high value of this column indicates that all the schedulers have a high load. There are 268 runnable tasks, which means all these tasks have been assigned a worker and are waiting to be scheduled on the runnable queue.

    The next step is to identify which queries are demanding a lot of CPU time. The query below is useful for this purpose (note that, in its current form, it only shows the top 10 records):

        SELECT TOP 10 st.text
              ,st.dbid
              ,st.objectid
              ,qs.total_worker_time
              ,qs.last_worker_time
              ,qp.query_plan
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
        ORDER BY qs.total_worker_time DESC

    This query uses total_worker_time as the measure of CPU load, and sorts in descending order of total_worker_time to show the most expensive queries and their plans at the top. Note the BOL definitions for the important columns:

    total_worker_time - Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled.
    last_worker_time - CPU time, in microseconds, that was consumed the last time the plan was executed.

    I re-ran the same query after a few seconds and got the output below. This time the SP dbo.TestProc1 appeared in fourth place, and once again its last_worker_time was the highest. This means the procedure TestProc1 consumes significant CPU time every time it executes.

    In this case, the primary cause of the high CPU utilization was a stored procedure. You can view the execution plan by clicking on the query_plan column to investigate why it causes a high CPU load. I have used SQL Server 2008 (SP1) to test all the queries used in this article.
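
    Since the article leans on the DAC for emergencies, this is roughly what that connection looks like from another machine using sqlcmd (the server name is a placeholder; the -A switch requests the dedicated administrator connection):

        sqlcmd -S MyServerName -U sa -A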

  • Analyzing bitmaps produced by NSAffineTransform and CILineOverlay filters

    - by Adam
    I am trying to manipulate an image using a chain of CIFilters, and then examine each byte of the resulting image (bitmap). Long term, I do not need to display the resulting image (bitmap) -- I just need to "analyze" it in memory. But near-term I am displaying it on screen, to help with debugging.

    I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, then outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create the outputBitmap, none of the bytes in the bitmap have any data in them.

    I believe both of these filters are working as expected, because when I display their results on screen (via outputImageView), they look "correct." Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is "empty" while the one created from NSAffineTransform contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. Seems very strange, to me?

    My understanding (from reading the CI programming guide) is that the CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until the image is "drawn." Given that, it would make sense that the CIImage bitmap doesn't have data -- but I don't understand why it has data after I run the NSAffineTransform but doesn't have data after running the CILineOverlay transform.

    Basically, I am trying to determine whether creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage -- in other words, whether that should force the bitmap to be populated. If someone knows the answer to this, please let me know -- it will save me a few hours of trial-and-error experimenting!

    Finally, if the answer is "you must draw to a graphics context"... then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide: Graphics Contexts, listings 2-7 and 2-8, drawing to a bitmap graphics context? That is the path down which I am about to head... but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. So if there is an easier or better way, please let me know. I just want to take the data (that should be) in myResult and put it into a bitmap array where I can access it at the byte level. And since I already have code that works with an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me, I would prefer to "convert" myResult into an NSBitmapImageRep.

        CIImage *myResult = [transform valueForKey:@"outputImage"];
        NSImage *outputImage;
        NSCIImageRep *ir = [NSCIImageRep imageRepWithCIImage:myResult];
        outputImage = [[[NSImage alloc] initWithSize:NSMakeSize(inputImage.size.width, inputImage.size.height)] autorelease];
        [outputImage addRepresentation:ir];
        [outputImageView setImage:outputImage];
        NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage:myResult];

    Thanks, Adam
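
    For what it's worth, a sketch of the bitmap-context route mentioned above (listings 2-7/2-8), which forces the lazy CIImage "recipe" to be rendered into a buffer you own. The dimensions are illustrative, and it assumes the image has a finite extent:

        size_t width = 800, height = 600;
        size_t bytesPerRow = width * 4;
        void *buffer = calloc(height, bytesPerRow);
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef cg = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                                space, kCGImageAlphaPremultipliedLast);
        CIContext *ci = [CIContext contextWithCGContext:cg options:nil];
        // Drawing here is what actually executes the filter chain.
        [ci drawImage:myResult
               inRect:CGRectMake(0, 0, width, height)
             fromRect:[myResult extent]];
        // buffer now holds premultiplied RGBA bytes for direct examination.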

  • Copying the bitmap contents of a UIView's context to that of another UIView

    - by Joonas Trussmann
    Basically what I want to do is copy the already-rendered content (a PDF drawn into the UIView's graphics context using CGContextDrawPDFPage()) onto a similar UIView, without having to re-render the PDF. The idea is that I'd then be able to perform an animated transform on the UIView, and later re-render the PDF with more accuracy. For both UIViews I'm using a larger-than-screen CATiledLayer to make it easier to re-render the PDF once the user zooms in, if that makes any difference. Any tips? I'm kind of lost here.
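
    One possible direction (a sketch, not a known answer from the thread): snapshot the source view's layer into a UIImage and hand that image to the second view, so the PDF is rasterized only once:

        UIGraphicsBeginImageContext(sourceView.bounds.size);
        [sourceView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // The target view now shows the cached rendering and can be transformed freely.
        targetView.layer.contents = (id)snapshot.CGImage;

    Note that renderInContext: (QuartzCore) renders the layer tree as it currently stands, so whatever tiles the CATiledLayer has already drawn are what end up in the snapshot.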

  • Arc text in an iPhone application

    - by Amit Battan
    Hi all, I want to bend the text of a UILabel so that it appears as an arc, as in the following link: http://picasaweb.google.com/lh/photo/kfBzK4R4IlvyHfVywUNd1A?feat=directlink

    Please suggest where I should start -- any documentation, link, or sample code. Thanks, Amit Battan
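
    UILabel itself can't curve its text, but one common Core Graphics technique (a sketch with illustrative radius and spacing, not tuned code) is to draw each character with the context rotated a little further around the arc's center:

        - (void)drawRect:(CGRect)rect
        {
            CGContextRef context = UIGraphicsGetCurrentContext();
            NSString *text = @"Curved text";
            UIFont *font = [UIFont systemFontOfSize:18];
            CGFloat radius = 100.0;          // distance from arc center to baseline
            CGFloat anglePerChar = 0.15;     // radians between characters
            // Put the origin at the arc's center.
            CGContextTranslateCTM(context, rect.size.width / 2, rect.size.height / 2);
            for (NSUInteger i = 0; i < [text length]; i++) {
                NSString *ch = [text substringWithRange:NSMakeRange(i, 1)];
                CGContextSaveGState(context);
                CGContextRotateCTM(context, anglePerChar * (CGFloat)i);
                [ch drawAtPoint:CGPointMake(0, -radius) withFont:font];
                CGContextRestoreGState(context);
            }
        }

    For evenly spaced glyphs the per-character angle should really be derived from each character's width, but this shows the idea.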

  • IKImageView resize is blocky

    - by Brian Postow
    I am putting an image into an IKImageView, and immediately sizing it to fit. Whenever I do this, the image originally appears at 1:1 size (huge) and then resizes down, which would be fine if the animation were smooth. However, the animation looks... fluttery? There are big blocks, like 2 inches square, of the image that appear and shrink independently of each other. The effect is a little annoying, almost to the level where it might give an epileptic a seizure... (I'm exaggerating a little). Is this a bug in IKImageView? Is it a bug in the animation? Will it go away if I turn off the animation? (How do I do that? setAnimates: NO doesn't seem to do anything, nor does overloading animates to return NO in my subclass...)

    EDIT: added code:

        NSImage *image = [doc currentImage];
        [imageView setImage: image];
        [imageView zoomImageToFit: self];

    This is in the app controller, so self is the application (or plugin, depending on which version I'm looking at).

  • Core Animation not working on Leopard, working on Snow Leopard

    - by Nick Paulson
    Hi, I animate NSImageViews using the animator proxy. While testing my application on Snow Leopard, everything works as expected. However, on Leopard, none of the animations work. In addition, the NSImageViews don't seem to respect the alphaValue I set on them, whether through the animator proxy or not; the only way I can get them to disappear is by setting their image to nil. What is weird is that this all works fine on Snow Leopard, but does not work on Leopard 10.5.8. Any idea why this may be occurring?

  • Is this a good way to do a game loop for an iPhone game?

    - by Danny Tuppeny
    Hi all, I'm new to iPhone dev, but trying to build a 2D game. I was following a book, but the game loop it created basically said:

        function gameLoop
            update()
            render()
            sleep(1/30th second)
            gameLoop

    The reasoning was that this would run at 30fps. However, this seemed a little mental, because if my frame took 1/30th of a second, then it would run at 15fps (since it would spend as much time sleeping as updating). So, I did some digging and found the CADisplayLink class, which syncs calls to my gameLoop function to the refresh rate (or a fraction of it). I can't find many samples of it, so I'm posting here for a code review :-)

    It seems to work as expected, and it includes passing the elapsed (frame) time into the Update method so my logic can be framerate-independent. (However, I can't actually find in the docs what CADisplayLink would do if my frame took more than its allowed time to run -- I'm hoping it just does its best to catch up, and doesn't crash!)

        //
        //  GameAppDelegate.m
        //
        //  Created by Danny Tuppeny on 10/03/2010.
        //  Copyright Danny Tuppeny 2010. All rights reserved.
        //

        #import "GameAppDelegate.h"
        #import "GameViewController.h"
        #import "GameStates/gsSplash.h"

        @implementation GameAppDelegate

        @synthesize window;
        @synthesize viewController;

        - (void)applicationDidFinishLaunching:(UIApplication *)application
        {
            // Create an instance of the first GameState (Splash Screen)
            [self doStateChange:[gsSplash class]];

            // Set up the game loop
            displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(gameLoop)];
            [displayLink setFrameInterval:2];
            [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        }

        - (void)gameLoop
        {
            // Calculate how long has passed since the previous frame
            CFTimeInterval currentFrameTime = [displayLink timestamp];
            CFTimeInterval elapsed = 0;

            // For the first frame, we want to pass 0 (since we haven't elapsed
            // any time), so only calculate this when we're not the first frame
            if (lastFrameTime != 0)
            {
                elapsed = currentFrameTime - lastFrameTime;
            }

            // Keep track of this frame's time (so we can calculate this next time)
            lastFrameTime = currentFrameTime;

            NSLog(@"%f", elapsed);

            // Call update, passing the elapsed time in
            [((GameState *)viewController.view) Update:elapsed];
        }

        - (void)doStateChange:(Class)state
        {
            // Remove the previous GameState
            if (viewController.view != nil)
            {
                [viewController.view removeFromSuperview];
                [viewController.view release];
            }

            // Create the new GameState
            viewController.view = [[state alloc] initWithFrame:CGRectMake(0, 0, IPHONE_WIDTH, IPHONE_HEIGHT) andManager:self];

            // Now set as visible
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
        }

        - (void)dealloc
        {
            [viewController release];
            [window release];
            [super dealloc];
        }

        @end

    Any feedback would be appreciated :-)

    PS. Bonus points if you can tell me why all the books use "viewController.view" but for everything else seem to use "[object name]" format. Why not [viewController view]?

  • Event taps: Varying results with CGEventPost, kCGSessionEventTap, kCGAnnotatedSessionEventTap, CGEventTapPostEvent

    - by kevingessner
    I'm running into a thorny problem with posting an event from an event tap. I'm tapping for NSSystemDefined at kCGHIDEventTap, then replacing the event with a new one. The problem I'm running into is that depending on how I post the event, it's being seen only by some applications. My test applications are Opera, Firefox, Quicksilver, and Xcode. Here are the different techniques I've tried within my event tap callback, with results. I'm expecting an action (the "correct response") from each app; "system beep" means the nothing-is-bound-to-that-key system sound.

    1. Create a new event, and return it from the callback. Opera: no response/system beep. Firefox: no response/system beep. Quicksilver: correct response. Xcode: no response/system beep.
    2. Create a new event, post to kCGSessionEventTap with CGEventPost, return NULL. Opera: no response/system beep. Firefox: no response/system beep. Quicksilver: correct response. Xcode: no response/system beep.
    3. Create a new event, post to kCGAnnotatedSessionEventTap with CGEventPost, return NULL. Opera: correct response. Firefox: correct response. Quicksilver: no response/system beep. Xcode: no response/system beep.
    4. Create a new event, post with CGEventTapPostEvent, return NULL. Opera: no response/system beep. Firefox: no response/system beep. Quicksilver: correct response. Xcode: no response/system beep.
    5. Create a new event, post to kCGSessionEventTap with CGEventPost, and return the new event. Opera: no response/system beep. Firefox: no response/system beep. Quicksilver: correct response. Xcode: no response/system beep.
    6. Create a new event, post to kCGAnnotatedSessionEventTap with CGEventPost, and return the new event. Opera: correct response and system beep. Firefox: correct response and system beep. Quicksilver: correct response and system beep. Xcode: no response/double system beep.
    7. Create a new event, post with CGEventTapPostEvent, and return the new event. Opera: no response/system beep. Firefox: no response/system beep. Quicksilver: correct response. Xcode: no response/system beep.

    (6) is the best, but users are complaining about the extra system beep on correct responses, which I'm guessing is coming from the double-posting of the event. I'm not sure of other combinations to try, or where else to look. Can anyone offer any guidance? Is there any way to get the results of both returning the event from my callback and posting to the annotated tap, without doing both? Sorry for the lengthy question; I've been doing a lot of experimenting. Thanks in advance.

    Update: this is the code I use to create the event tap:

        CFMachPortRef eventTap;
        eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap, 0,
                                    CGEventMaskBit(NX_SYSDEFINED) | (1 << kCGEventKeyDown) | (1 << kCGEventKeyUp),
                                    myCGEventCallback, (void *)hidEventQueue);

  • Clipping different parts of an image with path

    - by huggie
    I've recently asked a question about clipping an image via a path in a view's drawRect: method: http://stackoverflow.com/questions/2570653/iphone-clip-image-with-path

    Krasnyk's code is copied below:

        - (void)drawRect:(CGRect)rect
        {
            CGContextRef context = UIGraphicsGetCurrentContext();

            CGMutablePathRef path = CGPathCreateMutable();
            //or for e.g. CGPathAddRect(path, NULL, CGRectInset([self bounds], 10, 20));
            CGPathAddEllipseInRect(path, NULL, [self bounds]);

            CGContextAddPath(context, path);
            CGContextClip(context);
            CGPathRelease(path);

            [[UIImage imageNamed:@"GC.png"] drawInRect:[self bounds]];
        }

    It works very well. However, when my image is larger than the view itself, how do I show different parts of the image? I tried tweaking the locations (shown as bounds above) of the ellipse and/or the UIImage drawInRect: call, but some complex effects I can't explain happen (unwanted clipping, weird ellipse sizes).
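
    A possible approach (a sketch, not a verified answer): keep the clip fixed to the view's bounds, and instead offset the rect the image is drawn into, so a different region of the larger image falls inside the clip:

        // offset selects which part of the image shows through the ellipse
        // (the values are illustrative).
        CGPoint offset = CGPointMake(-100.0, -50.0);
        UIImage *image = [UIImage imageNamed:@"GC.png"];
        // Draw the image at its natural size, shifted by the offset.
        [image drawInRect:CGRectMake(offset.x, offset.y, image.size.width, image.size.height)];

    Drawing at the image's natural size also avoids the scaling that drawInRect:[self bounds] performs, which may be the source of the "weird" effects.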

  • What is the suitable way to draw with CGContext?

    - by Tattat
    I know that I cannot call the CGContext directly to draw: I need to put the drawing logic in drawRect: and ask for a redraw with setNeedsDisplay. Because of that, I designed a command to execute, but it caused some problems, like this: http://stackoverflow.com/questions/2617827/why-i-cant-draw-in-a-loop-using-uiview-in-iphone

    I think the CGContext is very different from my previous programming experience... (I have used the HTML5 canvas, which allows me to add more detail after I draw, as does Java Swing.) What I actually want to know is the suitable way to implement this kind of thing in Apple's drawing model. Thanks.
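
    One retained-drawing pattern that gives canvas-like behaviour (a sketch under the assumption that strokes arrive incrementally; canvasImage is a hypothetical UIImage property, not from the question): accumulate each stroke into an offscreen image, and have drawRect: simply blit it.

        - (void)addLineFrom:(CGPoint)a to:(CGPoint)b
        {
            UIGraphicsBeginImageContext(self.bounds.size);
            [self.canvasImage drawInRect:self.bounds];   // replay previous content
            CGContextRef c = UIGraphicsGetCurrentContext();
            CGContextMoveToPoint(c, a.x, a.y);
            CGContextAddLineToPoint(c, b.x, b.y);
            CGContextStrokePath(c);
            self.canvasImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            [self setNeedsDisplay];   // ask UIKit to redraw with the new content
        }

        - (void)drawRect:(CGRect)rect
        {
            [self.canvasImage drawInRect:self.bounds];
        }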

  • CATiledLayer blanking tiles before drawing contents

    - by Greg Plesur
    All, I'm having trouble getting the behavior that I want from CATiledLayer. Is there a way that I can trigger the tiles to redraw without the side effect that their areas are cleared to white first? I've already subclassed CATiledLayer to set fadeDuration to return 0.

    To be more specific, here are the details of what I'm seeing and what I'm trying to achieve. I have a UIScrollView with a big content size, roughly 12000x800. Its content view is a UIView backed by a CATiledLayer, rendered with a lot of custom-drawn lines. Everything works fine, but the contents of the UIView sometimes change. When that happens, I'd like to redraw the tiles as seamlessly as possible. When I use setNeedsDisplay on the view, the tiles redraw, but they are first cleared to white, and there's a fraction-of-a-second delay before the new content is drawn.

    The behavior that I want seems like it should be possible: when you zoom in on the scroll view and the content gets redrawn at a higher resolution, there's no blanking before the redraw; the new content is drawn right on top of the old. That's what I'm looking for. Thanks; I appreciate your ideas.

    Update: Just to follow up -- I realized that the tiles weren't being cleared to white before the redraw; they're being taken out entirely. The white that I was seeing is the color of the view that's beneath my CATiledLayer-backed view. As a quick hack/fix, I put a UIImageView beneath the UIScrollView, and before triggering a redraw of the CATiledLayer-backed view I render its visible section into the UIImageView and let it show. This smooths out the redraw significantly. If anyone has a better solution, like keeping the redraw-targeted tiles from going away before being redrawn in the first place, I'd still love to hear it.

  • Why is there an invalid context error?

    - by Tattat
    Here is the code I use to draw:

        - (void)drawSomething
        {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetRGBStrokeColor(context, 1, 0, 0, 1);
            CGContextSetLineWidth(context, 6.0);
            CGContextMoveToPoint(context, 100.0f, 100.0f);
            CGContextAddLineToPoint(context, 200.0f, 200.0f);
            CGContextStrokePath(context);
            NSLog(@"draw");
        }

    But I get errors like this:

        [Session started at 2010-04-03 17:51:07 +0800.]
        Sat Apr  3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextSetRGBStrokeColor: invalid context
        Sat Apr  3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextSetLineWidth: invalid context
        Sat Apr  3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextMoveToPoint: invalid context
        Sat Apr  3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextAddLineToPoint: invalid context
        Sat Apr  3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextDrawPath: invalid context

    Why does it tell me that the context is invalid?
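
    The usual cause (an explanation based on how UIKit works, not taken from the thread): UIGraphicsGetCurrentContext only returns a valid context while UIKit has one set up for you, which for a UIView means inside drawRect:. Called anywhere else, it returns NULL, and every subsequent CGContext* call logs "invalid context". A minimal fix is to let drawRect: drive the drawing:

        - (void)drawRect:(CGRect)rect
        {
            [self drawSomething];   // the current context is valid here
        }

    and trigger it from elsewhere with [self setNeedsDisplay] rather than calling drawSomething directly.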

  • Removing the image from an IKImageView

    - by Brian Postow
    I have an IKImageView that is coming up effectively uninitialized. This happens in an error state (the user is unregistered), so I haven't had a chance to put an image in it yet. On 10.6, this comes up fine, with a black rectangle. On 10.5, however, it comes up with garbage: some rectangles of noise, some rectangles of copies of the desktop, etc. I've tried setting the zoomFactor to 0.0, and I've tried setting the image to nil, but it appears that the problem is beyond that. Any ideas? (My next kludge is going to be to ship a tiny blank image with the app, and try to get it to load that... but that's kind of silly.)

  • CGContext rotation

    - by kasperjj
    I have a 100x100-pixel image that I want to draw at various angles, rotated around the center of the image. The following code works, but rotates around the original origin of the coordinate system (the upper-left corner) and not the translated location; thus the image is not rotated around itself but around the upper-left corner of the screen:

        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, -50, -50);
        CGContextRotateCTM(context, 0.3);
        CGContextTranslateCTM(context, 768/2, 1024/2);
        [image drawAtPoint:CGPointMake(0, 0)];

    I tried doing the same using CGAffineTransform, but got the same results.
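
    A common recipe for rotating about an image's center (a sketch; the screen-center coordinates are taken from the snippet above): translate the origin to where the center should land, rotate about that origin, then draw the image offset by half its size:

        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSaveGState(context);
        // 1. Move the origin to the desired center point.
        CGContextTranslateCTM(context, 768 / 2.0, 1024 / 2.0);
        // 2. Rotate the coordinate system about that point.
        CGContextRotateCTM(context, 0.3);
        // 3. Draw so the image's center (50, 50) sits on the origin.
        [image drawAtPoint:CGPointMake(-50.0, -50.0)];
        CGContextRestoreGState(context);

    The order matters: each CTM call transforms subsequent drawing, so translating first makes the rotation pivot at the new origin.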

  • Using CATransition in a viewController

    - by eco_bach
    Hi, I'm trying to implement the ViewTransitions code sample from Apple, but putting all the logic in my view controller instead of my application delegate class. I'm getting a bizarre error when I try to compile. I'm importing QuartzCore in my view controller implementation as #import <QuartzCore/QuartzCore.h>, and the error is:

        "_kCATransitionFade", referenced from:
        _kCATransitionFade$non_lazy_ptr in ViewTransitionsAsViewControllerViewController.o
        (maybe you meant: _kCATransitionFade$non_lazy_ptr)

    Anyone have any ideas?

  • Building a clip area in a UIView from path objects in its subviews

    - by hkatz
    I'm trying to produce a clipping area in a UIView that's generated from path objects in its subviews. For example, I might have one subview containing a square and another containing a circle, and I want to be able to produce a clip in the parent superview that's the union of both these shapes. Can someone explain how to do this? About all I've been able to figure out so far is that:

    1. the superview's drawRect: method is called before its subviews' drawRects are, and
    2. the CGContextRef that's accessible in all three instances is the same.

    Other than that I'm stumped. Thanks, Howard
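
    One possible direction (an assumption, not a known answer): have each subview expose its path, shift each path into the superview's coordinate space, accumulate them all in one mutable path, and clip to that:

        CGMutablePathRef clipPath = CGPathCreateMutable();
        for (UIView *subview in self.subviews) {
            // subviewPath is a hypothetical CGPathRef each subview exposes.
            CGPathRef subviewPath = [(id)subview path];
            CGAffineTransform shift = CGAffineTransformMakeTranslation(subview.frame.origin.x,
                                                                       subview.frame.origin.y);
            CGPathAddPath(clipPath, &shift, subviewPath);
        }
        CGContextAddPath(context, clipPath);
        CGContextClip(context);   // the clip is now the combination of all subpaths
        CGPathRelease(clipPath);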

  • iPhone - Is it ok to override UITableViewCell setSelected:animated

    - by Brian
    I am drawing custom UITableViewCells. My cells are opaque and are completely drawn in the drawRect: of the cell to help with performance. I want to handle the look of a selected cell myself; if I don't, the contents of my cell are covered up by the selectedBackgroundView that is added. Is it common or acceptable to override the setSelected:animated: method of my cell so this is done properly? I guess if I did that, then I would not call the super's setSelected:, since I would be handling how the cell shows that it's selected. I would also have to set the selected property of the cell. Any help would be great. Thanks.
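
    Overriding that method is a fairly common pattern; one sketch avoids the bookkeeping by letting super maintain the selected property and then redrawing with the custom look:

        - (void)setSelected:(BOOL)selected animated:(BOOL)animated
        {
            [super setSelected:selected animated:animated];
            // Redraw so drawRect: can render the custom selected appearance.
            [self setNeedsDisplay];
        }

    Combined with setting selectionStyle to UITableViewCellSelectionStyleNone (so UIKit doesn't add the selectedBackgroundView), the cell's own drawing remains visible while selected.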

  • "Work stealing" vs. "Work shrugging (tm)"?

    - by John
    Why is it that I can find lots of information on "work stealing" and nothing on "work shrugging(tm)" as a load-balancing strategy? I am surprised, because work stealing seems to me to have an inherent weakness when implementing efficient fine-grained load balancing. Viz: relying on consumer processors to implement distribution (by actively stealing) begs the question of what these processors do when they find no work. None of the work-stealing references and implementations I have come across so far address this issue satisfactorily for me. They either:

    1. manage not to disclose what they do with idle processors! [Cilk] (does anyone know?);
    2. have all idle processors sleep and wake periodically and scatter messages to the four winds to see if any work has arrived [e.g. JAWS] (= way too latent and inefficient for me); or
    3. assume that it is acceptable to have processors "spinning" looking for work (= non-starter for me!).

    Unless anyone thinks there is a solution for this, I will move on to consider a "Work Shrugging(tm)" strategy: having the task-producing processor distribute excess load seems to me inherently capable of a much more efficient implementation. However, a quick google didn't show up anything under the heading of "Work Shrugging", so any pointers to prior art would be welcome. tx

    Tags I would have added if I was allowed to: [work-stealing]

  • How to fill a path with a gradient in drawRect:?

    - by Derrick
    Filling a path with a solid color is easy enough:

        CGPoint aPoint;
        for (id pointValue in points) {
            aPoint = [pointValue CGPointValue];
            CGContextAddLineToPoint(context, aPoint.x, aPoint.y);
        }
        [[UIColor redColor] setFill];
        [[UIColor blackColor] setStroke];
        CGContextDrawPath(context, kCGPathFillStroke);

    I'd like to draw a gradient instead of solid red, but I am having trouble. I've tried the code listed in this question/answer: http://stackoverflow.com/questions/422066/gradients-on-uiview-and-uilabels-on-iphone which is:

        CAGradientLayer *gradient = [CAGradientLayer layer];
        [gradient setFrame:rect];
        [gradient setColors:[NSArray arrayWithObjects:(id)[[UIColor blueColor] CGColor],
                                                      (id)[[UIColor whiteColor] CGColor], nil]];
        [[self layer] setMasksToBounds:YES];
        [[self layer] insertSublayer:gradient atIndex:0];

    However, this paints the entire view with the gradient, covering up my original path.
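
    A Core Graphics route that confines the gradient to the path (a sketch; the gradient's endpoints are illustrative): clip the context to the path, then draw a CGGradient through the clip. Note that clipping consumes the current path, so the stroke has to be drawn separately:

        CGContextSaveGState(context);
        CGContextClip(context);   // clip to the current path (this consumes it)

        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        NSArray *colors = [NSArray arrayWithObjects:(id)[UIColor blueColor].CGColor,
                                                    (id)[UIColor whiteColor].CGColor, nil];
        CGGradientRef gradient = CGGradientCreateWithColors(space, (CFArrayRef)colors, NULL);
        // Sweep the gradient top-to-bottom across the drawing rect.
        CGContextDrawLinearGradient(context, gradient,
                                    CGPointMake(0, CGRectGetMinY(rect)),
                                    CGPointMake(0, CGRectGetMaxY(rect)), 0);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(space);
        CGContextRestoreGState(context);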
