Search Results

Search found 8999 results on 360 pages for 'core i7'.


  • AND NSPredicate on Core Data relationships

    - by jesse001
    I'm having trouble compounding NSPredicate with AND, although using OR works fine. Imagine 2 entities, Doctor and Patient. Doctors can have many patients and patients many doctors. I want to find doctors that, say, have both person1 and person2 as patients. I expected this to work, but it returns none:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"ANY patients matches 'person1&&person2'"];

    If I change && to ||, I get all doctors that have person1 or person2, as I'd expect. Thanks in advance for your help.
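
    A commonly suggested way to express an "all of these" requirement across a to-many relationship is one SUBQUERY per required patient, ANDed together, since a single ANY clause can only match one related object at a time. A minimal sketch, assuming Patient has a name attribute (the attribute name is an assumption here):

        NSPredicate *both = [NSPredicate predicateWithFormat:
            @"SUBQUERY(patients, $p, $p.name == %@).@count > 0 AND "
            @"SUBQUERY(patients, $p, $p.name == %@).@count > 0",
            @"person1", @"person2"];
        // Each SUBQUERY independently counts matching patients, so a doctor
        // only passes if BOTH person1 and person2 appear in its patients set.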

    Read the article

  • Using an iPhone Core Data to-many relationship

    - by BLeB
    When I define a to-many relationship between entities in Xcode and then generate the data class from the entity, I get a header with the following methods defined:

        @interface PriceList (CoreDataGeneratedAccessors)
        - (void)addItemsObject:(PriceListItem *)value;
        - (void)removeItemsObject:(PriceListItem *)value;
        - (void)addItems:(NSSet *)value;
        - (void)removeItems:(NSSet *)value;
        @end

    When I attempt to call addItemsObject with the following code, a doesNotRecognizeSelector exception is thrown:

        PriceListItem *item = [NSEntityDescription insertNewObjectForEntityForName:@"PriceListItem"
                                                            inManagedObjectContext:managedObjectContext];
        item.cat = [attributeDict valueForKey:@"c"];
        item.sel = [attributeDict valueForKey:@"s"];
        [self addItemsObject:item];

    From what I have read, I do not have to implement these methods; they are generated at runtime. Any ideas?
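
    Worth noting: the generated accessors are declared on PriceList, so the receiver must be a PriceList managed object. In the excerpt, addItemsObject: is sent to self, which (judging by attributeDict) looks like a parser delegate rather than the PriceList, and that alone would raise doesNotRecognizeSelector. A minimal sketch of the call, assuming a priceList instance is in scope (the variable and inverse-relationship names are hypothetical):

        [priceList addItemsObject:item];   // generated accessor, resolved at runtime
        // or equivalently, set the inverse to-one relationship:
        item.priceList = priceList;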

    Read the article

  • Store image in Core Data and Retina display?

    - by shani
    Hi, I have an app that has hundreds of words with 3-4 images for each word. I have 2 versions of each image, one for iOS 3 and one for Retina display. I wish to save the images as data and connect them to the appropriate word so it will be easy to pull them later. My question is: how do I get the suitable size? It works great with the @2x suffix when you load an image from the app file system, but how is that supposed to work when I load it from data? Thanks, shani
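
    One detail that matters here: +imageWithData: ignores the @2x naming convention and always produces a scale-1.0 image, so the Retina variant has to be rebuilt at the correct scale by hand. A minimal sketch, assuming iOS 4+ (where screen scale exists) and that imageData holds the blob pulled out of Core Data (the variable name is hypothetical):

        UIImage *raw = [UIImage imageWithData:imageData];
        CGFloat scale = [UIScreen mainScreen].scale;   // 2.0 on Retina hardware
        UIImage *sized = [UIImage imageWithCGImage:raw.CGImage
                                             scale:scale
                                       orientation:UIImageOrientationUp];
        // `sized` now reports point dimensions half its pixel dimensions on
        // Retina, matching what the @2x file-system convention would produce.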

    Read the article

  • Core Data to-many relationship in code

    - by Jan Bezemer
    I have three entities: Session, User and Test. A session has 0-many users and a user can perform 0-6 tests. (I say 0, but the real application always requires at least 1: at least 1 user for a session and at least 1 test for a user. I say 0 to express an empty start.) All entities have their own attributes too: a user has a name, a session has a name, a test has six values to be filled in by the user, and so on. But my issue is with the relationships. How do I set up multiple users and have them added to one session (same goes for multiple tests for one user)? And how do I show the content in the right way, i.e. a session that has multiple users, with those users having completed multiple tests? Here's my code so far with regard to issue 1:

        Session *session = [NSEntityDescription insertNewObjectForEntityForName:@"Session"
                                                         inManagedObjectContext:context];
        session.name = @"Session 1";
        User *user = [NSEntityDescription insertNewObjectForEntityForName:@"User"
                                                   inManagedObjectContext:context];
        user.age = [NSNumber numberWithInt:28];
        user.session = session;
        //session.users = user;
        [session addUserObject:user];

    With regard to issue 2: I can log the session, but I can't get the user(s) logged from a session.

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Session"
                                                  inManagedObjectContext:context];
        [fetchRequest setEntity:entity];
        NSArray *fetchedObjects = [context executeFetchRequest:fetchRequest error:&error];
        for (Session *info in fetchedObjects) {
            NSLog(@"Name: %@", info.name);
            NSLog(@"Having problems with this: %@", info.user);
            //User *details = info.user;
            //NSLog(@"User: %@", details.age);
        }
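
    On issue 2, a to-many relationship is surfaced as an NSSet, so it is read by iterating rather than through a to-one property like info.user. A minimal sketch, assuming the relationships are named users and tests as the model description above suggests:

        for (Session *info in fetchedObjects) {
            NSLog(@"Session: %@", info.name);
            for (User *user in info.users) {        // to-many => NSSet
                NSLog(@"  User age: %@", user.age);
                for (Test *test in user.tests) {    // relationship name assumed
                    NSLog(@"    Test: %@", test);
                }
            }
        }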

    Read the article

  • Improving the efficiency of multiple concurrent Core Animation animations

    - by Alex
    I have a view in my app that is very similar to the month view in the built-in Calendar app. There's a subview that holds the individual cells (a custom UIView subclass that draws text into its layer), and when the user navigates to the next "month", I create the new cells and slide the view to show them. When the animation stops, I remove the old, hidden cells and set things up so it's ready to go for the next animation. This all works nicely. However, I'd like to animate the cells' text color, as in the Calendar app, so that the outgoing ones transition to a lighter color and the incoming ones transition to a darker color. The problem is that I can have as many as 70 cells, so doing individual animations is very slow -- between 5 and 10 fps on my iPhone 3GS. I'm trying to find a less computationally intense way of doing this. My reading of the Shark results is that the majority of the time is spent redrawing the text of each cell on every frame. This makes sense, since text rendering is hardly the cheapest operation. I've considered creating a second view -- one holding the "outgoing" state and one holding the "incoming" state -- and using a single opacity animation to gradually reveal the updated cells while both are sliding. I'm concerned that instead of having 70 cells, I'll have 140, which seems like a lot of views. So, is that too many views, or would there be a better way of doing this?
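
    The cross-fade idea boils down to paying the text-rendering cost once per transition instead of once per frame: pre-render both states into two flat views, then animate a single opacity property, which Core Animation composites on the GPU without touching the text again. A minimal sketch of that approach on iOS 4+, with hypothetical incomingView/outgoingView names for the two pre-rendered grids:

        incomingView.alpha = 0.0;
        [UIView animateWithDuration:0.3 animations:^{
            outgoingView.alpha = 0.0;   // old month fades to the lighter state
            incomingView.alpha = 1.0;   // new month fades in over it
        } completion:^(BOOL finished) {
            [outgoingView removeFromSuperview];  // back to 70 live cells right away
        }];

    Whether 140 views is acceptable is something to measure, but since the extra 70 exist only for the duration of one transition, the steady-state count is unchanged.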

    Read the article

  • Add Microsoft Core Fonts to Ubuntu

    - by Matthew Guay
    Have you ever needed the standard Microsoft fonts, such as Times New Roman, on your Ubuntu computer? Here's how you can easily add the core Microsoft fonts to Ubuntu.

    Times New Roman, Arial, and the other core Microsoft fonts are still some of the most commonly used fonts in documents and websites. Times New Roman especially is often required for college essays, legal docs, and other critical documents that you may need to write or edit. Ubuntu includes the Liberation fonts, which offer similar alternatives to Times New Roman, Arial, and Courier New, but these may not be accepted by professors and others when a certain font is required. But don't worry; it only takes a couple of clicks to add these fonts to Ubuntu for free.

    Installing the Core Microsoft Fonts

    Microsoft has released its core fonts, including Times New Roman and Arial, for free, and you can easily download them from the Software Center. Open your Applications menu, and select Ubuntu Software Center. In the search box, enter the following: ttf-mscorefonts. Click Install on the "Installer for Microsoft TrueType core fonts" directly in the search results. Enter your password when requested, and click Authenticate. The fonts will then automatically download and install in a couple of minutes, depending on your internet connection speed. Once the install is finished, you can launch OpenOffice Writer to try out the new fonts. And, yes, the pack does include the infamous Comic Sans and Webdings fonts, as well as the all-important Times New Roman.

    Please note: by default in Ubuntu, OpenOffice uses Liberation Serif as the default font, but after installing this font pack, the default font will switch to Times New Roman.

    Adding Other Fonts

    In addition to the Microsoft core fonts, the Ubuntu Software Center has hundreds of free fonts available. Click the Fonts link on the front page to explore them, and install them the same way as above. If you've downloaded another font individually, you can also install it easily in Ubuntu: just double-click it, and then click Install in the preview window.

    Conclusion

    Although you may prefer the fonts that are included with Ubuntu, there are many reasons why having the Microsoft core fonts can be helpful. Thankfully it's easy in Ubuntu to install them, so you'll never have to worry about not having them when you need to edit an important document.

    Read the article

  • Do NOT remove the reference to System.Core from your VS2010 Project

    - by Lee Brandt
    One of the things I routinely do when adding a new class library project is remove all references and just add them back in as I need them. That is NOT a good idea for Visual Studio 2010. When I DID need System.Core and went to add it back, this is what I got: "A reference to 'System.Core' could not be added. This component is automatically referenced..." After some Googling I found this article: http://connect.microsoft.com/VisualStudio/feedback/details/525663/cannot-remove-system-core-dll-reference-from-a-vs2010-project It tells you to add the reference back manually. Here is the part that needs to go back in the project file. After the last PropertyGroup node, add this node:

        <ItemGroup>
          <Reference Include="System.Core" />
        </ItemGroup>

    You should be good to go again. Hope this helps.

    Read the article

  • Getting a Dell E6320 with i7 to work with 3 monitors at 1920x1080 each

    - by MadBoy
    I want to buy a Dell E6320, which comes with an Intel Core i7-2620M (2.70GHz, 4MB cache, dual core) with Intel HD Graphics 3000. The laptop will come with a docking station, and I want to connect 3 monitors to that docking station so that working at home gives me an additional boost. The docking station only allows me to connect 2 monitors, so I'm looking at the following options:

    1. A Matrox TRIPLEHEAD2GO DIGITAL Edition or TRIPLEHEAD2GO DP Edition. But according to the Matrox support page, the Intel GPU can't run the highest resolution with 3 monitors connected. It gets worse, since it seems the monitors would have to be able to work at 50Hz. Also, I'm not sure, but it seems that Matrox doesn't expose the monitors as 3 separate displays but simply as one big surface (which is the opposite of what I need).

    2. Buy 2, or maybe just 1, USB-based monitor. That would mean having 1 or 2 monitors different from the main one, unless I buy 3 USB-based monitors, which would mean more money to spend. I also found only a couple of models, and most of them require USB 3.0 and no other cables (nice but costly; I couldn't find a decent monitor that uses USB only for the signal and takes power normally). And the docking station has only one USB 3.0 port. Can I use a hub and still get it to work?

    3. Find some converters from digital to USB (I think DisplayLink makes some?).

    4. Buy a different laptop, but what kind? I need it to be an i7, small (13"), fast and lightweight. At the same time, it needs a docking station that I can use at home to connect 3 external monitors.

    5. Some other suggested solution...

    Edit: I need 3 monitors for work, meaning coding in Visual Studio or having Word/Excel/Outlook open. Nothing fancy. Maybe a movie once in a while.

    Read the article

  • New SQL Server 2012 per core licensing – Thank you Microsoft

    - by jchang
    Many of us have probably seen the new SQL Server 2012 per-core licensing, with Enterprise Edition at $6,874 per core superseding the $27,495 per socket of SQL Server 2008 R2 (discounted to $19,188 for 4-way and $23,370 for 2-way in TPC benchmark reports) with Software Assurance at $6,874 per processor. Datacenter was $57,498 per processor, so the new per-core licensing puts 2012 EE on par with 2008 R2 DC at 8 cores per socket. This is a significant increase for EE licensing on the 2-way Xeon 5600...(read more)

    Read the article

  • I have a server running Windows 2008 R2 Core and it needs to host either SVN or Git

    - by Jason Adams
    The server allocated as the source repository for our cross-platform projects (both Mac & PC) is running Win2008R2 Core. We're really happy with its stability and we aren't interested in moving over to non-Core. We need to get either SVN or Git installed on the aforementioned box in the fewest possible steps. We know the advantages/disadvantages of both systems; that said, we don't care which one we use, we're just looking for the path of least resistance to setting up a repository on a machine running R2 Core.

    Read the article

  • Is there any difference between processor and core?

    - by Salvador
    The following two commands seem to give me different information about the same hardware:

        srs@ubuntu:~$ cat /proc/cpuinfo | grep -e processor -e cores
        processor  : 0
        cpu cores  : 4
        processor  : 1
        cpu cores  : 4
        processor  : 2
        cpu cores  : 4
        processor  : 3
        cpu cores  : 4

        srs@ubuntu:~$ sudo dmidecode -t processor
        # dmidecode 2.9
        SMBIOS 2.6 present.
        Handle 0x0004, DMI type 4, 42 bytes
        Processor Information
            Socket Designation: LGA1155
            Type: Central Processor
            Family: <OUT OF SPEC>
            Manufacturer: Intel
            ID: A7 06 02 00 FF FB EB BF
            Version: Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz
            Voltage: 1.0 V
            External Clock: 100 MHz
            Max Speed: 3800 MHz
            Current Speed: 3300 MHz
            Status: Populated, Enabled
            Upgrade: Other
            L1 Cache Handle: 0x0005
            L2 Cache Handle: 0x0006
            L3 Cache Handle: 0x0007
            Serial Number: To Be Filled By O.E.M.
            Asset Tag: To Be Filled By O.E.M.
            Part Number: To Be Filled By O.E.M.
            Core Count: 4
            Core Enabled: 1
            Characteristics: 64-bit capable

    Until today I thought I had a single processor with 4 independent cores. I also thought that each core could run different threads.

    Read the article

  • SQL SERVER - The Difference between Dual Core and Core 2 Duo

    I have decided that I would not write on this subject until I have received a total of 25 questions on it. Here are a few questions from the list: What is the difference between Dual Core and Core 2 Duo? Which one is recommended for SQL Server: Core 2 Duo or Dual [...]

    Read the article

  • IIS Express only utilizes 13% of i7 Quad Core

    - by John Nevermore
    Since one of my scripts got incredibly complex, I was benchmarking the performance of moving some JavaScript processing logic to the server side in my ASP.NET MVC 4 application. According to taskmgr.exe, IIS Express only utilizes 13% of my i7. I decided to throw in 3 parallel tasks calculating the Fibonacci sequence up to 50, and IIS Express still wouldn't utilize more than 13% of my CPU. Is there anything I can do so that the application utilizes the full CPU, as it would on a real server?

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactics; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument, rather than FUD fight, there are a few areas in his comparison that should be discussed.

    Scaling is not free. For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is 13k/R1 > 57k/R2? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring 13k/R1 on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users.

    Choosing the denominator radically changes the picture. When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator, and Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is arbitrary (not necessarily comparable across vendors), fluid (modern chips shift chip resources in response to load), and invisible (unless you have a microscope, you can't see it). By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get:

        13161.07 EjOPS / 4 chips  = 3290 EjOPS per chip for IBM
        57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle

    The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because: I can see chips and count them; I can accurately compare the number of chips in my system to the count in some other vendor's system; and the probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores".

    SPEC fair use requirements. Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule (www.spec.org/fairuse.html, section I.D.2). For example, Mr. Kharkovski's footnote (1) begins: "Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result)". The questionable tactic, from a fair use point of view, is that there is no such metric at the designated location. At www.spec.org, you can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle, and you can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM. Despite the implication of the footnote, you will not find any mention of 449, nor anything that says 823. SPEC says that you can, under its fair use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value."

    Substantiation and transparency. Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, and should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: Were like units compared to like (e.g. list price to list price)? Were all components (hw, sw, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable.

    T5 claim for "fastest processor". Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing: "You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results." Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads). The industry-standard benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5 provides better performance than both POWER7 and POWER7+, for 1 chip vs. 1 chip, for 8 chips vs. 8 chips, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for peak tuning and for base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is 3750 for SPARC and 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page.

    SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org 1 July 2013, and can be rechecked here.

    Read the article

  • Drawing custom graphics on the iPhone: CALayer vs. CGContext

    - by Henry Cooke
    Hi all, I have an application in which I'm doing some custom drawing: a bunch of lines on a gradient background, like so (ignore the text, they're just UILabels). At the moment, that's all done by starting a new CGContext, drawing stuff into it with CGContextDrawLinearGradient and CGContextStrokePath, then finally saving the resulting image with UIGraphicsGetImageFromCurrentImageContext. The positioning info is calculated while I'm laying out those labels, so it'd be a PITA (and duplication of effort) to calculate it all over again when the containing UIView is drawn with drawRect:, so I'm drawing it ahead of time into a UIImage. All works fine; so far so good. However, I have a sneaking suspicion that it may be more efficient to use CALayers to do this drawing. My (cursory) understanding of the difference between the two approaches is that a CALayer is more like a bunch of instructions to draw stuff, and so takes up less memory until it's actually drawn onscreen, whereas drawing everything into a UIImage ahead of time means that you've got a sodding great bitmap kicking around in memory all the time, whether it's drawn or not. Is that a correct understanding? What is generally considered to be the best way of drawing custom images on the iPhone?
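
    One middle ground between the two approaches is to cache only the computed positioning data and let a layer draw on demand through its delegate, so no full-size UIImage sits in memory and nothing is rendered until the layer actually appears. A minimal sketch of that pattern (the layer setup and the delegate callback live in the same class here; names are hypothetical):

        CALayer *chartLayer = [CALayer layer];
        chartLayer.frame = self.view.bounds;
        chartLayer.delegate = self;       // we draw only when asked
        [self.view.layer addSublayer:chartLayer];
        [chartLayer setNeedsDisplay];     // schedules a draw; no UIImage is kept

        - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
            // Reuse the cached positioning info from the label layout pass,
            // drawing with CGContextDrawLinearGradient and CGContextStrokePath
            // exactly as in the offscreen-image version.
        }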

    Read the article

  • CATiledLayer: Determining levelsOfDetail when in drawLayer

    - by Russell Quinn
    I have a CATiledLayer inside a UIScrollView and all is working fine. Now I want to add support for showing different tiles at three levels of zoom. I have set levelsOfDetail to 3 and my tile size is 300 x 300. This means I need to provide three sets of tiles (I'm supplying PNGs) to cover 300 x 300, 600 x 600 and 1200 x 1200. My problem is that inside -(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx I cannot work out which level of detail is currently being drawn. I can retrieve the bounds currently required by using CGContextGetClipBoundingBox, and usually this requests a rect of one of the above sizes, but at layer edges the tiles are usually smaller, so this isn't a reliable method. Basically, if I have set levelsOfDetail to 3, how do I find out whether drawLayer: is being asked for level 1, 2 or 3 when it's called? Thanks, Russell.
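
    A common way to recover the level of detail inside drawLayer:inContext: is to read the scale factor off the context's transform instead of inferring it from the clip rect, since CATiledLayer pre-scales the CTM for each level. A minimal sketch of that approach:

        - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
            // The CTM's scale identifies the level being rendered: e.g. 1.0,
            // 2.0, 4.0 for zoom-in levels (or 0.5, 0.25 when the extra levels
            // are zoom-out levels without a levelsOfDetailBias).
            CGFloat scale = CGContextGetCTM(ctx).a;   // x-scale component
            CGRect tileRect = CGContextGetClipBoundingBox(ctx);
            // pick the 300x300, 600x600 or 1200x1200 tile set based on `scale`,
            // then draw the PNG that covers tileRect
        }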

    Read the article

  • How to get UIScrollView contentOffset during an animation

    - by user249488
    I am trying to get the contentOffset property of a UIScrollView in the middle of a setContentOffset: animation. Note I am not using the animated: parameter of the method, but instead enclosing the setContentOffset: call within a UIView animation block so I can have finer control. When I try to read the value of contentOffset in the middle of the animation, it returns the final value rather than the value at that moment. I know that you can use the presentationLayer to get the current layer, but is there any way to get the current offset in the middle of an animation?
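
    Since the model value jumps to its final state as soon as the animation is committed, the in-flight value has to be sampled from the render tree; a scroll view's contentOffset is mirrored in its layer's bounds origin, which suggests a sketch like the following (an assumption worth testing, since presentationLayer can be nil outside an active transaction):

        CALayer *inFlight = (CALayer *)scrollView.layer.presentationLayer;
        CGPoint currentOffset = inFlight ? inFlight.bounds.origin
                                         : scrollView.contentOffset;
        // inFlight reflects the mid-animation state; fall back to the model
        // value when no animation is running.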

    Read the article

  • Drawing performance with CGImageCreateWithJPEGDataProvider?

    - by Rnegi
    I'm actually curious about this for the iPhone. I am getting an MJPEG stream from a server and trying to render it natively on the iPhone (without the use of the Safari class). The reason is that the Safari class, while it CAN render MJPEG natively, does not do so at the framerate I would like. So I tried drawing it natively, but I've run into performance issues, namely a syncing problem between what I'm getting from the server and what I am able to draw onto the screen of the phone. (There should be a little lag, but the drift gets really bad, which is what I want to avoid.) So I have a connection set up to my server and I do get the JPEGs; each one is just data I insert into an NSMutableArray buffer:

        CFMutableDataRef _t_data_ref = (CFMutableDataRef)[_buffer_array objectAtIndex:0];
        //CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(_cf_buffer_data);
        CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(_t_data_ref);
        CGImageRef image = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true,
                                                             kCGRenderingIntentDefault);
        CGImageRef imgRef = image;
        CGContextDrawImage(context, CGRectMake(0, 17, 380, 285), imgRef);
        CGImageRelease(image);
        CGDataProviderRelease(imgDataProvider);

    Please note this is the gist of my code, but it should summarize what I am trying to accomplish with regard to drawing. Also, in order to get the framerate in sync, I had to detach a separate thread that sleeps X seconds and calls [self setNeedsDisplay]:

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; // top-level pool
        while (1) {
            //[NSThread sleepForTimeInterval:TIMER_REFRESH_VALUE];
            usleep(MICRO_REFRESH_VALUE);
            if ([_buffer_array count] > 10) {
                //NSLog(@"stuff %d", [_buffer_array count]);
                [self performSelectorOnMainThread:@selector(setNeedsDisplay)
                                       withObject:nil
                                    waitUntilDone:NO];
            }
        }
        [pool release]; // release the objects in the pool

    My buffer of JPEG data actually fills up quite quickly, but I can't seem to consume it at the same rate; drawing is actually much slower. Are there any documents that describe what kind of performance tuning I can do to make rendering the JPEGs to the screen go faster? Or am I kind of stuck here? Thanks!
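
    One thing the excerpt never shows is the consumed frame leaving the buffer: index 0 is drawn each time, but if nothing removes it while the network side keeps appending, the backlog grows and looks exactly like the drift described. Purely as a hypothesis, a draining version of the draw path would look like this (assuming _buffer_array is touched only on the main thread or under a lock, and manual reference counting as in the original):

        NSMutableData *frame = [[_buffer_array objectAtIndex:0] retain];
        [_buffer_array removeObjectAtIndex:0];   // consume the frame, don't just peek
        CGDataProviderRef provider =
            CGDataProviderCreateWithCFData((CFMutableDataRef)frame);
        // ...CGImageCreateWithJPEGDataProvider / CGContextDrawImage as above...
        CGDataProviderRelease(provider);
        [frame release];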

    Read the article

  • Optimizing drawing on UITableViewCell

    - by Brian
    I am drawing content to a UITableViewCell and it is working well, but I'm trying to understand if there is a better way of doing this. Each cell has the following components:

    • Thumbnail on the left side; it could come from the server, so it is loaded async
    • Title string; variable length, so each cell could be a different height
    • Timestamp string
    • Gradient background; the gradient goes from the top of the cell to the bottom and is semi-transparent, so that background colors shine through with a gloss

    It currently works well. The drawing occurs as follows:

    1. The UITableViewController inits/reuses a cell, sets the needed data, and calls [cell setNeedsDisplay].
    2. The cell has a CALayer for the thumbnail: thumbnailLayer.
    3. In the cell's drawRect: it draws the gradient background and the two strings.
    4. The cell's drawRect: then calls setIcon, which gets the thumbnail and sets the image as the contents of the thumbnailLayer. If the image is not found locally, it sets a loading image as the contents of the thumbnailLayer and asynchronously fetches the thumbnail. Once the thumbnail is received, setIcon is called again and resets thumbnailLayer.contents.

    This all currently works, but using Instruments I see that the thumbnail is compositing with the gradient. I have tried the following to fix this:

    • Setting the cell's backgroundView to a view whose drawRect: draws the gradient, so that the cell's drawRect: could draw the thumbnail, and using setNeedsDisplayInRect: would let me redraw only the thumbnail after it loaded --- but this resulted in the backgroundView's drawing (the gradient) covering the cell's drawing (the text).
    • I would just draw the thumbnail in the cell's drawRect:, but when setNeedsDisplay is called, drawRect: will just draw one image over another, and the loading image may show through. I would clear the rect, but then I would have to redraw the gradient.
    • I would try to draw the gradient in a CAGradientLayer and store a reference to it, so I can quickly redraw it, but I figured I'd have to redraw the gradient if the cell's height changes.

    Any ideas? I'm sure I'm missing something, so any help would be great.
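
    On the last point: a CAGradientLayer never needs to be redrawn when the cell's height changes, only resized, and the cell can do that in layoutSubviews without invalidating any drawing. A minimal sketch of that arrangement (the gradientLayer property and the color stops are hypothetical):

        // Once, at cell creation:
        CAGradientLayer *gradient = [CAGradientLayer layer];
        gradient.colors = [NSArray arrayWithObjects:
            (id)[[UIColor whiteColor] colorWithAlphaComponent:0.4f].CGColor,
            (id)[[UIColor blackColor] colorWithAlphaComponent:0.1f].CGColor,
            nil];
        [self.layer insertSublayer:gradient atIndex:0];  // behind text and thumbnail
        self.gradientLayer = gradient;

        // On every size change:
        - (void)layoutSubviews {
            [super layoutSubviews];
            self.gradientLayer.frame = self.bounds;  // resized by Core Animation, no drawRect: needed
        }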

    Read the article
