Search Results

Search found 13860 results on 555 pages for 'core graphics'.


  • NSManagedObjectContext returns YES for hasChanges when there are none

    - by JK
    I created a separate NSManagedObjectContext on a separate thread to perform some store maintenance. However, I have noticed that the context returns YES for hasChanges as soon as a managed object in it is even referenced, e.g. NSString *name = managedObject.name; The context is created and used exclusively in one method. Why does it report changes when there are none?
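
    A small diagnostic sketch (not from the original question) that dumps whatever the context believes has changed; the helper and the way the context is obtained are assumptions:

        #import <CoreData/CoreData.h>

        // Hypothetical helper: logs everything the context considers pending.
        static void LogPendingChanges(NSManagedObjectContext *context)
        {
            NSLog(@"inserted: %@", [context insertedObjects]);
            NSLog(@"deleted: %@", [context deletedObjects]);
            for (NSManagedObject *object in [context updatedObjects]) {
                // changedValues shows which attributes Core Data thinks were modified,
                // e.g. by code that runs in awakeFromFetch when a fault fires.
                NSLog(@"%@ -> %@", [[object entity] name], [object changedValues]);
            }
        }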


  • Java java.util.ConcurrentModificationException error

    - by vijay
    Hi all, can anybody please help me solve this problem? I have not been able to solve this error for many days now. I tried using a synchronized method and other approaches, but nothing worked, so please help me.

    Error:

        java.util.ConcurrentModificationException
            at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
            at java.util.AbstractList$Itr.remove(Unknown Source)
            at JCA.startAnalysis(JCA.java:103)
            at PrgMain2.doPost(PrgMain2.java:235)

    Code:

        public synchronized void startAnalysis() {
            // set starting centroid positions - Start of Step 1
            setInitialCentroids();
            Iterator<DataPoint> n = mDataPoints.iterator();
            // assign DataPoints to clusters
            loop1:
            while (true) {
                for (Cluster c : clusters) {
                    c.addDataPoint(n.next());
                    if (!n.hasNext()) break loop1;
                }
            }
            // calculate E for all the clusters
            calcSWCSS();
            // recalculate Cluster centroids - Start of Step 2
            for (Cluster c : clusters) {
                c.getCentroid().calcCentroid();
            }
            // recalculate E for all the clusters
            calcSWCSS();
            // List copy = new ArrayList(originalList);
            // synchronized (c) {
            for (int i = 0; i < miter; i++) {
                // enter the loop for cluster 1
                for (Cluster c : clusters) {
                    for (Iterator<DataPoint> k = c.getDataPoints().iterator(); k.hasNext(); ) {
                        // synchronized (k) {
                        DataPoint dp = k.next();
                        System.out.println("Value of DP" + dp);
                        // pick the first element of the first cluster
                        // get the current Euclidean distance
                        double tempEuDt = dp.getCurrentEuDt();
                        Cluster tempCluster = null;
                        boolean matchFoundFlag = false;
                        // call testEuclideanDistance for all clusters
                        for (Cluster d : clusters) {
                            // if testEuclidean < currentEuclidean then
                            if (tempEuDt > dp.testEuclideanDistance(d.getCentroid())) {
                                tempEuDt = dp.testEuclideanDistance(d.getCentroid());
                                tempCluster = d;
                                matchFoundFlag = true;
                            } // if statement - check whether the last EuDt is > the present EuDt
                        } // for 'd' - looping over clusters to match a DataPoint
                        // add the DataPoint to the cluster and calcSWCSS
                        if (matchFoundFlag) {
                            tempCluster.addDataPoint(dp);
                            // k.notify();
                            // if (k.hasNext())
                            k.remove();
                            for (Cluster d : clusters) {
                                d.getCentroid().calcCentroid();
                            } // for 'd' - recalculating centroids for all clusters
                            calcSWCSS();
                        } // if statement - a DataPoint is eligible for transfer between clusters
                        // } // syn
                    } // for 'k' - looping through all DataPoints of the current cluster
                } // for 'c' - looping through all the clusters
            } // for 'i' - number of iterations
            // syn
        }


  • CoreGraphics taking a while to show on a large view - can I get it to repeat pixels?

    - by Andrew
    This is my Core Graphics code:

        void drawTopPaperBackground(CGContextRef context, CGRect rect) {
            CGRect paper3 = CGRectMake(10, 14, 300, rect.size.height - 14);
            CGRect paper2 = CGRectMake(13, 12, 294, rect.size.height - 12);
            CGRect paper1 = CGRectMake(16, 10, 288, rect.size.height - 10);

            // Shadow
            CGContextSetShadowWithColor(context, CGSizeMake(0, 0), 10, [[UIColor colorWithWhite:0 alpha:0.5] CGColor]);
            CGPathRef path = createRoundedRectForRect(paper3, 0);
            CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextFillPath(context);

            // Layers of paper
            //CGContextSaveGState(context);
            drawPaper(context, paper3);
            drawPaper(context, paper2);
            drawPaper(context, paper1);
            //CGContextRestoreGState(context);
        }

        void drawPaper(CGContextRef context, CGRect rect) {
            // Shadow
            CGContextSaveGState(context);
            CGContextSetShadowWithColor(context, CGSizeMake(0, 0), 1, [[UIColor colorWithWhite:0 alpha:0.5] CGColor]);
            CGPathRef path = createRoundedRectForRect(rect, 0);
            CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextFillPath(context);
            //CGContextRestoreGState(context);

            // Gradient
            //CGContextSaveGState(context);
            CGColorRef startColor = [UIColor colorWithWhite:0.92 alpha:1.0].CGColor;
            CGColorRef endColor = [UIColor colorWithWhite:0.94 alpha:1.0].CGColor;
            CGRect firstHalf = CGRectMake(rect.origin.x, rect.origin.y, rect.size.width / 2, rect.size.height);
            CGRect secondHalf = CGRectMake(rect.origin.x + (rect.size.width / 2), rect.origin.y, rect.size.width / 2, rect.size.height);
            drawVerticalGradient(context, firstHalf, startColor, endColor);
            drawVerticalGradient(context, secondHalf, endColor, startColor);
            //CGContextRestoreGState(context);

            //CGContextSaveGState(context);
            CGRect redRect = rectForRectWithInset(rect, -1);
            CGMutablePathRef redPath = createRoundedRectForRect(redRect, 0);
            //CGContextSaveGState(context);
            CGContextSetStrokeColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextClip(context);
            CGContextAddPath(context, redPath);
            CGContextSetShadowWithColor(context, CGSizeMake(0, 0), 15.0, [[UIColor colorWithWhite:0 alpha:0.1] CGColor]);
            CGContextStrokePath(context);
            CGContextRestoreGState(context);
        }

    The view is a UIScrollView which contains a text view. Every time the user types something and goes onto a new line, I call [self setNeedsDisplay]; and it redraws. But when the view starts to get long - around 1000 points in height - there is very noticeable lag. How can I make this code more efficient? Can I take a line of pixels and just repeat that, or stretch it, all the way down?
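
    One direction worth trying (not from the question): render the paper once into an image, mark a thin band as stretchable, and let a UIImageView behind the text view do the stretching instead of redrawing in drawRect: on every line break. A rough sketch, assuming iOS 4 or later, the drawTopPaperBackground function above, and a scrollView ivar:

        // Render the background once, at a modest template height.
        CGSize templateSize = CGSizeMake(320, 200);
        UIGraphicsBeginImageContextWithOptions(templateSize, NO, 0);
        drawTopPaperBackground(UIGraphicsGetCurrentContext(),
                               CGRectMake(0, 0, templateSize.width, templateSize.height));
        UIImage *paper = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // Let UIKit stretch the band below the cap instead of redrawing it.
        UIImage *stretchable = [paper stretchableImageWithLeftCapWidth:0 topCapHeight:60];
        UIImageView *background = [[UIImageView alloc] initWithImage:stretchable];
        background.frame = scrollView.bounds;                          // scrollView is assumed
        background.autoresizingMask = UIViewAutoresizingFlexibleHeight;
        [scrollView insertSubview:background atIndex:0];
        [background release];                                          // pre-ARC, as in the post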


  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied elsewhere within the same surface:

        add_blt(rect src, point dst);

    Any number of operations can be posted, in order, to the queue. Eventually the user of the queue stops posting blits and asks for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily. This gets tricky because of overlaps, of course: a blit can re-blit a previously copied pixel. Ideally, blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation.

    It's tricky but not impossible to put this together; I'm just trying not to reinvent the wheel. I looked around on the net, and the only thing I found was the SDL_BlitPool library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that would be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work?

    - Implement customized region-handler code that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it automatically associates the metadata of that rectangle with the resulting sub-rectangles.
    - When the optimization run starts, create an empty region handled by the above customized code; call this the master region.
    - Iterate through the blt queue, and for every entry:
        - Let srcrect be the source rectangle for the blt being examined.
        - Get the intersection of srcrect and the master region into a temp region.
        - Remove the temp region from the master region, so the master region no longer covers it.
        - Promote srcrect to a region (srcrgn) and subtract the temp region from it.
        - Offset the temp region and srcrgn by the vector of the current blt: their union will cover the destination area of the current blt.
        - Add to the master region all rects in the temp region, retaining the original source metadata (step one of adding the current blt to the master region).
        - Add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region).
    - Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If so, combine them.
    - Enumerate the master region's sub-rectangles. Every rectangle returned is the destination of an optimized blt operation; the associated metadata is that operation's source.
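
    Not part of the question, but a minimal sketch of the bookkeeping the "master region" idea needs, written in C; every name here is a placeholder:

        // Hypothetical building blocks for the master-region bookkeeping.
        typedef struct { int x1, y1, x2, y2; } Rect;

        typedef struct {
            Rect rect;        // where these pixels currently sit inside the region
            Rect src;         // where they originally came from on the surface
            int  generation;  // which queued blt last touched them
        } AnnotatedRect;

        // The merge rule from the post; the caller still has to verify that the
        // two rects are adjacent and that their source metadata lines up.
        static int mergeCandidates(const AnnotatedRect *a, const AnnotatedRect *b) {
            return (a->rect.x1 == b->rect.x1 && a->rect.x2 == b->rect.x2)
                || (a->rect.y1 == b->rect.y1 && a->rect.y2 == b->rect.y2);
        }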


  • Optimally place a pie slice in a rectangle.

    - by Lisa
    Given a rectangle (w, h) and a pie slice with a start angle and an end angle, how can I place the slice optimally in the rectangle so that it fills the available space best (from an optical point of view, not mathematically speaking)? I'm currently placing the pie slice's center in the center of the rectangle and using half of the smaller rectangle side as the radius. This leaves plenty of empty room for certain configurations. Examples to make clear what I'm after, based on the precondition that the slice is drawn like a unit circle:

    - A start angle of 0 and an end angle of PI would lead to a filled lower half of the rectangle and an empty upper half. A good solution here would be to move the center up by 1/4 * h.
    - A start angle of 0 and an end angle of PI/2 would lead to a filled bottom-right quarter of the rectangle. A good solution here would be to move the center point to the top left of the rectangle and to set the radius to the smaller of the two rectangle sides.

    This is fairly easy for the cases I've sketched, but it becomes complicated when the start and end angles are arbitrary. I am searching for an algorithm which determines the center of the slice and the radius in a way that fills the rectangle best. Pseudo code would be great since I'm not a big mathematician.
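
    Here is one possible approach (an assumption, not from the question): compute the bounding box of the slice on a unit circle, then scale and centre that box inside the rectangle. The sketch assumes 0 <= start <= end <= 2*PI and uses CGGeometry types:

        #import <CoreGraphics/CoreGraphics.h>
        #include <math.h>

        // Bounding box of a unit pie slice whose centre is at the origin.
        static CGRect unitSliceBounds(double start, double end) {
            double minX = 0, maxX = 0, minY = 0, maxY = 0;   // the centre point is always included
            double angles[] = { start, end, 0, M_PI_2, M_PI, 3 * M_PI_2 };
            for (int i = 0; i < 6; i++) {
                double a = angles[i];
                if (i >= 2 && (a < start || a > end)) continue;  // axis extreme not inside the slice
                minX = fmin(minX, cos(a)); maxX = fmax(maxX, cos(a));
                minY = fmin(minY, sin(a)); maxY = fmax(maxY, sin(a));
            }
            return CGRectMake(minX, minY, maxX - minX, maxY - minY);
        }

        // Returns the radius and writes the centre point to use inside a w x h rectangle.
        static double fitSlice(double w, double h, double start, double end, CGPoint *centre) {
            CGRect b = unitSliceBounds(start, end);
            double scale = fmin(w / b.size.width, h / b.size.height);
            // Centre the scaled bounding box, then work back to where the slice centre lands.
            centre->x = (w - b.size.width * scale) / 2 - b.origin.x * scale;
            centre->y = (h - b.size.height * scale) / 2 - b.origin.y * scale;
            return scale;  // the radius, because the slice was built on a unit circle
        }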


  • .GIF re-edit! Can't figure it out!

    - by Adam C
    http://img227.imageshack.us/img227/1892/hatersgonna.gif That is the image. I am trying to crop around it so it's a little smaller, and make him walk in the opposite direction. The reason I am doing this is for a vBulletin forum signature, since it marquees left to right. I have tried editing the animation in Photoshop and I flipped the canvas horizontally, but I can't figure this out; I've been at it for hours. Also, if anyone could make it just a little darker, that would be amazing. No, I'm not asking for free help, but any help would be great. Thank you so much.


  • How to distort the desktop screen

    - by HaifengWang
    Hi friends, I want to change the shape of the desktop screen, so that whatever is displayed on the desktop is distorted at the same time, while the user can still operate the PC with the mouse on the distorted desktop (run applications, open "My Computer", and so on). I think I must first get the projection matrix of the screen coordinates, then transform the matrix and map the desktop buffer image onto the distorted mesh. Are there any interfaces which can modify the shape of the desktop screen in OpenGL or DirectX? Would you please give me some tips on it? Thank you very much in advance. Please refer to the picture at http://oi53.tinypic.com/bhewdx.jpg BR, Haifeng

    Addition 1: I'm sorry, maybe I didn't express clearly what I want to implement. What I want to implement is to modify the shape of the screen, so that the shapes of all the applications running on Windows are distorted at the same time. For example, the window of "My Computer" would be distorted along with the desktop screen, and the user could still operate the PC with the mouse on the distorted desktop (click a shortcut to run a program).

    Addition 2: The projection matrix is just my assumption; there isn't any desktop projection matrix by which the desktop surface is projected to the screen. What I want to implement is to change the shape of the desktop, much like mapping the desktop onto a 3D mesh, while the user can still operate the OS on the distorted desktop (click a shortcut to run a program, open IE to surf the internet).

    Addition 3: The shapes of all the programs running on the OS change with the distortion of the screen, in real time. The user can still operate the OS on the distorted screen as usual. Maybe we can intercept or override the GPU itself to implement the effect. I'm investigating GDI; I think I can find some clue there. The first step is to find how the desktop is shown on the screen.


  • CoreData is saving model without the save: method being called

    - by chris
    I have a list of items with a plus button in the navigation bar that opens a modal window containing a table of the model's attributes, which displays a form when the table items are clicked (pretty standard form style). For some reason, if I click the plus button to open the form to create a new model and then immediately click the done button, the Person model is saved. The action linked to the done button does nothing but call a delegate method notifying the PersonListViewController to close the window. The Apple docs state that the model is not saved after calling insertNewObjectForEntityForName: ...

        Simply creating a managed object does not cause it to be saved to a persistent store. The managed object context acts as a scratchpad.

    I am at a loss as to why this is happening, but every time I click the done button I have a new blank item in the original tableView. I am using SDK v3.1.3.

        // PersonListViewController - open modal window
        - (void)addPerson {
            // Load the new form
            PersonNewViewController *newController = [[PersonNewViewController alloc] init];
            newController.modalFormDelegate = self;
            [self presentModalViewController:newController animated:YES];
            [newController release];
        }

        // PersonFormViewController
        - (void)viewDidLoad {
            [super viewDidLoad];
            if ( person == nil ) {
                MyAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
                self.person = [NSEntityDescription insertNewObjectForEntityForName:@"Person"
                                                        inManagedObjectContext:appDelegate.managedObjectContext];
            }
            ...
            UIBarButtonItem *doneButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone
                                                                                        target:self
                                                                                        action:@selector(done)];
            self.navigationItem.rightBarButtonItem = doneButton;
            [doneButton release];
        }

        - (IBAction)done {
            [self.modalFormDelegate didCancel];
        }

        // delegate method in the original listViewController
        - (void)didCancel {
            [self dismissModalViewControllerAnimated:YES];
        }
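
    Not part of the question, but if the placeholder Person should not survive a cancel, one common pattern is to delete it from the context before dismissing. A sketch, reusing the same property names as an assumption:

        // In PersonFormViewController, a hypothetical cancel path.
        - (IBAction)cancel {
            if (person != nil && [person isInserted]) {
                // Throw the scratchpad object away so it never shows up in the list.
                [[person managedObjectContext] deleteObject:person];
                self.person = nil;
            }
            [self.modalFormDelegate didCancel];
        }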


  • Displaying bitmaps in relative positions

    - by JonF
    I'd like to put a couple of images on a SurfaceView. I understand that the screen sizes of Android devices vary, so I don't think I can just use fixed x/y positions or I might end up placing the images off-screen on some devices. Say I want to put two boxes in the center of the screen, a blue one and a red one, with the blue one to the left of the red one. How can I accomplish that while accounting for different screen sizes?


  • How do I get a layer's frame to automatically resize based on its superlayer's frame or its view's frame?

    - by Kevlar
    I'm experimenting with using CAGradientLayer to draw gradients in our app instead of having a subclass of UIView manage the gradients. One snag I've come across is that when the view that has the gradient as a sublayer of its main layer gets resized to fit the data I am trying to show, the layer doesn't resize along with it. I end up with the gradient layer ending at the original frame size while my view's frame is much larger. Is there a way to have the sublayer automatically resize to fit its superlayer's frame, or the superlayer's view's frame?
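
    Not from the question, but one common workaround is to let the owning view push its bounds into the sublayer whenever it is laid out; gradientLayer is an assumed property name:

        // In the UIView subclass that owns the gradient sublayer.
        - (void)layoutSubviews {
            [super layoutSubviews];
            self.gradientLayer.frame = self.bounds;  // follows every resize of the view
        }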


  • How to animate the frame of a layer with CABasicAnimation?

    - by mystify
    I guess I have to convert the CGRect into an object to pass it to fromValue? This is how I try it, but it doesn't work:

        CABasicAnimation *frameAnimation = [CABasicAnimation animationWithKeyPath:@"frame"];
        frameAnimation.duration = 2.5;
        frameAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
        frameAnimation.fromValue = [NSValue valueWithCGRect:myLayer.frame];
        frameAnimation.toValue = [NSValue valueWithCGRect:theNewFrameRect];
        [myLayer addAnimation:frameAnimation forKey:@"MLC"];
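
    Not stated in the question, but frame is a derived property and the CALayer documentation lists it as not animatable; the usual route is to animate bounds and position together. A sketch reusing the names above, assuming iOS (for the NSValue CGRect/CGPoint additions) and the default anchorPoint of (0.5, 0.5):

        CGRect newFrame = theNewFrameRect;  // as in the question

        CABasicAnimation *boundsAnimation = [CABasicAnimation animationWithKeyPath:@"bounds"];
        boundsAnimation.fromValue = [NSValue valueWithCGRect:myLayer.bounds];
        boundsAnimation.toValue = [NSValue valueWithCGRect:CGRectMake(0, 0, newFrame.size.width, newFrame.size.height)];

        CABasicAnimation *positionAnimation = [CABasicAnimation animationWithKeyPath:@"position"];
        positionAnimation.fromValue = [NSValue valueWithCGPoint:myLayer.position];
        positionAnimation.toValue = [NSValue valueWithCGPoint:CGPointMake(CGRectGetMidX(newFrame), CGRectGetMidY(newFrame))];

        CAAnimationGroup *group = [CAAnimationGroup animation];
        group.animations = [NSArray arrayWithObjects:boundsAnimation, positionAnimation, nil];
        group.duration = 2.5;
        group.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];

        // Update the model values as well, or the layer snaps back when the animation ends.
        myLayer.bounds = CGRectMake(0, 0, newFrame.size.width, newFrame.size.height);
        myLayer.position = CGPointMake(CGRectGetMidX(newFrame), CGRectGetMidY(newFrame));
        [myLayer addAnimation:group forKey:@"MLC"];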


  • How to change the coordinate of a point that is inside a GraphicsPath?

    - by Ben
    Is there any way to change the coordinates of some of the points within a GraphicsPath object while leaving the other points where they are? The GraphicsPath object that gets passed into my method will contain a mixture of polygons and lines. My method would want to look something like:

        void UpdateGraphicsPath(GraphicsPath gPath, RectangleF regionToBeChanged, PointF delta)
        {
            // Find the points in gPath that are inside regionToBeChanged
            // and move them by delta.
            // gPath.PathPoints[i].X += delta.X;  // Compiles but doesn't work
        }

    GraphicsPath.PathPoints seems to be read-only, and so does GraphicsPath.PathData.Points, so I am wondering if this is even possible. Perhaps by generating a new GraphicsPath object with an updated set of points? And how can I tell whether a point is part of a line or a polygon? If anyone has any suggestions I would be grateful.


  • Methods for making R plots look like Excel plots?

    - by brianjd
    I've been poking around with R graphical parameters trying to make my plots look a little more professional (e.g., las=1 and bty="n" usually help), but I'm not quite there. I started playing with tikzDevice - a huge improvement! It's amazing how much better things look when the font sizes and styles in the figure match those of the surrounding document. Still, not quite there. What I'm ultimately looking for is the professional gradient shading, rounded corners, and shadow effects found in MS Excel plots. I know they're probably considered chart junk, but I like them; they're just nice looking. Q: How can I get these effects into my R plots? Do people usually just export to Inkscape and doodle over there? It would be nice if there were a literate programming approach. Is there an R package that handles these effects outright?


  • Resizing an image with alpha channel

    - by Hafthor
    I am writing some code to generate images - essentially I have a source image that is large and includes transparent regions. I use GDI+ to open that image and add additional objects. What I want to do next is to save this new image much smaller, so I used the Bitmap constructor that takes a source Image object and a height and width, then saved that. I was expecting the alpha channel to be smoothed like the color channels, but this did not happen - it did result in a couple of semitransparent pixels, but overall it is very blocky. What gives?

        Using img As New Bitmap("source100x100.png")
            ''// Drawing stuff
            Using simg As New Bitmap(img, 20, 20)
                simg.Save("target20x20.png")
            End Using
        End Using

    Edit: I think what I want is supersampling, like what Paint.NET does when set to "Best Quality".


  • Code Interaction with Quartz Composition

    - by Alberto MQO
    Hi, I have a Quartz Composition with a cube, and the X/Y/Z rotation inputs are published. In Interface Builder I made a QCView and a QCPatchController with that Quartz Composition loaded. The QCView is bound to the patch controller, and the published rotation ports are bound to three NSSliders, so when I change the value of the NSSliders the cube rotates. All this works fine, but I want to change the rotation values of the cube from the app delegate in Xcode. I tried to change the value of the NSSliders through IBOutlets pointing to them, but the change doesn't apply to the cube the way it does when I move the sliders directly with the mouse. What should I instantiate, and how do I access and change these input port values through the QCPatchController? Thank you very much for reading, I really need help!
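
    One way to try this from code (an assumption, not from the question): QCView adopts the QCCompositionRenderer protocol, so a published port can be set directly on the view. The outlet and port names below are placeholders:

        // In the application delegate; qcView is an IBOutlet to the QCView.
        - (void)setCubeRotationX:(double)angle {
            // "Rotation_X" must match the actual name of the published input port.
            [qcView setValue:[NSNumber numberWithDouble:angle] forInputKey:@"Rotation_X"];
            // Another possibility is going through the patch controller's key path,
            // e.g. [patchController setValue:... forKeyPath:@"patch.Rotation_X.value"].
        }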


  • How do you populate an NSArrayController with CoreData rows programmatically?

    - by Andrew McCloud
    After several hours/days of searching and diving into example projects, I've concluded that I need to just ask. If I bind the assetsView (IKImageBrowserView) directly to an IB instance of NSArrayController, everything works just fine.

        - (void)awakeFromNib {
            library = [[NSArrayController alloc] init];
            [library setManagedObjectContext:[[NSApp delegate] managedObjectContext]];
            [library setEntityName:@"Asset"];
            NSLog(@"%@", [library arrangedObjects]);
            NSLog(@"%@", [library content]);
            [assetsView setDataSource:library];
            [assetsView reloadData];
        }

    Both NSLogs are empty. I know I'm missing something... I just don't know what. The goal is to eventually allow multiple instances of this view's "library", filtered programmatically with a predicate. For now I'm just trying to have it display all of the rows for the "Asset" entity.

    Addition: If I create the NSArrayController in IB and then try to log [library arrangedObjects] or manually set the data source for assetsView, I get the same empty results. Like I said earlier, if I bind library.arrangedObjects to assetsView.content (IKImageBrowserView) in IB - with the same managed object context and the same entity name set in IB - everything works as expected.

        - (void)awakeFromNib {
            // library = [[NSArrayController alloc] init];
            // [library setManagedObjectContext:[[NSApp delegate] managedObjectContext]];
            // [library setEntityName:@"Asset"];
            NSLog(@"%@", [library arrangedObjects]);
            NSLog(@"%@", [library content]);
            [assetsView setDataSource:library];
            [assetsView reloadData];
        }
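
    A possible explanation (not from the question): an NSArrayController created in code never performs its fetch by itself, so its content stays empty until it is told to fetch. A sketch of awakeFromNib with an explicit, synchronous fetch added; the data-source wiring is kept exactly as in the question:

        - (void)awakeFromNib {
            library = [[NSArrayController alloc] init];
            [library setManagedObjectContext:[[NSApp delegate] managedObjectContext]];
            [library setEntityName:@"Asset"];

            NSError *error = nil;
            // Passing nil uses the controller's default fetch request for the entity.
            if (![library fetchWithRequest:nil merge:NO error:&error]) {
                NSLog(@"fetch failed: %@", error);
            }

            NSLog(@"%@", [library arrangedObjects]);  // should now contain the Asset rows
            [assetsView setDataSource:library];
            [assetsView reloadData];
        }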


  • What OpenGL functions are not GPU accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki):

        glTranslate, glRotate, glScale
        Are these hardware accelerated? No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU. All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho. This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, upload your matrix to the shader.

    For a very, very long time I thought most of the OpenGL functions used the GPU to do computation. I'm not sure if this is a common misconception, but after some thought it makes sense: old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: which OpenGL function calls don't use the GPU? I believe knowing the answer would help me become a better OpenGL programmer. Please do share some of your insights.
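
    For what it's worth, a tiny sketch of the "build your own matrix and upload it" path the wiki passage describes; it assumes a GL 3.x core-profile context on OS X and a shader with a u_modelView uniform:

        #include <OpenGL/gl3.h>   // assumption: desktop OS X, core profile

        // CPU-built replacement for glTranslatef(tx, ty, tz): the matrix is
        // computed here and only the finished values are sent to the GPU.
        static void uploadTranslation(GLuint program, float tx, float ty, float tz) {
            GLfloat m[16] = {          // column-major 4x4 translation matrix
                1, 0, 0, 0,
                0, 1, 0, 0,
                0, 0, 1, 0,
                tx, ty, tz, 1
            };
            GLint loc = glGetUniformLocation(program, "u_modelView"); // uniform name assumed
            glUniformMatrix4fv(loc, 1, GL_FALSE, m);
        }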


  • How to convert an m4a file to an AAC ADTS file in Xcode?

    - by Bird Hsuie
    I have an mp4 file copied from the iPod library and saved to my Documents directory. For my next step I need to convert it to .mp3 or .aac (ADTS type). I used this code and it failed:

        - (IBAction)compressFile:(id)sender {
            NSLog(@"handleConvertToPCMTapped");

            // open an ExtAudioFile
            NSLog(@"opening %@", exportURL);
            ExtAudioFileRef inputFile;
            CheckResult(ExtAudioFileOpenURL((__bridge CFURLRef)exportURL, &inputFile),
                        "ExtAudioFileOpenURL failed");

            // prepare to convert to a plain ol' PCM format
            AudioStreamBasicDescription myPCMFormat;
            myPCMFormat.mSampleRate = 44100; // todo: or use source rate?
            myPCMFormat.mFormatID = kAudioFormatMPEGLayer3;
            myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
            myPCMFormat.mChannelsPerFrame = 2;
            myPCMFormat.mFramesPerPacket = 1;
            myPCMFormat.mBitsPerChannel = 16;
            myPCMFormat.mBytesPerPacket = 4;
            myPCMFormat.mBytesPerFrame = 4;
            CheckResult(ExtAudioFileSetProperty(inputFile, kExtAudioFileProperty_ClientDataFormat,
                                                sizeof(myPCMFormat), &myPCMFormat),
                        "ExtAudioFileSetProperty failed");

            // allocate a big buffer. size can be arbitrary for ExtAudioFile.
            // you have 64 KB to spare, right?
            UInt32 outputBufferSize = 0x10000;
            void *ioBuf = malloc(outputBufferSize);
            UInt32 sizePerPacket = myPCMFormat.mBytesPerPacket;
            UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;

            // set up output file
            NSString *outputPath = [myDocumentsDirectory() stringByAppendingPathComponent:@"m_export.mp3"];
            NSURL *outputURL = [NSURL fileURLWithPath:outputPath];
            NSLog(@"creating output file %@", outputURL);
            AudioFileID outputFile;
            CheckResult(AudioFileCreateWithURL((__bridge CFURLRef)outputURL, kAudioFileCAFType,
                                               &myPCMFormat, kAudioFileFlags_EraseFile, &outputFile),
                        "AudioFileCreateWithURL failed");

            // start convertin'
            UInt32 outputFilePacketPosition = 0; // in bytes
            while (true) {
                // wrap the destination buffer in an AudioBufferList
                AudioBufferList convertedData;
                convertedData.mNumberBuffers = 1;
                convertedData.mBuffers[0].mNumberChannels = myPCMFormat.mChannelsPerFrame;
                convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
                convertedData.mBuffers[0].mData = ioBuf;

                UInt32 frameCount = packetsPerBuffer;
                // read from the ExtAudioFile
                CheckResult(ExtAudioFileRead(inputFile, &frameCount, &convertedData),
                            "Couldn't read from input file");
                if (frameCount == 0) {
                    printf("done reading from file");
                    break;
                }

                // write the converted data to the output file
                CheckResult(AudioFileWritePackets(outputFile, false, frameCount, NULL,
                                                  outputFilePacketPosition / myPCMFormat.mBytesPerPacket,
                                                  &frameCount, convertedData.mBuffers[0].mData),
                            "Couldn't write packets to file");
                NSLog(@"Converted %ld bytes", outputFilePacketPosition);
                // advance the output file write location
                outputFilePacketPosition += (frameCount * myPCMFormat.mBytesPerPacket);
            }

            // clean up
            ExtAudioFileDispose(inputFile);
            AudioFileClose(outputFile);

            // show size in label
            NSLog(@"checking file at %@", outputPath);
            [self transMitFile:outputPath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:outputPath]) {
                NSError *fileManagerError = nil;
                unsigned long long fileSize = [[[NSFileManager defaultManager] attributesOfItemAtPath:outputPath
                                                                                                error:&fileManagerError] fileSize];
            }
        }

    Any suggestions? Thanks for your great help!
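
    A sketch of one alternative route (not a tested fix): let ExtAudioFile do the encoding by creating the output with an AAC file format and writing PCM into it. It assumes myPCMFormat is set up as real linear PCM (kAudioFormatLinearPCM rather than kAudioFormatMPEGLayer3) and reuses the other names from the question:

        AudioStreamBasicDescription aacFormat = {0};
        aacFormat.mFormatID = kAudioFormatMPEG4AAC;   // the codec fills in the remaining fields
        aacFormat.mSampleRate = 44100;
        aacFormat.mChannelsPerFrame = 2;

        ExtAudioFileRef outFile;
        CheckResult(ExtAudioFileCreateWithURL((__bridge CFURLRef)outputURL,
                                              kAudioFileAAC_ADTSType,   // ADTS .aac container
                                              &aacFormat, NULL,
                                              kAudioFileFlags_EraseFile, &outFile),
                    "ExtAudioFileCreateWithURL failed");

        // The client format is the PCM we read from the input file; the AAC
        // encoding happens inside ExtAudioFileWrite.
        CheckResult(ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_ClientDataFormat,
                                            sizeof(myPCMFormat), &myPCMFormat),
                    "ExtAudioFileSetProperty failed");

        while (true) {
            AudioBufferList buffer;
            buffer.mNumberBuffers = 1;
            buffer.mBuffers[0].mNumberChannels = myPCMFormat.mChannelsPerFrame;
            buffer.mBuffers[0].mDataByteSize = outputBufferSize;
            buffer.mBuffers[0].mData = ioBuf;

            UInt32 frames = packetsPerBuffer;
            CheckResult(ExtAudioFileRead(inputFile, &frames, &buffer), "read failed");
            if (frames == 0) break;

            buffer.mBuffers[0].mDataByteSize = frames * myPCMFormat.mBytesPerFrame;
            CheckResult(ExtAudioFileWrite(outFile, frames, &buffer), "write failed");
        }
        ExtAudioFileDispose(outFile);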

