Search Results

Search found 13860 results on 555 pages for 'core graphics'.


  • SVG Animation in Python? - Any options other than Things! 0.4?

    - by ThantiK
    I recently gave my daughter some time on the computer, and she really likes a program called 'BabySmash!'; I also have another program that came with a keyboard attachment called 'Laugh, Smile & Learn' (from Playskool, I think). I'm not a Python guru by any stretch of the imagination, but I have it in my head that I want to create some sort of program in the spirit of these two, with vector animation and sounds. Thing is, there don't seem to be many options for vector animation in Python. I found a library called Things! 0.4, but it lacks in-depth documentation and seems to be more of an experiment in vector animation than a full-blown solution. Are there any options that I'm not immediately finding? What other animation libraries do you recommend? Even if a library isn't vector-based, ease of use is probably the strongest factor for my use case.
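
    For reference, a minimal sketch of the kind of program I mean (using Pygame, which is raster- rather than vector-based, purely to show the event-driven smash-and-draw loop; the shapes and colors are placeholders):

        import random
        import pygame

        # open a window and react to "smashed" keys by drawing shapes;
        # a real version would animate them and play a sound as well
        pygame.init()
        screen = pygame.display.set_mode((640, 480))
        clock = pygame.time.Clock()
        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == pygame.KEYDOWN:
                    color = tuple(random.randint(0, 255) for _ in range(3))
                    center = (random.randint(0, 640), random.randint(0, 480))
                    pygame.draw.circle(screen, color, center, 40)
            pygame.display.flip()
            clock.tick(30)
        pygame.quit()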


  • How to animate line-drawing in iPhone development?

    - by david
    I have been searching around, but there seems to be no good answer to this simple question, so I am asking again: how do you animate line drawing in iPhone dev? Basically, what I want is something like this:

        @implementation MyUIView

        - (void)triggerLineDrawing:(CGPathRef)path {
            ... // animate line drawing here
        }

    Can it be done?
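
    For illustration, a minimal sketch of one possible approach (animating the strokeEnd of a CAShapeLayer, which requires linking QuartzCore; the stroke color, width, and duration here are arbitrary):

        #import <QuartzCore/QuartzCore.h>

        - (void)triggerLineDrawing:(CGPathRef)path {
            // host the path in a shape layer so the stroke itself is animatable
            CAShapeLayer *shapeLayer = [CAShapeLayer layer];
            shapeLayer.path = path;
            shapeLayer.strokeColor = [UIColor blackColor].CGColor;
            shapeLayer.fillColor = nil;
            shapeLayer.lineWidth = 2.0;
            [self.layer addSublayer:shapeLayer];

            // animate how much of the stroke is rendered, from none to all
            CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
            anim.fromValue = [NSNumber numberWithFloat:0.0f];
            anim.toValue = [NSNumber numberWithFloat:1.0f];
            anim.duration = 1.0;
            [shapeLayer addAnimation:anim forKey:@"drawLine"];
        }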


  • Quartz 2D animating text?

    - by coure06
    I have to animate text around a circle, and the text will also scale up/down. What's the best approach to accomplish this? (I am using Quartz 2D.) My approach is:

    - Calculate a point using the sin and cos functions.
    - Move the pen there and draw the text with alpha and size.
    - Clear the screen.
    - Calculate the next point using sin and cos.
    - Move the pen there and draw the text with alpha and size.
    - Clear the screen.

    ...and so on. Any better way?
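
    For illustration, a minimal sketch of one frame under that approach (assuming a UIView subclass with hypothetical angle and textScale properties advanced by a timer; calling setNeedsDisplay triggers the redraw, so no manual screen clearing is needed):

        - (void)drawRect:(CGRect)rect {
            // position the text on a circle around the view's center
            CGPoint center = CGPointMake(CGRectGetMidX(rect), CGRectGetMidY(rect));
            CGFloat radius = 100.0;
            CGPoint p = CGPointMake(center.x + radius * cos(self.angle),
                                    center.y + radius * sin(self.angle));
            // scale the font rather than redrawing fixed-size text
            UIFont *font = [UIFont systemFontOfSize:17.0 * self.textScale];
            [@"Hello" drawAtPoint:p withFont:font];
        }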


  • Blur CALayer's Superlayer

    - by eaigner
    Hi. I've got a CALayer with a sublayer in it. What I want to achieve is a blur of the superlayer (the area under the sublayer), just like the standard sheets do it. I've tried to set a .compositingFilter on the sublayer, but this doesn't seem to work. Any ideas how to solve this?
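
    For what it's worth, a minimal sketch of one thing to try (an assumption on my part: on Mac OS X, backgroundFilters rather than compositingFilter applies a Core Image filter to whatever renders beneath a layer; the radius value is arbitrary):

        CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
        [blur setDefaults];
        [blur setValue:[NSNumber numberWithFloat:4.0f] forKey:@"inputRadius"];
        // blurs the superlayer content under the sublayer's bounds
        sublayer.backgroundFilters = [NSArray arrayWithObject:blur];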


  • How to -accurately- measure size in pixels of text being drawn on a canvas by drawTextOnPath()

    - by Nick
    I'm using drawTextOnPath() to display some text on a Canvas, and I need to know the dimensions of the text being drawn. I know this is not feasible for paths composed of multiple segments, curves, etc., but my path is a single segment which is perfectly horizontal. I am using Paint.getTextBounds() to get a Rect with the dimensions of the text I want to draw, and I use this Rect to draw a bounding box around the text when I draw it at an arbitrary location. Here's some simplified code that reflects what I am currently doing:

        // uses android.graphics.{Canvas, Paint, Path, Rect}
        // to keep this example simple, always at origin (0,0)
        public void drawBoundedText(Canvas canvas, String text, Paint paint) {
            Rect textDims = new Rect();
            paint.getTextBounds(text, 0, text.length(), textDims);
            float hOffset = 0;
            float vOffset = paint.getFontMetrics().descent; // vertically centers text
            float startX = textDims.left;   // 0
            float startY = textDims.bottom;
            float endX = textDims.right;
            float endY = textDims.bottom;

            Path path = new Path();
            path.moveTo(startX, startY);
            path.lineTo(endX, endY);
            path.close();

            // draw the text
            canvas.drawTextOnPath(text, path, hOffset, vOffset, paint);
            // draw bounding box
            canvas.drawRect(textDims, paint);
        }

    The results are close but not perfect. If I replace the second-to-last line with:

        canvas.drawText(text, startX, startY - vOffset, paint);

    then it works perfectly. Usually there is a gap of 1-3 pixels on the right and bottom edges, and the error seems to vary with font size as well. Any ideas? It's possible I'm doing everything right and the problem is with drawTextOnPath(): the text quality very visibly degrades when drawing along paths, even if the path is horizontal, likely because of the interpolation algorithm or whatever it's using behind the scenes. I wouldn't be surprised to find out that the size jitter is also coming from there.


  • Which OpenGL functions are not GPU-accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki):

        glTranslate, glRotate, glScale
        Are these hardware accelerated?
        No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU. All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho. This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, upload your matrix to the shader.

    For a very, very long time I thought most OpenGL functions used the GPU to do computation. I'm not sure if this is a common misconception, but after some thought it makes sense: old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: which OpenGL function calls don't use the GPU? I believe knowing the answer would help me become a better OpenGL programmer. Please do share some of your insights.
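
    For context, a minimal sketch of what the wiki's advice amounts to in GL 3.0+ style (the uniform name uModelViewProj and the CPU-side mvp array are my own placeholders):

        /* mvp: 16 floats, column-major, computed by your own math library */
        GLint loc = glGetUniformLocation(program, "uModelViewProj");
        glUseProgram(program);
        glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);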


  • Generating a beveled edge for a 2D polygon

    - by Metaphile
    I'm trying to programmatically generate beveled edges for geometric polygons. For example, given an array of 4 vertices defining a square, I want to generate something like this. But computing the vertices of the inner shape is baffling me. Simply creating a copy of the original shape and then scaling it down will not produce the desired result most of the time. My algorithm so far involves analyzing adjacent edges (triples of vertices; e.g., the bottom-left, top-left, and top-right vertices of a square). From there, I need to find the angle between them, and then create a vertex somewhere along that angle, depending on how deep I want the bevel to be. And because I don't have much of a math background, that's where I'm stuck. How do I find that center angle? Or is there a much simpler way of attacking this problem?
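
    For illustration, a minimal sketch of the bisector math (assuming convex corners: the inset vertex sits along the corner's angle bisector, at the distance that shifts each adjacent edge inward by the bevel depth):

        import math

        def inset_vertex(p_prev, p, p_next, depth):
            # unit vectors from the corner toward each neighbor
            ax, ay = p_prev[0] - p[0], p_prev[1] - p[1]
            bx, by = p_next[0] - p[0], p_next[1] - p[1]
            al, bl = math.hypot(ax, ay), math.hypot(bx, by)
            ax, ay, bx, by = ax / al, ay / al, bx / bl, by / bl
            # the angle bisector points into a convex corner
            mx, my = ax + bx, ay + by
            ml = math.hypot(mx, my)
            mx, my = mx / ml, my / ml
            # moving t along the bisector shifts each edge inward by
            # t * sin(half the interior angle), so solve for t
            sin_half = abs(ax * my - ay * mx)
            t = depth / sin_half
            return (p[0] + mx * t, p[1] + my * t)

    For the top-left corner of the unit square, inset_vertex((0, 0), (0, 1), (1, 1), 0.1) gives (0.1, 0.9), i.e., both adjacent edges move inward by 0.1. Note this degenerates (sin_half approaches 0) for nearly straight corners.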


  • Repeating animations using the Stop Selector

    - by Tiago
    I'm trying to repeat an animation until a certain condition is met. Something like this:

        - (void)animateUpdate {
            if (inUpdate) {
                [UIView beginAnimations:nil context:NULL];
                [UIView setAnimationDuration:2.0];
                [UIView setAnimationDelegate:self];
                [UIView setAnimationDidStopSelector:@selector(animateUpdate)];
                button.transform = CGAffineTransformMakeRotation(M_PI);
                [UIView commitAnimations];
            }
        }

    This will run the first time, but it won't repeat; the selector will just be called until the application crashes. How should I do this? Thanks.
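
    For illustration, a minimal sketch of one likely fix (my reading of the crash: CGAffineTransformMakeRotation(M_PI) sets an absolute transform, so every pass after the first has nothing to animate and the did-stop selector fires again immediately; the selector should also match the three-argument did-stop signature):

        - (void)animateUpdate {
            if (inUpdate) {
                [UIView beginAnimations:nil context:NULL];
                [UIView setAnimationDuration:2.0];
                [UIView setAnimationDelegate:self];
                [UIView setAnimationDidStopSelector:@selector(animationDidStop:finished:context:)];
                // rotate relative to the current transform so each pass
                // actually changes the animated property
                button.transform = CGAffineTransformRotate(button.transform, M_PI);
                [UIView commitAnimations];
            }
        }

        - (void)animationDidStop:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context {
            [self animateUpdate]; // schedule the next pass when this one ends
        }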


  • Can Xcode draw the call graph of a program?

    - by Werner
    Hi, I am new to Mac OS X, and I wonder whether Xcode can generate, for given C++ source code, the call graph of the program in a visual way. I also wonder whether, after a run, it can print the percentage of time spent in each function. If so, I would really appreciate links to tutorials or info; after googling I did not find anything relevant. Thanks


  • Sober, eye-catching colors for a job site

    - by Knowledge Craving
    I will be creating a website related to jobs, which has almost all the features of Monster and Naukri. My client just asked me to use sober yet eye-catching colors for the website. Can anybody please suggest which sober yet eye-catching colors suit this type of website?


  • Returning a user's lat/lng as a string on iPhone

    - by Joshmattvander
    Is there a way to return the user's location as a string from a model? I have a model whose job is to download some JSON data from a web service. When sending my request I need to add ?lat=LAT_HERE&lng=LNG_HERE to the end of the string. I have seen tons of examples using the map or constantly updating a label, but I can't find out how to explicitly return the lat and lng values. I'm only 2 days into iPhone dev, so go easy on me :)
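
    For illustration, a minimal sketch of the asynchronous shape this usually takes (assuming the model owns a CLLocationManager and is its delegate, which requires CoreLocation; requestJSONWithQuery: is a hypothetical helper on the model, since the query can only be built after the first fix arrives):

        - (void)locationManager:(CLLocationManager *)manager
            didUpdateToLocation:(CLLocation *)newLocation
                   fromLocation:(CLLocation *)oldLocation {
            // build the query string only once a fix is available
            NSString *query = [NSString stringWithFormat:@"?lat=%f&lng=%f",
                               newLocation.coordinate.latitude,
                               newLocation.coordinate.longitude];
            [self requestJSONWithQuery:query]; // hypothetical helper
        }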


  • animating adding/removing layers on iPhone

    - by magesteve
    On the iPhone, when you add a sublayer to a visible view's layer, using either -addSublayer: or -removeFromSuperlayer, shouldn't that sublayer appear or disappear in an animated manner (i.e., fade in or fade out gradually)? My program animates using layers (not views). When I change a property of a layer like position or image content, the change does animate (the layer streaks around its parent layer, the layer fades from the old image to the new image), so I obviously have the layers and view set up correctly. However, when I add or remove a sublayer, the change occurs instantly; there is no animation. Reading the references, it says that if the layer is visible, the sublayer should animate when added or removed. What am I doing wrong? Has anyone had a similar problem and been able to find a solution? Thank you, Steve Sheets
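
    For what it's worth, a minimal sketch of one workaround (explicitly fading the sublayer in when it is added, rather than relying on an implicit add/remove animation):

        CABasicAnimation *fadeIn = [CABasicAnimation animationWithKeyPath:@"opacity"];
        fadeIn.fromValue = [NSNumber numberWithFloat:0.0f];
        fadeIn.toValue = [NSNumber numberWithFloat:1.0f];
        fadeIn.duration = 0.3;
        [parentLayer addSublayer:sublayer];
        [sublayer addAnimation:fadeIn forKey:@"fadeIn"];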


  • Strange behavior: save video recorded within app?

    - by Josue Espinosa
    I allow the user to record a video within my app, then later play it again. When a user records a video, I save the URL of the video, then play the video later from the saved URL. I save the video both in the Photos app and in my app. If I delete the video within the Photos app, it still plays. After about 7 days, the video gets deleted. I think I am saving in my tmp directory, but I'm not sure. Here is what I am doing:

        - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
            NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
            [self dismissViewControllerAnimated:YES completion:nil];
            // Handle a movie capture
            if (CFStringCompare((__bridge_retained CFStringRef)mediaType, kUTTypeMovie, 0) == kCFCompareEqualTo) {
                NSString *moviePath = [NSString stringWithFormat:@"%@", [[info objectForKey:UIImagePickerControllerMediaURL] path]];
                NSURL *videoURL = [info objectForKey:UIImagePickerControllerMediaURL];
                NSData *videoData = [NSData dataWithContentsOfURL:videoURL];
                _justRecordedVideoURL = [NSString stringWithFormat:@"%@", videoURL];

                AppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
                _managedObjectContext = [appDelegate managedObjectContext];
                Video *video = [NSEntityDescription insertNewObjectForEntityForName:@"Video" inManagedObjectContext:_managedObjectContext];
                [video setVideoData:videoData];
                [video setVideoURL:[NSString stringWithFormat:@"%@", videoURL]];

                NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
                dateFormatter.dateStyle = NSDateFormatterLongStyle;
                [dateFormatter setDateStyle:NSDateFormatterLongStyle];
                NSDate *date = [dateFormatter dateFromString:[dateFormatter stringFromDate:[NSDate date]]];
                NSString *dateAdded = [dateFormatter stringFromDate:date];
                [video setDate_recorded:dateAdded];
                if (_currentAthlete != nil) {
                    [video setWhosVideo:_currentAthlete];
                }
                NSError *error = nil;
                if (![_managedObjectContext save:&error]) {
                    // handle dat error
                }

                NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
                NSString *documentsDirectory = [paths objectAtIndex:0];
                NSString *tempPath = [documentsDirectory stringByAppendingFormat:@"/vid1.mp4"];
                BOOL success = [videoData writeToFile:tempPath atomically:NO];
                if (success == FALSE) {
                    NSLog(@"Video was not successfully saved.");
                }
                if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(moviePath)) {
                    UISaveVideoAtPathToSavedPhotosAlbum(moviePath, self, @selector(video:didFinishSavingWithError:contextInfo:), nil);
                }
            }
        }

    Am I saving it incorrectly? When I go to play the video, it works fine; after a couple of days the video will play without audio, then eventually it will be gone. Any ideas why?
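
    A minimal sketch of the likely fix (my reading: the persisted URL points into the picker's tmp/ directory, which the system purges; rebuilding the path to the copy already written into Documents keeps it valid across launches):

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *moviePath = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"vid1.mp4"];
        NSURL *movieURL = [NSURL fileURLWithPath:moviePath]; // play from here, not the tmp URL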


  • iPhone Development - CoreData runtime error

    - by Mustafa
    I'm facing a strange CoreData issue. Here's the log:

        2010-04-07 15:59:36.913 MyProject[263:207] <MyEntity: 0x180370> (entity: MyEntity; id: 0x17e890 <x-coredata://0F55C533-41BD-4F09-9CCA-0CB304CAB065/MyEntity/p380>; data: <fault>)
        2010-04-07 15:59:36.918 MyProject[263:207] *** Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'The NSManagedObject with ID:0x17e890 <x-coredata://0F55C533-41BD-4F09-9CCA-0CB304CAB065/MyEntity/p380> has been invalidated.'

    I have a hierarchy of UITableViewControllers that use NSFetchedResultsController to populate the tables, and when a particular row is selected, the detail view is shown:

        UITableView (MyMainEntity)
          UITableView (MyEntity)
            MyEntity detail view

    Both the MyMainEntity table and the MyEntity table use NSFetchedResultsController to show the records. Sometimes it crashes when I'm scrolling the table view, and sometimes it crashes when I try to open the detail view; I can navigate to the MyEntity detail view multiple times before the application crashes. What does this error mean, and how can I fix it?


  • fastest engine to convert PDF into PNG

    - by skyde
    I would like to know which of the open-source PDF engines can convert a PDF into an image the fastest. I don't care about the quality of the result (antialiasing, ...); for my project it needs to be very, very fast. I would probably need to build my own, but I don't want to start from scratch.


  • BlackBerry - Convert EncodedImage to byte []

    - by user324884
    I am using the code below, where I don't want to use JPEGEncodedImage.encode because it increases the size. So I need to convert directly from EncodedImage to a byte array.

        FileConnection fc = (FileConnection) Connector.open(name);
        is = fc.openInputStream();
        byte[] ReimgData = IOUtilities.streamToBytes(is);
        EncodedImage encode_image = EncodedImage.createEncodedImage(ReimgData, 0, (int) fc.fileSize());
        encode_image = sizeImage(encode_image, (int) maxWidth, (int) maxHeight);
        JPEGEncodedImage encoder = JPEGEncodedImage.encode(encode_image.getBitmap(), 50);
        ReimgData = encoder.getData();
        is.read(ReimgData);
        HttpMultipartRequest(content[0], content[1], content[2],
                             params, "image", txtfile.getText(), "image/jpeg", ReimgData);
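
    A minimal sketch of the direct route (my assumption: EncodedImage already wraps the bytes it was created from, and getData() returns them without a JPEG round trip; note that a setScale-style resize leaves that underlying data unchanged):

        // the original encoded bytes, no re-encoding involved
        byte[] raw = encode_image.getData();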


  • When using Direct3D, how much math is being done on the CPU?

    - by zirgen
    Context: I'm just starting out. I'm not even touching the Direct3D 11 API, and instead looking at understanding the pipeline, etc. From looking at documentation and information floating around the web, it seems like some calculations are being handled by the application. That is, instead of simply presenting matrices to the GPU to multiply, the calculations are being done by a math library that operates on the CPU. I don't have any particular resources to point to, although I guess I can point to the XNA Math Library or the samples shipped in the February DX SDK. When you see code like mViewProj = mView * mProj;, that projection is being calculated on the CPU. Or am I wrong? If you were writing a program where you can have 10 cubes on the screen, and you can move or rotate cubes as well as the viewpoint, what calculations would you do on the CPU? I think I would store the geometry for a single cube, and then transform matrices representing the actual instances. Then it seems I would use the XNA math library, or another of my choosing, to transform each cube in model space, get the coordinates in world space, and then push the information to the GPU. That's quite a bit of calculation on the CPU. Am I wrong? Am I reaching conclusions based on too little information and understanding? What terms should I Google for, if the answer is STFW? Or, if I am right, why aren't these calculations being pushed to the GPU as well?
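
    For illustration, a minimal sketch of the per-cube CPU work described above (XNA Math style, as in the DX SDK samples; angle, x, y, z, view, and proj stand in for per-instance and camera state):

        #include <xnamath.h>

        // compose each cube's transform on the CPU; only the final
        // matrix reaches the GPU, as a vertex-shader constant
        XMMATRIX world = XMMatrixRotationY(angle) * XMMatrixTranslation(x, y, z);
        XMMATRIX wvp = world * view * proj;
        // ...copy wvp into the constant buffer the vertex shader reads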

