Search Results

Search found 79 results on 4 pages for 'colorspace'.

Page 3/4 | < Previous Page | 1 2 3 4  | Next Page >

  • Hard crash when drawing content for CALayer using quartz

    - by Lukasz
    I am trying to figure out why iOS crashes my application in such a harsh way (no crash logs, immediate shutdown with a black screen of death and a spinner shown for a while). It happens when I render content for a CALayer using Quartz. I suspected a memory issue (it happens only when testing on the device), but the memory logs, as well as the Instruments allocation logs, look quite OK. Let me paste in the fatal function: - (void)renderTiles{ if (rendering) { //NSLog(@"====== RENDERING TILES SKIP ======="); return; } rendering = YES; CGRect b = tileLayer.bounds; CGSize s = b.size; CGFloat imageScale = [[UIScreen mainScreen] scale]; s.height *= imageScale; s.width *= imageScale; dispatch_async(queue, ^{ NSLog(@""); NSLog(@"====== RENDERING TILES START ======="); NSLog(@"1. Before creating context"); report_memory(); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); NSLog(@"2. After creating color space"); report_memory(); NSLog(@"3. About to create context with size: %@", NSStringFromCGSize(s)); CGContextRef ctx = CGBitmapContextCreate(NULL, s.width, s.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast); NSLog(@"4. After creating context"); report_memory(); CGAffineTransform flipTransform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, s.height); CGContextConcatCTM(ctx, flipTransform); CGRect tileRect = CGRectMake(0, 0, tileImageScaledSize.width, tileImageScaledSize.height); CGContextDrawTiledImage(ctx, tileRect, tileCGImageScaled); NSLog(@"5. Before creating cgimage from context"); report_memory(); CGImageRef cgImage = CGBitmapContextCreateImage(ctx); NSLog(@"6. After creating cgimage from context"); report_memory(); dispatch_sync(dispatch_get_main_queue(), ^{ tileLayer.contents = (id)cgImage; }); NSLog(@"7. After asgning tile layer contents = cgimage"); report_memory(); CGColorSpaceRelease(colorSpace); CGContextRelease(ctx); CGImageRelease(cgImage); NSLog(@"8. After releasing image and context context"); report_memory(); NSLog(@"====== RENDERING TILES END ======="); NSLog(@""); rendering = NO; }); } Here are the logs: ====== RENDERING TILES START ======= 1. Before creating context Memory in use (in bytes): 28340224 / 519442432 (5.5%) 2. After creating color space Memory in use (in bytes): 28340224 / 519442432 (5.5%) 3. About to create context with size: {6324, 5208} 4. After creating context Memory in use (in bytes): 28344320 / 651268096 (4.4%) 5. Before creating cgimage from context Memory in use (in bytes): 153649152 / 651333632 (23.6%) 6. After creating cgimage from context Memory in use (in bytes): 153649152 / 783159296 (19.6%) 7. After asgning tile layer contents = cgimage Memory in use (in bytes): 153653248 / 783253504 (19.6%) 8. After releasing image and context context Memory in use (in bytes): 21688320 / 651288576 (3.3%) ====== RENDERING TILES END ======= The application crashes in random places, sometimes when reaching the end of the function and sometimes at a random step. In which direction should I look for a solution? Is it possible that GCD is causing the problem? Or maybe the context size, or some underlying Core Animation references?
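
    A quick sanity check on the numbers in the question, sketched in Python. The 4-bytes-per-pixel figure is assumed from the CGBitmapContextCreate parameters (RGBA, 8 bits per component) and row padding is ignored; the result roughly matches the jump between log points 4 and 5, which suggests the sheer size of the 6324x5208 context (and its copies) is worth investigating before blaming GCD.

      # Back-of-the-envelope size of the bitmap context created above.
      width, height, bytes_per_pixel = 6324, 5208, 4   # assumed: RGBA, 8 bits/component
      context_bytes = width * height * bytes_per_pixel
      print(f"bitmap context: {context_bytes / (1024 * 1024):.0f} MB")            # ~126 MB

      # CGBitmapContextCreateImage may copy the backing store, and assigning it to
      # layer.contents keeps yet another reference alive, so peak usage can be a
      # multiple of this figure on a memory-constrained device.
      print(f"context + image copy: {2 * context_bytes / (1024 * 1024):.0f} MB")   # ~251 MB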

    Read the article

  • Fixing color in scatter plots in matplotlib

    - by ajhall
    Hi guys, I'm going to have to come back and add some examples if you need them, which you might. But, here's the skinny: I'm plotting scatter plots of lab data for my research. I need to be able to visually compare the scatter plots from one plot to the next, so I want to fix the color range on the scatter plots and add a colorbar to each plot (which will be the same in each figure). Essentially, I'm fixing all aspects of the axes and colorspace etc. so that the plots are directly comparable by eye. For the life of me, I can't seem to get my scatter() command to properly set the color limits in the (default) colorspace... i.e., I figure out my total data's min and total data's max, then apply them via vmin and vmax for the subset of data, and the colors still do not come out properly in both plots. This must come up here and there; I can't be the only one who wants to compare various subsets of data amongst plots... So, how do you fix the colors so that each datum keeps its color between plots and doesn't get remapped to a different color due to the change in max/min of the subset vs. the whole set? I greatly appreciate all your thoughts!!! A mountain-dew and fiery-hot cheetos to all! -Allen
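
    A minimal sketch of the fix described in the question: compute vmin/vmax once from the entire dataset, then pass the same limits (and the same colormap) to every scatter call. The data and names below are made up for illustration.

      import numpy as np
      import matplotlib.pyplot as plt

      # Stand-in for the full lab dataset; c is the quantity mapped to color.
      x, y = np.random.rand(200), np.random.rand(200)
      c = np.random.rand(200) * 40 + 10
      vmin, vmax = c.min(), c.max()          # fixed once, from ALL the data

      for name, idx in [("subset A", slice(0, 100)), ("subset B", slice(100, 200))]:
          fig, ax = plt.subplots()
          sc = ax.scatter(x[idx], y[idx], c=c[idx], vmin=vmin, vmax=vmax, cmap="jet")
          fig.colorbar(sc, ax=ax)            # identical color scale in every figure
          ax.set_title(name)
      plt.show()

    If the limits are set but colors still differ between figures, it is worth checking that every plot uses the same cmap and that c is passed as a plain array of values rather than a precomputed list of colors.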

    Read the article

  • Trouble when changing pixel data with alpha on png on iphone --okay on simulator

    - by Ted
    I'm trying to change the color of the pixels (lighten or darken) without changing the value of the alpha channel, using CGDataProviderCopyData. I leave every 4th data byte untouched. It works fine on the iPhone simulator; however, on the real thing the alpha goes white as I increase the values of the other pixels. I've tried changing just the first byte, or the second, or the third. Does anybody have any idea what is going on? The basic code is borrowed from Jorge. I like this simple approach --I'm new to this. But I want to make it work with PNG images with some transparency. Here is most of the code by Jorge: CFDataRef CopyImagePixels(CGImageRef inImage){ return CGDataProviderCopyData(CGImageGetDataProvider(inImage)); } CGImageRef img=originalImage.CGImage; CFDataRef dataref=CopyImagePixels(img); UInt8 *data=(UInt8 *)CFDataGetBytePtr(dataref); int length=CFDataGetLength(dataref); for(int index=0;index<length;index+=4){ for(int i=0;i<3;i++){ if(data[index+i]+value>255){ data[index+i]=255; }else{ data[index+i]+=value; } } } size_t width=CGImageGetWidth(img); size_t height=CGImageGetHeight(img); size_t bitsPerComponent=CGImageGetBitsPerComponent(img); size_t bitsPerPixel=CGImageGetBitsPerPixel(img); size_t bytesPerRow=CGImageGetBytesPerRow(img); CGColorSpaceRef colorspace=CGImageGetColorSpace(img); CGBitmapInfo bitmapInfo=CGImageGetBitmapInfo(img); CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(img); NSLog(@"bitmapinfo: %d",bitmapInfo); CFDataRef newData=CFDataCreate(NULL,data,length); CGDataProviderRef provider=CGDataProviderCreateWithCFData(newData); CGImageRef newImg=CGImageCreate(width,height,bitsPerComponent,bitsPerPixel,bytesPerRow,colorspace,bitmapInfo,provider,NULL,true,kCGRenderingIntentDefault); [iv setImage:[UIImage imageWithCGImage:newImg]]; CGImageRelease(newImg); CGDataProviderRelease(provider);
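
    One thing worth checking, illustrated with a small Python sketch below (an illustration of the premultiplied-alpha effect, not a confirmed diagnosis): on the device, decoded PNG data is often stored premultiplied, where each color byte is already scaled by alpha and can never validly exceed it. Adding a constant to the color bytes without touching alpha breaks that invariant, and mostly transparent pixels blow out toward white when the image is composited.

      # Effect of brightening premultiplied data without touching alpha.
      def unpremultiply(channel: int, alpha: int) -> int:
          """Recover the nominal color value from a premultiplied byte."""
          return 0 if alpha == 0 else min(255, round(channel * 255 / alpha))

      alpha, stored = 20, 10                   # a mostly transparent pixel
      brightened = min(255, stored + 60)       # what the loop above does to it
      print(unpremultiply(stored, alpha))      # 128 -> the intended color
      print(unpremultiply(brightened, alpha))  # 255 -> washed out to white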

    Read the article

  • How can I render an in-memory UIViewController's view Landscape?

    - by Aaron
    I'm trying to render an in-memory (but not yet in the hierarchy) UIViewController's view into an in-memory image buffer so I can do some interesting transition animations. However, when I render the UIViewController's view into that buffer, it always renders as though the controller is in portrait orientation, no matter the orientation of the rest of the app. How do I clue this controller in? My code in RootViewController looks like this: MyUIViewController* controller = [[MyUIViewController alloc] init]; int width = self.view.frame.size.width; int height = self.view.frame.size.height; int bitmapBytesPerRow = width * 4; unsigned char *offscreenData = calloc(bitmapBytesPerRow * height, sizeof(unsigned char)); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); CGContextRef offscreenContext = CGBitmapContextCreate(offscreenData, width, height, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast); CGContextTranslateCTM(offscreenContext, 0.0f, height); CGContextScaleCTM(offscreenContext, 1.0f, -1.0f); [(CALayer*)[controller.view layer] renderInContext:offscreenContext]; At that point, the offscreen memory buffer's contents are portrait-oriented, even when the window is in landscape orientation. Ideas?

    Read the article

  • How to clip and fill a canvas using an alpha mask

    - by Jay Koutavas
    I have some .png icons that are alpha masks. I need to render them as a drawable image using the Android SDK. On the iPhone, I use the following to get this result, converting the "image" alpha mask to the 'imageMasked' image using black as a fill: CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); CGContextRef context = CGBitmapContextCreate(NULL, thumbWidth, thumbHeight, 8, 4*thumbWidth, colorSpace, kCGImageAlphaPremultipliedFirst); CGRect frame = CGRectMake(0,0,thumbWidth,thumbHeight); CGContextClipToMask(context, frame, [image CGImage]); CGContextFillRect(context, frame); CGImageRef imageMasked = CGBitmapContextCreateImage(context); CGContextRelease(context); How do I accomplish the above in the Android SDK? I've started to write the following: Drawable image = myPngImage; final int width = image.getMinimumWidth(); final int height = image.getMinimumHeight(); Bitmap imageMasked = Bitmap.createBitmap(width, height, Config.ARGB_8888); Canvas canvas = new Canvas(imageMasked); image.draw(canvas); ??? I'm not finding how to do the clipping on imageMasked using image.

    Read the article

  • Would this violate any copyright issues?

    - by Farhad
    I am currently publishing a paper on skin detection. However, I need to find the appropriate histogram bin size for each colorspace. I recently came upon a paper that published what it found to be the ideal bin size. The paper can be found at: http://www.inf.pucrs.br/~pinho/CG/Trabalhos/DetectaPele/Artigos/OPTIMUM%20COLOR%20SPACES%20FOR%20SKIN%20DETECTION.pdf. I am specifically talking about Table 1. If I cite the source, would it be okay for me to use data from the table? Note that I cannot contact the author of the paper.

    Read the article

  • Nearest color algorithm using Hex Triplet

    - by Lijo
    The following page lists colors with names: http://en.wikipedia.org/wiki/List_of_colors. For example, the hex triplet #5D8AA8 is "Air Force Blue". This information will be stored in a database table (tbl_Color (HexTriplet,ColorName)) in my system. Suppose I created a color with the hex triplet #5D8AA7. I need to get the nearest color available in the tbl_Color table. The expected answer is "#5D8AA8 - Air Force Blue", because #5D8AA8 is the nearest color to #5D8AA7. Is there an algorithm for finding the nearest color? How would I write it using C# / Java? REFERENCE http://stackoverflow.com/questions/5440051/algorithm-for-parsing-hex-into-color-family http://stackoverflow.com/questions/6130621/algorithm-for-finding-the-color-between-two-others-in-the-colorspace-of-painte Suggested formula (suggested by @user281377): choose the color where the sum of the squared differences is minimal: Square(Red(source)-Red(target)) + Square(Green(source)-Green(target)) + Square(Blue(source)-Blue(target))
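
    A small sketch of the suggested squared-difference search. It is written in Python for brevity (the question asks for C# or Java, and the translation is mechanical), and the table contents are placeholders standing in for rows of tbl_Color.

      # Nearest color by minimal sum of squared channel differences.
      colors = {                      # placeholder rows for tbl_Color
          "#5D8AA8": "Air Force Blue",
          "#E32636": "Alizarin Crimson",
          "#FFBF00": "Amber",
      }

      def to_rgb(hex_triplet):
          h = hex_triplet.lstrip("#")
          return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

      def nearest(hex_triplet):
          src = to_rgb(hex_triplet)
          return min(colors.items(),
                     key=lambda kv: sum((s - t) ** 2 for s, t in zip(src, to_rgb(kv[0]))))

      print(nearest("#5D8AA7"))       # ('#5D8AA8', 'Air Force Blue')

    Plain RGB distance is not perceptually uniform; if the matches look off, converting both colors to a space such as Lab before taking the squared distance usually gives results closer to what the eye expects.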

    Read the article

  • How can I get image data from QTKit without color or gamma correction in Snow Leopard?

    - by Nick Haddad
    Since Snow Leopard, QTKit is now returning color-corrected image data from functions like QTMovie's frameImageAtTime:withAttributes:error:. Given an uncompressed AVI file, the same image data is displayed with larger pixel values in Snow Leopard vs. Leopard. Currently I'm using frameImageAtTime to get an NSImage, then asking for the tiffRepresentation of that image. After doing this, pixel values are slightly higher in Snow Leopard. For example, a file with the following pixel value in Leopard: [0 180 0] now has a pixel value like: [0 192 0]. Is there any way to ask a QTMovie for video frames that are not color corrected? Should I be asking for a CGImageRef, CIImage, or CVPixelBufferRef instead? Is there a way to disable color correction altogether prior to reading in the video files? I've attempted to work around this issue by drawing into an NSBitmapImageRep with NSCalibratedRGBColorSpace, but that only gets me part of the way there: // Create a movie NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys : nsFileName, QTMovieFileNameAttribute, [NSNumber numberWithBool:NO], QTMovieOpenAsyncOKAttribute, [NSNumber numberWithBool:NO], QTMovieLoopsAttribute, [NSNumber numberWithBool:NO], QTMovieLoopsBackAndForthAttribute, (id)nil]; _theMovie = [[QTMovie alloc] initWithAttributes:dict error:&error]; // .... NSMutableDictionary *imageAttributes = [NSMutableDictionary dictionary]; [imageAttributes setObject:QTMovieFrameImageTypeNSImage forKey:QTMovieFrameImageType]; [imageAttributes setObject:[NSArray arrayWithObject:@"NSBitmapImageRep"] forKey: QTMovieFrameImageRepresentationsType]; [imageAttributes setObject:[NSNumber numberWithBool:YES] forKey:QTMovieFrameImageHighQuality]; NSError* err = nil; NSImage* image = (NSImage*)[_theMovie frameImageAtTime:frameTime withAttributes:imageAttributes error:&err]; // copy NSImage into an NSBitmapImageRep (Objective-C) NSBitmapImageRep* bitmap = [[image representations] objectAtIndex:0]; // Draw into a colorspace we know about NSBitmapImageRep *bitmapWhoseFormatIKnow = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide:getWidth() pixelsHigh:getHeight() bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSCalibratedRGBColorSpace bitmapFormat:0 bytesPerRow:(getWidth() * 4) bitsPerPixel:32]; [NSGraphicsContext saveGraphicsState]; [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapWhoseFormatIKnow]]; [bitmap draw]; [NSGraphicsContext restoreGraphicsState]; This does convert back to a 'non color corrected' colorspace, but the color values are NOT exactly the same as what is stored in the uncompressed AVI files we are testing with. Also, this is much less efficient because it is converting from RGB to "Device RGB" and back to RGB. Also, I am working in a 64-bit application, so dropping down to the QuickTime C API is not an option. Thanks for your help.

    Read the article

  • How to generate spectrum color palettes

    - by ddimitrov
    Is there an easy way to convert between color models in Java (RGB, HSV and Lab)? Assuming the RGB color model: how do I calculate a black-body spectrum color palette? I want to use it for a heatmap chart. How about a single-wavelength spectrum? Edit: I found that the ColorSpace class can be used for conversions between RGB/CIE and many other color models.

    Read the article

  • Changing RGB color image to Grayscale image using Objective C

    - by user567167
    I was developing an application that changes a color image to a grayscale image. However, somehow the picture comes out wrong. I don't know what is wrong with the code; maybe the parameters I put in are wrong. Please help. UIImage *c = [UIImage imageNamed:@"downRed.png"]; CGImageRef cRef = CGImageRetain(c.CGImage); NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(cRef)); size_t w = CGImageGetWidth(cRef); size_t h = CGImageGetHeight(cRef); unsigned char* pixelBytes = (unsigned char *)[pixelData bytes]; unsigned char* greyPixelData = (unsigned char*) malloc(w*h); for (int y = 0; y < h; y++) { for(int x = 0; x < w; x++){ int iter = 4*(w*y+x); int red = pixelBytes[iter]; int green = pixelBytes[iter+1]; int blue = pixelBytes[iter+2]; greyPixelData[w*y+x] = (unsigned char)(red*0.3 + green*0.59+ blue*0.11); int value = greyPixelData[w*y+x]; } } CFDataRef imgData = CFDataCreate(NULL, greyPixelData, w*h); CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData); size_t width = CGImageGetWidth(cRef); size_t height = CGImageGetHeight(cRef); size_t bitsPerComponent = 8; size_t bitsPerPixel = 8; size_t bytesPerRow = CGImageGetWidth(cRef); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); CGBitmapInfo info = kCGImageAlphaNone; CGFloat *decode = NULL; BOOL shouldInteroplate = NO; CGColorRenderingIntent intent = kCGRenderingIntentDefault; CGDataProviderRelease(imgDataProvider); CGImageRef throughCGImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, info, imgDataProvider, decode, shouldInteroplate, intent); UIImage* newImage = [UIImage imageWithCGImage:throughCGImage]; CGImageRelease(throughCGImage); newImageView.image = newImage;

    Read the article

  • Convert a gray PNG with alpha to a 1-bit black rectangle with 8-bit alpha

    - by jcayzac
    I use a tool to render LaTeX equations as PNG. The resulting images are in RGBA8888 format. I would like to extract the luminance (grayscale from RGB channels, multiplied by the A channel) as my new alpha channel, set the picture fully black, and save the result in Gray1Alpha8 (G1A8) format. So far I've only managed to get G1A4 or G8A8 but not G1A8. Also, the resulting picture looks like it's not multiplied correctly… convert original.png \ \( -clone 0 -alpha extract \) \ \( -clone 0 -clone 1 -compose multiply -composite \) \ -delete 0 +swap -alpha off -compose copy_opacity -composite -colorspace Gray -depth 4 result.png What am I missing?

    Read the article

  • CoreGraphics on iPhone. What is the proper way to load a PNG file with pre-multiplied alpha?

    - by dugla
    For my iPhone 3D apps I am currently using CoreGraphics to load png files that have pre-multiplied alpha. Here are the essentials: // load a 4-channel rgba png file with pre-multiplied alpha. CGContextRef context = CGBitmapContextCreate(data, width, height, 8, num_channels * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); // Draw the image into the context CGRect rect = CGRectMake(0, 0, width, height); CGContextDrawImage(context, rect, image); CGContextRelease(context); I then go on and use the data as a texture in OpenGL. My question: By specifying pre-multiplied alpha - kCGImageAlphaPremultipliedLast - am I - inadvertently - telling CoreGraphics to multiply the r g b channels by alpha or - what I have assumed - am I merely indicating that the format of the incoming image has pre-multiplied alpha? Thanks, Doug

    Read the article

  • iPhone: Changing CGImageAlphaInfo of CGImage

    - by TechZen
    I have a PNG image that has an unsupported bitmap graphics context pixel format. Whenever I attempt to resize the image, CGBitmapContextCreate() chokes on the unsupported format (Error formatted for easy reading): CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component colorspace; kCGImageAlphaLast; 1344 bytes/row. The list of supported pixel formats definitely does not support this combination. It appears I need to redraw the image and move the alpha channel information to kCGImageAlphaPremultipliedFirst or kCGImageAlphaPremultipliedLast. I have no idea how to go about doing this. There is nothing unusual about the PNG file and it isn't corrupted. It works in all other context just fine. I encountered this error just by chance but obviously my users might have similarly formatted files so I will have to check my app's imported images and correct for this problem.

    Read the article

  • How do I edit an interface builder object programmatically?

    - by Evelyn
    I created a label using Interface Builder, and now I want to set the background color of the label using code instead of IB's color picker. (I want to do this so that I can define the color using RGB and eventually change the colorspace for the label.) I'm working in Cocoa. How do I edit the attributes of an IB object using code? My code looks like this: //.h file #import <UIKit/UIKit.h> @interface IBAppDelegate : NSObject { UILabel *label; } @property (nonatomic, retain) IBOutlet UILabel *label; @end //.m file #import "IBAppDelegate.h" @implementation IBAppDelegate @synthesize label; (memory stuff...) @end

    Read the article

  • Imagemagick PDF to JPG conversion failing

    - by Scott
    I'm trying to convert the first page of a PDF to a JPG. I'm pretty sure I got this to work with certain PDFs, but is it really possible that certain PDFs are made incorrectly and cannot be converted? I tried running this first: $ convert 10-03-26.pdf[1] test.jpg And I got the following: Error: /syntaxerror in readxref Operand stack: Execution stack: %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- Dictionary stack: --dict:1062/1417(ro)(G)-- --dict:0/20(G)-- --dict:73/200(L)-- --dict:73/200(L)-- --dict:97/127(ro)(G)-- --dict:229/230(ro)(G)-- --dict:14/15(L)-- Current allocation mode is local ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1 convert: Postscript delegate failed `10-03-26.pdf'. Running this instead: $ convert -verbose -colorspace rgb '10-03-26.pdf[1]' test.jpg I get the following: Error: /syntaxerror in readxref Operand stack: Execution stack: %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- Dictionary stack: --dict:1062/1417(ro)(G)-- --dict:0/20(G)-- --dict:73/200(L)-- --dict:73/200(L)-- --dict:97/127(ro)(G)-- --dict:229/230(ro)(G)-- --dict:14/15(L)-- Current allocation mode is local ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1 "gs" -q -dBATCH -dSAFER -dMaxBitmap=500000000 -dNOPAUSE -dAlignToPixels=0 "-sDEVICE=pnmraw" -dTextAlphaBits=4 -dGraphicsAlphaBits=4 "-g792x1611" "-r72x72" -dFirstPage=2 -dLastPage=2 "-sOutputFile=/tmp/magick-XXU3T44P" "-f/tmp/magick-XXoMKL8Z" "-f/tmp/magic2eec1F"Start of Image Define Huffman Table 0x00 0 1 5 1 1 1 1 1 1 0 0 0 0 0 0 0 Define Huffman Table 0x01 0 3 1 1 1 1 1 1 1 1 1 0 0 0 0 0 Define Huffman Table 0x10 0 2 1 3 3 2 4 3 5 5 4 4 0 0 1 125 Define Huffman Table 0x11 0 2 1 2 4 4 3 4 7 5 4 4 0 1 2 119 End Of Image convert: Postscript delegate failed `10-03-26.pdf'. Why would the conversion fail? Just as an aside, this is happening on a (gs) Grid-Service on (mt) Media Temple hosting. I cannot install programs on the server, but both ImageMagick and Ghostscript are installed. Thanks!

    Read the article

  • Thumbnail generation with imagemagick doesn't render the correct colors

    - by Bastien
    Generating thumbnails of PDFs with ImageMagick sometimes renders incorrect colors. We're using an old version of ImageMagick (6.5.7-8, the version installed on the Heroku servers). Here is the command we're currently using: convert -size "725x1200>" -colorspace RGB -flatten -density 300 -quality 100 input.pdf output.jpg I've tried using different colorspaces like sRGB, YIQ, ... but none of them render the colors correctly. Using ImageMagick 6.7.7-6 locally works, so I've tried to bundle the 'convert' command within my application's /bin directory; the command works, but the result is still wrong, so it seems that the problem comes either from another ImageMagick command used by 'convert' or from another library. Here's an example of the outputs: Correct output: http://i.stack.imgur.com/gf9eG.jpg Wrong output: http://i.stack.imgur.com/imUeD.jpg Strangely, with some pages of the same PDF the output is always correct. Any idea which library or command could be the issue, or if there is a proper set of options to pass to ImageMagick to always get it right? Thanks in advance for your help.

    Read the article

  • Resize and saving an image to disk in Monotouch

    - by Chris S
    I'm trying to resize an image loaded from disk - a JPG or PNG (I don't know the format when I load it) - and then save it back to disk. I've got the following code, which I've tried to port from Objective-C; however, I've got stuck on the last part. Original Objective-C. This may not be the best way of achieving what I want to do - any solution is fine for me. int width = 100; int height = 100; using (UIImage image = UIImage.FromFile(filePath)) { CGImage cgimage = image.CGImage; CGImageAlphaInfo alphaInfo = cgimage.AlphaInfo; if (alphaInfo == CGImageAlphaInfo.None) alphaInfo = CGImageAlphaInfo.NoneSkipLast; CGBitmapContext context = new CGBitmapContext(IntPtr.Zero, width, height, cgimage.BitsPerComponent, 4 * width, cgimage.ColorSpace, alphaInfo); context.DrawImage(new RectangleF(0, 0, width, height), cgimage); /* Not sure how to convert this part: CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* result = [UIImage imageWithCGImage:ref]; CGContextRelease(bitmap); // ok if NULL CGImageRelease(ref); */ }

    Read the article

  • Imagemagick PDF to JPG conversion failing

    - by sbressler
    I'm trying to convert the first page of a PDF to a JPG. I'm pretty sure I got this to work with certain PDFs, but is it really possible that certain PDFs are made incorrectly and cannot be converted? I tried running this first: $ convert 10-03-26.pdf[1] test.jpg And I got the following: Error: /syntaxerror in readxref Operand stack: Execution stack: %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- Dictionary stack: --dict:1062/1417(ro)(G)-- --dict:0/20(G)-- --dict:73/200(L)-- --dict:73/200(L)-- --dict:97/127(ro)(G)-- --dict:229/230(ro)(G)-- --dict:14/15(L)-- Current allocation mode is local ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1 convert: Postscript delegate failed `10-03-26.pdf'. Running this instead: $ convert -verbose -colorspace rgb '10-03-26.pdf[1]' test.jpg I get the following: Error: /syntaxerror in readxref Operand stack: Execution stack: %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- Dictionary stack: --dict:1062/1417(ro)(G)-- --dict:0/20(G)-- --dict:73/200(L)-- --dict:73/200(L)-- --dict:97/127(ro)(G)-- --dict:229/230(ro)(G)-- --dict:14/15(L)-- Current allocation mode is local ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1 "gs" -q -dBATCH -dSAFER -dMaxBitmap=500000000 -dNOPAUSE -dAlignToPixels=0 "-sDEVICE=pnmraw" -dTextAlphaBits=4 -dGraphicsAlphaBits=4 "-g792x1611" "-r72x72" -dFirstPage=2 -dLastPage=2 "-sOutputFile=/tmp/magick-XXU3T44P" "-f/tmp/magick-XXoMKL8Z" "-f/tmp/magic2eec1F"Start of Image Define Huffman Table 0x00 0 1 5 1 1 1 1 1 1 0 0 0 0 0 0 0 Define Huffman Table 0x01 0 3 1 1 1 1 1 1 1 1 1 0 0 0 0 0 Define Huffman Table 0x10 0 2 1 3 3 2 4 3 5 5 4 4 0 0 1 125 Define Huffman Table 0x11 0 2 1 2 4 4 3 4 7 5 4 4 0 1 2 119 End Of Image convert: Postscript delegate failed `10-03-26.pdf'. Why would the conversion fail? Thanks!

    Read the article

  • How to convert CMYK to RGB programmatically in InDesign

    - by BBDeveloper
    Hi, I have a CMYK color in InDesign and I want to convert it to the RGB color space. I found some code, but I am getting incorrect results. Some of the code I tried is given below: double cyan = 35.0; double magenta = 29.0; double yellow = 0.0; double black = 16.0; cyan = Math.min(255, cyan + black); //black is from K magenta = Math.min(255, magenta + black); yellow = Math.min(255, yellow + black); l_res[0] = 255 - cyan; l_res[1] = 255 - magenta; l_res[2] = 255 - yellow; @Override public float[] toRGB(float[] p_colorvalue) { float[] l_res = {0,0,0}; if (p_colorvalue.length >= 4) { float l_black = (float)1.0 - p_colorvalue[3]; l_res[0] = l_black * ((float)1.0 - p_colorvalue[0]); l_res[1] = l_black * ((float)1.0 - p_colorvalue[1]); l_res[2] = l_black * ((float)1.0 - p_colorvalue[2]); } return (l_res); } The values are C=35, M=29, Y=0, K=16 in the CMYK color space, and the correct RGB values are R=142, G=148, B=186. In Adobe InDesign, using swatches, we can change the mode to CMYK or to RGB. But I want to do that programmatically. Is there an algorithm to convert CMYK to RGB that will give the correct RGB values? And one more question: if the alpha value for RGB is 1, then what will be the alpha value for CMYK? Can anyone help me to solve these issues... Thanks in advance.
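
    For reference, the usual device-independent approximation is R = 255*(1-C)*(1-K), and likewise for G and B. The sketch below (Python for brevity, with the question's percentages divided by 100) shows that this formula gives roughly, but not exactly, the values InDesign reports: InDesign converts through ICC profiles (a CMYK press profile into a monitor RGB profile), so matching its numbers exactly needs a color-management library rather than a closed formula. Alpha is independent of the color model, so an alpha of 1 in RGB stays 1 in CMYK.

      # Naive CMYK -> RGB, inputs given as percentages (0-100).
      def cmyk_to_rgb(c, m, y, k):
          c, m, y, k = (v / 100.0 for v in (c, m, y, k))
          return tuple(round(255 * (1 - v) * (1 - k)) for v in (c, m, y))

      print(cmyk_to_rgb(35, 29, 0, 16))   # (139, 152, 214) vs InDesign's (142, 148, 186)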

    Read the article

  • Saving NSBitmapImageRep as NSBMPFileType file. Wrong BMP headers and bitmap content

    - by niko34
    I save an NSBitmapImageRep to a BMP file (Snow Leopard). It seems OK when I open it on Mac OS, but it produces an error on my multimedia device (which can show any BMP file from the internet). I cannot figure out what is wrong, but when I look inside the file (with the cool Hex Fiend app on Mac OS), two things look wrong: the header has a wrong value for the biHeight parameter: 4294966216 (hex=C8FBFFFF), while the biWidth parameter is correct: 1920; and the first pixel in the bitmap content (after the 54-byte header in the BMP format) corresponds to the upper-left corner of the original image. In the original BMP file, and as specified in the BMP format, the bottom-left corner pixel should come first. To explain the full workflow in my app: I have an NSImageView where I can drag a BMP image. This view is bound to an NSImage. After a drag & drop, I have an action to save this image (with some text drawn over it) to a BMP file. Here's the code for saving the new BMP file: CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB); CGContextRef context = CGBitmapContextCreate(NULL, (int)1920, (int)1080, 8, 4*(int)1920, colorSpace, kCGImageAlphaNoneSkipLast); [duneScreenView drawBackgroundWithDuneFolder:self inContext:context inRect:NSMakeRect(0,0,1920,1080) needScale:NO]; if(folderType==DXFolderTypeMovie) { [duneScreenView drawSynopsisContentWithDuneFolder:self inContext:context inRect:NSMakeRect(0,0,1920,1080) withScale:1.0]; } CGImageRef backgroundImageRef = CGBitmapContextCreateImage(context); NSBitmapImageRep *bitmapBackgroundImageRef = [[NSBitmapImageRep alloc] initWithCGImage:backgroundImageRef]; NSData *data = [destinationBitmap representationUsingType:NSBMPFileType properties:nil]; [data writeToFile:[NSString stringWithFormat:@"%@/%@", folderPath, backgroundPath] atomically:YES]; The duneScreenView drawSynopsisContentWithDuneFolder method uses CGContextDrawImage to draw the image. The duneScreenView drawSynopsis method uses Core Text to draw some text in the same context. Do you know what's wrong?
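
    One way to read the two observations above, with a small Python check: 4294966216 is just -1080 stored as an unsigned 32-bit value, and the BMP format uses a negative biHeight to mark a top-down DIB whose first stored row is the top of the image. So the header and the pixel order are consistent with each other; a plausible culprit is that the multimedia device only accepts the classic bottom-up layout (positive biHeight, bottom row first), which is an assumption worth verifying against its documentation.

      import struct

      raw = 4294966216                               # biHeight as read from the file
      signed = struct.unpack("<i", struct.pack("<I", raw))[0]
      print(signed)                                  # -1080: a top-down 1920x1080 DIB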

    Read the article

  • What is the most efficient way to display decoded video frames in Qt?

    - by Jason
    What is the fastest way to display images to a Qt widget? I have decoded the video using libavformat and libavcodec, so I already have raw RGB or YCbCr 4:2:0 frames. I am currently using a QGraphicsView with a QGraphicsScene object containing a QGraphicsPixmapItem. I am currently getting the frame data into a QPixmap by using the QImage constructor from a memory buffer and converting it to QPixmap using QPixmap::fromImage(). I like the results of this and it seems relatively fast, but I can't help but think that there must be a more efficient way. I've also heard that the QImage to QPixmap conversion is expensive. I have implemented a solution that uses an SDL overlay on a widget, but I'd like to stay with just Qt since I am able to easily capture clicks and other user interaction with the video display using the QGraphicsView. I am doing any required video scaling or colorspace conversions with libswscale so I would just like to know if anyone has a more efficient way to display the image data after all processing has been performed. Thanks.

    Read the article

  • Easiest way to plot values as symbols in scatter plot?

    - by AllenH
    In an answer to an earlier question of mine regarding fixing the colorspace for scatter images of 4D data, Tom10 suggested plotting values as symbols in order to double-check my data. An excellent idea. I've run some similar demos in the past, but I can't for the life of me find the demo I remember being quite simple. So, what's the easiest way to plot numerical values as the symbol in a scatter plot instead of 'o', for example? Tom10 suggested plt.text(x,y,value), and that is the implementation used in a number of examples. I, however, wonder if there's an easy way to evaluate "value" from my array of numbers. Can one simply say str(valuearray)? Do you need a loop to evaluate the values for plotting, as suggested in the matplotlib demo section for 3D text scatter plots? Their example produces: [figure omitted]. However, they're doing something fairly complex in evaluating the locations as well as changing text direction based on data. So, is there a cute way to plot x,y,C data (where C is a value often taken as the color in the plot data, but which I instead wish to use as the symbol)? Again, I think we have a fair answer to this; I just wonder if there's an easier way?
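
    A minimal sketch of the plt.text approach with made-up data. str(valuearray) just gives one long string for the whole array, so a short loop (or zip) over the points is still needed; each value becomes its own text "marker".

      import numpy as np
      import matplotlib.pyplot as plt

      x, y = np.random.rand(30), np.random.rand(30)   # stand-in coordinates
      c = np.random.rand(30) * 100                    # the value to show as the symbol

      fig, ax = plt.subplots()
      for xi, yi, ci in zip(x, y, c):
          ax.text(xi, yi, f"{ci:.0f}", ha="center", va="center", fontsize=8)

      ax.set_xlim(0, 1)
      ax.set_ylim(0, 1)   # text() does not autoscale the axes the way scatter() does
      plt.show()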

    Read the article

  • PDF external streams in Mac OS X Preview

    - by olpa
    According to the specification, part of a PDF document can reside in an external file. An example for an image: 2 0 obj << /Type /XObject /Subtype /Image /Width 117 /Height 117 /BitsPerComponent 8 /Length 0 /ColorSpace /DeviceRGB /FFilter /DCTDecode /F (pinguine.jpg) >> stream endstream endobj I found that this functionality does work in Adobe Acrobat 5.0 for Windows (sample PDF with the image), and I also managed to view this file in Adobe Acrobat Reader 8.1.3 for Mac OS X after I found the "Allow external content" setting. Unfortunately, it seems that non-Adobe tools ignore the external stream feature. I hope I'm wrong, and therefore ask the question: how do I enable external streams in Mac OS X? (I think that all the system Mac OS X tools use the same library, so I say "Mac OS X" instead of "Preview".) Or maybe there could be a programming hook to emulate external streams? My task is to store a big set of images (total ~300 MB) outside of a small PDF (~1 MB). At some point, I want to filter the PDF through a Quartz filter and get a PDF with the images embedded. Any suggestions are welcome.

    Read the article

  • "java.lang.OutOfMemoryError: Java heap space" in image and array storage

    - by totalconscience
    I am currently working on an image processing demonstration in Java (an applet). I am running into the problem where my arrays are too large and I am getting the "java.lang.OutOfMemoryError: Java heap space" error. The algorithm I run creates an NxD float array where N is the number of pixels in the image and D is the number of coordinates of each pixel plus the colorspace components of each pixel (usually 1 for grayscale or 3 for RGB). For each iteration of the algorithm it creates one of these NxD float arrays and stores it for later use in a vector, so that the user of the applet may look at the individual steps. My client wants the program to be able to load a 500x500 RGB image and run that as the upper bound. There are about 12 to 20 iterations per run, so that means I need to be able to store a 12x500x500x5 float array in some fashion. Is there a way to process all of this data and, if possible, how? Example of the issue: I am loading a 512 by 512 grayscale image, and even before the first iteration completes I run out of heap space. The line it points me to is: Y.add(new float[N][D]) where Y is a Vector and N and D are described as above. This is the second instance of the code using that line.
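
    A rough check of the storage the question describes, assuming 4-byte floats and ignoring Java object overhead (a float[N][D] is really an array of row objects, so actual usage is higher). The total is large enough that the default applet heap, which is often well under 100 MB unless raised with -Xmx, is a plausible culprit.

      iterations, pixels, d = 12, 500 * 500, 5   # 500x500 RGB: D = 2 coords + 3 channels
      bytes_total = iterations * pixels * d * 4  # 4 bytes per float
      print(f"{bytes_total / 1e6:.0f} MB")       # ~60 MB before any JVM/object overhead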

    Read the article

  • Using ImageMagick to create an image from a PDF...efficiently

    - by bigsweater
    I'm using ImageMagick to create a tiny JPG thumbnail image of an already-uploaded PDF. The code works fine. It's a WordPress widget, though this isn't necessarily WordPress specific. I'm unfamiliar with ImageMagick, so I was hoping somebody could tell me if this looks terrible or isn't following some best practices of some sort, or if I'm risking crashing the server. My questions, specifically, are: Is that image cached, or does the server have to re-generate the image every time somebody views the page? If it isn't cached, what's the best way to make sure the server doesn't have to regenerate the thumbnail? I tried to create a separate folder (/thumbs) for ImageMagick to put all the images in, instead of cluttering up the WP upload folders with images of PDFs. It kept throwing a permission error, despite 777 permissions on the folder in my testing environment. Why? Do the source/destination directories have to be the same? Am I doing anything incorrectly/inefficiently here that needs to be improved? The whole widget is on Pastebin: http://pastebin.com/WnSTEDm7 Relevant code: <?php if ( $url ) { $pdf = $url; $info = pathinfo($pdf); $filename = basename($pdf,'.'.$info['extension']); $uploads = wp_upload_dir(); $file_path = str_replace( $uploads['baseurl'], $uploads['basedir'], $url ); $dest_path = str_replace( '.pdf', '.jpg', $file_path ); $dest_url = str_replace( '.pdf', '.jpg', $pdf ); exec("convert \"{$file_path}[0]\" -colorspace RGB -geometry 60 $dest_path"); ?> <div class="entry"> <div class="widgetImg"> <p><a href="<?php echo $url; ?>" title="<?php echo $filename; ?>"><?php echo "<img src='".$dest_url."' alt='".$filename."' class='blueBorder' />"; ?></a></p> </div> <div class="widgetText"> <?php echo wpautop( $desc ); ?> <p><a class="downloadLink" href="<?php echo $url; ?>" title="<?php echo $filename; ?>">Download</a></p> </div> </div> <?php } ?> As you can see, the widget grabs whatever PDF is attached to the current page being viewed, creates an image of the first page of the PDF, stores it, then links to it in HTML. Thanks for any and all help!

    Read the article

< Previous Page | 1 2 3 4  | Next Page >