Search Results

Search found 21307 results on 853 pages for 'image capture'.

Page 71/853 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • Looking for mass cropping software

    - by Bart van Heukelom
    I'm looking for a tool that runs on Ubuntu and lets me: open an image in a folder that contains thousands of them; crop and rotate it; save it as a copy with one click, named automatically (not manually), preferably with something in the name that I can later use to filter the cropped copies in Nautilus (saving to another directory would be even better); then move on to the next image and repeat. Does it exist?
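
    If a non-interactive batch pass would do instead of a point-and-click tool, the crop / rotate / save-a-named-copy loop is easy to script. A minimal sketch with Pillow; the source folder, crop box, rotation angle, and the "_cropped" suffix are all assumptions chosen for illustration:

        import os
        from PIL import Image

        SRC = os.path.expanduser("~/Pictures/scans")   # assumed source folder
        CROP_BOX = (100, 100, 900, 700)                # left, upper, right, lower (assumption)
        ANGLE = 90                                     # rotation in degrees (assumption)

        for name in sorted(os.listdir(SRC)):
            if not name.lower().endswith((".jpg", ".jpeg", ".png")):
                continue
            if "_cropped" in name:                     # skip copies from a previous run
                continue
            with Image.open(os.path.join(SRC, name)) as img:
                out = img.crop(CROP_BOX).rotate(ANGLE, expand=True)
                base, ext = os.path.splitext(name)
                # the "_cropped" suffix is what makes the copies filterable in Nautilus
                out.save(os.path.join(SRC, base + "_cropped" + ext))

    An interactive tool is still needed if every image wants its own crop, but the suffix (or a separate output directory) covers the filtering requirement.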

    Read the article

  • Uploading images to a post like on the Stack Exchange websites [on hold]

    - by Loko
    Stack Exchange uses Imgur links to show images on its sites, but how do I do this myself? I am really curious about the ways I can upload an image to a server and show it again immediately, like webshops where you post your products and the uploaded images are shown right away. What ways are there to do this, and which is the best / most secure (I assume the way Stack Exchange does it)? I have no idea how to do that.
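
    One common pattern is to accept the upload server-side, store the file under a generated name rather than the client-supplied one, and return the URL so the page can show the image straight away. A minimal Flask sketch; the upload folder, size cap, and allowed extensions are assumptions, and real deployments usually add authentication and content checks:

        import os
        import uuid
        from flask import Flask, request, url_for
        from werkzeug.utils import secure_filename

        app = Flask(__name__)
        app.config["MAX_CONTENT_LENGTH"] = 5 * 1024 * 1024   # 5 MB cap (assumption)
        UPLOAD_DIR = "static/uploads"                         # assumed location
        ALLOWED = {".png", ".jpg", ".jpeg", ".gif"}

        @app.route("/upload", methods=["POST"])
        def upload():
            f = request.files.get("image")
            if f is None:
                return "no file", 400
            ext = os.path.splitext(secure_filename(f.filename))[1].lower()
            if ext not in ALLOWED:
                return "unsupported type", 400
            name = uuid.uuid4().hex + ext        # never reuse the client's file name
            os.makedirs(UPLOAD_DIR, exist_ok=True)
            f.save(os.path.join(UPLOAD_DIR, name))
            return url_for("static", filename="uploads/" + name)

    The security-relevant part is exactly this validation: a whitelist of extensions, a size limit, and server-generated names, whether the files end up on your own server or on a host like Imgur.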

    Read the article

  • Image classification using OpenCV and Weka

    - by simk
    Hi, I want to do image classification, so I am planning to use OpenCV for image preprocessing and Weka to check which ML algorithm gives the best result. The problem I am facing is converting the image data into Weka's ARFF file format: when I apply a transformation to an image and write out the image data, it becomes very large and I am not sure how to define the @ATTRIBUTE entries for it. If you have done something similar before, please suggest how I can solve this; it doesn't particularly have to be OpenCV, I can use other tools as well.
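
    A common way to keep the @ATTRIBUTE list manageable is to reduce every image to a fixed-length feature vector (for example a small resized grayscale patch or a histogram) instead of writing out the full-resolution pixels. A minimal sketch using OpenCV's Python bindings; the 16x16 patch size, the "dataset/<class>/*.jpg" folder layout, and the output file name are assumptions:

        import os
        import cv2

        ROOT = "dataset"        # assumed layout: dataset/<class_name>/*.jpg
        SIZE = (16, 16)         # 16 * 16 = 256 numeric attributes per image (assumption)

        with open("images.arff", "w") as arff:
            arff.write("@RELATION images\n\n")
            for i in range(SIZE[0] * SIZE[1]):
                arff.write("@ATTRIBUTE pixel%d NUMERIC\n" % i)
            classes = sorted(d for d in os.listdir(ROOT)
                             if os.path.isdir(os.path.join(ROOT, d)))
            arff.write("@ATTRIBUTE class {%s}\n\n@DATA\n" % ",".join(classes))
            for label in classes:
                folder = os.path.join(ROOT, label)
                for name in os.listdir(folder):
                    img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
                    if img is None:
                        continue                      # not an image file
                    img = cv2.resize(img, SIZE)
                    values = img.flatten() / 255.0    # normalise pixels to 0..1
                    arff.write(",".join("%.4f" % v for v in values) + "," + label + "\n")

    Resizing to a small fixed patch keeps the attribute count constant regardless of the original image size, which is what makes the ARFF header writable up front.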

    Read the article

  • tcpdump on dd-wrt router

    - by Senica Gonzalez
    I'm trying to capture packets from two devices on my network. I have tcpdump installed on my dd-wrt router and working correctly. However, the only packets I capture are broadcast packets when I use a tcpdump filter that names only those two devices: ./tcpdump -w /tmp/capture.pcap dst 192.168.3.105 or src 192.168.3.105 or dst 192.168.3.136 or src 192.168.3.136 I'm capturing on interface br0. Is that correct? Both devices are plugged directly into ports 1 and 2, with IP addresses 192.168.3.105 and 192.168.3.136 respectively. Do I need to put br0 in promiscuous mode? A little stuck. Thanks.

    Read the article

  • ASP FPDF: trying to output an image assigned to a variable

    - by bluffo
    This is the code I used to display an image in the header. The problem is that I want to use a variable for the image; when I put the variable name in place of the image name I get this error: Microsoft JScript runtime error '800a138f' 'undefined' is null or not an object /EKtestdb/fpdf/fpdf/includes/Basics.asp, line 121 this.Header=function Header() { this.SetY (10) this.SetFont ("Times","",10) //this.Cell (45,5, "HEADER", 0, 0, "L") this.SetFont ("Times","b",14) //this.Cell (190,5, this.title, 0, 0, "C") this.Cell (190,20, this.title, 0, 0) this.SetFont ("Times","",10) this.Image('logoSM1.jpg',165,3,33) this.Image( techpic ,165,3,33) This is the code at Basics.asp line 121: this.strrpos=function strrpos(s,ch){ res = s.lastIndexOf(ch) if (res>0-1){return res}else{return false} } this.strpos=function strpos(s,ch,start){ if (arguments.length<3){start=0} res = s.indexOf(ch,start); if (res>-1){return res}else{return false} } If you just want to display an image, this line works: this.Image('logoSM1.jpg',165,3,33) but can someone help with using a variable in place of the image name?

    Read the article

  • iPhone SDK: Rendering a CGLayer into an image object

    - by codemercenary
    Hi all, I am trying to add a curved border around an image downloaded and to be displayed in a UITableViewCell. In the large view (ie one image on the screen) I have the following: productImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:product.image]]; [productImageView setAlpha:0.4]; productImageView.frame = CGRectMake(10.0, 30.0, 128.0, 128.0); CALayer *roundedlayer = [productImageView layer]; [roundedlayer setMasksToBounds:YES]; [roundedlayer setCornerRadius:7.0]; [roundedlayer setBorderWidth:2.0]; [roundedlayer setBorderColor:[[UIColor darkGrayColor] CGColor]]; [self addSubview:productImageView]; In the table view cell, to get it to scroll fast, an image needs to be drawn in the drawRect method of a UIView which is then added to a custom cell. so in drawRect - (void)drawRect:(CGRect)rect { ... point = CGPointMake(boundsX + LEFT_COLUMN_OFFSET, UPPER_ROW_TOP); //CALayer *roundedlayer = [productImageView layer]; //[roundedlayer setMasksToBounds:YES]; //[roundedlayer setCornerRadius:7.0]; //[roundedlayer setBorderWidth:2.0]; //[roundedlayer setBorderColor:[[UIColor darkGrayColor] CGColor]]; //[productImageView drawRect:CGRectMake(boundsX + LEFT_COLUMN_OFFSET, UPPER_ROW_TOP, IMAGE_WIDTH, IMAGE_HEIGHT)]; // [productImageView.image drawInRect:CGRectMake(boundsX + LEFT_COLUMN_OFFSET, UPPER_ROW_TOP, IMAGE_WIDTH, IMAGE_HEIGHT)]; So this works well, but if I remove the comment and try to show the rounded CA layer the scrolling goes really slow. To fix this I suppose I would have to render this image context into a different image object, and store this in an array, then set this image as something like: productImageView.image = (UIImage*)[imageArray objectAtIndex:indexPath.row]; My question is "How do I render this layer into an image?" TIA.
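
    The underlying idea, rendering the rounded border into the bitmap once and handing the cell a plain cached image, can be shown outside UIKit as well. A Pillow sketch (Python used purely for illustration; the corner radius, border width and colour mirror the values above but are otherwise assumptions):

        from PIL import Image, ImageDraw, ImageOps

        def rounded_thumbnail(path, size=(128, 128), radius=7, border=2,
                              border_color=(64, 64, 64, 255)):
            """Pre-render rounded corners and a border into a copy of the image."""
            img = ImageOps.fit(Image.open(path).convert("RGBA"), size)
            mask = Image.new("L", size, 0)
            # rounded_rectangle needs Pillow >= 8.2 (assumption about the installed version)
            ImageDraw.Draw(mask).rounded_rectangle(
                (0, 0, size[0] - 1, size[1] - 1), radius=radius, fill=255)
            img.putalpha(mask)
            ImageDraw.Draw(img).rounded_rectangle(
                (0, 0, size[0] - 1, size[1] - 1), radius=radius,
                outline=border_color, width=border)
            return img

        # render once, cache the result, and reuse it on every redraw of the cell
        cached = rounded_thumbnail("product.jpg")
        cached.save("product_rounded.png")

    Doing the work once per image rather than once per drawRect call is what keeps scrolling fast.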

    Read the article

  • how many fps can iPhone's UIGetScreenImage() actually do?

    - by M Katz
    Now that Apple is officially allowing UIGetScreenImage() to be used in iPhone apps, I've seen a number of blogs saying that this "opens the floodgates" for video capture on iPhones, including older models. But I've also seen blogs that say the fastest frame rate they can get with UIGetScreenImage() is like 6 FPS. Can anyone share specific frame-rate results you've gotten with UIGetScreenImage() (or other approved APIs)? Does restricting the area of the screen captured improve frame rate significantly? Also, for the wishful thinking segment of today's program, does anyone have pointers to code/library that uses UIGetScreenImage() to capture video? For instance, I'd like an API something like Capture( int fps, Rect bounds, int durationMs ) that would turn on the camera and for the given duration record a sequence of .png files at the given frame rate, copying from the given screen rect.

    Read the article

  • WDS: updating RAID drivers in an already existing WIM image

    - by Tim
    Here is my current setup. WDS installed on Server 2008 R2 for the new driverstore and multicast features. A Windows Server 2003 32bit Standard image built to support previous DL360 models. A new HP DL360 G6 which has a new raid controller in it. I need to add the driver for the raid controller into my Server 2003 32bit standard install image but I can't seem to figure out the correct method to do so. So far I've tried the following: Mounting the image and placing the drivers into the Sysprep drivers folder, adding the PCI device codes into the sysprep.inf file and committing the changes to the image. Pushing the image to a DL360 G4, ensuring the driver is in the correct locations and re-sysprepping the image. Hoping that the new driverstore feature would magically work with 2003 (a guy can dream cant he?) Is there some standard method that I can use to update this image with the new drivers or do I need to start from scratch with an entirely new build? Thanks in advance.

    Read the article

  • How to resize image to fit UITableView cell?

    - by stefanB
    How to fit UIImage into the cell of UITableView, UITableViewCell (?). Do you addSubview to cell or is there a way to resize cell.image or the UIImage before it is assigned to cell.image ? I want to keep the cell size default (whatever it is when you init with zero rectangle) and would like to add icon like pictures to each entry. Images are slightly bigger than the cell size (table row size). I think the code looks like this (from top of my head): UIImage * image = [[UIImage alloc] imageWithName:@"bart.jpg"]; cell = ... dequeue cell in UITableView data source (UITableViewController ...) cell.text = @"bart"; cell.image = image; What do I need to do to resize the image to fit the cell size? I've seen something like: UIImageView * iview = [[UIImageView alloc] initWithImage:image]; iview.frame = CGRectMake(...); // or something similar [cell.contentView addSubview:iview] The above will add image to cell and I can calculate the size to fit it, however: I'm not sure if there is a better way, isn't it too much overhead to add UIImageView just to resize the cell.image ? Now my label (cell.text) needs to be moved as it is obscured by image, I've seen a solution where you just add the text as a label: Example: UILabel * text = [[UILable alloc] init]; text.text = @"bart"; [cell.contentView addSubview:iview]; [cell.contentView addSubview:label]; // not sure if this will position the text next to the label // edited original example had [cell addSubview:label], maybe that's the problem Could someone point me in correct direction? EDIT: Doh [cell.contentview addSubview:view] not [cell addSubview:view] maybe I'm supposed to look at this: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { UITableViewCell *cell = ...; CGRect frame = cell.contentView.bounds; UILabel *myLabel = [[UILabel alloc] initWithFrame:frame]; myLabel.text = ...; [cell.contentView addSubview:myLabel]; [myLabel release]; }

    Read the article

  • Capture window close event

    - by -providergordienko.vladimir
    I want to capture the events that close an editor window (tab) in the Visual Studio 2008 IDE. Using dte2.Application.Events.get_CommandEvents(null, 0).BeforeExecute I successfully captured events such as File.Close, File.CloseAllButThis, File.Exit, Window.CloseDocumentWindow and others. If the code in the window is not acceptable, I stop the event (CancelDefault = true). But if I click the "X" button on the right-hand side, the "Save Changes" dialog appears, the tab with the editor window closes, and I capture no events at all. In this case I can capture the WindowClosing event, but I cannot cancel it. Is it possible to handle the "X" button click and stop the event?

    Read the article

  • C# DirectSound - Capture buffers not continuous

    - by Wizche
    Hi, I'm trying to capture raw data from my line-in using DirectSound. My problem is that, from a buffer to another the data are just inconsistent, if for example I capture a sine I see a jump from my last buffer and the new one. To detected this I use a graph widget to draw the first 500 elements of the last buffer and the 500 elements from the new one: Snapshot I initialized my buffer this way: format = new WaveFormat { SamplesPerSecond = 44100, BitsPerSample = (short)bitpersample, Channels = (short)channels, FormatTag = WaveFormatTag.Pcm }; format.BlockAlign = (short)(format.Channels * (format.BitsPerSample / 8)); format.AverageBytesPerSecond = format.SamplesPerSecond * format.BlockAlign; _dwNotifySize = Math.Max(4096, format.AverageBytesPerSecond / 8); _dwNotifySize -= _dwNotifySize % format.BlockAlign; _dwCaptureBufferSize = NUM_BUFFERS * _dwNotifySize; // my capture buffer _dwOutputBufferSize = NUM_BUFFERS * _dwNotifySize / channels; // my output buffer I set my notifications one at half the buffer and one at the end: _resetEvent = new AutoResetEvent(false); _notify = new Notify(_dwCapBuffer); bpn1 = new BufferPositionNotify(); bpn1.Offset = ((_dwCapBuffer.Caps.BufferBytes) / 2) - 1; bpn1.EventNotifyHandle = _resetEvent.SafeWaitHandle.DangerousGetHandle(); bpn2 = new BufferPositionNotify(); bpn2.Offset = (_dwCapBuffer.Caps.BufferBytes) - 1; bpn2.EventNotifyHandle = _resetEvent.SafeWaitHandle.DangerousGetHandle(); _notify.SetNotificationPositions(new BufferPositionNotify[] { bpn1, bpn2 }); observer.updateSamplerStatus("Events listener initialization complete!\r\n"); And here is how I process the events. /* Process thread */ private void eventReceived() { int offset = 0; _dwCaptureThread = new Thread((ThreadStart)delegate { _dwCapBuffer.Start(true); while (isReady) { _resetEvent.WaitOne(); // Notification received /* Read the captured buffer */ Array read = _dwCapBuffer.Read(offset, typeof(short), LockFlag.None, _dwOutputBufferSize - 1); observer.updateTextPacket("Buffer: " + count.ToString() + " # " + read.GetValue(read.Length - 1).ToString() + " # " + read.GetValue(0).ToString() + "\r\n"); /* Print last/new part of the buffer to the debug graph */ short[] graphData = new short[1001]; Array.Copy(read, graphData, 1000); db.SetBufferDebug(graphData, 500); observer.updateGraph(db.getBufferDebug()); offset = (offset + _dwOutputBufferSize) % _dwCaptureBufferSize; /* Out buffer not used */ /*_dwDevBuffer.Write(0, read, LockFlag.EntireBuffer); _dwDevBuffer.SetCurrentPosition(0); _dwDevBuffer.Play(0, BufferPlayFlags.Default);*/ } _dwCapBuffer.Stop(); }); _dwCaptureThread.Start(); } Any advise? I'm sure I'm failing somewhere in the event processing, but I cant find where. I had developed the same application using the WaveIn API and it worked well. Thanks a lot...

    Read the article

  • Adding an image to RichTextBox programmatically does not show in the Xaml property

    - by rotary_engine
    I'm trying to add an image to a RichTextBox programmatically from a Stream. The image displays in the text box; however, when reading the Xaml property there is no markup for the image. private void richTextBox3_Drop(object sender, DragEventArgs e) { if (e.Data.GetDataPresent(DataFormats.FileDrop)) { FileInfo[] files = (FileInfo[])e.Data.GetData(DataFormats.FileDrop); using (Stream s = files[0].OpenRead()) { InlineUIContainer container = new InlineUIContainer(); BitmapImage bmp = new BitmapImage(); bmp.SetSource(s); Image img = new Image(); img.SetValue(Image.SourceProperty, bmp); container.Child = img; richTextBox3.Selection.Insert(container); } } } private void Button_Click_1(object sender, RoutedEventArgs e) { // this doesn't have the markup from the inserted image System.Windows.MessageBox.Show(richTextBox3.Xaml); } What is the correct way to insert an image into the RichTextBox at runtime so that it can be persisted to a data store via the Xaml property?

    Read the article

  • How to get image capture date and video duration when uploading files using SWFUpload and Paperclip

    - by Hatem
    Hi Guys, I'm using SWFUpload and Paperclip on Rails 2.3.5 to upload images and videos. How can I store the capture date of images and duration of videos? The following works correctly in irb: irb(main):001:0> File.new('hatem.jpg').mtime => Tue Mar 09 16:56:38 +0200 2010 But when I try to use Paperclip's before_post_process: before_post_process :get_file_info def get_file_info puts File.new(self.media.to_file.path).mtime # =>Wed Apr 14 18:36:22 +0200 2010 end I get the current date instead of the capture date. How can I fix this? Also, how can I get the video duration and store it with the model? Thank you.
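
    The mtime seen in before_post_process is when SWFUpload wrote the temp file, not when the photo was taken; the capture date lives in the image's EXIF metadata (and a video's duration in its container metadata, typically read with an ffmpeg-based tool). A sketch of reading the EXIF date, written in Python with Pillow purely to illustrate the idea; a Rails app would use an EXIF-reading gem or shell out to exiftool instead, and the tag numbers 0x8769, 36867 and 306 are the standard Exif IFD pointer, DateTimeOriginal and DateTime tags:

        from PIL import Image

        def capture_date(path):
            """Return the EXIF capture date string, or None if the file has none."""
            exif = Image.open(path).getexif()
            # DateTimeOriginal (36867) sits in the Exif sub-IFD; get_ifd needs a recent Pillow
            sub = exif.get_ifd(0x8769)
            return sub.get(36867) or exif.get(306)   # fall back to the plain DateTime tag

        print(capture_date("hatem.jpg"))             # e.g. "2010:03:09 16:56:38"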

    Read the article

  • iPhone SDK background thread image loading problem

    - by retailevolved
    I have created a grid view that displays six "cells" of content. In each cell, an image is loaded from the web. There are a multiple pages of this grid (the user moves through them by swiping up / down to see the next set of cells). Each cell has its own view controller. When these view controllers load, they use an ImageLoader class that I made to load and display an image. These view controllers implement an ImageLoaderDelegate that has a single method that gets called when the image is finished loading. ImageLoader does its work on a background thread and then simply notifies its delegate when it is done loading, passing the image to the delegate method. Trouble is that if the user moves on to the next page of grid content before the image has finished loading (releasing the GridCellViewControllers that use the ImageLoaders), the app crashes. I suspect that this is because along the line, an asynchronous method finishes and attempts to notify its delegate but can't because it's been released. Here's some code to give a better picture: GridCellViewController.m: - (void)viewDidLoad { [super viewDidLoad]; // ImageLoader _loader = [[ProductImageLoader alloc] init]; _loader.delegate = self; if(_boundObject) [_loader loadImageForProduct:_boundObject]; } //ImageLoaderDelegate method - (void) imageDidFinishLoading: (UIImage *)image { [_imgController setImage:image]; } ProductImageLoader.m - (void) loadImageForProduct: (Product *) product { // Get image on another thread [NSThread detachNewThreadSelector:@selector(getImageForProductInBackground:) toTarget:self withObject:product]; } - (void) getImageForProductInBackground: (Product *) product { NSAutoreleasePool *tempPool = [[NSAutoreleasePool alloc] init]; HttpRequestLoader *tempLoader = [[HttpRequestLoader alloc] init]; NSURL *tempUrl = [product getImageUrl]; NSData *imageData = tempUrl ? [tempLoader loadSynchronousDataFromAddress:[tempUrl absoluteString]] : nil; UIImage *image = [[UIImage alloc] initWithData:imageData]; [tempPool release]; if(delegate) [delegate imageDidFinishLoading:image]; } The app crashes with EXC_BAD_ACCESS. Disclaimer: The code has been slightly modified to focus on the issue at hand.

    Read the article

  • Capture ASP output for monitoring

    - by scourge.zero
    How do I capture ASP.NET output and store it in memory temporarily so that I can use it in an application for comparison? For example, there is a site that produces this ASP output. Sorry, I do not have server access; all I can do is view the output. The site, by the way, is a monitor showing which users are logged in to which channel. Example output:

        Channel 1
        Username     logged in (0 / 1)
        Username 1   1
        John Smith   1
        George B     0

        Channel 2
        Username     logged in (0 / 1)
        Username 1   1
        John Smith   0
        George B     0

    What I want to do is capture this output and then show it this way:

        Username     Channel 1   Channel 2   Total
        Username 1   1           1           2
        John Smith   1           0           1
        George B     0           0           0

    I don't know where to start.
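
    Without server access, a practical route is to fetch the monitor page over HTTP, parse the per-channel tables out of the HTML, and aggregate per user. A rough sketch with requests and BeautifulSoup; the URL and the assumption that each channel is rendered as an HTML table with a username column and a 0/1 column are made up for illustration and would need adjusting to the real markup:

        import requests
        from bs4 import BeautifulSoup
        from collections import defaultdict

        URL = "http://example.com/monitor.asp"       # assumed address of the status page

        html = requests.get(URL, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")

        totals = defaultdict(dict)                   # user -> {channel: 0 or 1}
        for idx, table in enumerate(soup.find_all("table"), start=1):
            channel = "Channel %d" % idx
            for row in table.find_all("tr")[1:]:     # skip the header row
                cells = [c.get_text(strip=True) for c in row.find_all("td")]
                if len(cells) >= 2 and cells[1] in ("0", "1"):
                    totals[cells[0]][channel] = int(cells[1])

        for user, channels in totals.items():
            print(user, channels, "Total:", sum(channels.values()))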

    Read the article

  • AVFoundation buffer comparison to a saved image

    - by user577552
    Hi, I am a long time reader, first time poster on StackOverflow, and must say it has been a great source of knowledge for me. I am trying to get to know the AVFoundation framework. What I want to do is save what the camera sees and then detect when something changes. Here is the part where I save the image to a UIImage : if (shouldSetBackgroundImage) { CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // Create a bitmap graphics context with the sample buffer data CGContextRef context = CGBitmapContextCreate(rowBase, bufferWidth, bufferHeight, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); // Create a Quartz image from the pixel data in the bitmap graphics context CGImageRef quartzImage = CGBitmapContextCreateImage(context); // Free up the context and color space CGContextRelease(context); CGColorSpaceRelease(colorSpace); // Create an image object from the Quartz image UIImage * image = [UIImage imageWithCGImage:quartzImage]; [self setBackgroundImage:image]; NSLog(@"reference image actually set"); // Release the Quartz image CGImageRelease(quartzImage); //Signal that the image has been saved shouldSetBackgroundImage = NO; } and here is the part where I check if there is any change in the image seen by the camera : else { CGImageRef cgImage = [backgroundImage CGImage]; CGDataProviderRef provider = CGImageGetDataProvider(cgImage); CFDataRef bitmapData = CGDataProviderCopyData(provider); char* data = CFDataGetBytePtr(bitmapData); if (data != NULL) { int64_t numDiffer = 0, pixelCount = 0; NSMutableArray * pointsMutable = [NSMutableArray array]; for( int row = 0; row < bufferHeight; row += 8 ) { for( int column = 0; column < bufferWidth; column += 8 ) { //we get one pixel from each source (buffer and saved image) unsigned char *pixel = rowBase + (row * bytesPerRow) + (column * BYTES_PER_PIXEL); unsigned char *referencePixel = data + (row * bytesPerRow) + (column * BYTES_PER_PIXEL); pixelCount++; if ( !match(pixel, referencePixel, matchThreshold) ) { numDiffer++; [pointsMutable addObject:[NSValue valueWithCGPoint:CGPointMake(SCREEN_WIDTH - (column/ (float) bufferHeight)* SCREEN_WIDTH - 4.0, (row/ (float) bufferWidth)* SCREEN_HEIGHT- 4.0)]]; } } } numberOfPixelsThatDiffer = numDiffer; points = [pointsMutable copy]; } For some reason, this doesn't work, meaning that the iPhone detects almost everything as being different from the saved image, even though I set a very low threshold for detection in the match function... Do you have any idea of what I am doing wrong?
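
    The comparison itself, sampling every Nth pixel and counting how many differ from the reference by more than a threshold, is easy to sanity-check outside the capture pipeline. A NumPy sketch of the same per-pixel test (the step and threshold values are arbitrary); one thing worth double-checking in the code above is that the saved reference image's bytes-per-row actually matches the live buffer's bytesPerRow, since indexing the copied CGImage data with the buffer's stride would make almost every pixel look different:

        import numpy as np

        def changed_fraction(frame, reference, step=8, threshold=30):
            """Fraction of sampled pixels whose worst channel difference exceeds threshold.

            frame and reference are HxWx4 uint8 arrays (BGRA or RGBA) of the same shape.
            """
            a = frame[::step, ::step].astype(np.int16)
            b = reference[::step, ::step].astype(np.int16)
            diff = np.abs(a - b).max(axis=-1)          # worst channel difference per pixel
            return float((diff > threshold).mean())

        # example policy: treat the scene as changed if over 5% of sampled pixels moved
        # changed = changed_fraction(current_frame, saved_frame) > 0.05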

    Read the article

  • fuzzy implementation for capturing specific strings

    - by kasun-456
    I am going to develop a web crawler in Java to capture hotel room prices from hotel websites. In this case I want to capture the room price together with the room type and the meal type, so my algorithm should be intelligent about that. As an example: Room type: Delux, Meal type: HalfBoad, Price: $20.00. The main problem is that room prices can be presented in different ways on different hotel sites, so my algorithm should be independent of any particular site. I plan to treat the room types and meal types above as fuzzy sets and compare the words on a web page against those sets using a suitable membership function. Has anyone experience with this, or an idea for my problem?
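
    For matching scraped words against a fixed vocabulary of room and meal types, a plain similarity ratio often gets you most of the way before a full fuzzy-inference setup is needed. A sketch using difflib from the standard library (shown in Python only to illustrate the idea; a Java crawler would use a Levenshtein or Jaro-Winkler implementation), with the 0.75 cut-off as an assumed membership threshold:

        from difflib import SequenceMatcher

        ROOM_TYPES = ["deluxe", "standard", "superior", "suite"]
        MEAL_TYPES = ["half board", "full board", "bed and breakfast", "room only"]

        def best_match(word, vocabulary, threshold=0.75):
            """Return (term, score) for the closest vocabulary term, or None below threshold."""
            word = word.lower().strip()
            scored = [(term, SequenceMatcher(None, word, term).ratio()) for term in vocabulary]
            term, score = max(scored, key=lambda pair: pair[1])
            return (term, score) if score >= threshold else None

        print(best_match("Delux", ROOM_TYPES))        # -> ('deluxe', ...)
        print(best_match("HalfBoad", MEAL_TYPES))     # -> ('half board', ...)

    The vocabulary lists and the threshold stay in one place, so they can be tuned per site without touching the crawler itself.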

    Read the article

  • display wait gif until image is fully loaded

    - by Dimitris Baltas
    Most popular browsers, while rendering an image, display it line by line, top to bottom, as it loads. I have a requirement that a wait GIF be displayed while the image is loading; when the image is fully loaded it should be displayed in place of the wait GIF. If it helps, the site is http://farros.gr, the main image is /images/bg.jpg, and the div containing the image is named #main-content-image.

    Read the article

  • Best way to pass image to server?

    - by Chris
    I have an SL3 application that needs to be able to pass an image to the server, and then the server will generate a PDF file with the image in it, and display it to the user. What I already have in place are the following: (1) Code to convert image to byte array (2) Code to generate PDF file with image The main problem that I am running into is the following: In order to bypass the pop-up blocker, which is a requirement for my application, I am using the following code: var button = new NavigationButton(); button.NavigateUri = new Uri("http://localhost:3616/PrintReport.aspx?ReportIndex=11&ActionType=Get&ReportIdentifier=" + reportIdentifier.ToString() + ""); button.TargetName = "_blank"; button.PerformClick(); Initially, I would pass the image to a WCF web service (as a byte array), and then "navigate" to the ASP.NET page that would display the report. However, if I do this, then I can not use my custom HyperlinkButton class, and, certain browsers, including Safari, will block a new window from opening up. Therefore, it appears that the only option is to use the HyperlinkButton class. What I need to be able to do is to somehow pass the image, as a byte array or some other data type, to the server, such that it can temporarily store the image, even if it is in a server variable, and then immediately retrieve it when I navigate to the PrintReport.aspx page. If I upload the image to an ASP.NET form and then use the HyperlinkButton class to navigate to the PrintReport page, it doesn't work, as the app navigates to the PrintReport page before the system has finished uploading the image. I can't pass it to a web service, as that would require that I navigate to the PrintReport.aspx page in the callback code of the web method that I would be passing the image to, and the HyperlinkButton will not allow that, based on security rules. Any help or ideas would be appreciated. Thanks. Chris

    Read the article

  • Capture backspace, is this an OK solution?

    - by f0rz
    Hello, I'm having a hard time capturing the backspace key in a UITextView. I am trying to capture it in the method - (BOOL)textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range replacementText:(NSString *)text I thought it would be OK to do this: if([text isEqualToString:@"\b"]) { // code ... } But for some reason, when backspace is pressed 'text' is empty. I know I could compare the length of the UITextView's text, but that isn't what I want to achieve. So I found a solution: if I look at [text length], every key on the default keyboard returns a non-zero length except backspace, which returns 0. That way I know when backspace was pressed, using this check: if([text length] == 0) { // BACKSPACE PRESSED } What is your opinion about this? Or can I do it in a better way? Regards, Martin

    Read the article

  • Capture and display console output at the same time

    - by Patrick
    Hi, MSDN states that it is possible in .NET to capture the output of a process and display it in the console window at the same time. Normally, when you set StartInfo.RedirectStandardOutput = true; the console window stays blank. As MSDN doesn't provide a sample for this, I was wondering if anyone has one or could point me to one? "When a Process writes text to its standard stream, that text is normally displayed on the console. By redirecting the StandardOutput stream, you can manipulate or suppress the output of a process. For example, you can filter the text, format it differently, or write the output to both the console and a designated log file." (MSDN) This post is similar to http://stackoverflow.com/questions/786726/capture-standard-output-and-still-display-it-in-the-console-window, by the way, but that post didn't end up with a working sample. Thanks a lot, Patrick
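
    The pattern the MSDN passage describes is a "tee": read the child's redirected standard output as it arrives and write every line both to your own console and to a log. In .NET this is typically done with Process.OutputDataReceived and BeginOutputReadLine; the same flow is sketched here in Python purely to illustrate it (the ping command and log file name are stand-in values):

        import subprocess

        def run_and_tee(cmd, log_path):
            """Run cmd, echoing its stdout to this console while also logging it."""
            with open(log_path, "w") as log:
                proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                        stderr=subprocess.STDOUT, text=True)
                for line in proc.stdout:      # streamed as the child produces it
                    print(line, end="")       # still visible in the console window
                    log.write(line)           # and captured for later use
                return proc.wait()

        # stand-in example command and log file
        run_and_tee(["ping", "-c", "3", "localhost"], "ping.log")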

    Read the article

  • Capture DDE data that is being streamed into an application

    - by user534391
    Hello, I have trading software that gets data from the internet, and I want to capture that tick data. There is a piece of software written by a local developer that is able to do this, and it looks like it uses DDE (NDde.dll, NetSQL.dll). I want to write a custom application that does the same. Any pointers on how I can check how the data is being streamed and how to capture it? I don't think it is encrypted, since otherwise the other developer would not have been able to read it either. I just need to see how the software is getting the data. Thank you.

    Read the article

  • From an Image to an ImageSource

    - by akaphenom
    I have an image (an embedded resource) that I can access and create an Image object from. I can get either the Image object or the stream of bytes that represents the image. However, I want to use that image programmatically as a background image. So how do I set the ImageSource of an ImageBrush from an actual Image (PNG)?

    Read the article

  • Problem exporting NSOpenGLView pixel data to some image file formats using ImageKit & CGImageDestination

    - by walkytalky
    I am developing an application to visualise some experimental data. One of its functions is to render the data in an NSOpenGLView subclass, and allow the resulting image to be exported to a file or copied to the clipboard. The view exports the data as an NSImage, generated like this: - (NSImage*) image { NSBitmapImageRep* imageRep; NSImage* image; NSSize viewSize = [self bounds].size; int width = viewSize.width; int height = viewSize.height; [self lockFocus]; [self drawRect:[self bounds]]; [self unlockFocus]; imageRep=[[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide:width pixelsHigh:height bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace bytesPerRow:width*4 bitsPerPixel:32] autorelease]; [[self openGLContext] makeCurrentContext]; glReadPixels(0,0,width,height,GL_RGBA,GL_UNSIGNED_BYTE,[imageRep bitmapData]); image=[[[NSImage alloc] initWithSize:NSMakeSize(width,height)] autorelease]; [image addRepresentation:imageRep]; [image setFlipped:YES]; // this is deprecated in 10.6 [image lockFocusOnRepresentation:imageRep]; // this will flip the rep [image unlockFocus]; return image; } Copying uses this image very simply, like this: - (IBAction) copy:(id) sender { NSImage* img = [self image]; NSPasteboard* pb = [NSPasteboard generalPasteboard]; [pb clearContents]; NSArray* copied = [NSArray arrayWithObject:img]; [pb writeObjects:copied]; } For file writing, I use the ImageKit IKSaveOptions accessory panel to set the output file type and associated options, then use the following code to do the writing: NSImage* glImage = [glView image]; NSRect rect = [glView bounds]; rect.origin.x = rect.origin.y = 0; img = [glImage CGImageForProposedRect:&rect context:[NSGraphicsContext currentContext] hints:nil]; if (img) { NSURL* url = [NSURL fileURLWithPath: path]; CGImageDestinationRef dest = CGImageDestinationCreateWithURL((CFURLRef)url, (CFStringRef)newUTType, 1, NULL); if (dest) { CGImageDestinationAddImage(dest, img, (CFDictionaryRef)[imgSaveOptions imageProperties]); CGImageDestinationFinalize(dest); CFRelease(dest); } } (I've trimmed a bit of extraneous code here, but nothing that would affect the outcome as far as I can see. The newUTType comes from the IKSaveOptions panel.) This works fine when the file is exported as GIF, JPEG, PNG, PSD or TIFF, but exporting to PDF, BMP, TGA, ICNS and JPEG-2000 produces a red colour artefact on part of the image. Example images are below, the first exported as JPG, the second as PDF. Copy to clipboard does not exhibit this red stripe with the current implementation of image, but it did with the original implementation, which generated the imageRep using NSCalibratedRGBColorSpace rather than NSDeviceRGBColorSpace. So I'm guessing there's some issue with the colour representation in the pixels I get from OpenGL that doesn't get through the subsequent conversions properly, but I'm at a loss as to what to do about it. So, can anyone tell me (i) what is causing this, and (ii) how can I make it go away? I don't care so much about all of the formats but I'd really like at least PDF to work.

    Read the article

  • How to capture actions taken on Windows Media Player

    - by bluenile
    Hi, I want to programmatically detect the state of a movie currently playing in Windows Media Player, i.e. if the movie is maximized I need to detect that and write the word "MAXIMIZED" to a text file; if the movie is paused, write "PAUSED"; if the movie is stopped, write "STOPPED". The capturing needs to happen in the background, completely transparent to the end user, as the user takes actions while watching the movie in Windows Media Player. I am planning to achieve this using Visual Basic 6.0. Kindly provide inputs / pointers on how to go about this. Thanks

    Read the article
