Search Results

Search found 10050 results on 402 pages for 'graphics card'.


  • deleting HBITMAP causes an access violation at runtime.

    - by Oliver
    Hi, I have the following code to take a screenshot of a window and get the colour of a specific pixel in it:

        void ProcessScreenshot(HWND hwnd){
            HDC WinDC;
            HDC CopyDC;
            HBITMAP hBitmap;
            RECT rt;
            GetClientRect (hwnd, &rt);
            WinDC = GetDC (hwnd);
            CopyDC = CreateCompatibleDC (WinDC);

            //Create a bitmap compatible with the DC
            hBitmap = CreateCompatibleBitmap (WinDC,
                                              rt.right - rt.left,   //width
                                              rt.bottom - rt.top);  //height
            SelectObject (CopyDC, hBitmap);

            BitBlt (CopyDC,              //destination
                    0, 0,
                    rt.right - rt.left,  //width
                    rt.bottom - rt.top,  //height
                    WinDC,               //source
                    0, 0, SRCCOPY);

            COLORREF col = ::GetPixel(CopyDC, 145, 293);
            // Do some stuff with the pixel colour....

            delete hBitmap;
            ReleaseDC(hwnd, WinDC);
            ReleaseDC(hwnd, CopyDC);
        }

    The line 'delete hBitmap;' causes a runtime error: an access violation. I guess I can't just delete it like that? Because bitmaps take up a lot of space, if I don't get rid of it I will end up with a huge memory leak. My question is: does releasing the DC the HBITMAP came from deal with this, or does the bitmap stick around even after I have released the DC? If the latter is the case, how do I correctly get rid of the HBITMAP?
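
    Releasing a DC does not free a bitmap created with CreateCompatibleBitmap; it stays allocated until it is destroyed explicitly. A minimal sketch of the cleanup, assuming the rest of the function stays as posted: GDI handles are not C++ objects, so they are freed with DeleteObject rather than delete, the memory DC from CreateCompatibleDC is destroyed with DeleteDC (ReleaseDC is only for DCs obtained with GetDC), and the original bitmap is selected back in first. The function name below is illustrative.

        void ProcessScreenshotFixed(HWND hwnd) {     // hypothetical variant of the posted function
            RECT rt;
            GetClientRect(hwnd, &rt);

            HDC WinDC  = GetDC(hwnd);
            HDC CopyDC = CreateCompatibleDC(WinDC);

            HBITMAP hBitmap = CreateCompatibleBitmap(WinDC,
                                                     rt.right - rt.left,
                                                     rt.bottom - rt.top);

            // Remember the stock bitmap so it can be selected back before cleanup.
            HBITMAP hOld = (HBITMAP)SelectObject(CopyDC, hBitmap);

            BitBlt(CopyDC, 0, 0, rt.right - rt.left, rt.bottom - rt.top,
                   WinDC, 0, 0, SRCCOPY);

            COLORREF col = ::GetPixel(CopyDC, 145, 293);
            (void)col;   // ... use the pixel colour ...

            SelectObject(CopyDC, hOld);
            DeleteObject(hBitmap);    // frees the bitmap's memory
            DeleteDC(CopyDC);         // memory DC from CreateCompatibleDC
            ReleaseDC(hwnd, WinDC);   // window DC from GetDC
        }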

    Read the article

  • Drawing performance with CGImageCreateWithJPEGDataProvider?

    - by Rnegi
    I'm actually curious about this for the iPhone. I am getting an MJPEG stream from a server and trying to render it natively on the iPhone (without using the Safari/web view class). The reason is that while Safari CAN render MJPEG natively, it does not do so at the frame rate I would like. So I tried drawing it natively, but I've run into performance issues, namely a syncing issue between what I'm getting from the server and what I am able to draw onto the screen of the phone. (There should be a little lag, but the drift gets really bad, which is what I want to avoid.) I have a connection set up to my server and I do get the JPEGs; it's just data I insert into an NSMutableArray buffer:

        CFMutableDataRef _t_data_ref = (CFMutableDataRef)[_buffer_array objectAtIndex:0];
        //CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData (_cf_buffer_data);
        CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(_t_data_ref);
        CGImageRef image = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
        CGImageRef imgRef = image;
        CGContextDrawImage(context, CGRectMake(0, 17, 380, 285), imgRef);
        CGImageRelease(image);
        CGDataProviderRelease(imgDataProvider);

    Please note this is the gist of my code, but it should summarize what I am trying to accomplish with regards to drawing. Also, in order to keep the frame rate in sync, I had to detach a separate thread that sleeps X seconds and calls [self setNeedsDisplay]:

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; // Top-level pool
        while (1) {
            //[NSThread sleepForTimeInterval:TIMER_REFRESH_VALUE];
            //sleep(unsigned int );
            usleep(MICRO_REFRESH_VALUE);
            if ([_buffer_array count] > 10) {
                //NSLog(@"stuff %d", [_buffer_array count]);
                //[self setNeedsDisplay];
                [self performSelectorOnMainThread:@selector(setNeedsDisplay) withObject:nil waitUntilDone:NO];
            }
        }
        [pool release]; // Release the objects in the pool.

    My buffer of JPEG data fills up quite quickly, but I can't seem to consume it at the same rate; in fact it's much slower. Are there any documents that describe what kind of performance tuning I can do to make rendering the JPEG to the screen faster? Or am I kind of stuck here? Thanks!
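
    A hedged sketch of one way to keep the display in step with the stream, assuming the decode/draw step simply cannot keep up with the network: draw only the newest JPEG and discard whatever has piled up behind it, so the buffer cannot drift without bound. The method name and the _buffer_array ivar mirror the post but are otherwise illustrative.

        // Called before asking for a redraw; returns the most recent frame and empties the queue.
        - (NSData *)takeLatestFrame {
            NSData *latest = nil;
            @synchronized (_buffer_array) {
                if ([_buffer_array count] > 0) {
                    latest = [[[_buffer_array lastObject] retain] autorelease];
                    // Drop everything older than the frame we are about to draw,
                    // so the backlog (and the lag) cannot keep growing.
                    [_buffer_array removeAllObjects];
                }
            }
            return latest;
        }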

    Read the article

  • Reinterpret a CGImageRef using PyObjC in Python

    - by Michael Rondinelli
    Hi, I'm doing something that's a little complicated to sum up in the title, so please bear with me. I'm writing a Python module that provides an interface to my C++ library, which provides some specialized image manipulation functionality. It would be most convenient to be able to access image buffers as CGImageRefs from Python, so they could be manipulated further using Quartz (via PyObjC, which works well). So I have a C++ function that provides a CGImageRef representation of my own image buffers, like this:

        CGImageRef CreateCGImageRefForImageBuffer(shared_ptr<ImageBuffer> buffer);

    I'm using Boost::Python to create my Python bridge. What is the easiest way for me to export this function so that I can use the CGImageRef from Python? Problems: the CGImageRef type can't be exported directly because it is a pointer to an undefined (opaque) struct. I could make a wrapper function that wraps it in a PyCObject or something in order to send the pointer to Python, but then how do I "cast" that object to a CGImageRef from Python? Is there a better way to go about this?

    Read the article

  • Python class variables not defined when called from outside the module

    - by Jimmy
    I am having some issues with calling a function from outside its module. The scenario is that I have a small class library that uses turtle to do some drawing; a function within the module calls the classes in the module and draws things, etc. This all works fine and dandy when I call the function from within the same file, but if I call myLib.scene() from another file I get variable undefined errors. Code examples: a class

        class Rectangle(object):
            def __init__(self, pen, height=100, width=100, fillcolor=''):
                self.pen = pen
                self.height = height
                self.width = width
                self.fillcolor = fillcolor

            def draw(self, x, y):
                '''draws the rectangle at coordinates x and y'''
                self.pen.goto(x, y)
                if self.fillcolor:
                    self.pen.fillcolor(self.fillcolor)
                    self.pen.fill(True)
                self.pen.down()
                for i in range(0, 4):
                    self.pen.forward(self.height if i % 2 else self.width)
                    self.pen.left(90)

    and the calling function is this

        def scene(pen):
            rect = Rectangle(pen)
            rect.draw(100, 100)

    When I put the line scene(turtle.Turtle()) into the same file I have no issues; the rectangle is drawn and everyone goes home happy. However, if I try to call it from a separate Python file like so:

        myLib.scene(turtle.Turtle())

    I get an error: NameError: global name 'pen' is not defined, in the for loop of my draw method. Even if the line above is in the same file it still bombs out. What is going on?

    Read the article

  • Why is there an invalid context error?

    - by Tattat
    Here is the code I use to draw:

        - (void) drawSomething {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetRGBStrokeColor(context, 1, 0, 0, 1);
            CGContextSetLineWidth(context, 6.0);
            CGContextMoveToPoint(context, 100.0f, 100.0f);
            CGContextAddLineToPoint(context, 200.0f, 200.0f);
            CGContextStrokePath(context);
            NSLog(@"draw");
        }

    But I get errors like this:

        [Session started at 2010-04-03 17:51:07 +0800.]
        Sat Apr 3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextSetRGBStrokeColor: invalid context
        Sat Apr 3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextSetLineWidth: invalid context
        Sat Apr 3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextMoveToPoint: invalid context
        Sat Apr 3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextAddLineToPoint: invalid context
        Sat Apr 3 17:51:09 MacBook.local MyApp[12869] <Error>: CGContextDrawPath: invalid context

    Why does it tell me that the context is invalid?
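
    A hedged sketch of the usual cause: UIGraphicsGetCurrentContext() only returns a valid context while UIKit has one set up, most commonly inside a UIView subclass's drawRect:. Moving the drawing there and triggering it with setNeedsDisplay is one way it could look; the surrounding class is assumed, not from the post.

        // In a hypothetical UIView subclass; UIKit pushes a valid context before calling this.
        - (void)drawRect:(CGRect)rect {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetRGBStrokeColor(context, 1, 0, 0, 1);
            CGContextSetLineWidth(context, 6.0);
            CGContextMoveToPoint(context, 100.0f, 100.0f);
            CGContextAddLineToPoint(context, 200.0f, 200.0f);
            CGContextStrokePath(context);
        }

        // Elsewhere, request a redraw instead of calling the drawing method directly:
        // [myView setNeedsDisplay];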

    Read the article

  • Drawing a CGPathRef in a UIScrollView using the iPhone SDK

    - by Ann
    Hi all. I want to draw a curve in my scroll view using a CGPathRef. I tried to draw the path, but I only see the part of my CGPathRef that falls within the 320x480-pixel screen. Is there a way to do this? Thank you in advance. Here is my code:

        path = CGPathCreateMutable();
        CGContextBeginPath(ctx);
        CGContextMoveToPoint(ctx, [[pathArray objectAtIndex:0] CGPointValue].x,
                                  [[pathArray objectAtIndex:0] CGPointValue].y);
        for (int i = 1; i < [pathArray count]; ++i) {
            CGContextAddLineToPoint(ctx, [[pathArray objectAtIndex:i] CGPointValue].x,
                                         [[pathArray objectAtIndex:i] CGPointValue].y);
        }
        CGContextClosePath(ctx);
        CGContextSetRGBFillColor(ctx, 0.5, 0.3, 0.2, 0.5);
        CGContextFillPath(ctx);

    And when I scroll my view the line does not change its location. My goal is to have this line move during scrolling.
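
    A hedged sketch of one common arrangement, assuming the drawing currently happens in the scroll view itself: put the drawing in a subview that is as large as the whole path, add it to the scroll view, and set contentSize to match, so the path scrolls with the content. The class and variable names are illustrative.

        // A hypothetical content view whose drawRect: runs the path code above.
        PathContentView *content = [[PathContentView alloc] initWithFrame:CGRectMake(0, 0, 1000, 480)];
        [scrollView addSubview:content];
        scrollView.contentSize = content.frame.size;   // anything beyond 320x480 is reached by scrolling
        [content release];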

    Read the article

  • correct fisheye distortion

    - by Will
    I have some points that describe positions in a picture taken with a fisheye lens. I've found this description of how to generate a fisheye effect, but not how to reverse it. How do you calculate the radial distance from the centre to go from fisheye to rectilinear? My function stub looks like this:

        Point correct_fisheye(const Point& p, const Size& img) {
            // to polar
            const Point centre = {img.width/2, img.height/2};
            const Point rel = {p.x-centre.x, p.y-centre.y};
            const double theta = atan2(rel.y, rel.x);
            double R = sqrt((rel.x*rel.x) + (rel.y*rel.y));

            // fisheye undistortion in here please
            //... change R ...

            // back to rectangular
            const Point ret = Point(centre.x+R*cos(theta), centre.y+R*sin(theta));
            fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",
                    p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
            return ret;
        }
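
    A hedged sketch of the "... change R ..." step, assuming an equidistant fisheye model (recorded radius = f*theta, rectilinear radius = f*tan(theta)); real lenses vary, and f here is an assumed focal length in pixels that would need calibrating for the actual camera.

        // Assumed equidistant fisheye: recover the ray angle from the recorded radius,
        // then re-project it as a rectilinear (pinhole) radius.
        double undistort_radius(double R_fisheye, double f /* assumed focal length in pixels */) {
            const double theta = R_fisheye / f;   // angle from the optical axis
            return f * tan(theta);                // radius the point would have in a rectilinear image
        }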

    Read the article

  • For "draggable" div tags that are NOT nested: jQuery/JavaScript div tag “containment” approach/algorithm

    - by Pete Alvin
    Background: I've created an online circuit design application where .draggable() div tags are containers that contain smaller div containers, and so forth. Question: for any particular div tag I need to quickly identify whether it contains other div tags (which may in turn contain other div tags). Since the div tags are draggable, they are NOT nested inside each other in the DOM but are absolutely positioned, so I think a "hit testing" approach is the only way to determine containment, unless there is some "secret" built-in routine that could help with this. I've searched jQuery and I don't see any built-in routine for this. Does anyone know of an algorithm that's quicker than O(n^2)? It seems like I have to walk the list of div tags in an outer loop (n) and use an inner loop (another n) to compare against all other div tags with a "containment test" (position, width, height), building a list of contained div tags. That's n-squared. Then I have to build a list of all nested div tags by concatenating the contained lists, so the total would be O(n^2)+n. There must be a better way?
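
    A hedged sketch of the per-pair containment test itself, using jQuery's geometry helpers; the function name is illustrative, and the pairing question is left open (sorting elements by area descending at least means each element only needs testing against smaller ones, though that is still quadratic in the worst case).

        // Does $outer's box fully enclose $inner's box? (absolute page coordinates)
        function encloses($outer, $inner) {
            var o = $outer.offset(), i = $inner.offset();
            return o.left <= i.left &&
                   o.top  <= i.top  &&
                   o.left + $outer.outerWidth()  >= i.left + $inner.outerWidth() &&
                   o.top  + $outer.outerHeight() >= i.top  + $inner.outerHeight();
        }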

    Read the article

  • Building a clip area in a UIView from path objects in its subviews

    - by hkatz
    I'm trying to produce a clipping area in a UIView that's generated from path objects in its subviews. For example, I might have one subview containing a square and another containing a circle, and I want to be able to produce a clip in the parent superview that's the union of both these shapes. Can someone explain how to do this? About all I've been able to figure out so far is that (1) the superview's drawRect: method is called before its subviews' drawRects are, and (2) the CGContextRef that's accessible in all three instances is the same. Other than that I'm stumped. Thanks, Howard
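
    A hedged sketch of building the combined clip in the superview's drawRect:, assuming the shapes can be described there (or handed up from the subviews) as subpaths of one current path; adding both subpaths before a single CGContextClip clips to their combined area. The rectangles are illustrative.

        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Two separate subpaths: together they define the clip region.
        CGContextAddRect(ctx, CGRectMake(20, 20, 100, 100));            // the square subview's shape
        CGContextAddEllipseInRect(ctx, CGRectMake(80, 80, 120, 120));   // the circle subview's shape
        CGContextClip(ctx);

        // Everything drawn after this point is confined to the union of the two shapes.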

    Read the article

  • How to write text (using CGContextShowTextAtPoint) dynamically near graph X- and Y-axis intervals?

    - by Rajendra Bhole
    I developed a graph using an NSObject class and CGContext calls. The following code draws the X- and Y-axis lines and their tick-mark intervals dynamically:

        CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0);
        CGContextSetLineWidth(ctx, 2.0);
        CGContextMoveToPoint(ctx, 30.0, 200.0);
        CGContextAddLineToPoint(ctx, 30.0, 440.0);

        for (float y = 400.0; y >= 200.0; y -= 30) {
            CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0);
            CGContextMoveToPoint(ctx, 28, y);
            CGContextAddLineToPoint(ctx, 32, y);
            CGContextStrokePath(ctx);
            //CGContextClosePath(ctx);
        }

        CGContextMoveToPoint(ctx, 10, 420.0);
        CGContextAddLineToPoint(ctx, 320, 420.0);
        //CGContextAddLineToPoint(ctx, 320.0, 420.0);
        //CGContextStrokePath(ctx);

        for (float x = 60.0; x <= 260.0; x += 30) {
            CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0);
            CGContextMoveToPoint(ctx, x, 418.0);
            CGContextAddLineToPoint(ctx, x, 422.0);
            CGContextStrokePath(ctx);
            CGContextClosePath(ctx);
        }

    I want to write dynamic text on the X- and Y-axis lines near the intervals (for example, the X-axis denoting the days of the week and the Y-axis denoting something per something). Thanks.
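
    A hedged sketch of labelling each X-axis tick with CGContextShowTextAtPoint, which takes a C string, so each label is formatted with snprintf; the text matrix is flipped so the glyphs are not drawn upside down in a UIKit view. The font, offsets and label values are assumptions.

        CGContextSelectFont(ctx, "Helvetica", 10.0, kCGEncodingMacRoman);
        CGContextSetTextDrawingMode(ctx, kCGTextFill);
        CGContextSetRGBFillColor(ctx, 0.0, 0.0, 0.0, 1.0);
        // Counter UIKit's flipped coordinate system for text.
        CGContextSetTextMatrix(ctx, CGAffineTransformMakeScale(1.0, -1.0));

        int day = 1;
        for (float x = 60.0; x <= 260.0; x += 30, ++day) {
            char label[8];
            snprintf(label, sizeof(label), "%d", day);              // e.g. a day number per tick
            CGContextShowTextAtPoint(ctx, x - 4.0, 434.0, label, strlen(label));
        }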

    Read the article

  • Soft Paint Bucket Fill: Colour Equality

    - by Bart van Heukelom
    I'm making a small app where children can fill preset illustrations with colours. I've successfully implemented an MS Paint-style paint bucket using the flood fill algorithm. However, pixels near the edges of image elements are left unfilled, because the lines are anti-aliased. This is because the current condition on whether to fill is colourAtCurrentPixel == colourToReplace (the colours are RGB uints). I'd like to add a smoothing/threshold option like in Photoshop and other sophisticated tools, but what's the algorithm to determine the equality/distance between two colours?

        if (match(pixel(x,y), colourToReplace))
            setpixel(x,y,colourToReplaceWith)

    How do I fill in match()? Here's my current full code:

        var b:BitmapData = settings.background;
        b.lock();
        var from:uint = b.getPixel(x,y);
        var q:Array = [];
        var xx:int;
        var yy:int;
        var w:int = b.width;
        var h:int = b.height;
        q.push(y*w + x);
        while (q.length != 0) {
            var xy:int = q.shift();
            xx = xy % w;
            yy = (xy - xx) / w;
            if (b.getPixel(xx,yy) == from) {
                b.setPixel(xx,yy,to);
                if (xx != 0)   q.push(xy-1);
                if (xx != w-1) q.push(xy+1);
                if (yy != 0)   q.push(xy-w);
                if (yy != h-1) q.push(xy+w);
            }
        }
        b.unlock(null);
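
    A hedged sketch of match() using squared Euclidean distance in RGB space, the simplest workable colour-distance measure (per-channel or perceptually weighted distances are refinements); tolerance is an assumed parameter, roughly 0 for exact matches up to around 30-50 for soft, anti-aliased edges.

        // true if the two 24-bit RGB colours are within `tolerance` of each other
        // (straight-line distance in RGB space, compared without a square root).
        function match(c1:uint, c2:uint, tolerance:Number):Boolean {
            var dr:int = ((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF);
            var dg:int = ((c1 >> 8)  & 0xFF) - ((c2 >> 8)  & 0xFF);
            var db:int = ( c1        & 0xFF) - ( c2        & 0xFF);
            return (dr*dr + dg*dg + db*db) <= tolerance * tolerance;
        }

    In the flood fill above, the test b.getPixel(xx,yy) == from would then become match(b.getPixel(xx,yy), from, tolerance).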

    Read the article

  • CGPath and Quartz 2D

    - by iphonedevnoob
    I want to create a triangle using Quartz 2D functions. The 3 edges of the triangle should be in different colors. I am able to create the triangle but not able to set the color of each edge or subpath separately. Any suggestions or sample code are much appreciated.
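
    A hedged sketch of one way this could look: since a single stroke uses a single colour, each edge is stroked as its own short path with its own stroke colour. The coordinates are illustrative.

        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGPoint a = CGPointMake(160, 50), b = CGPointMake(60, 250), c = CGPointMake(260, 250);
        CGContextSetLineWidth(ctx, 3.0);

        CGContextSetRGBStrokeColor(ctx, 1, 0, 0, 1);   // edge a-b in red
        CGContextMoveToPoint(ctx, a.x, a.y);
        CGContextAddLineToPoint(ctx, b.x, b.y);
        CGContextStrokePath(ctx);

        CGContextSetRGBStrokeColor(ctx, 0, 1, 0, 1);   // edge b-c in green
        CGContextMoveToPoint(ctx, b.x, b.y);
        CGContextAddLineToPoint(ctx, c.x, c.y);
        CGContextStrokePath(ctx);

        CGContextSetRGBStrokeColor(ctx, 0, 0, 1, 1);   // edge c-a in blue
        CGContextMoveToPoint(ctx, c.x, c.y);
        CGContextAddLineToPoint(ctx, a.x, a.y);
        CGContextStrokePath(ctx);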

    Read the article

  • iPhone - Is it ok to override UITableViewCell setSelected:animated

    - by Brian
    I am drawing custom UITableViewCells. My cells are opaque and are completely drawn in the cell's drawRect: to help with performance. I want to handle the look of a selected cell myself; if I don't, the contents of my cell are covered up by the selectedBackgroundView that is added. Is it common or acceptable to override the cell's setSelected:animated: method so this is done properly? I guess if I did that, I would not call super's setSelected:, since I would be handling how the cell shows that it's selected, and I would also have to set the selected property of the cell myself. Any help would be great. Thanks.
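
    A hedged sketch of how such an override might look, assuming the custom drawRect: already knows how to render a selected state; calling through to super keeps the selected property in sync while the styling stays custom, with selectionStyle set to UITableViewCellSelectionStyleNone so UIKit doesn't add its own overlay.

        // In the custom cell subclass (illustrative):
        - (void)setSelected:(BOOL)selected animated:(BOOL)animated {
            [super setSelected:selected animated:animated];   // keeps self.selected correct
            [self setNeedsDisplay];                           // drawRect: repaints using self.selected
        }

        // When configuring the cell:
        // cell.selectionStyle = UITableViewCellSelectionStyleNone;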

    Read the article

  • Metric 3d reconstruction

    - by srand
    I'm trying to reconstruct 3D points from 2D image correspondences. My camera is calibrated. The test images are of a checkered cube and the correspondences are hand-picked. Radial distortion has been removed. After triangulation, however, the reconstruction seems to be wrong. The X and Y values seem correct, but the Z values are all about the same and do not vary along the cube; the 3D points look as if they were flattened along the Z-axis. What is going wrong in the Z values? Do the points need to be normalized or converted from image coordinates at any point, say before the fundamental matrix is computed? (If this is too vague I can explain my general process or elaborate on parts.)

    Read the article

  • How do I translate a point from one rect to a point in a scaled rect?

    - by Bill
    I have an image (i) that's been scaled from its original size (rectLarge) to fit a smaller rectangle (rectSmall). Given a point p in rectSmall, how can I translate this to a point in rectLarge? To make this concrete, suppose I have a 20x20 image that's been scaled to a 10x10 rect. The point (1, 1) in the smaller rect should be scaled up to a point in the larger rect (i.e. (2, 2)). I'm trying to achieve this with:

        result.x = point.x * (destRect.size.width / srcRect.size.width);
        result.y = point.y * (destRect.size.height / srcRect.size.height);

    However, the points generated by this code are not correct; they do not map to the appropriate point in the original image. What am I doing wrong?
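
    A hedged sketch of the full mapping, assuming the rects may not be anchored at the origin (if both are, the origin terms vanish and the scale-only formula above should already give (2, 2) for (1, 1)); integer truncation in the division is the other usual suspect if the dimensions are not floating-point.

        // Map a point in srcRect (the small rect) to the corresponding point in destRect (the large rect).
        result.x = (point.x - srcRect.origin.x) * (destRect.size.width  / srcRect.size.width)  + destRect.origin.x;
        result.y = (point.y - srcRect.origin.y) * (destRect.size.height / srcRect.size.height) + destRect.origin.y;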

    Read the article

  • How do bezier handles work?

    - by user146780
    On Wikipedia I found information about Bézier curves and wrote a function to generate the in-between points for a Bézier polygon. I noticed that Expression Design uses Bézier handles, which allow a circle to be made with 4 points, each with a Bézier handle. I'm just not sure mathematically how this works in relation to the formula for the Bézier point at time T. How do these handle vectors modify the shape? Basically, what's their relation to the Bézier formula? Thanks
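
    A hedged sketch of the usual relationship: each segment between two anchor points is a cubic Bézier whose inner control points are the anchors displaced by their handle vectors, so the handles literally are P1 and P2 in the standard formula B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3. The struct and function names below are illustrative.

        typedef struct { double x, y; } Vec2;

        /* Point at time t on the segment from anchor a0 (outgoing handle h0)
           to anchor a1 (incoming handle h1). Handles are offsets from their anchors. */
        Vec2 bezier_point(Vec2 a0, Vec2 h0, Vec2 a1, Vec2 h1, double t) {
            Vec2 p0 = a0;
            Vec2 p1 = { a0.x + h0.x, a0.y + h0.y };   /* anchor plus outgoing handle */
            Vec2 p2 = { a1.x + h1.x, a1.y + h1.y };   /* anchor plus incoming handle */
            Vec2 p3 = a1;
            double u = 1.0 - t;
            Vec2 r = {
                u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
                u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y
            };
            return r;
        }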

    Read the article

  • Odd values/movement with UITouch and CGPoint.

    - by Joshua
    I'm getting odd numbers from UITouch and CGPoint, and one is different; I also think this may be causing a flickering effect in my app when I try to move something by following a touch. This is the code I'm using:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@"touchDown");
            UITouch *touch = [touches anyObject];
            firstTouch = [touch locationInView:self.view];
            if (CGRectContainsPoint(but.frame, firstTouch)) {
                butContains = YES;
                NSLog(@"butContains = %d", butContains);
            }
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            currentTouch = [touch locationInView:self.view];
            NSInteger x = currentTouch.x;
            NSInteger y = currentTouch.y;
            CGFloat CGX = (CGFloat)x;
            CGFloat CGY = (CGFloat)y;
            if (butContains == YES) {
                NSLog(@"touch in subView/contentView");
                sub.frame = CGRectMake(CGX, CGY, 130.0, 21.0);
            }
            NSLog(@"touch moved");
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            currentTouch = [touch locationInView:self.view];
            NSLog(@"User tapped at %@", NSStringFromCGPoint(currentTouch));
            NSLog(@"Point %a, %a", currentTouch.x, currentTouch.y);
            NSInteger x = currentTouch.x;
            NSInteger y = currentTouch.y;
            NSLog(@"Point %a, %a", y, x);
            CGFloat CGX = (CGFloat)x;
            CGFloat CGY = (CGFloat)y;
            NSLog(@"Point %g, %g", CGX, CGY);
            if (butContains == YES) {
                NSLog(@"touch in subView/contentView");
                sub.frame = CGRectMake(CGX, CGY, 130.0, 21.0);
            }
            butContains = NO;
            NSLog(@"touch ended");
        }

        - (IBAction)add:(id)sender {
            InSightViewController *contentView = [[InSightViewController alloc] initWithNibName:@"SubView" bundle:[NSBundle mainBundle]];
            [contentView loadView];
            [self.view insertSubview:contentView.view atIndex:0];
        }

    This is what I get from the touchesEnded method in the debugger:

        2010-04-20 20:06:13.045 InSight[25042:207] User tapped at {50, 78}
        2010-04-20 20:06:13.047 InSight[25042:207] Point 0x1.9p+5, 0x1.38p+6
        2010-04-20 20:06:13.048 InSight[25042:207] Point 0x1.900000027p-1037, 0x1.38p+6
        2010-04-20 20:06:13.048 InSight[25042:207] Point 50, 78

    And this is what's happening in the simulator: fwdr.org/file:y8bd

    As this is a complicated problem, here is the source code of my Xcode project as well: http://cl.ly/Qjj
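
    A hedged aside on the logging, since it accounts for the oddest-looking numbers: %a prints a double in hexadecimal floating-point notation, and passing NSInteger values where %a expects doubles is undefined behaviour in a varargs call, so the 0x1.9p+5-style lines are a formatting artefact rather than bad touch data (0x1.9p+5 is 50 and 0x1.38p+6 is 78). Something like this would log the same points readably:

        NSLog(@"Point %f, %f", currentTouch.x, currentTouch.y);            // plain decimal CGFloats
        NSLog(@"Point %d, %d", (int)currentTouch.x, (int)currentTouch.y);  // truncated to ints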

    Read the article

  • iPhone image distortion

    - by Corey Floyd
    Are there any reasons why the simulator will display UIImageViews properly, but the iPhone will display them incorrectly? My process:

        1. An image in a PNG file
        2. Start a UIGraphicsBeginImageContext()
        3. Draw the PNG in a CGRect
        4. Draw text in the CGRect
        5. Create a UIImage from the context
        6. Set the image of a UIImageView to the UIImage
        7. Set the frame of the UIImageView to the size of the PNG
        8. Add it as a subview

    Outcome: the image does not display correctly. The rightmost 1-3 pixels of the image are just a vertical white line. This occurs only on the device and not in the simulator. I can fix the problem, but only by increasing the size of the UIImageView: if I increase the size.height of the UIImageView by 1 pixel, it displays the UIImage correctly. Of course, this leaves the iPhone to scale my image before drawing it on screen, which is not desirable. Any ideas why this occurs, or any fixes for it? (I will post my code if needed.)

    Read the article

  • How to read and modify the colorspace of an image in c#

    - by Matthias
    I'm loading a Bitmap from a JPG file. If the image is not 24-bit RGB, I'd like to convert it. The conversion should be fairly fast. The images I'm loading can be huge (9000x9000 pixels, with a compressed size of 40-50MB). How can this be done? Btw: I don't want to use any external libraries if possible. But if you know of an open source utility class performing the most common imaging tasks, I'd be happy to hear about it. Thanks in advance.
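
    A hedged sketch using only System.Drawing, which is presumably already in play if the JPG is loaded into a Bitmap: check PixelFormat and, when it isn't already Format24bppRgb, redraw the source into a new 24bpp bitmap (the helper would live in whatever class does the loading). Whether this is fast enough for 9000x9000 images would need measuring.

        using System.Drawing;
        using System.Drawing.Imaging;

        static Bitmap EnsureRgb24(Bitmap source)
        {
            if (source.PixelFormat == PixelFormat.Format24bppRgb)
                return source;                                   // already what we want

            var converted = new Bitmap(source.Width, source.Height, PixelFormat.Format24bppRgb);
            using (var g = Graphics.FromImage(converted))
            {
                // Drawing the old bitmap into the new one performs the pixel-format conversion.
                g.DrawImage(source, new Rectangle(0, 0, source.Width, source.Height));
            }
            return converted;
        }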

    Read the article

  • Nonblocking texture upload on iPhone and other OpenGL ES platforms

    - by spurserh
    Hello, I am doing some work which involves drawing video frames in real time in OpenGL ES. Right now I am using glTexImage2D to transfer the data, in the absence of Pixel Buffer Objects and the like. An answer below suggests that glTexImage2D always blocks, even if the referenced texture object is not used for any drawing. Is there a way to do a nonblocking texture upload with OpenGL ES (any version)? Thank you very much, Sean
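
    A hedged sketch of one mitigation when truly asynchronous uploads aren't available: allocate the texture once, then push each new frame up in horizontal bands with glTexSubImage2D (possibly spreading the bands across several frames), so no single call stalls for the whole image. The variable names, sizes and band count are illustrative.

        // One-time allocation (no PBOs assumed): reserve storage without supplying data.
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        // Per frame: update in bands instead of re-specifying the whole texture at once.
        const int bands = 4;
        int bandHeight = height / bands;
        for (int i = 0; i < bands; ++i) {
            const unsigned char *src = pixels + (size_t)i * bandHeight * width * 4;
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, i * bandHeight,
                            width, bandHeight, GL_RGBA, GL_UNSIGNED_BYTE, src);
        }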

    Read the article

  • Visual Python - Visualize graphs relating to a movement

    - by Francisco P.
    Hello, everyone! I'm working with Visual Python on a project where I need to simulate a physical movement. In a window separate from the one the actual 3D sim is running in, I'd like to present two graphs, both related to the movement: how the velocity and angular velocity progress over time, and how the movement and rotation progress over time. All of these variables are refreshed once per cycle (inside a while(true) loop). How can I accomplish this? Thank you for your time!
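
    A hedged sketch using VPython's own graphing module, visual.graph, which opens its own plot window separate from the 3D display; the curve names and the scalar values plotted each cycle are illustrative and assume classic VPython.

        from visual.graph import *   # gdisplay, gcurve, color, etc.

        # A second window, independent of the 3D scene, holding two curves.
        gdisplay(title='Motion over time', xtitle='t', ytitle='value')
        vel_curve = gcurve(color=color.cyan)    # speed over time
        ang_curve = gcurve(color=color.yellow)  # angular speed over time

        t, dt = 0.0, 0.01
        while True:
            # ... advance the simulation here, producing scalar speed and ang_speed ...
            vel_curve.plot(pos=(t, speed))
            ang_curve.plot(pos=(t, ang_speed))
            t += dt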

    Read the article

  • Video Synthesis - Making waves, patterns, gradients...

    - by Nathan
    I'm writing a program to generate some wild visuals. So far I can paint each pixel with a random blue value:

        for (y = 0; y < YMAX; y++) {
            for (x = 0; x < XMAX; x++) {
                b = rand() % 255;
                setPixelColor(x,y,r,g,b);
            }
        }

    I'd like to do more than just make blue noise, but I'm not sure where to start (Google isn't helping me much today), so it would be great if you could share anything you know on the subject or some links to related resources.
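
    A hedged sketch of one classic starting point, a sine-based "plasma" pattern dropped into the same per-pixel loop; the frequencies and channel phase offsets are arbitrary knobs to play with, and t is an assumed frame counter that animates the pattern.

        #include <math.h>

        const double PI = 3.14159265358979;
        for (y = 0; y < YMAX; y++) {
            for (x = 0; x < XMAX; x++) {
                /* Sum a few sines of the coordinates; each term adds another "wave". */
                double v = sin(x / 16.0)
                         + sin(y / 8.0)
                         + sin((x + y) / 16.0)
                         + sin(sqrt((double)(x * x + y * y)) / 8.0)
                         + sin(t / 10.0);                 /* assumed frame counter for animation */
                r = (int)(128 + 127 * sin(v * PI));        /* phase-shifted channels give colour */
                g = (int)(128 + 127 * sin(v * PI + 2.0));
                b = (int)(128 + 127 * sin(v * PI + 4.0));
                setPixelColor(x, y, r, g, b);
            }
        }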

    Read the article
