Search Results

Search found 8165 results on 327 pages for '3d graphics'.

Page 109 of 327

  • How do I arbitrarily distort a textured polygon?

    - by Archagon
    I'd like to write a program that lets me arbitrarily distort a textured polygon by dragging its vertices. I want the texture to distort fluidly and without overlap, assuming the new polygon doesn't intersect itself. I should also be able to repeat the process with the new shape, and with a minimum amount of loss. Are there any algorithms for doing this?
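
    One common way to get this behaviour, sketched roughly here: triangulate the polygon, pin the original texture coordinates to each vertex, and let barycentric interpolation carry the texture as the vertices are dragged; mean value coordinates generalise the same idea when the interior should follow the whole polygon smoothly. The Vec2 type and function names below are made up for the illustration.

      #include <array>

      struct Vec2 { float x, y; };

      // Barycentric coordinates of p with respect to triangle (a, b, c).
      // If the deformed triangle stays non-degenerate, reusing these weights
      // on the original UVs warps the texture fluidly and without overlap.
      std::array<float, 3> barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c) {
          float den = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
          float w0  = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / den;
          float w1  = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / den;
          return { w0, w1, 1.0f - w0 - w1 };
      }

      // UV for a point inside the *deformed* triangle, using the original UVs.
      Vec2 warpedUV(Vec2 p, Vec2 a, Vec2 b, Vec2 c, Vec2 uvA, Vec2 uvB, Vec2 uvC) {
          std::array<float, 3> w = barycentric(p, a, b, c);
          return { w[0] * uvA.x + w[1] * uvB.x + w[2] * uvC.x,
                   w[0] * uvA.y + w[1] * uvB.y + w[2] * uvC.y };
      }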

    Read the article

  • How to access pixels of an NSBitmapImageRep?

    - by Paperflyer
    I have an NSBitmapImageRep that is created like this: NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide:waveformSize.width pixelsHigh:waveformSize.height bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:YES colorSpaceName:NSCalibratedRGBColorSpace bytesPerRow:0 bitsPerPixel:0]; Now I want to access the pixel data so I get a pointer to the pixel planes using unsigned char *bitmapData; [imageRep getBitmapDataPlanes:&bitmapData]; According to the documentation this returns a C array of five character pointers. But how can it do that? Since the type of the argument is unsigned char **, it can only return an array of chars, but not an array of char pointers. So, this leaves me wondering how to access the individual pixels. Do you have an idea how to do that? (I know there is the method – setColor:atX:y:, but it seems to be pretty slow if invoked for every single pixel of a big bitmap.)

    Read the article

  • Video Synthesis - Making waves, pattern, gradients...

    - by Nathan
    I'm writing a program to generate some trippy visuals. My code paints each pixel with a random blue value, which loops at 0.04-second intervals. for (y = 0; y < 5.5; y += 0.2) { for (x = 0; x < 7.5; x += 0.2) { b = rand() / ((double) RAND_MAX); setPixelColor(x,y,r,g,b); } } I'd like to do more than just make blue noise... but my maths is a bit rusty, and Google isn't helping me much today, so it would be great if you could share anything you know about making waves, patterns, gradient animations, etc., or links to such material.
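
    For anyone after a starting point beyond noise, a rough sketch of the classic "plasma" trick: sum a few sine waves of the pixel coordinates and time and map the result onto a colour channel. The loop bounds and setPixelColor come from the question's code and are assumed to exist; everything else is illustrative.

      #include <cmath>

      // Assumed from the question's code.
      extern void setPixelColor(double x, double y, double r, double g, double b);

      // Sums three sine waves over x, y and time t, normalises to [0, 1],
      // and uses the result as the blue channel for a smoothly animating wave.
      void paintPlasma(double t) {
          for (double y = 0.0; y < 5.5; y += 0.2) {
              for (double x = 0.0; x < 7.5; x += 0.2) {
                  double v = std::sin(x * 2.0 + t)
                           + std::sin(y * 3.0 + t * 0.7)
                           + std::sin((x + y) * 1.5 + t * 1.3);
                  double b = 0.5 + 0.5 * (v / 3.0);
                  setPixelColor(x, y, 0.0, 0.0, b);
              }
          }
      }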

    Read the article

  • imageWithCGImage and memory

    - by Adam Ernst
    If I use [UIImage imageWithCGImage:], passing in a CGImageRef, do I then release the CGImageRef or does UIImage take care of this itself when it is deallocated? The documentation isn't entirely clear. It says "This method does not cache the image object." Originally I called CGImageRelease on the CGImageRef after passing it to imageWithCGImage:, but that caused a malloc_error_break warning in the Simulator claiming a double-free was occurring.

    Read the article

  • Auto scrolling or shifting a bitmap in .NET

    - by mikej
    I have a .NET GDI+ bitmap object (or, if it makes the problem easier, a WPF bitmap object) and what I want to do is shift the whole lot by dx,dy (whole pixels), and I would ideally like to do it using .NET, but API calls are OK. It has to be efficient because it's going to be called say 10,000 times with moderately large bitmaps. I have implemented a solution using DrawImage - but it's slow and it halts the application for minutes while the GC cleans up the temp objects that have been used. I have also started to work on a version using ScrollDC but so far have had no luck getting it to work on the DC of the bitmap (I can make it work by creating an API bitmap with the bitmap handle, then creating a compatible DC and calling ScrollDC, but then I have to put it back into the bitmap object). There has to be an "in-place" way of shifting a bitmap. mikej
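
    For what it's worth, a rough sketch of the "in-place" idea: once you have a pointer to the raw pixel buffer (LockBits in GDI+, or any raw bitmap), a vertical shift is one memmove of whole rows and a horizontal shift is one memmove per row. The C++ below is illustrative and assumes a tightly packed 32-bit-per-pixel buffer; vacated pixels are left untouched.

      #include <cstring>
      #include <cstdint>
      #include <cstdlib>

      // Shift a packed 32bpp buffer by (dx, dy) in place.
      void shiftInPlace(uint32_t* pixels, int width, int height, int dx, int dy) {
          // Vertical shift: move whole rows with a single memmove.
          if (dy != 0) {
              int rows = height - std::abs(dy);
              if (rows > 0) {
                  uint32_t* src = pixels + (dy > 0 ? 0 : -dy) * width;
                  uint32_t* dst = pixels + (dy > 0 ? dy : 0) * width;
                  std::memmove(dst, src, size_t(rows) * width * sizeof(uint32_t));
              }
          }
          // Horizontal shift: one memmove per row.
          if (dx != 0) {
              int cols = width - std::abs(dx);
              if (cols > 0) {
                  for (int y = 0; y < height; ++y) {
                      uint32_t* row = pixels + size_t(y) * width;
                      std::memmove(row + (dx > 0 ? dx : 0),
                                   row + (dx > 0 ? 0 : -dx),
                                   size_t(cols) * sizeof(uint32_t));
                  }
              }
          }
      }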

    Read the article

  • How to use Mesa3D on Mac OS X and Windows

    - by gutsblow
    Hello all, I need to use Mesa3D for a cross-platform application (Windows and Mac only) which uses only offline software rendering. The reason I wanted to use Mesa3D is that it has the same drawing calls as OpenGL and they are really easy. Now I know that Apple itself has a software implementation (which I heard is flaky), but I prefer using Mesa so that it's a lot easier for me to maintain the code on both platforms. On Windows I managed to compile three DLLs from the Mesa3D source, but I don't know what to do with them. On Mac OS X I am completely clueless. I would highly appreciate your help. Thank you once again very much!
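
    As a possible starting point, a rough sketch (assuming the build produced the OSMesa part of Mesa, which is the interface intended for pure software off-screen rendering): create an OSMesa context bound to a plain memory buffer, then issue ordinary OpenGL calls on both platforms. Illustrative C++:

      #include <GL/osmesa.h>
      #include <GL/gl.h>
      #include <vector>

      int main() {
          const int width = 640, height = 480;
          std::vector<unsigned char> buffer(width * height * 4);   // RGBA output

          // Software-only rendering context bound to the in-memory buffer.
          OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
          if (!ctx || !OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE,
                                         width, height))
              return 1;

          // Ordinary OpenGL calls now render into `buffer`.
          glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
          glClear(GL_COLOR_BUFFER_BIT);
          glFinish();

          // ... read pixels straight out of `buffer` here ...

          OSMesaDestroyContext(ctx);
          return 0;
      }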

    Read the article

  • QPainter paints garbage

    - by DSblizzard
    Fragment of program code: def add_link(Item0Num, Item1Num): global Mw, View # Mw - MainWindow if Item0Num != Item1Num and not link_exists(Item0Num, Item1Num): append( links_to(Item1Num), Item0Num ) append( links_from(Item0Num), Item1Num ) LinkGi = TLinkGi() Mw.Scene.addItem(LinkGi) LinkGi.setZValue(200) LinkGi.scale(1 / View.Scale, 1 / View.Scale) LinkGi.Item0Num = Item0Num LinkGi.Item1Num = Item1Num class TLinkGi(QGraphicsItem): def paint(self, Painter, StyleOptionGraphicsItem, Widget): global Mw, View Pen = QPen(Qt.black, 1) Painter.setPen(Pen) X0, Y0 = task_center(self.Item0Num) self.setPos(X0, Y0) X1, Y1 = task_center(self.Item1Num) X, Y = int( (X1 - X0) * View.Scale ), int( (Y1 - Y0) * View.Scale ) Painter.drawLine(0, 0, X, Y) #Mw.Scene.update(0, 0, Plan.Size, Plan.Size) # (1) #Mw.gvMain.repaint() # (2) def boundingRect(self): global View Rect = QRectF(0, 0, Plan.Size, Plan.Size) return Rect This paints such garbage: http://img697.imageshack.us/content_round.php?page=done&l=img697/5395/qpaintergarbage1.jpg When lines (1) and (2) are uncommented things don't get much better: http://img63.imageshack.us/content_round.php?page=done&l=img63/9693/qpaintergarbage0.jpg Please help me to solve this problem.
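
    A rough sketch of the pattern Graphics View generally expects, shown in Qt C++ (the PyQt calls are the same): boundingRect() should describe only the item's own extent in local coordinates, and paint() should only draw, never call setPos() or report a scene-sized rectangle, otherwise stale regions like those in the screenshots are left behind. The class and member names below are illustrative.

      #include <QGraphicsItem>
      #include <QPainter>
      #include <QPen>

      // A link line between two tracked points, in the item's own coordinates.
      class LinkItem : public QGraphicsItem {
      public:
          // Call from *outside* paint() whenever an endpoint moves.
          void setEndpoints(const QPointF& p0, const QPointF& p1) {
              prepareGeometryChange();       // tell the scene the rect will change
              setPos(p0);                    // item origin = first endpoint
              m_end = p1 - p0;               // second endpoint in local coords
          }

          QRectF boundingRect() const override {
              // Only the area the item covers, plus a small pen margin.
              return QRectF(QPointF(0, 0), m_end).normalized().adjusted(-1, -1, 1, 1);
          }

          void paint(QPainter* painter, const QStyleOptionGraphicsItem*,
                     QWidget*) override {
              painter->setPen(QPen(Qt::black, 1));
              painter->drawLine(QPointF(0, 0), m_end);   // draw only, no setPos()
          }

      private:
          QPointF m_end;
      };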

    Read the article

  • tangent of two circles

    - by harryovers
    Hello, I am trying to write some code that will draw the line which is a tangent between 2 circles. So far I have been able to draw multiple circles, and lines between the centers. I have a class which stores the values used in drawing the circles (radius, position). What I need is a method in this class to find all possible tangents between 2 circles. Any help would be great. This is what I have so far (it could very well be a load of rubbish) public static Vector2[] Tangents(circle c1, circle c2) { if (c2.radius > c1.radius) { circle temp = c1; c1 = c2; c2 = temp; } circle c0 = new circle(c1.radius - c2.radius, c1.center); Vector2[] tans = new Vector2[2]; Vector2 dir = _point - _center; float len = (float)Math.Sqrt((dir.X * dir.X) + (dir.Y * dir.Y)); float angle = (float)Math.Atan2(dir.X, dir.Y); float tan_length = (float)Math.Sqrt((len * len) - (_radius * _radius)); float tan_angle = (float)Math.Asin(_radius / len); tans[0] = new Vector2((float)Math.Cos(angle + tan_angle), (float)Math.Sin(angle + tan_angle)); tans[1] = new Vector2((float)Math.Cos(angle - tan_angle), (float)Math.Sin(angle - tan_angle)); Vector2 dir0 = c0.center - tans[0]; Vector2 dir1 = c0.center - tans[1]; Vector2 tan00 = Vector2.Add(Vector2.Multiply(tans[0], (float)c2.radius), c1.center); Vector2 tan01 = c2.center; Vector2 tan10 = Vector2.Add(Vector2.Multiply(tans[1], (float)c2.radius), c1.center); Vector2 tan11 = c2.center; }
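
    For reference, a rough sketch of the underlying geometry (with a bare-bones vector type; names are made up for the example): for an outer tangent, the shared unit normal makes an angle acos((r1 - r2) / d) with the centre line, and the line touches each circle at centre + radius * normal; the inner tangents use acos(-(r1 + r2) / d) and subtract the normal on the second circle.

      #include <cmath>
      #include <vector>

      struct Vec2d { double x, y; };
      struct Segment { Vec2d a, b; };   // one tangent line, as its two touch points

      // Outer tangents of circles (c1, r1) and (c2, r2).
      std::vector<Segment> outerTangents(Vec2d c1, double r1, Vec2d c2, double r2) {
          std::vector<Segment> result;
          double dx = c2.x - c1.x, dy = c2.y - c1.y;
          double d = std::sqrt(dx * dx + dy * dy);
          if (d < 1e-9 || std::fabs(r1 - r2) > d) return result;   // none exist

          double base = std::atan2(dy, dx);         // direction of the centre line
          double off  = std::acos((r1 - r2) / d);   // angle of the shared normal
          for (double sign : { 1.0, -1.0 }) {
              double a = base + sign * off;
              Vec2d n = { std::cos(a), std::sin(a) };   // unit normal of the tangent
              result.push_back({ { c1.x + r1 * n.x, c1.y + r1 * n.y },
                                 { c2.x + r2 * n.x, c2.y + r2 * n.y } });
          }
          return result;
      }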

    Read the article

  • How do I remove those rotation artefacts from my CATiledLayer?

    - by Felix
    Hello all, I have a CATiledLayer into which I render content in the following method - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx I used the QuartzDemo code to draw a pattern. This works very well until I apply a rotation transform to the layer's parentLayer (a UIView). These zigzag artefacts become worse when I start drawing lines and text into the CATiledLayer. I applied the transform as follows (I also tried using an affine transform on the view itself): self.containerView.layer.transform = CATransform3DMakeRotation(angleRadians, 0.0f, 0.0f, 1.0f); I transform the containerView rather than the layer itself, as I have several layers in that view that I would like to rotate at the same time without changing the relative positions. I did not have problems when rotating UIImageViews in the past. Is there a way that I can rotate the CATiledLayer without these problems? Any help would be greatly appreciated. Yours, Felix

    Read the article

  • CSG operations on implicit surfaces with marching cubes

    - by Mads Elvheim
    I render isosurfaces with marching cubes (or perhaps marching squares, as this is 2D) and I want to do set operations like set difference, intersection and union. I thought this was easy to implement, by simply choosing between two vertex scalars from two different implicit surfaces, but it is not. For my initial testing, I tried with two spheres, and the set operation difference, i.e. A - B. One sphere is moving and the other one is stationary. Here's the approach I tried when picking vertex scalars and when classifying corner vertices as inside or outside. The code is written in C++. OpenGL is used for rendering, but that's not important. Normal rendering without any CSG operations does give the expected result. void march(const vec2& cmin, //min x and y for the grid cell const vec2& cmax, //max x and y for the grid cell std::vector<vec2>& tri, float iso, float (*cmp1)(const vec2&), //distance from stationary sphere float (*cmp2)(const vec2&) //distance from moving sphere ) { unsigned int squareindex = 0; float scalar[4]; vec2 verts[8]; /* initial setup of the grid cell */ verts[0] = vec2(cmax.x, cmax.y); verts[2] = vec2(cmin.x, cmax.y); verts[4] = vec2(cmin.x, cmin.y); verts[6] = vec2(cmax.x, cmin.y); float s1,s2; /********************************** ********For-loop of interest****** *******Set difference between **** *******two implicit surfaces****** **********************************/ for(int i=0,j=0; i<4; ++i, j+=2){ s1 = cmp1(verts[j]); s2 = cmp2(verts[j]); if((s1 < iso)){ //if inside sphere1 if((s2 < iso)){ //if inside sphere2 scalar[i] = s2; //then set the scalar to the moving sphere } else { scalar[i] = s1; //only inside sphere1 squareindex |= (1<<i); //mark as inside } } else { scalar[i] = s1; //inside neither sphere } } if(squareindex == 0) return; /* Usual interpolation between edge points to compute the new intersection points */ verts[1] = mix(iso, verts[0], verts[2], scalar[0], scalar[1]); verts[3] = mix(iso, verts[2], verts[4], scalar[1], scalar[2]); verts[5] = mix(iso, verts[4], verts[6], scalar[2], scalar[3]); verts[7] = mix(iso, verts[6], verts[0], scalar[3], scalar[0]); for(int i=0; i<10; ++i){ //10 = maximum 3 triangles, + one end token int index = triTable[squareindex][i]; //look up our indices for triangulation if(index == -1) break; tri.push_back(verts[index]); } } This gives me weird jaggies: It looks like the CSG operation is done without interpolation. It just "discards" the whole triangle. Do I need to interpolate in some other way, or combine the vertex scalar values? I'd love some help with this. A full testcase can be downloaded HERE
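
    A rough sketch of the usual fix (not specific to this code, and hedged accordingly): treat both fields as signed distances and combine them per corner *before* classification, so that the difference A - B becomes the single scalar max(dA, -dB) (union is min, intersection is max); then the inside test and the edge interpolation both see one consistent field and the whole-triangle "discards" go away.

      #include <algorithm>

      // Combined scalars for CSG, evaluated per grid corner, with distances
      // shifted so that negative means inside (i.e. pass in d - iso).
      inline float csgUnion(float a, float b)        { return std::min(a, b); }
      inline float csgIntersection(float a, float b) { return std::max(a, b); }
      inline float csgDifference(float a, float b)   { return std::max(a, -b); } // A minus B

      // Inside march(), the corner loop would then look roughly like:
      //   float s = csgDifference(cmp1(verts[j]) - iso, cmp2(verts[j]) - iso);
      //   scalar[i] = s + iso;
      //   if (s < 0.0f) squareindex |= (1 << i);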

    Read the article

  • Finding intersection of two spheres

    - by Onkar Deshpande
    Hi, Consider the following problem - I am given 2 links of length L0 and L1. P0 is the point that the first link starts at and P1 is the point that I want the end of the second link to be at in 3-D space. I am supposed to write a function that should take in these 3-D points (P0 and P1) as inputs and should find all configurations of the links that put the second link's end point at P1. My understanding of how to go about it is - Each link L0 and L1 will create a sphere S0 and S1 around itself. I should find out the intersection of those two spheres (which will be a circle) and print all points that are on the circumference of that circle. I saw gmatt's first reply at http://stackoverflow.com/questions/1406375/finding-intersection-points-between-3-spheres but could not understand it properly since the images did not show up. I also saw a formula for finding out the intersection at mathworld[dot]wolfram[dot]com/Sphere-SphereIntersection[dot]html . I could find the radius of intersection by the method given on mathworld. Also I can find the center of that circle and then use the parametric equation of the circle to find the points. The only doubt that I have is: will this method work for the points P0 and P1 mentioned above? Please comment and let me know your thoughts.
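
    For reference, a rough sketch of the circle itself (illustrative C++; names are made up): with the elbow on a sphere of radius L0 about P0 and a sphere of radius L1 about P1, the circle's centre lies on the P0-P1 axis at distance a = (d^2 + L0^2 - L1^2) / (2d) from P0 and its radius is sqrt(L0^2 - a^2); any two unit vectors perpendicular to the axis then give the parametric points.

      #include <cmath>
      #include <algorithm>

      struct Vec3 { double x, y, z; };
      Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
      Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
      Vec3 mul(Vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }
      double len(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

      // Circle of possible elbow positions for links of length l0 (from p0) and
      // l1 (ending at p1). Returns false if the target is out of reach.
      bool elbowCircle(Vec3 p0, Vec3 p1, double l0, double l1,
                       Vec3& centre, Vec3& axis, double& radius) {
          Vec3 d = sub(p1, p0);
          double dist = len(d);
          if (dist < 1e-9 || dist > l0 + l1 || dist < std::fabs(l0 - l1))
              return false;                       // the spheres do not intersect
          double a = (dist * dist + l0 * l0 - l1 * l1) / (2.0 * dist);
          axis   = mul(d, 1.0 / dist);            // unit vector from p0 towards p1
          centre = add(p0, mul(axis, a));         // centre of the intersection circle
          radius = std::sqrt(std::max(0.0, l0 * l0 - a * a));
          return true;
      }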

    Read the article

  • How can I "best fit" an arbitrary cairo (pycairo) path?

    - by Daniel Straight
    It seems like given the information in stroke_extents() and the translate(x, y) and scale(x, y) functions, I should be able to take any arbitrary cairo (I'm using pycairo) path and "best fit" it. In other words, center it and expand it to fill the available space. Before drawing the path, I have scaled the canvas such that the origin is the lower left corner, up is y+, right is x+, and the height and width are both 1. Given these conditions, this code seems to correctly scale the path: # cr is the canvas extents = cr.stroke_extents() x_size = abs(extents[0]) + abs(extents[2]) y_size = abs(extents[1]) + abs(extents[3]) cr.scale(1.0 / x_size, 1.0 / y_size) I cannot for the life of me figure out the translating though. Is there a simpler approach? How can I "best fit" a cairo path on its canvas? Please ask for clarification if anything is unclear in this question.
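
    For reference, a rough sketch of the usual recipe (illustrative C++ of the arithmetic; the cairo calls keep the same names in pycairo): stroke_extents() returns (x1, y1, x2, y2), so the size is x2 - x1 by y2 - y1, and the fit is "translate by the negated minimum corner, then scale". Because cairo applies the most recently issued transform to coordinates first, that means calling scale() before translate() on the canvas described above.

      #include <algorithm>

      struct FitTransform { double sx, sy, tx, ty; };

      // Fit a path with stroke extents (x1, y1, x2, y2) into a 1 x 1 canvas.
      FitTransform bestFit(double x1, double y1, double x2, double y2,
                           bool preserveAspect = true) {
          double w = x2 - x1, h = y2 - y1;
          double sx = 1.0 / w, sy = 1.0 / h;
          if (preserveAspect) sx = sy = std::min(sx, sy);   // uniform scale, no squish
          // In pycairo:  cr.scale(sx, sy); cr.translate(-x1, -y1)
          // Centring the smaller dimension is one extra translate of half the
          // leftover space, issued before the scale call.
          return { sx, sy, -x1, -y1 };
      }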

    Read the article

  • Equivalent of CGPoint with integers?

    - by Ivan Vucica
    Cheers, I like strict typing in C. Therefore, I don't want to store a 2D vector of floats if I specifically need integers. Is there an Apple-provided equivalent of CGPoint which stores data as integers? I've implemented my type Vector2i and its companion function Vector2iMake() à la CGPoint, but something deep in me screams that Apple was there already.

    Read the article

  • Generate colors between red and green for a power meter?

    - by Simucal
    I'm writing a Java game and I want to implement a power meter for how hard you are going to shoot something. I need to write a function that takes an int between 0 and 100, and based on how high that number is, it will return a color between Green (0 on the power scale) and Red (100 on the power scale). Similar to how volume controls work: What operation do I need to do on the Red, Green, and Blue components of a color to generate the colors between Green and Red? So, I could run say, getColor(80) and it will return an orangish color (its values in R, G, B) or getColor(10) which will return a more green/yellow RGB value. I know I need to increase components of the R, G, B values for a new color, but I don't know specifically what goes up or down as the colors shift from Green-Red. Progress: I ended up using HSV/HSB color space because I liked the gradient better (no dark browns in the middle). The function I used was (in Java): public Color getColor(double power) { double H = power * 0.4; // Hue (note 0.4 = Green, see huge chart below) double S = 0.9; // Saturation double B = 0.9; // Brightness return Color.getHSBColor((float)H, (float)S, (float)B); } Where "power" is a number between 0.0 and 1.0. 0.0 will return a bright red, 1.0 will return a bright green. Java Hue Chart: Thanks everyone for helping me with this!

    Read the article

  • Tracking object entries when "playing" a Windows Enhanced Metafile

    - by lzcd
    One of my current projects requires that I work out what colours are being used in an EMF file. I have been able to successfully whip up a file parser in C# that notes all references to colours... but haven't had any luck tracking which objects are in use across the entire file so I can tell apart colours that are referenced from colours that are used to paint on screen. The older style WMF files are easy as the object library starts at zero and one can simply track each "Create Object" style command... but EMF files are proving to be trickier as there seem to be preexisting entries in the library (if the "Select Object" commands I'm seeing are to be believed). Would anyone be able to either enlighten me on how to track objects in the library correctly with EMF files... or suggest an easier alternative to work out which colours are actually being used in the file (as opposed to just being defined)?
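
    One detail that may explain the preexisting entries (hedged, based on the EMF format's stock-object convention rather than on this article): EMR_SELECTOBJECT records whose object index has the high ENHMETA_STOCK_OBJECT bit set refer to built-in stock objects (BLACK_PEN, WHITE_BRUSH and so on) rather than to the handle table, so they can appear before any "Create" record. Illustrative C++:

      #include <windows.h>
      #include <cstdio>

      // Classify the index carried by an EMR_SELECTOBJECT record.
      void noteSelectedObject(DWORD ihObject) {
          if (ihObject & ENHMETA_STOCK_OBJECT) {
              DWORD stockId = ihObject & ~ENHMETA_STOCK_OBJECT;   // e.g. BLACK_PEN
              std::printf("stock object %lu selected\n", (unsigned long)stockId);
          } else {
              std::printf("handle-table object %lu selected\n", (unsigned long)ihObject);
          }
      }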

    Read the article

  • getting window screenshot windows API

    - by Oliver
    Hi, I am trying to make a program to work on top of an existing GUI to annotate it and provide extra calculations and statistical information. I want to do this using image recognition, as I have learned a fair amount about this in University using Matlab and similar things. I can get a handle to the window I want to perform image recognition on, but I don't know how to turn that handle into an image of that window, and all its visible child windows. I suppose I am looking for something like the screenshot function, but restricted to a single window. How would I go about doing this? I suppose I'd need something like a .bmp to mess about with. Also, it would have to be efficient enough that I could call it several times a second without my PC grinding to a halt. Hopefully this isn't an obvious question; I typed some things into Google but didn't get anything related.
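
    A rough sketch of the usual Win32 route (the function name and error handling are illustrative): take the window's DC, or use PrintWindow to ask the window and its children to paint themselves, blit into a compatible bitmap, and read the pixels out with GetDIBits.

      #include <windows.h>
      #include <vector>

      // Capture `hwnd` into a 32-bit top-down BGRA buffer. Returns true on success.
      bool captureWindow(HWND hwnd, std::vector<unsigned char>& pixels,
                         int& width, int& height) {
          RECT rc;
          if (!GetClientRect(hwnd, &rc)) return false;
          width  = rc.right - rc.left;
          height = rc.bottom - rc.top;

          HDC windowDC = GetDC(hwnd);
          HDC memDC    = CreateCompatibleDC(windowDC);
          HBITMAP bmp  = CreateCompatibleBitmap(windowDC, width, height);
          HGDIOBJ old  = SelectObject(memDC, bmp);

          // PrintWindow asks the window (and children) to render into memDC;
          // fall back to BitBlt from the window DC if it is not supported.
          BOOL ok = PrintWindow(hwnd, memDC, 0);
          if (!ok) ok = BitBlt(memDC, 0, 0, width, height, windowDC, 0, 0, SRCCOPY);

          BITMAPINFO bmi = {};
          bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
          bmi.bmiHeader.biWidth       = width;
          bmi.bmiHeader.biHeight      = -height;   // negative = top-down rows
          bmi.bmiHeader.biPlanes      = 1;
          bmi.bmiHeader.biBitCount    = 32;
          bmi.bmiHeader.biCompression = BI_RGB;

          pixels.resize((size_t)width * height * 4);
          if (ok) ok = GetDIBits(memDC, bmp, 0, height, pixels.data(), &bmi,
                                 DIB_RGB_COLORS) != 0;

          SelectObject(memDC, old);
          DeleteObject(bmp);
          DeleteDC(memDC);
          ReleaseDC(hwnd, windowDC);
          return ok != FALSE;
      }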

    Read the article

  • OpenGL Color Interpolation across vertices

    - by gutsblow
    Right now, I have more than 25 vertices that form a model. I want to interpolate color linearly between the first and last vertex. The problem is when I write the following code glColor3f(1.0,0.0,0.0); glVertex3f(1.0,1.0,1.0); glVertex3f(0.9,1.0,1.0); ... <more vertices> ... glColor3f(0.0,0.0,1.0); glVertex3f(0.0,0.0,0.0); All the vertices except the last one are red. Now I am wondering if there is a way to interpolate color across these vertices without me having to manually interpolate the color (like how OpenGL does it automatically) at each vertex, since I will have many more colors at various vertices. Any help would be extremely appreciated. Thank you!
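
    Since the fixed-function pipeline only interpolates within a single primitive, the usual answer is to compute each vertex's colour yourself as a linear blend between the two end colours. A rough sketch (the vertex container and primitive type are placeholders):

      #include <GL/gl.h>
      #include <vector>
      #include <cstddef>

      struct Vertex { float x, y, z; };

      // Draw vertices whose colours fade linearly from `from` to `to`.
      void drawWithGradient(const std::vector<Vertex>& verts,
                            const float from[3], const float to[3]) {
          glBegin(GL_LINE_STRIP);                    // or whatever primitive you use
          for (std::size_t i = 0; i < verts.size(); ++i) {
              float t = verts.size() > 1 ? float(i) / float(verts.size() - 1) : 0.0f;
              glColor3f(from[0] + t * (to[0] - from[0]),
                        from[1] + t * (to[1] - from[1]),
                        from[2] + t * (to[2] - from[2]));
              glVertex3f(verts[i].x, verts[i].y, verts[i].z);
          }
          glEnd();
      }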

    Read the article

  • How to write text (using CGContextShowTextAtPoint) at graph x- and y-axis interval points?

    - by Rajendra Bhole
    I developed a graph using an NSObject class and CGContext methods. The following code dynamically draws the X- and Y-axis interval tick marks: CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0); CGContextSetLineWidth(ctx, 2.0); CGContextMoveToPoint(ctx, 30.0, 200.0); CGContextAddLineToPoint(ctx, 30.0, 440.0); for(float y = 400.0; y >= 200.0; y-=30) { CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0); CGContextMoveToPoint(ctx, 28, y); CGContextAddLineToPoint(ctx, 32, y); CGContextStrokePath(ctx); //CGContextClosePath(ctx); } CGContextMoveToPoint(ctx, 10, 420.0); CGContextAddLineToPoint(ctx, 320, 420.0); //CGContextAddLineToPoint(ctx, 320.0, 420.0); //CGContextStrokePath(ctx); for(float x = 60.0; x <= 260.0; x+=30) { CGContextSetRGBStrokeColor(ctx, 2.0, 2.0, 2.0, 1.0); CGContextMoveToPoint(ctx, x, 418.0); CGContextAddLineToPoint(ctx, x, 422.0); CGContextStrokePath(ctx); CGContextClosePath(ctx); } How can I write dynamic text on the X- and Y-axis lines near the intervals (the X-axis denoting the number of days per week and the Y-axis denoting something per something)? Thanks.
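
    A rough sketch of the Core Graphics calls involved, meant to sit inside the same drawing method that owns ctx (the label positions, font and tick values are illustrative, and in a flipped UIKit context the text matrix usually needs to be un-flipped):

      // Label the Y-axis tick marks drawn at x = 30, y = 200...400 above.
      CGContextSelectFont(ctx, "Helvetica", 10.0, kCGEncodingMacRoman);
      CGContextSetTextDrawingMode(ctx, kCGTextFill);
      CGContextSetRGBFillColor(ctx, 0.0, 0.0, 0.0, 1.0);
      CGContextSetTextMatrix(ctx, CGAffineTransformMakeScale(1.0, -1.0));

      int value = 0;
      for (float y = 400.0; y >= 200.0; y -= 30, value += 10) {
          char label[16];
          snprintf(label, sizeof(label), "%d", value);
          // Slightly left of the axis line at x = 30, vertically on the tick.
          CGContextShowTextAtPoint(ctx, 5.0, y + 3.0, label, strlen(label));
      }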

    Read the article

  • correcting fisheye distortion programmatically

    - by Will
    I have some points that describe positions in a picture taken with a fisheye lens. I've found this description of how to generate a fisheye effect, but not how to reverse it. How do you calculate the radial distance from the centre to go from fisheye to rectilinear? My function stub looks like this: Point correct_fisheye(const Point& p,const Size& img) { // to polar const Point centre = {img.width/2,img.height/2}; const Point rel = {p.x-centre.x,p.y-centre.y}; const double theta = atan2(rel.y,rel.x); double R = sqrt((rel.x*rel.x)+(rel.y*rel.y)); // fisheye undistortion in here please //... change R ... // back to rectangular const Point ret = Point(centre.x+R*cos(theta),centre.y+R*sin(theta)); fprintf(stderr,"(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",p.x,p.y,img.width,img.height,theta,R,ret.x,ret.y); return ret; } Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
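
    One hedged starting point, assuming the lens is close to the equidistant ("f-theta") model: a fisheye radius R maps to a rectilinear radius f * tan(R / f), where f is a focal-length-like constant in pixels that has to be tuned or calibrated. Dropped into the stub above:

      #include <cmath>

      // Equidistant-fisheye assumption: r_fisheye = f * theta,
      // r_rectilinear = f * tan(theta). `f` must be tuned for the actual lens.
      double undistortRadius(double R, double f) {
          double theta = R / f;           // angle from the optical axis
          return f * std::tan(theta);     // radius a pinhole camera would give
      }
      // In correct_fisheye():  R = undistortRadius(R, f);  before converting back.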

    Read the article

  • OpenGL 2D Texture Mapping problem.

    - by gutsblow
    Hi there, I am relatively new to OpenGL and I am having some issues when I am rendering an image as a texture for a QUAD which is the same size as the image. Here is my code. I would be very grateful if someone helps me to solve this problem. The image appears way smaller and is squished. (BTW, the image dimensions are 500x375). glGenTextures( 1, &S_GLator_InputFrameTextureIDSu ); glBindTexture(GL_TEXTURE_2D, S_GLator_InputFrameTextureIDSu); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR); glTexImage2D( GL_TEXTURE_2D, 0, 4, S_GLator_EffectCommonData.mRenderBufferWidthSu, S_GLator_EffectCommonData.mRenderBufferHeightSu, 0, GL_RGBA, GL_UNSIGNED_BYTE, dataP); glBindTexture(GL_TEXTURE_2D, S_GLator_InputFrameTextureIDSu); glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, S_GLator_EffectCommonData.mRenderBufferWidthSu, S_GLator_EffectCommonData.mRenderBufferHeightSu, GL_RGBA, GL_UNSIGNED_BYTE, bufferP); //set the matrix modes glMatrixMode( GL_PROJECTION ); glLoadIdentity(); //gluPerspective( 45.0, (GLdouble)widthL / heightL, 0.1, 100.0 ); glOrtho (0, 1, 0, 1, -1, 1); // Set up the frame-buffer object just like a window. glViewport( 0, 0, widthL, heightL ); glDisable(GL_DEPTH_TEST); glClearColor( 0.0f, 0.0f, 0.0f, 0.0f ); glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ); glMatrixMode( GL_MODELVIEW ); glLoadIdentity(); glBindTexture( GL_TEXTURE_2D, S_GLator_InputFrameTextureIDSu ); //Render the geometry to the frame-buffer object glBegin(GL_QUADS); //input frame glColor4f(1.f,1.f,1.f,1.f); glTexCoord2f(0.0f,0.0f); glVertex3f(0.f ,0.f ,0.0f); glTexCoord2f(1.0f,0.0f); glVertex3f(1.f ,0.f,0.0f); glTexCoord2f(1.0f,1.f); glVertex3f(1.f ,1.f,0.0f); glTexCoord2f(0.0f,1.f); glVertex3f(0.f ,1.f,0.0f); glEnd();
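
    One hedged guess at the cause: on older GL without NPOT support the texture storage is often padded up to a power of two (for example 512x512 for a 500x375 image), and drawing the quad with texture coordinates running to 1.0 then samples the padding and squishes the picture. If that is what mRenderBufferWidthSu/HeightSu hold, the usual fix is to scale the coordinates by imageSize / textureSize; the 512 values below are assumptions for the illustration.

      // Only address the part of the padded texture the image actually occupies.
      const float uMax = 500.0f / 512.0f;   // imageWidth  / textureWidth
      const float vMax = 375.0f / 512.0f;   // imageHeight / textureHeight

      glBegin(GL_QUADS);
      glColor4f(1.f, 1.f, 1.f, 1.f);
      glTexCoord2f(0.0f, 0.0f); glVertex3f(0.0f, 0.0f, 0.0f);
      glTexCoord2f(uMax, 0.0f); glVertex3f(1.0f, 0.0f, 0.0f);
      glTexCoord2f(uMax, vMax); glVertex3f(1.0f, 1.0f, 0.0f);
      glTexCoord2f(0.0f, vMax); glVertex3f(0.0f, 1.0f, 0.0f);
      glEnd();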

    Read the article

  • How to fix rotations in a Rubik's Cube?

    - by Eindbaas
    I'm trying to create a Rubik's Cube in Flash & Papervision and I'm really stuck here. I'm up to the point where I can rotate any plane of cubes once, but after that...it's messed up because all local coordinate systems are messy. I don't really know where to go from here, can anybody give any advice on what to do? I'm not looking for 'read about transformation matrices', I know I should (and I am doing that), but I'm not really sure what to look for. My idea is that, after each rotation, I should fix each coordinate system of each cube again, but I have no idea how. Any hints on what I want to achieve (in words), and why, are much appreciated. http://dl.dropbox.com/u/250155/rubik/main.html (you can press the K key once ;) )
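
    A rough sketch of the usual bookkeeping (language-agnostic, shown here in C++ with made-up names): keep one orientation matrix per cubie and, when a slice turns, pre-multiply each affected cubie's matrix by the 90-degree rotation about the fixed *world* axis rather than rotating about the cubie's own, already-rotated local axes; grid positions are permuted the same way.

      #include <array>

      using Mat3 = std::array<std::array<double, 3>, 3>;

      Mat3 multiply(const Mat3& a, const Mat3& b) {
          Mat3 r{};
          for (int i = 0; i < 3; ++i)
              for (int j = 0; j < 3; ++j)
                  for (int k = 0; k < 3; ++k)
                      r[i][j] += a[i][k] * b[k][j];
          return r;
      }

      // 90-degree rotation about the world Y axis (the other axes are analogous).
      Mat3 rotY90() {
          Mat3 m = {{ {{0, 0, 1}}, {{0, 1, 0}}, {{-1, 0, 0}} }};
          return m;
      }

      struct Cubie { Mat3 orientation; /* plus its grid position */ };

      // World rotation goes on the *left*, so turns keep accumulating about
      // fixed world axes instead of the cubie's drifting local ones.
      void turnSlice(Cubie& c) { c.orientation = multiply(rotY90(), c.orientation); }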

    Read the article
