Search Results

Search found 5873 results on 235 pages for 'raster graphics'.

  • Repeating animations using the Stop Selector

    - by Tiago
    I'm trying to repeat an animation until a certain condition is met. Something like this:

        - (void) animateUpdate {
            if (inUpdate) {
                [UIView beginAnimations:nil context:NULL];
                [UIView setAnimationDuration:2.0];
                [UIView setAnimationDelegate:self];
                [UIView setAnimationDidStopSelector:@selector(animateUpdate)];
                button.transform = CGAffineTransformMakeRotation(M_PI);
                [UIView commitAnimations];
            }
        }

    This will run the first time, but it won't repeat. The selector will just be called until the application crashes. How should I do this? Thanks.

    Read the article

  • fastest engine to convert PDF into PNG

    - by skyde
    I would like to know which of the open-source PDF engines can convert a PDF into an image the fastest. I don't care about the quality of the result (antialiasing, ...). For my project it needs to be very, very fast. I would probably need to build my own, but I don't want to start from scratch.
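
    A rough timing sketch (not from the original post), assuming Poppler's pdftoppm command-line tool is installed; the file names and DPI value are placeholders. The same wrapper pattern works for Ghostscript or MuPDF's mutool, which makes a fair speed comparison between engines straightforward.

        import subprocess, time

        def pdf_to_png(pdf_path, out_prefix, dpi=96):
            # Rasterize every page of pdf_path to out_prefix-<n>.png using Poppler's pdftoppm,
            # returning the wall-clock time so different engines/settings can be compared.
            start = time.perf_counter()
            subprocess.run(["pdftoppm", "-png", "-r", str(dpi), pdf_path, out_prefix], check=True)
            return time.perf_counter() - start

        print(pdf_to_png("input.pdf", "page"))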

    Read the article

  • How does Photoshop (Or drawing programs) blit?

    - by user146780
    I'm getting ready to make a drawing application in Windows. I'm just wondering: do drawing programs keep a memory bitmap which they lock, set each pixel of, and then blit? I don't understand how Photoshop can move entire layers without lag or flicker without using hardware acceleration. Also, in a program like Expression Design, I could have 200 shapes and move them around all at once with no lag. I'm really wondering how this can be done without GPU help. I don't think super-efficient algorithms alone could justify that. Thanks
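
    One common CPU-only approach (not from the original post): keep each layer as its own bitmap, composite into an offscreen back buffer, repaint only the "dirty" rectangles that actually changed, and blit the result in one step so nothing flickers. A rough NumPy sketch of the idea; the canvas size and the single 100x100 "layer" are made up.

        import numpy as np

        H, W = 600, 800
        background = np.zeros((H, W, 3), dtype=np.uint8)        # flattened composite of the layers below
        layer = np.full((100, 100, 3), 255, dtype=np.uint8)     # the layer being dragged
        screen = background.copy()                              # offscreen back buffer

        def move_layer(old_x, old_y, new_x, new_y):
            # Only two small rectangles are touched, no matter how large the canvas is:
            # restore the background under the old position, stamp the layer at the new one.
            screen[old_y:old_y + 100, old_x:old_x + 100] = background[old_y:old_y + 100, old_x:old_x + 100]
            screen[new_y:new_y + 100, new_x:new_x + 100] = layer
            # A real app would now blit just the union of the two dirty rects to the window.

        move_layer(10, 10, 14, 10)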

    Read the article

  • How to -accurately- measure size in pixels of text being drawn on a canvas by drawTextOnPath()

    - by Nick
    I'm using drawTextOnPath() to display some text on a Canvas and I need to know the dimensions of the text being drawn. I know this is not feasible for paths composed of multiple segments, curves, etc., but my path is a single segment which is perfectly horizontal. I am using Paint.getTextBounds() to get a Rect with the dimensions of the text I want to draw, and I use this rect to draw a bounding box around the text when I draw it at an arbitrary location. Here's some simplified code that reflects what I am currently doing:

        // to keep this example simple, always at origin (0,0)
        public void drawBoundedText(Canvas canvas, String text, Paint paint) {
            Rect textDims = new Rect();
            paint.getTextBounds(text, 0, text.length(), textDims);

            float hOffset = 0;
            float vOffset = paint.getFontMetrics().descent; // vertically centers text
            float startX = textDims.left;   // 0
            float startY = textDims.bottom;
            float endX = textDims.right;
            float endY = textDims.bottom;

            Path path = new Path();
            path.moveTo(startX, startY);
            path.lineTo(endX, endY);
            path.close();

            // draw the text
            canvas.drawTextOnPath(text, path, 0, vOffset, paint);

            // draw bounding box
            canvas.drawRect(textDims, paint);
        }

    The results are close but not perfect: there is usually a gap of 1-3 pixels on the right and bottom edges, and the error seems to vary with font size as well. If I replace the drawTextOnPath() call with:

        canvas.drawText(text, startX, startY - vOffset, paint);

    then it works perfectly. Any ideas? It's possible I'm doing everything right and the problem is with drawTextOnPath(); the text quality very visibly degrades when drawing along paths, even if the path is horizontal, likely because of the interpolation algorithm or whatever it's using behind the scenes. I wouldn't be surprised to find out that the size jitter is also coming from there.

    Read the article

  • Generating a beveled edge for a 2D polygon

    - by Metaphile
    I'm trying to programmatically generate beveled edges for geometric polygons. For example, given an array of 4 vertices defining a square, I want to generate something like this. But computing the vertices of the inner shape is baffling me. Simply creating a copy of the original shape and then scaling it down will not produce the desired result most of the time. My algorithm so far involves analyzing adjacent edges (triples of vertices; e.g., the bottom-left, top-left, and top-right vertices of a square). From there, I need to find the angle between them, and then create a vertex somewhere along that angle, depending on how deep I want the bevel to be. And because I don't have much of a math background, that's where I'm stuck. How do I find that center angle? Or is there a much simpler way of attacking this problem?
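
    A sketch (not from the original post) of the bisector idea the question describes: push each vertex inward along the bisector of the directions toward its two neighbours. The square coordinates and the depth are made-up values, and a nearly straight (180 degree) corner would need a guard against the near-zero bisector.

        def inset_vertex(prev, cur, nxt, depth):
            # Unit directions from the current vertex toward its two neighbours.
            def unit(a, b):
                dx, dy = b[0] - a[0], b[1] - a[1]
                d = (dx * dx + dy * dy) ** 0.5
                return dx / d, dy / d
            ux, uy = unit(cur, prev)
            vx, vy = unit(cur, nxt)
            bx, by = ux + vx, uy + vy                 # bisector of the interior angle
            d = (bx * bx + by * by) ** 0.5            # ~0 for a straight corner
            return cur[0] + bx / d * depth, cur[1] + by / d * depth

        square = [(0, 0), (0, 10), (10, 10), (10, 0)]
        inner = [inset_vertex(square[i - 1], square[i], square[(i + 1) % 4], 2.0)
                 for i in range(4)]
        print(inner)   # each corner moved ~1.41 units in both x and y, toward the centre

    If the bevel depth should instead be measured perpendicular to the edges, divide depth by the sine of half the interior angle before offsetting; for convex polygons the bisector always points into the shape, so no extra sign handling is needed.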

    Read the article

  • Is it possible to achieve MAX(As,Ad) openGL blending?

    - by Jeff B
    I am working on a game where I want to create shadows under a series of sprites on a grid. The shadows are larger than the sprites themselves, and the sprites are animated (i.e. they move and rotate), so I cannot simply render the shadows into the sprite PNGs, or they will overlap adjacent sprites. I also cannot simply put the shadows on a lower layer by themselves, because where they overlap they create dark bands at the intersection. Because the sprites are animated, it is not feasible to render these en masse. Basically, I want the sprites' shadows to blend together such that they max out at a set opacity. I believe this is equivalent to an OpenGL blend of (Rs, Gs, Bs, Max(As, Ad)), where I don't really care about R, G, and B, as the color will always be the same in src and dst. However, this is not a valid OpenGL blending mode. Is there an easy way to accomplish this, especially in cocos2d-iphone? I could approximate it by making the shadow sprites opaque, applying them both to a parent sprite, and making the parent sprite 40% opacity. However, the way cocos2d works, this only sets the opacity of each child to 40%, rather than of the combined sprite image, which results in the same banding.
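
    One note, not from the original post: blend equations are set separately from the blend factors, and the GL_MAX equation ignores the factors and keeps the component-wise maximum, which is the Max(As, Ad) part. Whether it is available on the iPhone GPUs of the cocos2d-iphone era depends on the EXT_blend_minmax extension, so this is a desktop PyOpenGL sketch of the idea (it assumes a current GL context), not a cocos2d recipe.

        from OpenGL.GL import (glEnable, glBlendEquationSeparate,
                               GL_BLEND, GL_FUNC_ADD, GL_MAX)

        glEnable(GL_BLEND)
        # Colour channels keep the normal additive equation; alpha becomes max(As, Ad),
        # so stacked shadow fragments can never exceed the alpha of a single one.
        glBlendEquationSeparate(GL_FUNC_ADD, GL_MAX)

    Rendering all the shadow sprites into an offscreen render texture with this equation and then drawing that texture once at 40% opacity gives the flat, band-free combined shadow the question describes.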

    Read the article

  • BlackBerry - Convert EncodedImage to byte[]

    - by user324884
    I am using the code below, where I don't want to use JPEGEncodedImage.encode because it increases the size. So I need to convert directly from EncodedImage to a byte array.

        FileConnection fc = (FileConnection) Connector.open(name);
        is = fc.openInputStream();
        byte[] ReimgData = IOUtilities.streamToBytes(is);
        EncodedImage encode_image = EncodedImage.createEncodedImage(ReimgData, 0, (int) fc.fileSize());
        encode_image = sizeImage(encode_image, (int) maxWidth, (int) maxHeight);
        JPEGEncodedImage encoder = JPEGEncodedImage.encode(encode_image.getBitmap(), 50);
        ReimgData = encoder.getData();
        is.read(ReimgData);

        HttpMultipartRequest(
            content[0], content[1], content[2],
            params,
            "image", txtfile.getText(), "image/jpeg",
            ReimgData
        );

    Read the article

  • quartz2d translating the origin

    - by qwertyp96
    My understanding of Quartz 2D is that CGContextTranslateCTM(context, x, y); translates the coordinate system. I have a Quartz 2D view with lots of shapes on it, and the user needs to be able to pan around and zoom it. However, when using CGContextScaleCTM(context, scaleX, scaleY);, everything scales around the origin, not around the center of the viewport the user is looking at. My solution was the following code:

        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, 512.0 + offset.x, 384.0 + offset.y);   // (512, 384) is the center of the iPad screen
        CGContextScaleCTM(context, scale, scale);

    You can translate around fine, but things still scale into the corner. What's wrong? EDIT: Oh. Wow. Duh. If you move the origin, the shapes move too, so you can't move it relative to the shapes. Now I know what's wrong, but how do I do that (move the origin independently of the shapes)?
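
    Not from the original post: scaling about an arbitrary point is the composition T(c) * S(s) * T(-c), i.e. translate the zoom centre to the origin, scale, and translate back (plus whatever pan offset is wanted). A plain-Python sketch of the arithmetic; in Quartz the same ordering is expressed as three consecutive CTM calls.

        # Scale by s about the point (cx, cy): equivalent to translate(cx, cy),
        # then scale(s, s), then translate(-cx, -cy), applied to the CTM in that order.
        def scale_about(cx, cy, s):
            def apply(x, y):
                return (cx + s * (x - cx), cy + s * (y - cy))
            return apply

        zoom = scale_about(512.0, 384.0, 2.0)   # zoom in on the screen centre
        print(zoom(512.0, 384.0))               # the centre stays put: (512.0, 384.0)
        print(zoom(612.0, 384.0))               # other points move away from it: (712.0, 384.0)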

    Read the article

  • Can XCode draw the call graph of a program?

    - by Werner
    Hi, I am new to Mac OS X, and I wonder whether Xcode can generate, for a given C++ source tree, the call graph of the program in a visual way. I also wonder whether, after a run, it can print the percentage of time spent in each function. If so, I would really appreciate links to tutorials or other information; after googling I did not find anything relevant. Thanks

    Read the article

  • Strange iPhone application icon view in iTunes's Applications section

    - by Spiel
    When I drag the 'My iPhone App' application's file into iTunes, it looks correct: rounded corners and a transparent background. Then I close iTunes and open it again. The corners are still rounded, but what has happened to the background? http://www.freeimagehosting.net/uploads/15fee337bc.png The icon is a project resource file named 'iTunesArtwork', 512x512 pixels, PNG format.

    Read the article

  • When using Direct3D, how much math is being done on the CPU?

    - by zirgen
    Context: I'm just starting out. I'm not even touching the Direct3D 11 API yet, and instead looking at understanding the pipeline, etc. From looking at documentation and information floating around the web, it seems like some calculations are being handled by the application. That is, instead of simply presenting matrices to the GPU to multiply, the calculations are being done by a math library that operates on the CPU. I don't have any particular resources to point to, although I guess I can point to the XNA Math Library or the samples shipped in the February DX SDK. When you see code like mViewProj = mView * mProj;, that projection is being calculated on the CPU. Or am I wrong? If you were writing a program with 10 cubes on the screen, where you can move or rotate the cubes as well as the viewpoint, what calculations would you do on the CPU? I think I would store the geometry for a single cube, and then transform matrices representing the actual instances. Then it seems I would use the XNA math library, or another of my choosing, to transform each cube in model space, get the coordinates in world space, and then push the information to the GPU. That's quite a bit of calculation on the CPU. Am I wrong? Am I reaching conclusions based on too little information and understanding? What terms should I Google for, if the answer is STFW? Or, if I am right, why aren't these calculations pushed to the GPU as well?
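
    A rough NumPy sketch (not from the original post) of the kind of per-frame CPU work being described: one world matrix per cube instance, combined with shared view/projection matrices, with only the finished 4x4 handed to the GPU. The identity view/proj and the cube positions are placeholders.

        import numpy as np

        def translation(tx, ty, tz):
            m = np.eye(4, dtype=np.float32)
            m[3, :3] = (tx, ty, tz)              # row-vector convention, as in the D3DX/XNA math style
            return m

        def rotation_y(angle):
            c, s = np.cos(angle), np.sin(angle)
            m = np.eye(4, dtype=np.float32)
            m[0, 0], m[0, 2] = c, -s
            m[2, 0], m[2, 2] = s, c
            return m

        view = np.eye(4, dtype=np.float32)       # stand-ins for the real camera and projection matrices
        proj = np.eye(4, dtype=np.float32)

        # One world matrix per cube, rebuilt on the CPU every frame; the GPU only ever
        # sees the finished matrix (as a shader constant) plus the shared cube geometry.
        cube_positions = [(0.0, 0.0, float(i) * 2.0) for i in range(10)]
        for i, pos in enumerate(cube_positions):
            world = rotation_y(0.1 * i) @ translation(*pos)
            world_view_proj = world @ view @ proj    # the mView * mProj style multiply, done on the CPU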

    Read the article

  • Which OpenGL functions are not GPU-accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki): glTranslate, glRotate, glScale Are these hardware accelerated? No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU. All the other matrix operations are done on the CPU as well : glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho. This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, upload your matrix to the shader. For a very, very long time I thought most of the OpenGL functions use the GPU to do computation. I'm not sure if this is a common misconception, but after a while of thinking, this makes sense. Old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: Which OpenGL function calls don't use the GPU? I believe knowing the answer to the above question would help me become a better programmer with OpenGL. Please do share some of your insights.
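
    A minimal PyOpenGL sketch (not from the original post) of the "build your own matrix, upload it to the shader" approach the wiki quote recommends. It assumes an already-linked shader program with a mat4 uniform named "mvp" (a made-up name) and a matrix computed on the CPU with your own math code.

        import numpy as np
        from OpenGL.GL import glGetUniformLocation, glUniformMatrix4fv, GL_TRUE

        def upload_mvp(program, mvp):
            # mvp is a 4x4 NumPy array (model * view * projection) computed entirely on the CPU;
            # the only GPU-side work here is storing 16 floats into a uniform register.
            # transpose=GL_TRUE because NumPy stores the matrix row-major.
            loc = glGetUniformLocation(program, "mvp")
            glUniformMatrix4fv(loc, 1, GL_TRUE, mvp.astype(np.float32))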

    Read the article

  • Detecting coincident subset of two coincident line segments

    - by Jared Updike
    This question is related to:

    - How do I determine the intersection point of two lines in GDI+? (great explanation of the algebra but no code)
    - How do you detect where two line segments intersect? (accepted answer doesn't actually work)

    But note that an interesting sub-problem is completely glossed over in most solutions, which just return null for the coincident case even though there are three sub-cases:

    - coincident but do not overlap
    - coincident and touching at just one point
    - coincident with an overlapping line sub-segment

    For example, we could design a C# function like this:

        public static PointF[] Intersection(PointF a1, PointF a2, PointF b1, PointF b2)

    where (a1, a2) is one line segment and (b1, b2) is another. This function would need to cover all the weird cases that most implementations or explanations gloss over. In order to account for the weirdness of coincident lines, the function could return an array of PointF's:

    - zero result points (or null) if the lines are parallel or do not intersect (the infinite lines intersect but the line segments are disjoint, or the lines are parallel)
    - one result point (containing the intersection location) if they do intersect, or if they are coincident at one point
    - two result points (for the overlapping part of the line segments) if the two lines are coincident
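
    A Python sketch (not from the original post) of just the coincident sub-case, assuming a prior test has already established that the four points are collinear and that segment a is not degenerate; a C# version would be the same arithmetic on PointF. It returns zero, one, or two points, matching the three bullets above.

        def collinear_overlap(a1, a2, b1, b2, eps=1e-9):
            # Parametrize segment a as a1 + t*(a2 - a1); project b's endpoints onto that axis.
            dx, dy = a2[0] - a1[0], a2[1] - a1[1]
            axis = 0 if abs(dx) >= abs(dy) else 1            # use the dominant axis to avoid dividing by ~0
            def t(p):
                return (p[axis] - a1[axis]) / (dx if axis == 0 else dy)
            u0, u1 = sorted((t(b1), t(b2)))
            lo, hi = max(0.0, u0), min(1.0, u1)              # overlap of [0, 1] with [u0, u1]
            def point(s):
                return (a1[0] + s * dx, a1[1] + s * dy)
            if lo > hi + eps:
                return []                                    # coincident lines, but the segments are disjoint
            if abs(hi - lo) <= eps:
                return [point(lo)]                           # the segments just touch at one point
            return [point(lo), point(hi)]                    # the shared sub-segment

        print(collinear_overlap((0, 0), (4, 0), (2, 0), (6, 0)))   # [(2.0, 0.0), (4.0, 0.0)]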

    Read the article

  • Get Highest Res Favicon

    - by Jeremy
    I'm making a website that needs to dynamically obtain the favicon of sites upon request. I've found a few api's that can accomplish this fairly well, and so far I'm liking http://www.fvicon.com/. The final image for my website will be 64x64px, and some websites such as Google and Wordpress have nice images of this size that are easily retrieved via this api. Though, of course, most websites only have a 16x16 favicon image and scaling that image to 64x64 has very bad quality loss. Examples: (high res) http://a.fvicon.com/wordpress.com?format=png&width=64&height=64 (low res) http://a.fvicon.com/yahoo.com?format=png&width=64&height=64 Keeping this in mind, I'm planning on somehow determining whether a high-res image is available and, if so, the website will use this image. If not, I want to use a pre-made 64x64 icon with the smaller icon layered over it. What I'm having trouble with is determining if there is a high res favicon available or not. Also, I'm curious if there's a better approach to this situation. I'd rather not use smaller images (64x64 works out really well for this project). The lowest res I'm willing to drop to is 48x48 but even then there will be a significant quality loss for scaling up 16x16 favicons. Any ideas? If you need any more information I will gladly provide it. Thank you!
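
    One possible way (not from the original post) to decide: fetch the icon at its native size and read its actual pixel dimensions before choosing between the high-res icon and the badge-over-generic fallback. The sketch below reads just the width/height from a PNG's IHDR chunk; the URL is hypothetical (e.g. a site's apple-touch-icon, or the fvicon request issued without the width/height parameters), and ICO files would need a similar small header parser.

        import struct
        import urllib.request

        def png_size(url):
            # The first 33 bytes of a PNG contain the signature plus the IHDR chunk,
            # whose first two fields are the image width and height (big-endian uint32).
            data = urllib.request.urlopen(url).read(33)
            if data[:8] != b"\x89PNG\r\n\x1a\n":
                return None
            return struct.unpack(">II", data[16:24])

        size = png_size("https://wordpress.com/apple-touch-icon.png")   # hypothetical URL
        use_native_icon = size is not None and min(size) >= 64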

    Read the article

  • GDI+, Smaller images?

    - by Tony
    Hi, I create a bitmap from bytes coming from the web and downsample it, but the resulting JPEG is still too big even though I use a small pixel format. Does anyone know how to control the compression of the image? I have the impression that the saved image is not being compressed at all. Thanks
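
    Not from the original post: in GDI+ the JPEG quality is controlled by passing an Encoder.Quality EncoderParameter to Image.Save, rather than by the pixel format of the in-memory bitmap. As a language-neutral illustration of the same idea, here is a Pillow sketch (file names and the quality value are made up); the structure of the .NET call is analogous.

        from PIL import Image

        img = Image.open("downloaded.png")                     # hypothetical input
        img = img.resize((img.width // 2, img.height // 2))    # downsample as before
        # The on-disk size is governed mainly by the encoder's quality setting,
        # not by how the bitmap is stored in memory.
        img.save("out.jpg", "JPEG", quality=40, optimize=True)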

    Read the article

  • How can you deflect a direction/magnitude vector based on a direction/magnitude vector and a collided triangle?

    - by JeanOTF
    So, I have a triangle-AABB collision algorithm and I have it returning the triangle that the AABB collided with. I was hoping that the three vertices of the triangle, together with the direction/magnitude of the movement, would let me determine a deflected vector, so that when you run against the wall at an angle you move more slowly (depending on the angle of collision) but still slide along the wall. This would remove the sticky-collision problem of only moving when there is no collision. Any suggestions or references would be greatly appreciated! Thanks.
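
    Not from the original post: the usual sliding response is to project the movement vector onto the plane of the collided triangle, i.e. subtract the component along the triangle's normal, v' = v - (v . n)n with n a unit normal. A small self-contained Python sketch; the example vectors are made up, and depending on your triangle winding you may want to subtract only when the dot product pushes into the surface.

        def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
        def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
        def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                                 a[2] * b[0] - a[0] * b[2],
                                 a[0] * b[1] - a[1] * b[0])

        def slide(velocity, t0, t1, t2):
            # Unit normal of the collided triangle from its three vertices.
            n = cross(sub(t1, t0), sub(t2, t0))
            length = dot(n, n) ** 0.5
            n = (n[0] / length, n[1] / length, n[2] / length)
            # Remove the component of the motion that pushes into the triangle;
            # what is left is the slide along the surface (shorter at steeper angles).
            d = dot(velocity, n)
            return (velocity[0] - d * n[0], velocity[1] - d * n[1], velocity[2] - d * n[2])

        print(slide((1.0, 0.0, -1.0), (0, 0, 0), (1, 0, 0), (0, 1, 0)))   # -> (1.0, 0.0, 0.0)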

    Read the article

  • Python OpenGL Can't Redraw Scene

    - by RobbR
    I'm getting started with OpenGL and shaders using GLUT and PyOpenGL. I can draw a basic scene but for some reason I can't get it to update. E.g. any changes I make during idle(), display(), or reshape() are not reflected. Here are the methods:

        def display(self):
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
            glMatrixMode(GL_MODELVIEW)
            glLoadIdentity()
            glUseProgram(self.shader_program)
            self.m_vbo.bind()
            glEnableClientState(GL_VERTEX_ARRAY)
            glVertexPointerf(self.m_vbo)
            glDrawArrays(GL_TRIANGLES, 0, len(self.m_vbo))
            glutSwapBuffers()
            glutReportErrors()

        def idle(self):
            test_change += .1
            self.m_vbo = vbo.VBO(
                array([
                    [ test_change, 1, 0 ],   # triangle
                    [ -1, -1, 0 ],
                    [  1, -1, 0 ],
                    [  2, -1, 0 ],           # square
                    [  4, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2,  1, 0 ],
                ], 'f')
            )
            glutPostRedisplay()

        def begin(self):
            glutInit()
            glutInitWindowSize(400, 400)
            glutCreateWindow("Simple OpenGL")
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
            glutDisplayFunc(self.display)
            glutReshapeFunc(self.reshape)
            glutMouseFunc(self.mouse)
            glutMotionFunc(self.motion)
            glutIdleFunc(self.idle)
            self.define_shaders()
            glutMainLoop()

    I'd like to implement a time step in idle(), but even basic changes to the vertices or translations and rotations on the MODELVIEW matrix don't display. It just puts up the initial state and does not update. Am I missing something?
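
    Two details worth checking (not from the original post, just likely culprits sketched below): the display mode is chosen after the window is created and requests a single-buffered context even though display() ends with glutSwapBuffers(), and idle() increments a bare local test_change, which raises UnboundLocalError before glutPostRedisplay() is ever reached.

        from OpenGL.GLUT import (glutInit, glutInitDisplayMode, glutInitWindowSize,
                                 glutCreateWindow, GLUT_DOUBLE, GLUT_RGB)

        # Pick the display mode before creating the window, and make it double-buffered
        # so the glutSwapBuffers() call in display() actually presents a new frame.
        glutInit()
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
        glutInitWindowSize(400, 400)
        glutCreateWindow("Simple OpenGL")

        # In idle(), keep the animation state on the instance:
        #     self.test_change += 0.1
        # and rebuild self.m_vbo from it, as in the original code.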

    Read the article

  • Resizing gif images

    - by Danny
    I have a GIF image, 720 x 40 pixels, used as a footer on a website. I need to extend the height of the GIF by 10 pixels. I was unable to do this using Office Picture Manager. What is the best way to achieve this?
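
    Not from the original post: any tool that can paste the image onto a taller canvas will do (GIMP, Paint.NET, ImageMagick). As one scripted example, a Pillow sketch that adds 10 rows below the original; the file names and the white fill colour are assumptions, and transparency/animation are not handled.

        from PIL import Image

        footer = Image.open("footer.gif").convert("RGB")         # hypothetical filename
        taller = Image.new("RGB", (footer.width, footer.height + 10), "white")
        taller.paste(footer, (0, 0))                             # original pixels on top, 10 new rows below
        taller.save("footer_extended.gif")                       # Pillow re-quantizes to a GIF palette on save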

    Read the article

  • opengl: question about glutMainLoop()

    - by lego69
    Can somebody explain how glutMainLoop() works? And a second question: why is glClearColor(0.0f, 0.0f, 1.0f, 1.0f); set after glutDisplayFunc(RenderScene);? It looks as though we call glClear(GL_COLOR_BUFFER_BIT); first and only define glClearColor(0.0f, 0.0f, 1.0f, 1.0f); afterwards.

        int main(int argc, char* argv[])
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(800, 600);
            glutInitWindowPosition(300, 50);
            glutCreateWindow("GLRect");
            glutDisplayFunc(RenderScene);
            glutReshapeFunc(ChangeSize);
            glClearColor(0.0f, 0.0f, 1.0f, 1.0f);   // <--
            glutMainLoop();
            return 0;
        }

        void RenderScene(void)
        {
            // Clear the window with current clearing color
            glClear(GL_COLOR_BUFFER_BIT);

            // Set current drawing color to red
            //          R     G     B
            glColor3f(1.0f, 0.0f, 1.0f);

            // Draw a filled rectangle with current color
            glRectf(0.0f, 0.0f, 50.0f, -50.0f);

            // Flush drawing commands
            glFlush();
        }

    Read the article

  • uiview rotation reset size problem

    - by user564968
    I have a UIView that I rotate to the left and right by applying a transform, but the problem is that the view's frame is not changing. My code is like this:

        CGAffineTransform tRotate45 = CGAffineTransformMakeRotation(-1.57);
        self.view.transform = tRotate45;
        imageScrollView.contentSize = CGSizeMake(480, 320);
        self.view.frame = CGRectMake(0, 0, 480, 320);

    The view itself is showing properly; how can I adjust this?

    Read the article

  • Generate a polygon from a line

    - by VOX
    I want to draw a line with thickness in J2ME. This is easy in desktop Java by setting the pen width to the thickness value, but in J2ME the Pen class does not support a width. My idea is to generate, from the line I have, a polygon that resembles the thick line I want to draw. In the picture, on the left is what I have: a line defined by points. On the right is what I want: a polygon that, when filled, gives a line with thickness. Does anyone know how to generate such a polygon from a line? http://www.freeimagehosting.net/uploads/140e43c2d2.gif
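
    A sketch of the geometry in Python (not from the original post; the J2ME port is the same arithmetic with ints): offset each segment by half the thickness along its unit normal on both sides, then walk the left edge forward and the right edge backward to close the polygon. Joints are not mitred, so sharp corners in a multi-segment polyline would need extra handling.

        def thick_line_polygon(points, thickness):
            half = thickness / 2.0
            left, right = [], []
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                dx, dy = x1 - x0, y1 - y0
                length = (dx * dx + dy * dy) ** 0.5
                nx, ny = -dy / length * half, dx / length * half   # unit normal scaled to half the width
                left  += [(x0 + nx, y0 + ny), (x1 + nx, y1 + ny)]
                right += [(x0 - nx, y0 - ny), (x1 - nx, y1 - ny)]
            return left + right[::-1]                              # closed outline, ready for a polygon fill

        print(thick_line_polygon([(0, 0), (10, 0)], 4))
        # [(0.0, 2.0), (10.0, 2.0), (10.0, -2.0), (0.0, -2.0)]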

    Read the article
