Search Results

Search found 8165 results on 327 pages for '3d graphics'.

  • Getting invalid context errors

    - by Andrew
    I don't have much code thus far, only this to start:

        UIGraphicsBeginImageContextWithOptions(bounds.size, NO, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGMutablePathRef outerPath;
        CGMutablePathRef highlightPath;
        CGRect outerRect = rectForRectWithInset(bounds, 1);
        CGRect highlightRect = CGRectMake(outerRect.origin.x, outerRect.origin.y + 1, outerRect.size.width, outerRect.size.height);

    And then the problematic bit; when it is commented out, the error goes away:

        CGContextSaveGState(context);
        CGContextAddPath(context, highlightPath);
        CGContextSetFillColorWithColor(context, [[UIColor colorWithWhite:1.0 alpha:0.05]CGColor]);
        CGContextFillPath(context);
        CGContextRestoreGState(context);

    Below that is simply:

        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

    Read the article

  • When not to do maximum compression in png?

    - by user1444680
    Intro: When saving PNG images through GIMP, I've always used level 9 (maximum) compression, as I knew that it's lossless. Now I have to specify the compression level when saving a PNG image through the GD extension of PHP. Question: Is there any case where I shouldn't compress a PNG to the maximum level, such as compatibility issues? If there's no problem, then why ask the user at all; why not automatically compress to the maximum?

    Read the article

  • Simulating brush strokes for painting application

    - by DrRobot
    I'm trying to write an application that can be used to create pictures that look like paintings, using simulated brush strokes. Are there any good sources for simple ways of simulating brush strokes? For example, given a list of mouse positions that the user has dragged the mouse through, a brush width and a brush texture, how do I determine what to draw to the canvas? I've tried angling the brush texture in the direction of the mouse movement and dabbing several brush texture images along the path, but it doesn't look great. I think I'm missing something where the brush texture should shrink and grow on corners. Any simple-to-follow links would be appreciated. I've found complex academic papers on simulating, e.g., oil paints, but I just want a basic algorithm that produces OK results if possible.
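
    One thing that usually helps with the dabbing approach is to place dabs at a fixed spacing along the interpolated mouse path rather than once per mouse event, carrying the leftover distance from one segment to the next so fast strokes don't leave gaps. Below is a minimal sketch of that idea in Java2D; the class and method names are made up for illustration, and the brush texture is assumed to be a small image with an alpha channel.

        import java.awt.Graphics2D;
        import java.awt.Point;
        import java.awt.geom.AffineTransform;
        import java.awt.image.BufferedImage;
        import java.util.List;

        public class BrushStamper {
            // Stamps the brush texture along a recorded mouse path, one dab every
            // 'spacing' pixels, each dab rotated to follow the stroke direction.
            public static void stampStroke(Graphics2D canvas, BufferedImage brush,
                                           List<Point> path, double spacing) {
                double carry = 0; // distance walked since the last dab
                for (int i = 1; i < path.size(); i++) {
                    Point a = path.get(i - 1), b = path.get(i);
                    double dx = b.x - a.x, dy = b.y - a.y;
                    double len = Math.hypot(dx, dy);
                    if (len == 0) continue;
                    double angle = Math.atan2(dy, dx);
                    for (double d = spacing - carry; d <= len; d += spacing) {
                        double x = a.x + dx * d / len;
                        double y = a.y + dy * d / len;
                        AffineTransform t = new AffineTransform();
                        t.translate(x, y);
                        t.rotate(angle);                          // align dab with stroke direction
                        t.translate(-brush.getWidth() / 2.0,      // center the dab on the path point
                                    -brush.getHeight() / 2.0);
                        canvas.drawImage(brush, t, null);
                    }
                    carry = (carry + len) % spacing;
                }
            }
        }

    A common refinement is to scale each dab (or vary its opacity) based on how sharply the direction changes between segments, which addresses the shrink-and-grow-on-corners effect mentioned in the question.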

    Read the article

  • How to write an intersects for Shapes in android

    - by Rafael T
    I have written an Object called Shape which has a Point representing the topLeftCorner and a Dimension which represents its width and height. To get the topRightCorner I can simply add the width to topLeftPoint.x. I rotate the Shapes by a certain number of degrees around their center. The problem is that after the rotation my intersects(Shape) method fails, because it does not honor the rotation of the Shapes. The rotation will be the same for each Shape. My current implementation looks like this inside my Shape Object:

        public boolean intersects(Shape s){
            // functions returning a Point of shape s
            return intersects(s.topLeft()) || intersects(s.topRight())
                || intersects(s.bottomLeft()) || intersects(s.bottomRight())
                || intersects(s.leftCenter()) || intersects(s.rightCenter())
                || intersects(s.center());
        }

        public boolean intersects(Point p){
            return p.x >= leftX() && p.x <= rightX()
                && p.y >= topY() && p.y <= bottomY();
        }

    Basically I need functions like rotatedLeftX() or rotatedTopRight() to work properly. Also, for that calculation I think it doesn't matter that what was the topLeft point before a rotation of, say, 90 degrees turns into the topRight... I already read this and this Question here, but do not understand it fully.
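
    Rather than writing rotatedLeftX()-style accessors, it is usually easier to transform the tested point into the unrotated frame of the shape: rotate the other shape's corners forward around its own center, then rotate each candidate point backward around this shape's center and reuse the existing axis-aligned test. Here is a sketch of the two helpers, meant to sit inside the Shape class from the question, reusing its leftX()/rightX()/topY()/bottomY() and center() accessors; the rotation is assumed to be stored in degrees.

        // Rotate point p around the given center by the given angle in degrees.
        static Point rotate(Point p, Point center, double degrees) {
            double rad = Math.toRadians(degrees);
            double dx = p.x - center.x, dy = p.y - center.y;
            return new Point(
                    (int) Math.round(center.x + dx * Math.cos(rad) - dy * Math.sin(rad)),
                    (int) Math.round(center.y + dx * Math.sin(rad) + dy * Math.cos(rad)));
        }

        // "Does this rotated shape contain p?" -- un-rotate p into this shape's
        // local frame and reuse the plain axis-aligned bounds test.
        public boolean containsRotated(Point p, double degrees) {
            Point local = rotate(p, center(), -degrees);
            return local.x >= leftX() && local.x <= rightX()
                && local.y >= topY() && local.y <= bottomY();
        }

        // Usage: test a corner of the other shape after its own rotation, e.g.
        //   containsRotated(rotate(s.topLeft(), s.center(), degrees), degrees)

    Note that sampling a handful of points can still miss overlaps where neither shape contains a sampled point of the other; a separating-axis test over the rotated edges is the robust way to decide intersection of two rotated rectangles.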

    Read the article

  • How can I tell if a closed path contains a given point?

    - by Tom Seago
    In Android, I have a Path object which I happen to know defines a closed path, and I need to figure out if a given point is contained within the path. What I was hoping for was something along the lines of path.contains(int x, int y) but that doesn't seem to exist. The specific reason I'm looking for this is because I have a collection of shapes on screen defined as paths, and I want to figure out which one the user clicked on. If there is a better way to be approaching this such as using different UI elements rather than doing it "the hard way" myself, I'm open to suggestions. I'm open to writing an algorithm myself if I have to, but that means different research I guess.
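
    Android's Path has no contains() of its own, but a closed Path can be converted into a Region, which does. A sketch of that approach is below; for repeated hit-testing (e.g. on every tap) you would build the Region once per shape rather than on every call.

        import android.graphics.Path;
        import android.graphics.RectF;
        import android.graphics.Region;

        public static boolean pathContains(Path path, int x, int y) {
            RectF bounds = new RectF();
            path.computeBounds(bounds, true);          // bounding box of the path
            Region region = new Region();
            region.setPath(path, new Region(           // rasterize the path within its bounds
                    (int) bounds.left, (int) bounds.top,
                    (int) bounds.right, (int) bounds.bottom));
            return region.contains(x, y);
        }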

    Read the article

  • Monochrome BitMap Library

    - by Asad Jibran Ahmed
    I am trying to create a piece of software that can be used to create VERY large (10000x10000) sized bitmaps. All I need is something that can work in monochrome, since the required output is a matrix containing details of black and white pixels in the bitmap. The closest thing I can think of is a font editor, but the size is a problem. Is there any library out there that I can use to create the software, or will I have to write the whole thing from the start?
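
    For reference on the size question: at one bit per pixel, a 10000x10000 image is only about 12.5 MB, so a plain in-memory bit matrix may be all that is needed, and most imaging libraries also offer a 1-bit format (in Java, for instance, BufferedImage with TYPE_BYTE_BINARY). A minimal roll-your-own sketch in Java:

        import java.util.BitSet;

        // One bit per pixel: true = black, false = white.
        public class MonoBitmap {
            private final int width, height;
            private final BitSet bits;

            public MonoBitmap(int width, int height) {
                this.width = width;
                this.height = height;
                this.bits = new BitSet(width * height);
            }

            public void set(int x, int y, boolean black) { bits.set(y * width + x, black); }

            public boolean get(int x, int y) { return bits.get(y * width + x); }

            public int width()  { return width; }
            public int height() { return height; }
        }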

    Read the article

  • Is there a table of OpenGL extensions, versions, and hardware support somewhere?

    - by Thomas
    I'm looking for some resource that can help me decide what OpenGL version my game needs at minimum, and what features to support through extensions. Ideally, a table of the following format:

                        1.0   1.1   1.2   1.2.1   1.3   ...
        multitexture    -     ARB   ARB   core    core
        texture_float   -     EXT   EXT   ARB     ARB
        ...

    (Not sure about the values I put in, but you get the idea.) The extension specs themselves, at opengl.org, list the minimum OpenGL version they need, so that part is easy. However, many extensions have been accepted and became core standard in subsequent OpenGL versions, but it is very hard to find when that happened. The only way I could find is to compare the full OpenGL standards document for each version. On a related note, I would also very much like to know which extensions/features are supported by which hardware, to help me decide what features I can safely use in my game, and which ones I need to make optional. For example, a big honkin' table like this:

                        MAX_TEXTURE_IMAGE_UNITS   MAX_VERTEX_TEXTURE_IMAGE_UNITS   ...
        GeForce 6xxx    8                         4
        GeForce 7xxx    16                        8
        ATi x300        8                         4
        ...

    (Again, I'm making the values up.) The table could list hardware limitations from glGet but also support for particular extensions, and limitations of such extension support (e.g. what floating-point texture formats are supported in hardware). Any pointers to these or similar resources would be hugely appreciated!

    Read the article

  • How can I draw a shadow beyond a UIView's bounds?

    - by Christian
    I'm using the method described at http://stackoverflow.com/questions/805872/how-do-i-draw-a-shadow-under-a-uiview to draw shadow behind a view's content. The shadow is clipped to the view's bounds, although I disabled "Clip Subviews" in Interface Builder for the view. Is it possible to draw a shadow around a view and not only in a view? I don't want to draw the shadow inside the view because the view would receive touch events for the shadow area, which really belongs to the background.

    Read the article

  • [CA_COLOR_OPAQUE] Things that make a layer non-opaque. Scaled CAGradientLayer?

    - by mahal tertin
    I spent some time with the environment variable CA_COLOR_OPAQUE = 1 and have my findings to share. Things that make a CALayer non-opaque (slow, more memory, ...):

    * contents with alpha (like an NSImage with an icon)
    * an NSImage/CGImage from a PDF as contents (even when the PDF does not contain any alpha and opaque=YES)
    * backgroundColor = nil
    * a CATextLayer with text in it (because it is contents with alpha)
    * rounded corners? maybe/sometimes
    * masksToBounds? not necessarily

    As we scale most of the tree with CATransform3DScale on the sublayerTransform, I also found these rather irritating cases of non-opacity:

    * a CAGradientLayer that is somewhere down in this scaled tree (even when all the gradient colors are set without alpha)
    * edgeAntialiasingMask != 0 on a layer that is somewhere down in this scaled tree

    The last two do not make sense to me. Why should it be non-opaque? What do I see? If anyone has any thoughts on these findings, I'm happy to learn, as I couldn't find such a list yet.

    Read the article

  • Codebase for making a Flash-based interactive map with SVG vector data?

    - by Mike
    I'm looking for a way to take SVG path info (basically a string of coordinates) and dynamically draw it with Actionscript. Icing on the cake would be if those shapes could detect mouse events to trigger JS and dynamically change their appearance (fill, stroke, etc...). I'm currently trying something similar to this (http://raphaeljs.com/australia.html) using SVG but it's just too slow in IE. I've also tried Google's SVG Web (http://code.google.com/p/svgweb/) which basically does exactly what I'm looking for (it converts SVG to Flash in IE) but again, it's sloooooow - which is why I'm considering doing the whole shebang in Flash. Anyone know of some links to point me in the right direction?

    Read the article

  • Image/"most resembling pixel" search optimization?

    - by SigTerm
    The situation: Let's say I have an image A, say, 512x512 pixels, and an image B, 5x5 or 7x7 pixels. Both images are 24-bit RGB, and B has a 1-bit alpha mask (so each pixel is either completely transparent or completely solid). I need to find within image A the pixel which (with its neighbors) most closely resembles image B, OR the pixel that probably most closely resembles image B. Resemblance is calculated as a "distance", which is the sum of "distances" between B's non-transparent pixels and the corresponding pixels of A, divided by the number of non-transparent pixels in B. Here is some sample SDL code for explanation:

        struct Pixel{ unsigned char b, g, r, a; };

        void fillPixel(int x, int y, SDL_Surface* dst, SDL_Surface* src, int dstMaskX, int dstMaskY){
            Pixel& dstPix = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*x + dst->pitch*y));
            int xMin = x + texWidth - searchWidth;
            int xMax = xMin + searchWidth*2;
            int yMin = y + texHeight - searchHeight;
            int yMax = yMin + searchHeight*2;
            int numFilled = 0;
            for (int curY = yMin; curY < yMax; curY++)
                for (int curX = xMin; curX < xMax; curX++){
                    Pixel& cur = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*(curX & texMaskX) + dst->pitch*(curY & texMaskY)));
                    if (cur.a != 0)
                        numFilled++;
                }
            if (numFilled == 0){
                int srcX = rand() % src->w;
                int srcY = rand() % src->h;
                dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*srcX + src->pitch*srcY));
                dstPix.a = 0xFF;
                return;
            }
            int storedSrcX = rand() % src->w;
            int storedSrcY = rand() % src->h;
            float lastDifference = 3.40282347e+37F;
            //unsigned char mask =
            for (int srcY = searchHeight; srcY < (src->h - searchHeight); srcY++)
                for (int srcX = searchWidth; srcX < (src->w - searchWidth); srcX++){
                    float curDifference = 0;
                    int numPixels = 0;
                    for (int tmpY = -searchHeight; tmpY < searchHeight; tmpY++)
                        for(int tmpX = -searchWidth; tmpX < searchWidth; tmpX++){
                            Pixel& tmpSrc = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*(srcX+tmpX) + src->pitch*(srcY+tmpY)));
                            Pixel& tmpDst = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*((x + dst->w + tmpX) & dstMaskX) + dst->pitch*((y + dst->h + tmpY) & dstMaskY)));
                            if (tmpDst.a){
                                numPixels++;
                                int dr = tmpSrc.r - tmpDst.r;
                                int dg = tmpSrc.g - tmpDst.g;
                                int db = tmpSrc.g - tmpDst.g;
                                curDifference += dr*dr + dg*dg + db*db;
                            }
                        }
                    if (numPixels)
                        curDifference /= (float)numPixels;
                    if (curDifference < lastDifference){
                        lastDifference = curDifference;
                        storedSrcX = srcX;
                        storedSrcY = srcY;
                    }
                }
            dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*storedSrcX + src->pitch*storedSrcY));
            dstPix.a = 0xFF;
        }

    This thing is supposed to be used for texture generation. Now, the question: The easiest way to do this is a brute-force search (which is what the example routine uses). But it is slow: even GPU acceleration and a dual-core CPU won't make it much faster. It looks like I can't use a modified binary search because of B's mask. So, how can I find the desired pixel faster? Additional info: It is allowed to use 2 cores, GPU acceleration, CUDA, and 1.5..2 gigabytes of RAM for the task. I would prefer to avoid some kind of lengthy preprocessing phase that will take 30 minutes to finish. Ideas?
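
    One cheap improvement that keeps the brute-force structure is "early abandoning": if I read the routine right, the count of opaque pixels in the destination window does not depend on the candidate source position, so the per-candidate division does not affect the ranking and candidates can be compared by their raw sums; that in turn lets you stop accumulating a candidate's sum as soon as it already exceeds the best sum found so far. A sketch of the idea in Java, with a single channel and hypothetical array inputs standing in for the surfaces:

        // Returns {bestX, bestY}: top-left offset in a[][] where the opaque pixels
        // of b[][] match best by sum of squared differences, with early abandoning.
        static int[] bestMatch(int[][] a, int[][] b, boolean[][] bOpaque) {
            int ah = a.length, aw = a[0].length, bh = b.length, bw = b[0].length;
            long bestSum = Long.MAX_VALUE;
            int bestX = 0, bestY = 0;
            for (int y = 0; y + bh <= ah; y++) {
                for (int x = 0; x + bw <= aw; x++) {
                    long sum = 0;
                    scan:
                    for (int j = 0; j < bh; j++) {
                        for (int i = 0; i < bw; i++) {
                            if (!bOpaque[j][i]) continue;       // skip transparent B pixels
                            int d = a[y + j][x + i] - b[j][i];
                            sum += (long) d * d;
                            if (sum >= bestSum) break scan;     // cannot beat the best any more
                        }
                    }
                    if (sum < bestSum) { bestSum = sum; bestX = x; bestY = y; }
                }
            }
            return new int[] { bestX, bestY };
        }

    Visiting candidate positions in an order that finds a good match early (for example, coarse-to-fine on downsampled copies of both images) makes the abandoning threshold tight sooner and compounds the saving.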

    Read the article

  • What is the most mature library to render equirectangular images in Flash?

    - by Dave Viner
    I have a series of equirectangular images. I'd like to display them in a custom Flash player so that the user could see the spherical nature of the images, and "look up", "look down", "look left/right" (or pan, zoom, etc). (Note that I have a long series of images, so the library must allow for dynamic loading of the images themselves, rather than having the images "baked" into the SWF player.) What is the best library to manage the display of the equirectangular images in Flash? By "best", I mean the most mature, most reliable, most robust, and fastest performing. For reference, an example of an equirectangular image can be found at http://archive.bigben.id.au/tutorials/360/background/projections.html.

    Read the article

  • CoreGraphics taking a while to show on a large view - can i get it to repeat pixels?

    - by Andrew
    This is my Core Graphics code:

        void drawTopPaperBackground(CGContextRef context, CGRect rect) {
            CGRect paper3 = CGRectMake(10, 14, 300, rect.size.height - 14);
            CGRect paper2 = CGRectMake(13, 12, 294, rect.size.height - 12);
            CGRect paper1 = CGRectMake(16, 10, 288, rect.size.height - 10);

            //Shadow
            CGContextSetShadowWithColor(context, CGSizeMake(0,0), 10, [[UIColor colorWithWhite:0 alpha:0.5]CGColor]);
            CGPathRef path = createRoundedRectForRect(paper3, 0);
            CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextFillPath(context);

            //Layers of paper
            //CGContextSaveGState(context);
            drawPaper(context, paper3);
            drawPaper(context, paper2);
            drawPaper(context, paper1);
            //CGContextRestoreGState(context);
        }

        void drawPaper(CGContextRef context, CGRect rect) {
            //Shadow
            CGContextSaveGState(context);
            CGContextSetShadowWithColor(context, CGSizeMake(0,0), 1, [[UIColor colorWithWhite:0 alpha:0.5]CGColor]);
            CGPathRef path = createRoundedRectForRect(rect, 0);
            CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextFillPath(context);
            //CGContextRestoreGState(context);

            //Gradient
            //CGContextSaveGState(context);
            CGColorRef startColor = [UIColor colorWithWhite:0.92 alpha:1.0].CGColor;
            CGColorRef endColor = [UIColor colorWithWhite:0.94 alpha:1.0].CGColor;
            CGRect firstHalf = CGRectMake(rect.origin.x, rect.origin.y, rect.size.width / 2, rect.size.height);
            CGRect secondHalf = CGRectMake(rect.origin.x + (rect.size.width / 2), rect.origin.y, rect.size.width / 2, rect.size.height);
            drawVerticalGradient(context, firstHalf, startColor, endColor);
            drawVerticalGradient(context, secondHalf, endColor, startColor);
            //CGContextRestoreGState(context);

            //CGContextSaveGState(context);
            CGRect redRect = rectForRectWithInset(rect, -1);
            CGMutablePathRef redPath = createRoundedRectForRect(redRect, 0);
            //CGContextSaveGState(context);
            CGContextSetStrokeColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextClip(context);
            CGContextAddPath(context, redPath);
            CGContextSetShadowWithColor(context, CGSizeMake(0, 0), 15.0, [[UIColor colorWithWhite:0 alpha:0.1] CGColor]);
            CGContextStrokePath(context);
            CGContextRestoreGState(context);
        }

    The view is a UIScrollView which contains a text view. Every time the user types something and goes onto a new line, I call [self setNeedsDisplay]; and the drawing code runs again. But when the view starts to get long - around 1000 points in height - there is very noticeable lag. How can I make this code more efficient? Can I take a line of pixels and make it just repeat, or stretch, all the way down?

    Read the article

  • OpenGL - lighting of vertices outside clip range

    - by hmp
    I have a problem with lighting in my OpenGL application. When one of the vertices of a drawn polygon goes outside the front clip plane (or has z<0, I'm not sure which), the polygon stops being lighted properly. This however happens on only one machine I tested, with Intel GMA950 card. On nVidia and ATI cards everything looks fine. I guess I am breaking some OpenGL rule here? How should I deal with it? I'd try dividing the scene into smaller polygons, but I'm not sure if it guarantees the case is eliminated (all polygons stepping outside the clipping range are offscreen).

    Read the article

  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied elsewhere within the surface: add_blt(rect src, point dst); There can be any number of operations posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily. This gets tricky because of overlaps, of course. A blit could re-blit a previously copied pixel. Ideally blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation. It's tricky but not impossible to put this together. I'm just trying not to reinvent the wheel. I looked around on the 'net, and the only thing I found was the SDL_BlitPool Library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work?

    * Implement customized region handler code that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it will automatically associate the metadata of this rectangle with the resulting sub-rectangles.
    * When the optimization run starts, create an empty region handled by the above customized code; call this the master region.
    * Iterate through the blt queue, and for every entry:
      * Let srcrect be the source rectangle for the blt being examined.
      * Get the intersection of srcrect and the master region into temp region.
      * Remove temp region from the master region, so the master region no longer covers temp region.
      * Promote srcrect to a region (srcrgn) and subtract temp region from it.
      * Offset temp region and srcrgn with the vector of the current blt: their union will cover the destination area of the current blt.
      * Add to the master region all rects in temp region, retaining the original source metadata (step one of adding the current blt to the master region).
      * Add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region).
    * Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If yes, combine them.
    * Enumerate the master region's sub-rectangles. Every rectangle returned is an optimized blt operation destination. The associated metadata is the blt operation's source.
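
    The core primitive the customized region handler needs is rectangle subtraction: removing one rectangle from another yields at most four axis-aligned pieces, each of which inherits the original rectangle's metadata. A small sketch of that building block in Java (java.awt.Rectangle is used only for convenience; a real region type would also merge compatible neighbors as described in the last step above):

        import java.awt.Rectangle;
        import java.util.ArrayList;
        import java.util.List;

        // Subtract rectangle b from rectangle a, returning up to four rectangles
        // that exactly cover the area of a not covered by b.
        static List<Rectangle> subtract(Rectangle a, Rectangle b) {
            List<Rectangle> out = new ArrayList<>();
            Rectangle in = a.intersection(b);
            if (in.isEmpty()) {                    // no overlap: a survives untouched
                out.add(new Rectangle(a));
                return out;
            }
            if (in.y > a.y)                        // strip above the hole, full width
                out.add(new Rectangle(a.x, a.y, a.width, in.y - a.y));
            if (in.y + in.height < a.y + a.height) // strip below the hole, full width
                out.add(new Rectangle(a.x, in.y + in.height, a.width,
                                      a.y + a.height - (in.y + in.height)));
            if (in.x > a.x)                        // strip left of the hole
                out.add(new Rectangle(a.x, in.y, in.x - a.x, in.height));
            if (in.x + in.width < a.x + a.width)   // strip right of the hole
                out.add(new Rectangle(in.x + in.width, in.y,
                                      a.x + a.width - (in.x + in.width), in.height));
            return out;
        }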

    Read the article

  • Optimum ordering for packed vertex arrays on iPhone

    - by Pestilence
    Is there an optimum packing format for vertex arrays on the iPhone hardware? My textured (triangle) arrays are ordered: Vertex (x, y, z) Vertex Normal (x, y, z) Texture Coordinates (u, v) This is the way I've always done it. Should the UVs come before the normals? I'm not sure if it matters. I'd assume that the texturing & lighting units would have a preference, but I can't find anything about it. I certainly can't detect a difference.

    Read the article

  • Optimally place a pie slice in a rectangle.

    - by Lisa
    Given a rectangle (w, h) and a pie slice with a start angle and an end angle, how can I place the slice optimally in the rectangle so that it fills the available space best (from an optical point of view, not mathematically speaking)? I'm currently placing the pie slice's center in the center of the rectangle and using half of the smaller of the two rectangle sides as the radius. This leaves plenty of empty room for certain configurations. Examples to make clear what I'm after, based on the precondition that the slice is drawn like a unit circle: A start angle of 0 and an end angle of PI would lead to a filled lower half of the rectangle and an empty upper half. A good solution here would be to move the center up by 1/4*h. A start angle of 0 and an end angle of PI/2 would lead to a filled bottom-right quarter of the rectangle. A good solution here would be to move the center point to the top left of the rectangle and to set the radius to the smaller of the two rectangle sides. This is fairly easy for the cases I've sketched, but it becomes complicated when the start and end angles are arbitrary. I am searching for an algorithm which determines the center of the slice and the radius in a way that fills the rectangle best. Pseudo code would be great since I'm not a big mathematician.
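
    One way to make this general: compute the tight bounding box of a unit-radius slice whose tip is at the origin (the extremes can only occur at the tip, at the two arc end points, or where the arc crosses a coordinate axis), then scale and center that box inside the rectangle; the scale factor is the radius and the mapped tip is the center. A sketch of that in Java, assuming angles in radians measured counter-clockwise with y pointing up (flip the y term for typical screen coordinates):

        import java.awt.geom.Rectangle2D;
        import java.util.ArrayList;
        import java.util.List;

        // Returns {centerX, centerY, radius}: the slice's bounding box is centered
        // inside the target rectangle at the largest radius that still fits.
        static double[] fitSlice(Rectangle2D target, double startAngle, double endAngle) {
            double minX = 0, maxX = 0, minY = 0, maxY = 0;   // the tip (0,0) always belongs to the slice
            List<Double> angles = new ArrayList<>();
            angles.add(startAngle);
            angles.add(endAngle);
            // axis crossings (multiples of 90 degrees) that fall inside the sweep
            for (long k = (long) Math.ceil(startAngle / (Math.PI / 2)); k * Math.PI / 2 <= endAngle; k++)
                angles.add(k * Math.PI / 2);
            for (double a : angles) {
                minX = Math.min(minX, Math.cos(a)); maxX = Math.max(maxX, Math.cos(a));
                minY = Math.min(minY, Math.sin(a)); maxY = Math.max(maxY, Math.sin(a));
            }
            double w = maxX - minX, h = maxY - minY;
            double radius = Math.min(target.getWidth() / w, target.getHeight() / h);
            // center the scaled box in the target, then locate the tip within it
            double cx = target.getX() + (target.getWidth()  - radius * w) / 2 - radius * minX;
            double cy = target.getY() + (target.getHeight() - radius * h) / 2 - radius * minY;
            return new double[] { cx, cy, radius };
        }

    For the two examples in the question this gives placements close to the ones described: the half-circle's tip ends up offset by a quarter of the box height, and the quarter-circle gets the smaller side as its radius with its tip pulled toward a corner.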

    Read the article

  • Displaying bitmaps in relative positions

    - by JonF
    I'd like to put a couple images on a surfaceview. I understand that the screen sizes of android devices can vary, so I don't think I can just use an x y position or I might end up placing it off different screens. Say I want to put two boxes in the center of the screen, a blue one and a red one. The blue one is to the left of the red one. How can I accomplish that while accounting for different screen sizes?
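
    A common way to handle this is to compute every position from the Canvas dimensions at draw time, so the layout is expressed in fractions of the screen rather than absolute pixels. A sketch with two colored boxes centered side by side; for actual Bitmaps you would pass the same computed coordinates to canvas.drawBitmap.

        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;

        // Draw a blue box just left of the screen center and a red box just right of it,
        // sized relative to the screen so the layout scales across devices.
        void drawCenteredBoxes(Canvas canvas) {
            int w = canvas.getWidth(), h = canvas.getHeight();
            float box = w * 0.15f;            // box edge = 15% of the screen width
            float gap = w * 0.02f;            // small gap between the two boxes
            float top = h / 2f - box / 2f;    // vertically centered

            Paint paint = new Paint();
            paint.setColor(Color.BLUE);
            canvas.drawRect(w / 2f - gap / 2f - box, top, w / 2f - gap / 2f, top + box, paint);
            paint.setColor(Color.RED);
            canvas.drawRect(w / 2f + gap / 2f, top, w / 2f + gap / 2f + box, top + box, paint);
        }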

    Read the article

  • .GIF re edit! Can't figure it out!!

    - by Adam C
    http://img227.imageshack.us/img227/1892/hatersgonna.gif That is the photo. I am trying to crop around it so it's a little smaller, and make him walk in the opposite direction. The reason I am doing this is for a vBulletin forum signature, since it marquees left to right. I have tried editing the animation in Photoshop and I flipped the canvas horizontally... I can't figure this out. I've been at it for HOURS, hah. Also, if anyone can make it just a little darker, that would be amazing. "No, I'm not asking for free help", but any help would be great. Thank you so much

    Read the article

  • How to change the coordinate of a point that is inside a GraphicsPath?

    - by Ben
    Is there any way to change the coordinates of some of the points within a GraphicsPath object while leaving the other points where they are? The GraphicsPath object that gets passed into my method will contain a mixture of polygons and lines. My method would want to look something like:

        void UpdateGraphicsPath(GraphicsPath gPath, RectangleF regionToBeChanged, PointF delta)
        {
            // Find the points in gPath that are inside regionToBeChanged
            // and move them by delta.
            // gPath.PathPoints[i].X += delta.X; // Compiles but doesn't work
        }

    GraphicsPath.PathPoints seems to be read-only, and so does GraphicsPath.PathData.Points. So I am wondering if this is even possible. Perhaps generating a new GraphicsPath object with an updated set of points? How can I know if a point is part of a line or a polygon? If anyone has any suggestions then I would be grateful.
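
    The rebuild approach suggested in the question is the usual answer: walk the existing path's points and segment types, shift the points that fall inside the region, and construct a new path from the result (in GDI+ the per-segment types from PathData.Types are what tell you whether a point belongs to a line or starts a new figure). For illustration only, here is the same idea sketched with Java2D's Path2D and PathIterator rather than GDI+:

        import java.awt.geom.Path2D;
        import java.awt.geom.PathIterator;
        import java.awt.geom.Rectangle2D;

        // Rebuild a path, shifting by (dx, dy) every control point that lies inside region.
        static Path2D movePointsInRegion(Path2D src, Rectangle2D region, double dx, double dy) {
            Path2D out = new Path2D.Double();
            double[] c = new double[6];
            for (PathIterator it = src.getPathIterator(null); !it.isDone(); it.next()) {
                int type = it.currentSegment(c);
                int pairs = (type == PathIterator.SEG_CUBICTO) ? 3
                          : (type == PathIterator.SEG_QUADTO)  ? 2
                          : (type == PathIterator.SEG_CLOSE)   ? 0 : 1;
                for (int i = 0; i < pairs; i++) {
                    if (region.contains(c[2 * i], c[2 * i + 1])) {
                        c[2 * i] += dx;
                        c[2 * i + 1] += dy;
                    }
                }
                switch (type) {
                    case PathIterator.SEG_MOVETO:  out.moveTo(c[0], c[1]); break;
                    case PathIterator.SEG_LINETO:  out.lineTo(c[0], c[1]); break;
                    case PathIterator.SEG_QUADTO:  out.quadTo(c[0], c[1], c[2], c[3]); break;
                    case PathIterator.SEG_CUBICTO: out.curveTo(c[0], c[1], c[2], c[3], c[4], c[5]); break;
                    case PathIterator.SEG_CLOSE:   out.closePath(); break;
                }
            }
            return out;
        }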

    Read the article

  • Methods for making R plots look like Excel plots?

    - by brianjd
    I've been poking around with R graphical parameters trying to make my plots look a little more professional (e.g., las=1, bty="n" usually help). But not quite there. Started playing with tikzDevice. A huge improvement! Amazing how much better things look when the font sizes and styles in the figure match those of the surrounding document. Still, not quite there. What I'm ultimately looking for are those professional gradient shading, rounded corners, and shadow effects found in MS Excel plots. I know they're probably considered chart junk, but I like them. They're just nice looking. Q: How can I get these effects into my R plots? Do people usually just export to Inkscape and doodle over there? It would be nice if there were a literate programming approach. Is there an R package that handles these effects outright?

    Read the article

  • Resizing an image with alpha channel

    - by Hafthor
    I am writing some code to generate images - essentially I have a source image that is large and includes transparent regions. I use GDI+ to open that image and add additional objects. What I want to do next is to save this new image much smaller, so I used the Bitmap constructor that takes a source Image object and a height and width, then saved that. I was expecting the alpha channel to be smoothed like the color channels, but this did not happen -- it did result in a couple of semitransparent pixels, but overall it is very blocky. What gives?

        Using img As New Bitmap("source100x100.png")
            ''// Drawing stuff
            Using simg As New Bitmap(img, 20, 20)
                simg.Save("target20x20.png")
            End Using
        End Using

    Edit: I think what I want is SuperSampling, like what Paint.NET does when set to "Best Quality"

    Read the article

  • Code Interaction with Quartz Composition

    - by Alberto MQO
    Hi, I have a Quartz Composition with a cube, and the X/Y/Z rotation inputs are published. In Interface Builder I made a QCView and a QCPatchController with the previous Quartz Composition loaded. The Patch Controller is bound in the QCView, and the published rotation ports are bound to three NSSliders, so when I change the value of the NSSliders the cube rotates. All this works fine, but I want to change the rotation values of the cube from the App Delegate in Xcode. I tried to change the value of the NSSliders with IBOutlets pointing to them, but this change doesn't apply to the cube, like it does when I change the sliders directly with my mouse. What should I instantiate, and/or how do I access and change these input port values through the QCPatchController? Thank you very much for reading, I really need help!

    Read the article
