
  • Codebase for making a Flash-based interactive map with SVG vector data?

    - by Mike
    I'm looking for a way to take SVG path info (basically a string of coordinates) and dynamically draw it with Actionscript. Icing on the cake would be if those shapes could detect mouse events to trigger JS and dynamically change their appearance (fill, stroke, etc...). I'm currently trying something similar to this (http://raphaeljs.com/australia.html) using SVG but it's just too slow in IE. I've also tried Google's SVG Web (http://code.google.com/p/svgweb/) which basically does exactly what I'm looking for (it converts SVG to Flash in IE) but again, it's sloooooow - which is why I'm considering doing the whole shebang in Flash. Anyone know of some links to point me in the right direction?
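
    If the Flash route wins out, the core of it is small: tokenize the path data and replay it as moveTo/lineTo calls on a Graphics object. Below is a rough sketch of that parsing step for the absolute M/L subset that map exports typically use (Java here purely for illustration - the ActionScript port is mechanical; curve commands and relative coordinates are not handled):

        import java.awt.geom.Path2D;
        import java.util.Locale;
        import java.util.Scanner;

        // Parses "M x y L x y ..." (absolute, whitespace/comma separated).
        // In Flash, the same loop would drive Graphics.moveTo/lineTo instead.
        static Path2D.Float parseSimplePath(String d) {
            Path2D.Float path = new Path2D.Float();
            Scanner s = new Scanner(d.replace(',', ' ')).useLocale(Locale.US);
            char cmd = 0;
            while (s.hasNext()) {
                if (s.hasNext("[ML]")) cmd = s.next().charAt(0);
                float x = s.nextFloat(), y = s.nextFloat();
                if (cmd == 'M') path.moveTo(x, y);
                else path.lineTo(x, y);
            }
            return path;
        }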

  • How can I draw a shadow beyond a UIView's bounds?

    - by Christian
    I'm using the method described at http://stackoverflow.com/questions/805872/how-do-i-draw-a-shadow-under-a-uiview to draw shadow behind a view's content. The shadow is clipped to the view's bounds, although I disabled "Clip Subviews" in Interface Builder for the view. Is it possible to draw a shadow around a view and not only in a view? I don't want to draw the shadow inside the view because the view would receive touch events for the shadow area, which really belongs to the background.

  • Why are all my masked views unmasked in my view snapshot?

    - by mystify
    I'm taking a snapshot of a view. This view has some subviews which have layer masks applied to them. For some reason, those masks have no effect in the snapshot and the masked parts are completely visible.

        UIGraphicsBeginImageContext(theView.frame.size);
        [theView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

    I assume this is a bug in the framework. But maybe it's not? Did I do anything wrong here?

  • Interpolating height for a point inside a grid based on a discrete height function.

    - by fastrack20
    Hi, I have been wracking my brain to come up with a solution to this problem. I have a lookup table that returns height values for various points (x, z) on the grid. For instance, I can calculate the height at A, B, C and D in Figure 1. However, I am looking for a way to interpolate the height at P (which has a known (x, z)). The lookup table only has values at the grid intervals, and P lies between these intervals. I am trying to calculate values s and t such that:

        A'(s) = A + s(C - A)
        B'(t) = B + t(P - B)

    I would then use these two equations to find the intersection point of B'(t) with A'(s), giving a point X on the line A-C. With that I can calculate the height at X, and from it the height at P. My issue lies in calculating the values for s and t. Any help would be greatly appreciated.
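
    For a regular grid there is a standard shortcut that avoids solving for s and t explicitly: bilinear interpolation across the cell containing P. A minimal sketch (Java for illustration; heightAt is a stand-in for the lookup table, and the grid spacing is assumed to be 1):

        // Bilinearly interpolate the height at (x, z) from the four cell corners.
        static float interpolatedHeight(float x, float z) {
            int x0 = (int) Math.floor(x), z0 = (int) Math.floor(z);
            float fx = x - x0, fz = z - z0;        // fractional position in the cell
            float h00 = heightAt(x0,     z0);
            float h10 = heightAt(x0 + 1, z0);
            float h01 = heightAt(x0,     z0 + 1);
            float h11 = heightAt(x0 + 1, z0 + 1);
            float near = h00 + fx * (h10 - h00);   // interpolate along x at z0
            float far  = h01 + fx * (h11 - h01);   // interpolate along x at z0 + 1
            return near + fz * (far - near);       // then along z
        }

    If the terrain is actually rendered as triangles, interpolating with barycentric weights over the correct half of the cell matches the rendered surface exactly, but bilinear is usually close enough.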

  • CoreGraphics taking a while to show on a large view - can I get it to repeat pixels?

    - by Andrew
    This is my Core Graphics code:

        void drawTopPaperBackground(CGContextRef context, CGRect rect) {
            CGRect paper3 = CGRectMake(10, 14, 300, rect.size.height - 14);
            CGRect paper2 = CGRectMake(13, 12, 294, rect.size.height - 12);
            CGRect paper1 = CGRectMake(16, 10, 288, rect.size.height - 10);

            //Shadow
            CGContextSetShadowWithColor(context, CGSizeMake(0,0), 10, [[UIColor colorWithWhite:0 alpha:0.5] CGColor]);
            CGPathRef path = createRoundedRectForRect(paper3, 0);
            CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextFillPath(context);

            //Layers of paper
            //CGContextSaveGState(context);
            drawPaper(context, paper3);
            drawPaper(context, paper2);
            drawPaper(context, paper1);
            //CGContextRestoreGState(context);
        }

        void drawPaper(CGContextRef context, CGRect rect) {
            //Shadow
            CGContextSaveGState(context);
            CGContextSetShadowWithColor(context, CGSizeMake(0,0), 1, [[UIColor colorWithWhite:0 alpha:0.5] CGColor]);
            CGPathRef path = createRoundedRectForRect(rect, 0);
            CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextFillPath(context);
            //CGContextRestoreGState(context);

            //Gradient
            //CGContextSaveGState(context);
            CGColorRef startColor = [UIColor colorWithWhite:0.92 alpha:1.0].CGColor;
            CGColorRef endColor = [UIColor colorWithWhite:0.94 alpha:1.0].CGColor;
            CGRect firstHalf = CGRectMake(rect.origin.x, rect.origin.y, rect.size.width / 2, rect.size.height);
            CGRect secondHalf = CGRectMake(rect.origin.x + (rect.size.width / 2), rect.origin.y, rect.size.width / 2, rect.size.height);
            drawVerticalGradient(context, firstHalf, startColor, endColor);
            drawVerticalGradient(context, secondHalf, endColor, startColor);
            //CGContextRestoreGState(context);

            //CGContextSaveGState(context);
            CGRect redRect = rectForRectWithInset(rect, -1);
            CGMutablePathRef redPath = createRoundedRectForRect(redRect, 0);
            //CGContextSaveGState(context);
            CGContextSetStrokeColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextClip(context);
            CGContextAddPath(context, redPath);
            CGContextSetShadowWithColor(context, CGSizeMake(0, 0), 15.0, [[UIColor colorWithWhite:0 alpha:0.1] CGColor]);
            CGContextStrokePath(context);
            CGContextRestoreGState(context);
        }

    The view is a UIScrollView, which contains a text view. Every time the user types something and goes onto a new line, I call [self setNeedsDisplay]; and the view redraws. But when the view starts to get long - around 1000 points in height - there is very noticeable lag. How can I make this code more efficient? Can I take a line of pixels and make it just repeat, or stretch, all the way down?

  • Is there a table of OpenGL extensions, versions, and hardware support somewhere?

    - by Thomas
    I'm looking for some resource that can help me decide what OpenGL version my game needs at minimum, and what features to support through extensions. Ideally, a table of the following format:

                        1.0   1.1   1.2   1.2.1   1.3   ...
        multitexture     -    ARB   ARB   core    core
        texture_float    -    EXT   EXT   ARB     ARB
        ...

    (Not sure about the values I put in, but you get the idea.) The extension specs themselves, at opengl.org, list the minimum OpenGL version they need, so that part is easy. However, many extensions have been accepted and became core standard in subsequent OpenGL versions, but it is very hard to find when that happened. The only way I could find is to compare the full OpenGL standards document for each version.

    On a related note, I would also very much like to know which extensions/features are supported by which hardware, to help me decide what features I can safely use in my game, and which ones I need to make optional. For example, a big honkin' table like this:

                       MAX_TEXTURE_IMAGE_UNITS   MAX_VERTEX_TEXTURE_IMAGE_UNITS   ...
        GeForce 6xxx   8                         4
        GeForce 7xxx   16                        8
        ATi x300       8                         4
        ...

    (Again, I'm making the values up.) The table could list hardware limitations from glGet but also support for particular extensions, and limitations of such extension support (e.g. what floating-point texture formats are supported in hardware). Any pointers to these or similar resources would be hugely appreciated!

  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied elsewhere within the surface:

        add_blt(rect src, point dst);

    Any number of operations can be posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily. This gets tricky because of overlaps, of course: a blit could re-blit a previously copied pixel. Ideally, blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation.

    It's tricky but not impossible to put this together; I'm just trying not to reinvent the wheel. I looked around on the 'net, and the only thing I found was the SDL_BlitPool library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work? (A sketch of the rectangle primitives it relies on follows below.)

    1. Implement a customized region handler that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it automatically associates the metadata of that rectangle with the resulting sub-rectangles.
    2. When the optimization run starts, create an empty region handled by the above customized code; call this the master region.
    3. Iterate through the blt queue, and for every entry:
       - Let srcrect be the source rectangle for the blt being examined.
       - Get the intersection of srcrect and the master region into temp region.
       - Remove temp region from the master region, so the master region no longer covers temp region.
       - Promote srcrect to a region (srcrgn) and subtract temp region from it.
       - Offset temp region and srcrgn by the vector of the current blt: their union will cover the destination area of the current blt.
       - Add to the master region all rects in temp region, retaining the original source metadata (step one of adding the current blt to the master region).
       - Add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region).
    4. Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If they do, combine them.
    5. Enumerate the master region's sub-rectangles. Every rectangle returned is an optimized blt operation destination; the associated metadata is the blt operation's source.
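
    The scheme in the EDIT leans on two rectangle primitives - intersection, and subtraction into a few disjoint pieces - which are small enough to sketch. A minimal version (Java for illustration; the Rect type and its half-open convention are my own assumptions, not SDL_BlitPool's API):

        import java.util.ArrayList;
        import java.util.List;

        // Rectangle primitives for a metadata-carrying region handler.
        // Half-open convention: a Rect covers [x1, x2) x [y1, y2).
        final class Rect {
            final int x1, y1, x2, y2;
            Rect(int x1, int y1, int x2, int y2) {
                this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
            }
            boolean isEmpty() { return x1 >= x2 || y1 >= y2; }

            Rect intersect(Rect o) {
                return new Rect(Math.max(x1, o.x1), Math.max(y1, o.y1),
                                Math.min(x2, o.x2), Math.min(y2, o.y2));
            }

            // this minus o, as at most four disjoint bands; the region handler
            // would tag every piece with the metadata of the rect it came from.
            List<Rect> subtract(Rect o) {
                List<Rect> out = new ArrayList<>();
                Rect i = intersect(o);
                if (i.isEmpty()) { out.add(this); return out; }
                if (y1 < i.y1) out.add(new Rect(x1, y1, x2, i.y1));     // top band
                if (i.y2 < y2) out.add(new Rect(x1, i.y2, x2, y2));     // bottom band
                if (x1 < i.x1) out.add(new Rect(x1, i.y1, i.x1, i.y2)); // left band
                if (i.x2 < x2) out.add(new Rect(i.x2, i.y1, x2, i.y2)); // right band
                return out;
            }
        }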

  • How to distort the desktop screen

    - by HaifengWang
    Hi friends, I want to change the shape of the desktop screen, so that whatever is displayed on the desktop is distorted along with it, and the user can still operate the PC with the mouse on the distorted desktop (run applications, open "My Computer" and so on). I think I must first get the projection matrix of the screen coordinates, then transform the matrix and map the desktop buffer image onto the distorted mesh. Are there any interfaces in OpenGL or DirectX which can modify the shape of the desktop screen? Would you please give me some tips on it? Thank you very much in advance. Please refer to the picture from http://oi53.tinypic.com/bhewdx.jpg BR, Haifeng

    Addition 1: I'm sorry! Maybe I didn't express clearly what I want to implement, which is to modify the shape of the screen itself, so that the shapes of all the applications running on Windows are distorted at the same time. For example, the window of "My Computer" would be distorted along with the desktop screen, and we could still operate the PC with the mouse from the distorted desktop (click a shortcut to run a program).

    Addition 2: The projection matrix is just my assumption; there isn't any desktop projection matrix by which the desktop surface is projected to the screen. What I want to implement is to change the shape of the desktop, the same as mapping the desktop onto a 3D mesh, but with the user still able to operate the OS on the distorted desktop (click a shortcut to run a program, open IE to surf the internet).

    Addition 3: The shapes of all the programs running on the OS change with the distortion of the screen, in real time, and the user can still operate the OS on the distorted screen as usual. Maybe we can intercept or override the GPU itself to implement the effect. I'm investigating GDI; I think I can find some clue there. The first step is to find out how the desktop is shown on the screen.

  • Displaying bitmaps in relative positions

    - by JonF
    I'd like to put a couple of images on a SurfaceView. I understand that the screen sizes of Android devices vary, so I don't think I can just use fixed x/y positions, or I might end up placing them off-screen on some devices. Say I want to put two boxes in the center of the screen, a blue one and a red one, with the blue one to the left of the red one. How can I accomplish that while accounting for different screen sizes?
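
    One way to stay resolution-independent is to compute the positions from the surface's actual size at draw time instead of hard-coding coordinates. A minimal sketch of the arithmetic (Java; boxSize and gap are made-up parameters):

        // Returns {blueX, blueY, redX, redY} for two boxSize-pixel boxes,
        // separated by gap and centered as a pair on a surfaceW x surfaceH surface.
        static int[] centeredPair(int surfaceW, int surfaceH, int boxSize, int gap) {
            int pairWidth = 2 * boxSize + gap;       // width of both boxes plus gap
            int blueX = (surfaceW - pairWidth) / 2;  // left (blue) box starts here
            int y = (surfaceH - boxSize) / 2;        // vertical centering for both
            int redX = blueX + boxSize + gap;        // right (red) box after the gap
            return new int[] { blueX, y, redX, y };
        }

    In onDraw the surface size would come from the Canvas, so the same arithmetic works on any screen; deriving boxSize from the surface width keeps the proportions constant too.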

  • Resizing an image with alpha channel

    - by Hafthor
    I am writing some code to generate images - essentially I have a source image that is large and includes transparent regions. I use GDI+ to open that image and add additional objects. What I want to do next is to save this new image much smaller, so I used the Bitmap constructor that takes a source Image object and a height and width, then saved that. I was expecting the alpha channel to be smoothed like the color channels, but this did not happen -- it did result in a couple of semitransparent pixels, but overall it is very blocky. What gives?

        Using img As New Bitmap("source100x100.png")
            ' Drawing stuff
            Using simg As New Bitmap(img, 20, 20)
                simg.Save("target20x20.png")
            End Using
        End Using

    Edit: I think what I want is supersampling, like what Paint.NET does when set to "Best Quality".

  • How to change the coordinate of a point that is inside a GraphicsPath?

    - by Ben
    Is there any way to change the coordinates of some of the points within a GraphicsPath object while leaving the other points where they are? The GraphicsPath object that gets passed into my method will contain a mixture of polygons and lines. My method would want to look something like:

        void UpdateGraphicsPath(GraphicsPath gPath, RectangleF regionToBeChanged, PointF delta)
        {
            // Find the points in gPath that are inside regionToBeChanged
            // and move them by delta.
            // gPath.PathPoints[i].X += delta.X; // Compiles but doesn't work
        }

    GraphicsPath.PathPoints seems to be read-only, and so does GraphicsPath.PathData.Points, so I am wondering if this is even possible. Perhaps generating a new GraphicsPath object with an updated set of points? How can I know if a point is part of a line or a polygon? If anyone has any suggestions, I would be grateful.

  • Optimally place a pie slice in a rectangle.

    - by Lisa
    Given a rectangle (w, h) and a pie slice with a start angle and an end angle, how can I place the slice optimally in the rectangle so that it fills the room best (from an optical point of view, not mathematically speaking)? I'm currently placing the pie slice's center in the center of the rectangle and using half of the smaller rectangle side as the radius. This leaves plenty of empty room for certain configurations. Examples to make clear what I'm after, based on the precondition that the slice is drawn like a unit circle:

    - A start angle of 0 and an end angle of PI would lead to a filled lower half of the rectangle and an empty upper half. A good solution here would be to move the center up by 1/4 * h.
    - A start angle of 0 and an end angle of PI/2 would lead to a filled bottom-right quarter of the rectangle. A good solution here would be to move the center point to the top left of the rectangle and to set the radius to the smaller of the two rectangle sides.

    This is fairly easy for the cases I've sketched, but it becomes complicated when the start and end angles are arbitrary. I am searching for an algorithm which determines the center of the slice and the radius in a way that fills the rectangle best. Pseudo code would be great since I'm not a big mathematician.
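
    For arbitrary angles, one workable approach is to compute the slice's tight bounding box at unit radius, then choose the radius and center so that this box fills the rectangle. A sketch (Java for illustration), assuming 0 <= start <= end <= 2*PI with no wrap-around and mathematical y-up coordinates (flip y for screen coordinates):

        import java.util.ArrayList;
        import java.util.List;

        // Returns {centerX, centerY, radius} for a slice filling the w x h rect.
        static double[] fitSlice(double w, double h, double start, double end) {
            // Angles that can touch the bounding box: the two slice edges plus
            // any axis direction k * PI/2 that lies inside [start, end].
            List<Double> angles = new ArrayList<>();
            angles.add(start);
            angles.add(end);
            for (int k = 0; k <= 4; k++) {
                double a = k * Math.PI / 2;
                if (a >= start && a <= end) angles.add(a);
            }
            // Bounding box of the unit slice; (0, 0) is the slice's apex.
            double minX = 0, maxX = 0, minY = 0, maxY = 0;
            for (double a : angles) {
                minX = Math.min(minX, Math.cos(a)); maxX = Math.max(maxX, Math.cos(a));
                minY = Math.min(minY, Math.sin(a)); maxY = Math.max(maxY, Math.sin(a));
            }
            double r = Math.min(w / (maxX - minX), h / (maxY - minY));
            // Center the scaled bounding box inside the rectangle.
            double cx = (w - r * (minX + maxX)) / 2;
            double cy = (h - r * (minY + maxY)) / 2;
            return new double[] { cx, cy, r };
        }

    For start = 0 and end = PI this gives r = min(w/2, h) and, in a square rectangle, a center shifted by h/4 - matching the first example above.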

  • Methods for making R plots look like Excel plots?

    - by brianjd
    I've been poking around with R graphical parameters trying to make my plots look a little more professional (e.g., las=1, bty="n" usually help). But not quite there. Started playing with tikzDevice. A huge improvement! Amazing how much better things look when the font sizes and styles in the figure match those of the surrounding document. Still, not quite there. What I'm ultimately looking for are those professional gradient shading, rounded corners, and shadow effects found in MS Excel plots. I know they're probably considered chart junk, but I like them. They're just nice looking. Q: How can I get these effects into my R plots? Do people usually just export to Inkscape and doodle over there? It would be nice if there were a literate programming approach. Is there an R package that handles these effects outright?

  • Code Interaction with Quartz Composition

    - by Alberto MQO
    Hi, I have a Quartz Composition with a cube, and the X/Y/Z rotation inputs are published. In Interface Builder I made a QCView and a QCPatchController with the previous Quartz Composition loaded. The QCView is bound to the patch controller, and the published rotation ports are bound to three NSSliders, so when I change the value of the NSSliders the cube rotates. All this works fine, but I want to change the rotation values of the cube from the app delegate in Xcode. I tried to change the values of the NSSliders through IBOutlets pointing to them, but this change doesn't apply to the cube like it does when I move the sliders directly with my mouse. What should I instantiate, and/or how do I access and change these input port values through the QCPatchController? Thank you very much for reading, I really need help!

  • .GIF re-edit! Can't figure it out!

    - by Adam C
    http://img227.imageshack.us/img227/1892/hatersgonna.gif

    That is the photo. I am trying to cut around it so it's a little smaller, and make him walk in the opposite direction. The reason I am doing this is for a vBulletin forum signature, since it marquees left to right. I have tried editing the animation in Photoshop and I flipped the canvas horizontally... I can't figure this out. I've been at it for HOURS. hah. Also, if anyone can make it just a little darker that would be amazing. No, I'm not asking for free help, but any help would be great. Thank you so much

  • Is it possible to dither a gradient drawable?

    - by Seu
    I'm using the following drawable:

        <?xml version="1.0" encoding="utf-8"?>
        <shape xmlns:android="http://schemas.android.com/apk/res/android"
            android:shape="rectangle" >
            <gradient
                android:startColor="@color/content_background_gradient_start"
                android:endColor="@color/content_background_gradient_end"
                android:angle="270" />
        </shape>

    The problem is that I get severe banding on hdpi devices (like the Nexus One and Droid) since the gradient goes from the top of the screen to the very bottom. According to http://idunnolol.com/android/drawables.html#shape_gradient there isn't a "dither" attribute for a gradient. Is there anything I can do to smooth the gradient?
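
    Two things sometimes tried from code (both assumptions worth testing, not from the linked page): asking the window for a 32-bit surface, and setting the dither hint on the inflated drawable itself. A sketch, assuming the gradient is the background of some root view in an Activity (R.id.content_root is a made-up id):

        // Inside the Activity:
        @Override
        public void onAttachedToWindow() {
            super.onAttachedToWindow();
            // Render the window in 32-bit color so the gradient has more levels
            // to work with; many devices default the window to RGB_565.
            getWindow().setFormat(android.graphics.PixelFormat.RGBA_8888);
        }

        // After the layout has been inflated:
        android.graphics.drawable.Drawable bg =
                findViewById(R.id.content_root).getBackground();
        if (bg != null) {
            bg.setDither(true); // dither hint on the drawable itself
        }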

  • Payment Gateways for Mid-Sized Business?

    - by Eric
    My company is a bit unhappy with the support we've been getting from Cybersource and we're about to embark on a billing re-write so we're taking the opportunity to look at other gateways. Anyone have any positive or negative experiences they'd like to share? I'd rather not hear about small website gateways like paypal, we run tens of thousands of transactions and millions a year. If you know, I'd love to hear how much you're paying in transaction/gateway fees too. We're primarily a .NET shop if you'd like to speak to a particular API. Gateway must support the big 4 credit cards (mc, visa, disc, amex) and ACH. Thanks in advance for the help from the hive mind. :)

  • What OpenGL functions are not GPU accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki):

        glTranslate, glRotate, glScale

        Are these hardware accelerated?

        No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU. All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho. This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, upload your matrix to the shader.

    For a very, very long time I thought most of the OpenGL functions use the GPU to do computation. I'm not sure if this is a common misconception, but after a while of thinking, this makes sense. Old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: which OpenGL function calls don't use the GPU? I believe knowing the answer to the above question would help me become a better programmer with OpenGL. Please do share some of your insights.
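
    To make the wiki's "build your own matrix, upload your matrix to the shader" advice concrete, here is a minimal sketch (Java; the upload call is shown for LWJGL-style bindings, which is an assumption about the environment):

        // Column-major 4x4 translation matrix - the memory layout OpenGL expects.
        // This is the CPU-side work glTranslate used to do for you.
        static float[] translationMatrix(float tx, float ty, float tz) {
            return new float[] {
                1, 0, 0, 0,    // column 0
                0, 1, 0, 0,    // column 1
                0, 0, 1, 0,    // column 2
                tx, ty, tz, 1  // column 3 carries the translation
            };
        }

        // Only the upload itself involves the GPU, e.g.:
        // GL20.glUniformMatrix4fv(mvpLocation, false, translationMatrix(x, y, z));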

  • RaphaelJS HTML5 Library pathIntersection() bug or alternative optimisation (screenshots)

    - by user1236048
    I have a chart generated using the RaphaelJS library. It is just one long path:

        M 50 122 L 63.230769230769226 130 L 76.46153846153845 130 L 89.6923076923077 128 L 102.92307692307692 56 L 116.15384615384615 106 L 129.3846153846154 88 L 142.6153846153846 114 L 155.84615384615384 52 L 169.07692307692307 30 L 182.3076923076923 62 L 195.53846153846152 130 L 208.76923076923077 74 L 222 130 L 235.23076923076923 66 L 248.46153846153845 102 L 261.6923076923077 32 L 274.9230769230769 130 L 288.15384615384613 130 L 301.38461538461536 32 L 314.6153846153846 86 L 327.8461538461538 130 L 341.07692307692304 70 L 354.30769230769226 130 L 367.53846153846155 102 L 380.7692307692308 120 L 394 112 L 407.2307692307692 68 L 420.46153846153845 48 L 433.6923076923077 92 L 446.9230769230769 128 L 460.15384615384613 110 L 473.38461538461536 78 L 486.6153846153846 130 L 499.8461538461538 56 L 513.0769230769231 116 L 526.3076923076923 80 L 539.5384615384614 58 L 552.7692307692307 40 L 566 130 L 579.2307692307692 94 L 592.4615384615385 64 L 605.6923076923076 122 L 618.9230769230769 98 L 632.1538461538461 120 L 645.3846153846154 70 L 658.6153846153845 82 L 671.8461538461538 76 L 685.0769230769231 124 L 698.3076923076923 110 L 711.5384615384615 94 L 724.7692307692307 130 L 738 130 L 751.2307692307692 66 L 764.4615384615385 118 L 777.6923076923076 70 L 790.9230769230769 130 L 804.1538461538461 44 L 817.3846153846154 130 L 830.6153846153845 36 L 843.8461538461538 92 L 857.076923076923 130 L 870.3076923076923 76 L 883.5384615384614 130 L 896.7692307692307 60 L 910 88

    Also, below this chart I have a jQuery UI slider of the same width (860px), centered with the chart. When I move the slider, I want a dot on the chart to move according to the slider position. See the attached screenshot: as you can see, it seems to work fine. I've implemented this behaviour using the pathIntersection() method: on the slide event, at each ui.value (x coordinate), I intersect my chartPath (the one from above) with a vertical straight line at that x coordinate. But there are still some problems. One of them is that it performs very poorly and sometimes freezes; and, very weirdly, sometimes it doesn't seem to intersect at all even when it should. Two example cases I identified:

        M 499.8461538461538 0 L 499.8461538461538 140
        M 910 0 L 910 140

    Could you please explain why this intersection behaviour happens (it should return a dot) - and, worst of all, why it seems to happen randomly if I use other chart data? Also, if you can identify another (better) way to synchronise the slider position with the dot on the chart, that would be perfect. I thought about using Element.getPointAtLength(length), but I don't know how; I think I should save the path segments and compute the start and finish length of each.

  • simple CATextLayer scaling problem

    - by dersubtile
    I'm having a simple scaling problem with CATextLayer, but I just couldn't figure it out: I want the CATextLayer to grow proportionally in size with its superlayer. If the superlayer's width is 300, the text size of the CATextLayer should be 12; if the superlayer's width is 600, the text size should be 24. I couldn't find a working solution! Can you please give me a clue? Thanks, Julian.

  • How do I fix "java.lang.OutOfMemoryError at sun.misc.Unsafe.allocateMemory(Native Method)"?

    - by Jephir
    I'm making a Java application that uses the Slick library to load images. However, on some computers I get this error when trying to run the program:

        Exception in thread "main" java.lang.OutOfMemoryError
            at sun.misc.Unsafe.allocateMemory(Native Method)
            at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
            at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
            at org.lwjgl.BufferUtils.createByteBuffer(BufferUtils.java:60)
            at org.newdawn.slick.opengl.PNGImageData.loadImage(PNGImageData.java:692)
            at org.newdawn.slick.opengl.CompositeImageData.loadImage(CompositeImageData.java:62)
            at org.newdawn.slick.opengl.CompositeImageData.loadImage(CompositeImageData.java:43)

    My VM options are:

        -Djava.library.path=lib -Xms1024M -Xmx1024M -XX:PermSize=256M -XX:MaxPermSize=256M

    The program loads a few large images (1024 x 768 resolution) at the beginning. Any help to solve this problem would be greatly appreciated.
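
    One detail worth noting when reading the trace: the failing call is ByteBuffer.allocateDirect, and direct buffers live outside the -Xmx heap, in a pool capped by -XX:MaxDirectMemorySize (which defaults to roughly the -Xmx value). A hedged guess at the kind of option set to experiment with - the exact sizes here are assumptions, not tested against this program:

        -Djava.library.path=lib -Xms512M -Xmx512M -XX:MaxDirectMemorySize=256M

    On a 32-bit JVM, shrinking -Xmx also leaves more address space for the direct pool, which can matter on exactly the machines where this error shows up.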
