Search Results

Search found 5873 results on 235 pages for 'raster graphics'.

Page 18 of 235

  • Can emscripten compile down to Canvas-based JS instead of WebGL?

    - by Sebastian Scholle
    I understand that emscripten compiles LLVM down to JS and converts OpenGL calls to WebGL. That's a fairly simple translation. Is there a way to tell emscripten to use some other graphics library (for example Pixi JS) when it translates rendering code? Is the compiled JS easy to modify, or would it be better to merge in your own graphics API that handles the WebGL/Canvas calls? That is: can we use a C++ graphics wrapper library that, when compiled to JS, simply plugs into our own JS graphics wrapper library? I'm assuming yes, but has anyone tried this? And if so, what was your technique? My C++ skills are basic.
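
    Not something the emscripten toolchain promises out of the box, but one way to wire a C++ wrapper to your own JS renderer is emscripten's EM_JS (or the older EM_ASM) bridge: the C++ side calls a declared stub and you supply the JS body, which can call the Canvas 2D API, Pixi JS, or anything else. A minimal sketch, assuming the default HTML shell where Module.canvas points at the page's canvas element; the function names here are illustrative only:

        #include <emscripten.h>

        // The JS body is supplied inline; emscripten splices it into the generated .js file.
        // Here it uses the plain Canvas 2D API, but it could just as well call into Pixi JS.
        EM_JS(void, js_draw_rect, (int x, int y, int w, int h), {
          var ctx = Module.canvas.getContext('2d');
          ctx.fillRect(x, y, w, h);
        });

        // The C++ "graphics wrapper" the rest of the code calls; in the web build it
        // forwards to the JS layer above, while a native build could forward to OpenGL.
        void drawRect(int x, int y, int w, int h) {
          js_draw_rect(x, y, w, h);
        }

        int main() {
          drawRect(10, 10, 64, 64);   // ends up as a canvas fillRect call in the browser
          return 0;
        }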

    Read the article

  • How do I disable an Nvidia 9600GT on a MacBookPro 5.1?

    - by Gjan
    I finally set up a dual boot with Ubuntu and Lion on my old MacBookPro 5.1. As reported in many cases, the discrete graphics card is turned on all the time, consuming a lot of power and heating up the laptop. Since the discrete graphics card does not support Nvidia Optimus technology, the corresponding package nvidia-prime does not help in this case. My question, therefore, is: how do I manually disable the discrete Nvidia 9600GT graphics card? Preferably a 'switch-at-runtime' solution, but a 'set-on-boot' one would be totally fine!

    Read the article

  • Unable to install cedarview-graphics-drivers

    - by antnchv
        $ sudo apt-get install cedarview-graphics-drivers
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         cedarview-graphics-drivers : Depends: xserver-xorg-core (= 2:1.10.99.901)
                                      Depends: xorg-video-abi-11
        E: Unable to correct problems, you have held broken packages.

    Please advise.

    Read the article

  • Drawing line graphics leads Flash to spiral out of control!

    - by drpepper
    Hi, I'm having problems with some AS3 code that simply draws on a Sprite's Graphics object. The drawing happens as part of a larger procedure called on every ENTER_FRAME event of the stage. Flash neither crashes nor returns an error. Instead, it starts running at 100% CPU and grabs all the memory it can until I kill the process manually, or my computer buckles under the pressure when it gets up to around 2-3 GB. This happens at a random time, without any noticeable slowdown beforehand. Has anyone seen anything like this?

    PS: I used to do the drawing within a MOUSE_MOVE event handler, which brought this problem on even faster.

    PPS: I'm developing on Linux, but reproduced the same problem on Windows.

    UPDATE: You asked for some code, so here we are. The drawing function looks like this:

        public static function drawDashedLine(i_graphics : Graphics, i_from : Point, i_to : Point, i_on : Number, i_off : Number) : void
        {
            const vecLength : Number = Point.distance(i_from, i_to);
            i_graphics.moveTo(i_from.x, i_from.y);
            var dist : Number = 0;
            var lineIsOn : Boolean = true;
            while (dist < vecLength)
            {
                dist = Math.min(vecLength, dist + (lineIsOn ? i_on : i_off));
                const p : Point = Point.interpolate(i_from, i_to, 1 - dist / vecLength);
                if (lineIsOn)
                    i_graphics.lineTo(p.x, p.y);
                else
                    i_graphics.moveTo(p.x, p.y);
                lineIsOn = !lineIsOn;
            }
        }

    and is called like this (m_graphicsLayer is a Sprite):

        m_graphicsLayer.graphics.clear();
        if (m_destinationPoint)
        {
            m_graphicsLayer.graphics.lineStyle(2, m_fixedAim ? 0xff0000 : 0x333333, 1);
            drawDashedLine(m_graphicsLayer.graphics, m_initialPos, m_destinationPoint, 10, 10);
        }

    Read the article

  • Java Graphics not displaying on successive function calls, why?

    - by primehunter326
    Hi, I'm making a visualization for a BST implementation (I posted another question about it the other day). I've created a GUI which displays the viewing area and buttons. I've added code to the BST implementation to recursively traverse the tree; the function takes in coordinates along with the Graphics object, which are initially passed in by the main GUI class.

    My idea was that I'd just have this function redraw the tree after every update (add, delete, etc.), drawing a rectangle over everything first to "refresh" the viewing area. This also means I could alter the BST implementation (i.e., by adding a balance operation) and it wouldn't affect the visualization.

    The issue I'm having is that the draw function only works the first time it is called; after that it doesn't display anything. I guess I don't fully understand how the Graphics object works, since it doesn't behave the way I'd expect it to when getting passed/called from different functions. I know the getGraphics function has something to do with it. Relevant code:

        private void draw() {
            Graphics g = vPanel.getGraphics();
            tree.drawTree(g, ORIGIN, ORIGIN);
        }

    vPanel is what I'm drawing on.

        private void drawTree(Graphics g, BinaryNode<AnyType> n, int x, int y) {
            if (n != null) {
                drawTree(g, n.left, x - 10, y + 10);
                if (n.selected) {
                    g.setColor(Color.blue);
                } else {
                    g.setColor(Color.gray);
                }
                g.fillOval(x, y, 20, 20);
                g.setColor(Color.black);
                g.drawString(n.element.toString(), x, y);
                drawTree(g, n.right, x + 10, y + 10);
            }
        }

    It is passed the root node when it is called by the public function. Do I have to have Graphics g = vPanel.getGraphics(); within the drawTree function? This doesn't make sense!! Thanks for your help.

    Read the article

  • 3 Monitor PCI-e Graphics card on Linux (without tremendous pain)?

    - by N Rahl
    As we are all painfully aware, the only way to get multiple monitors AND compositing (Compiz) on Linux is to use a single graphics card that can drive both (or in my case all three) screens. I bought a Radeon 5750 specifically because it claims to be able to drive 3 monitors. I can plug in 3 monitors (2 DVI, 1 HDMI) and the Catalyst Control Center shows all 3, but only 2 can be enabled at a time. The exact message is:

        The current settings cannot be applied. Possible issues may include:
        - Display(s) cannot be enabled.
        - Setting(s) cannot be applied due to insufficient video memory.

    So I'm going to assume that either the 5750 doesn't support 3 monitors, OR, more likely, ATI couldn't be bothered to add that support to their Linux drivers. So this is a multipart question: First, can anyone suggest a PCI Express graphics card that can run 3 screens on Linux without tremendous pain? I'm looking for something where you install the driver and all three screens "just work". Does such a card exist? Second, if you have a 5750, have you been able to get it to do 3 monitors? I'm running Ubuntu 10.04 at the moment.

    Read the article

  • Linux: 3 Monitor PCI-e Graphics card (without tremendous pain)?

    - by N Rahl
    As we are all painfully aware, the only way to get multiple monitors AND compositing (Compiz) on Linux is to use a single graphics card that can drive both (or in my case all three) screens. I bought a Radeon 5750 specifically because it claims to be able to drive 3 monitors. I can plug in 3 monitors (2 DVI, 1 HDMI) and the Catalyst Control Center shows all 3, but only 2 can be enabled at a time. I'll post the exact error message here soon, but it's not very useful. So I'm going to assume that either the 5750 doesn't support 3 monitors, OR, more likely, ATI couldn't be bothered to add that support to their Linux drivers. So this is a multipart question: First, can anyone suggest a PCI Express graphics card that can run 3 screens on Linux without tremendous pain? I'm looking for something where you install the driver and all three screens "just work". Does such a card exist? Second, if you have a 5750, have you been able to get it to do 3 monitors? I'm running Ubuntu 10.04 at the moment. Thanks, Nick

    Read the article

  • 3 Monitor PCI-e Graphics card (without tremendous pain)?

    - by N Rahl
    As we are all painfully aware, the only way to get multiple monitors AND compositing (Compiz) on Linux is to use a single graphics card that can drive both (or in my case all three) screens. I bought a Radeon 5750 specifically because it claims to be able to drive 3 monitors. I can plug in 3 monitors (2 DVI, 1 HDMI) and the Catalyst Control Center shows all 3, but only 2 can be enabled at a time. The exact message is:

        The current settings cannot be applied. Possible issues may include:
        - Display(s) cannot be enabled.
        - Setting(s) cannot be applied due to insufficient video memory.

    So I'm going to assume that either the 5750 doesn't support 3 monitors, OR, more likely, ATI couldn't be bothered to add that support to their Linux drivers. So this is a multipart question: First, can anyone suggest a PCI Express graphics card that can run 3 screens on Linux without tremendous pain? I'm looking for something where you install the driver and all three screens "just work". Does such a card exist? Second, if you have a 5750, have you been able to get it to do 3 monitors? I'm running Ubuntu 10.04 at the moment.

    Read the article

  • Event taps: Varying results with CGEventPost, kCGSessionEventTap, kCGAnnotatedSessionEventTap, CGEve

    - by kevingessner
    I'm running into a thorny problem with posting an event from an event tap. I'm tapping for NSSystemDefined at kCGHIDEventTap, then replacing the event with a new one. The problem I'm running into is that, depending on how I post the event, it's seen only by some applications. My test applications are Opera, Firefox, Quicksilver, and Xcode. Here are the different techniques I've tried within my event tap callback, with results. I'm expecting an action (the "correct response") from each app; "system beep" means the nothing-is-bound-to-that-key system sound.

    1. Create a new event, and return it from the callback. Opera: no response/system beep, Firefox: no response/system beep, Quicksilver: correct response, Xcode: no response/system beep
    2. Create a new event, post to kCGSessionEventTap with CGEventPost, return null. Opera: no response/system beep, Firefox: no response/system beep, Quicksilver: correct response, Xcode: no response/system beep
    3. Create a new event, post to kCGAnnotatedSessionEventTap with CGEventPost, return null. Opera: correct response, Firefox: correct response, Quicksilver: no response/system beep, Xcode: no response/system beep
    4. Create a new event, post with CGEventTapPostEvent, return null. Opera: no response/system beep, Firefox: no response/system beep, Quicksilver: correct response, Xcode: no response/system beep
    5. Create a new event, post to kCGSessionEventTap with CGEventPost, and return the new event. Opera: no response/system beep, Firefox: no response/system beep, Quicksilver: correct response, Xcode: no response/system beep
    6. Create a new event, post to kCGAnnotatedSessionEventTap with CGEventPost, and return the new event. Opera: correct response and system beep, Firefox: correct response and system beep, Quicksilver: correct response and system beep, Xcode: no response/double system beep
    7. Create a new event, post with CGEventTapPostEvent, and return the new event. Opera: no response/system beep, Firefox: no response/system beep, Quicksilver: correct response, Xcode: no response/system beep

    (6) is the best, but users are complaining about the extra system beep on correct responses, which I'm guessing comes from the double-posting of the event. I'm not sure what other combinations to try, or where else to look. Can anyone offer any guidance? Is there any way to get the results of both returning the event from my callback and posting to the annotated tap without doing both? Sorry for the lengthy question; I've been doing a lot of experimenting. Thanks in advance.

    Update: this is the code I use to create the event tap:

        CFMachPortRef eventTap;
        eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap, 0,
            CGEventMaskBit(NX_SYSDEFINED) | (1 << kCGEventKeyDown) | (1 << kCGEventKeyUp),
            myCGEventCallback, (void *)hidEventQueue);
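
    For readers unfamiliar with the API, this is roughly what combination (3) looks like in code. It is a hedged sketch, not the poster's actual callback; the replacement event here is an arbitrary keyboard event purely for illustration:

        #include <ApplicationServices/ApplicationServices.h>

        static CGEventRef myCGEventCallback(CGEventTapProxy proxy, CGEventType type,
                                            CGEventRef event, void *refcon) {
            // Build a replacement event (a plain key-down here, for illustration only).
            CGEventRef replacement = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)0, true);
            // Post it to the annotated session tap ...
            CGEventPost(kCGAnnotatedSessionEventTap, replacement);
            CFRelease(replacement);
            // ... and return NULL so the original event is swallowed rather than delivered too.
            return NULL;
        }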

    Read the article

  • Quartz compositions created in Snow Leopard (10.6) don't work in Leopard (10.5) despite testing in

    - by adib
    Hi, I have a reasonably advanced (many patches and subpatches) Quartz composition that was created in Snow Leopard but doesn't run well (many elements are not rendered) in Leopard. The composition tested OK via Quartz Composer's Test in Runtime option and works fine for both Leopard 32-bit and Leopard 64-bit (menu item "File | Test in Runtime | Leopard 32-bits"). On an actual Leopard (32-bit) system, a lot of elements are not rendered in the composition. Below is the log file excerpt when the composition is run in QuickTime Player under Leopard:

        QuickTime Player[134] *** <QCNodeManager | namespace = "com.apple.QuartzComposer" | 335 nodes>: Patch with name "/units to pixels" is missing
        QuickTime Player[134] *** Message from <QCPatch = 0x06D82880 "(null)">: Cannot create node of class "/units to pixels" and identifier "(null)"
        QuickTime Player[134] *** Message from <QCPatch = 0x06D7C130 "(null)">: Cannot create node of class "/resize image to target" and identifier "(null)"
        QuickTime Player[134] *** Message from <QCPatch = 0x06D7C130 "(null)">: Cannot create connection from ["outputValue" @ "Math_1"] to ["Target_Pixels" @ "Patch_2"]

    The patch "units to pixels" is a system patch in Snow Leopard, whereas the patch "resize image to target" is a custom virtual patch located in my home directory. It seems we can rule out the composition referencing a missing virtual patch: I have tested the composition under another user's account and it ran fine, which shows that it already embeds the "resize image to target" virtual patch from my home directory. I'm really puzzled why the composition passes the Leopard runtime test yet fails to run on an actual Leopard OS. Is there a post-processing step that I need to run on the composition file? Is there any way to make this patch more compatible with Leopard? Thanks in advance.

    Read the article

  • Which are the best tools for graphic design?

    - by Jen
    Hello, I want to take up graphic design as my profession. I would be designing logos, icons, stationery, brochures, handouts, book covers, etc. But I am thoroughly confused about which tools are best and which books/resources will help me learn these tools and graphic design like a professional. I am ready to shell out money to purchase the resources. Please help me out! Thanks, Jen

    Read the article

  • CATiledLayer blanking tiles before drawing contents

    - by Greg Plesur
    All, I'm having trouble getting the behavior that I want from CATiledLayer. Is there a way that I can trigger the tiles to redraw without the side effect that their areas are cleared to white first? I've already subclassed CATiledLayer to set fadeDuration to return 0.

    To be more specific, here are the details of what I'm seeing and what I'm trying to achieve:

    - I have a UIScrollView with a big content size...~12000x800. Its content view is a UIView backed by a CATiledLayer.
    - The UIView is rendered with a lot of custom-drawn lines.
    - Everything works fine, but the contents of the UIView sometimes change. When that happens, I'd like to redraw the tiles as seamlessly as possible.
    - When I use setNeedsDisplay on the view, the tiles redraw, but they are first cleared to white and there's a fraction-of-a-second delay before the new content is drawn.

    The behavior that I want seems like it should be possible: when you zoom in on the scrollview and the content gets redrawn at a higher resolution, there's no blanking before the redraw; the new content is drawn right on top of the old one. That's what I'm looking for. Thanks; I appreciate your ideas.

    Update: Just to follow up - I realized that the tiles weren't being cleared to white before the redraw, they're being taken out entirely; the white I was seeing is the color of the view beneath my CATiledLayer-backed view. As a quick hack/fix, I put a UIImageView beneath the UIScrollView, and before triggering a redraw of the CATiledLayer-backed view I render its visible section into the UIImageView and let it show. This smooths out the redraw significantly. If anyone has a better solution, like keeping the redraw-targeted tiles from going away before being redrawn in the first place, I'd still love to hear it.

    Read the article

  • How to fill a path with a gradient in drawRect:?

    - by Derrick
    Filling a path with a solid color is easy enough:

        CGPoint aPoint;
        for (id pointValue in points) {
            aPoint = [pointValue CGPointValue];
            CGContextAddLineToPoint(context, aPoint.x, aPoint.y);
        }
        [[UIColor redColor] setFill];
        [[UIColor blackColor] setStroke];
        CGContextDrawPath(context, kCGPathFillStroke);

    I'd like to draw a gradient instead of solid red, but I am having trouble. I've tried the code listed in this question/answer: http://stackoverflow.com/questions/422066/gradients-on-uiview-and-uilabels-on-iphone which is:

        CAGradientLayer *gradient = [CAGradientLayer layer];
        [gradient setFrame:rect];
        [gradient setColors:[NSArray arrayWithObjects:(id)[[UIColor blueColor] CGColor], (id)[[UIColor whiteColor] CGColor], nil]];
        [[self layer] setMasksToBounds:YES];
        [[self layer] insertSublayer:gradient atIndex:0];

    However, this paints the entire view with the gradient, covering up my original path.
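
    A common Core Graphics approach (not from the original post) is to clip the context to the path and then draw a CGGradient inside the clip. A minimal sketch using the plain C-level calls, which can be dropped into Objective-C as-is; it assumes `context` is the current drawRect: context, `rect` is the drawRect: rectangle, and the path has just been built in the context (this would replace the fill step above):

        CGContextSaveGState(context);
        CGContextClip(context);                          // use the just-built path as a clipping region

        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGFloat components[8] = { 1.0, 0.0, 0.0, 1.0,    // start color (red)
                                  0.0, 0.0, 0.0, 1.0 };  // end color (black)
        CGFloat locations[2]  = { 0.0, 1.0 };
        CGGradientRef gradient =
            CGGradientCreateWithColorComponents(rgb, components, locations, 2);

        // Vertical gradient spanning the view's rect; the start/end points are assumptions.
        CGContextDrawLinearGradient(context, gradient,
                                    CGPointMake(0, CGRectGetMinY(rect)),
                                    CGPointMake(0, CGRectGetMaxY(rect)), 0);

        CGGradientRelease(gradient);
        CGColorSpaceRelease(rgb);
        CGContextRestoreGState(context);                 // drop the clip for any later drawing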

    Read the article

  • Problem drawing lines with Quartz 2D when alpha < 1.0 on iPhone

    - by The Khanh
    Hello everybody! This is the code I use to draw in my app. The problem: if I draw with alpha = 1.0 the result looks good, but if I change alpha to 0.2 the strokes look wrong. How can I get a good result with alpha = 0.2? http://www.flickr.com/photos/9601621@N05/page1/ (drawing with alpha = 1 looks good; drawing with alpha = 0.2 looks bad).

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            if ([self.view superview] && (headerView.frame.origin.y == -30)) {
                mouseSwiped = YES;
                UITouch *touch = [touches anyObject];
                CGPoint currentPoint = [touch locationInView:self.view];
                currentPoint.y -= 20;
                UIGraphicsBeginImageContext(self.view.frame.size);
                CGContextRef context = UIGraphicsGetCurrentContext();
                [drawImage.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
                CGContextSetLineCap(context, kCGLineCapRound);
                CGContextSetLineWidth(context, currentBrushProperty.brushSize);
                CGContextSetRGBStrokeColor(context, [self red], [self green], [self blue], currentBrushProperty.brushTransparency);
                CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
                CGContextBeginPath(context);
                CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
                CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
                CGContextStrokePath(context);
                drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
                UIGraphicsEndImageContext();
                lastPoint = currentPoint;
            }
        }

    Help me, please.
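
    One likely cause, offered here as a guess rather than something from the original post: every touchesMoved: call bakes a short segment into drawImage, so at alpha = 0.2 the round caps of consecutive segments overlap and those overlaps accumulate into darker blobs. A common fix is to accumulate the whole gesture in a CGPath and stroke that path once, over the untouched base image, each time it grows. A rough sketch of just the Core Graphics part, with `strokePath`, `brushSize`, and the color values as assumed names:

        // Kept alive for the whole gesture (created in touchesBegan, released in touchesEnded):
        CGMutablePathRef strokePath = CGPathCreateMutable();
        CGPathMoveToPoint(strokePath, NULL, lastPoint.x, lastPoint.y);

        // On every move: extend the path, then redraw it in one pass over the clean base image.
        CGPathAddLineToPoint(strokePath, NULL, currentPoint.x, currentPoint.y);
        CGContextSetLineCap(context, kCGLineCapRound);
        CGContextSetLineWidth(context, brushSize);
        CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 0.2);   // low alpha applied once to the whole stroke
        CGContextAddPath(context, strokePath);
        CGContextStrokePath(context);                              // no overlapping caps, so no dark spots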

    Read the article

  • Draw a parallel line

    - by VOX
    I have x1,y1 and x2,y2, which form a line segment. How can I get another line x3,y3 - x4,y4 which is parallel to the first line, as in the picture? I could simply add n to x1 and x2 to get a parallel line, but that is not what I want. I want the lines to be parallel as in the picture.
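
    For reference (this is not from the original question): a segment can be offset sideways by moving both endpoints along the unit normal of its direction vector, which keeps the copy parallel regardless of the segment's slope. A minimal C++ sketch of that idea:

        #include <cmath>

        struct Pt { double x, y; };

        // Offset the segment (p1, p2) by d units along its left-hand normal,
        // producing the parallel segment (q1, q2). Use -d for the other side.
        void offsetSegment(Pt p1, Pt p2, double d, Pt& q1, Pt& q2) {
            double dx = p2.x - p1.x, dy = p2.y - p1.y;
            double len = std::sqrt(dx * dx + dy * dy);
            if (len == 0.0) { q1 = p1; q2 = p2; return; }   // degenerate segment: nothing to offset
            double nx = -dy / len, ny = dx / len;           // unit normal, perpendicular to the direction
            q1 = { p1.x + nx * d, p1.y + ny * d };
            q2 = { p2.x + nx * d, p2.y + ny * d };
        }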

    Read the article

  • gluLookAt vectors and FPS-style camera

    - by Kevin Pamplona
    I am attempting to implement an FPS-style camera by updating three vectors: EYE, DIR, UP. These are the same vectors used by gluLookAt (since gluLookAt is specified by the position of the camera, the direction it is looking in, and an up vector). I have already implemented the left-right and up-down strafing movements, but I'm having a lot of trouble understanding the math behind making the camera look around while remaining stationary. In this case the EYE vector stays the same, while I must update DIR and UP. Below is the code I tried, but it doesn't seem to work properly. Any suggestions? Thanks.

        void Transform::left(float degrees, vec3& dir, vec3& up) {
            vec3 axis;
            axis = glm::normalize(up);
            mat3 R = rotate(-degrees, axis);
            dir = R * dir;
            dir = R * up;
        }

        void Transform::up(float degrees, vec3& dir, vec3& up) {
            vec3 axis;
            axis = glm::normalize(glm::cross(dir, up));
            mat3 R = rotate(-degrees, axis);
            dir = R * dir;
            up = R * up;
        }
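
    For comparison, and not taken from the poster's code: a common way to write the two rotations is to rotate DIR about UP for looking left/right, and to rotate both DIR and UP about the camera's right vector, cross(DIR, UP), for looking up/down, so the basis stays orthogonal. A C++ sketch using glm's built-in rotation (assuming a glm version where glm::rotate takes radians); the function names are chosen here for illustration:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>   // glm::rotate

        // Yaw: rotate the view direction about the up vector; up itself is unchanged.
        void lookLeftRight(float degrees, glm::vec3& dir, glm::vec3& up) {
            glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(degrees), glm::normalize(up));
            dir = glm::vec3(R * glm::vec4(dir, 0.0f));
        }

        // Pitch: rotate BOTH the direction and the up vector about the camera's right vector.
        void lookUpDown(float degrees, glm::vec3& dir, glm::vec3& up) {
            glm::vec3 right = glm::normalize(glm::cross(dir, up));
            glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(degrees), right);
            dir = glm::vec3(R * glm::vec4(dir, 0.0f));
            up  = glm::vec3(R * glm::vec4(up, 0.0f));
        }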

    Read the article

  • Draw 2 parallel lines

    - by Ben Martin
    How can I calculate the points needed to draw two parallel lines? I know the start and end points of the centre line between the parallel lines. To make things a little harder, it needs to support both straight lines and Bezier curves.
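
    Not part of the original question, but one workable approach: offset each side by half the desired gap along the local normal. For a straight segment the normal is constant (see the sketch under "Draw a parallel line" above); for a Bezier curve a simple approximation is to sample the curve, take the tangent at each sample, and push each sample out along the perpendicular of that tangent. A C++ sketch for a cubic Bezier, with the sample count an arbitrary choice:

        #include <cmath>
        #include <vector>

        struct Pt { double x, y; };

        // Point on a cubic Bezier at parameter t.
        static Pt bezier(Pt p0, Pt p1, Pt p2, Pt p3, double t) {
            double u = 1.0 - t;
            return { u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
                     u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y };
        }

        // Derivative (tangent direction) of the cubic Bezier at parameter t.
        static Pt bezierTangent(Pt p0, Pt p1, Pt p2, Pt p3, double t) {
            double u = 1.0 - t;
            return { 3*u*u*(p1.x - p0.x) + 6*u*t*(p2.x - p1.x) + 3*t*t*(p3.x - p2.x),
                     3*u*u*(p1.y - p0.y) + 6*u*t*(p2.y - p1.y) + 3*t*t*(p3.y - p2.y) };
        }

        // Approximate one side of the parallel pair as a polyline offset by d along the
        // curve's local normal. Call with -d for the other side.
        std::vector<Pt> offsetBezier(Pt p0, Pt p1, Pt p2, Pt p3, double d, int samples = 32) {
            std::vector<Pt> out;
            for (int i = 0; i <= samples; ++i) {
                double t = static_cast<double>(i) / samples;
                Pt c  = bezier(p0, p1, p2, p3, t);
                Pt tg = bezierTangent(p0, p1, p2, p3, t);
                double len = std::sqrt(tg.x * tg.x + tg.y * tg.y);
                if (len == 0.0) { out.push_back(c); continue; }     // degenerate tangent: keep the point
                out.push_back({ c.x - tg.y / len * d, c.y + tg.x / len * d });
            }
            return out;
        }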

    Read the article
