Search Results

Search found 6029 results on 242 pages for 'discrete graphics'.

Page 17/242

  • Graphics problem after updating to Kernel 3.11 with Ubuntu 13.04 64bit

    - by Gaurav Sharma
    I am new to Ubuntu. I have a 64-bit Intel processor with an ATI 6570 graphics card. Ubuntu 13.04 works fine with the stock kernel (3.8), but because I read that kernel 3.11 has better support for ATI graphics cards, I tried upgrading, with no success. I tried 3.11.6, 3.11.4 and 3.11.0, and with all of them I face the same problem: after installing the kernel and restarting, the screen resolution is distorted, Unity becomes very slow, and transparency in the Dash and the launcher is lost. From what little I know, this may be related to the graphics drivers: either they are not present or the graphics card is not being used. Can someone help with this? I also tried Ubuntu 13.10, which worked fine but has some other bugs. Please pardon my English.

    Read the article

  • Intel HD Graphics vs NVIDIA Quadro FX 380 PCI-E

    - by Michael
    I recently purchased an Acer Veriton which has an i5-650 processor, Windows 7 Pro (64-bit) and Intel HD Graphics listed as the video card. I also purchased a PNY NVIDIA Quadro FX 380 PCI-E card for improved picture quality and home video viewing and editing. I have already replaced the original 300 watt power supply with a 430 watt Antec TruePower I had on hand and boosted the RAM from the original 4 GB to 8 GB. Question 1) Am I getting any improvement in visual quality or system speed with the Quadro, or is it a waste of money and should I just save up to buy a bigger video card? This card was on sale for $115. If I am getting an improvement, then I need to ask another question. Question 2) Instructions for the Quadro installation are as follows... 1--Uninstall the existing VGA driver. -Remove the existing display driver via "Add or Remove Programs". -Shut down your computer. 2--Remove your existing graphics board (or disable the integrated 3D graphics controller). [skipping the instructions on how to remove an existing graphics board] -Systems with integrated (also known as on-board) 3D graphics may require you to disable the integrated 3D graphics system. Consult the owner's or vendor manual that came with your PC on how to properly do this. So is the Intel HD Graphics considered a 3D graphics controller? If so, should I just contact Acer, or can anyone give me instructions? Thanks in advance for any help.

    Read the article

  • How to stop switchable graphics from switching to high-power GPU when charging the laptop?

    - by Saifallah
    I have an Acer laptop with Windows 7 64-bit and an ATI Radeon HD 6550M graphics card. Whenever I connect the power adapter to charge the battery, it automatically switches to the high-power (ATI) GPU instead of the low-power (Intel) GPU. There is an option in the BIOS to stop this, but it makes the GPU always run on high power and I can't switch to the low-power GPU. How can I prevent the switchable graphics from automatically using the high-power GPU when charging?

    Read the article

  • How to print a rendered website to pdf or vector graphics?

    - by Lo Sauer
    This is a crucial question for many: searching the web, I have found several command-line tools that convert an HTML document to a PDF document, but they all seem to use their own, rather incomplete rendering engines, resulting in poor quality. How can you print the rendered output of a modern web browser to PDF (and/or SVG) while retaining as much vector graphics as possible? There is a solution called webkit-pdf, but it renders everything to bitmap graphics. I am looking for options, alternatives, suggestions, perhaps even a printer driver or web services. Thanks.

    Read the article

  • How to create Bitmap object from a Graphics object ?

    - by vi.su.
    How do I create a Bitmap object from a Graphics object? I would like to read pixels from my Graphics object, for example with something like System.Drawing.Bitmap.GetPixel(). I am trying to find an empty area (all white, or all one colour) inside a PDF document, in order to write some graphics or an image there. I have tried the following, but it is not working. Why does this code not work as expected?

        // System.Drawing.Bitmap
        // System.Drawing.Graphics
        Bitmap b = new Bitmap(width, height, graphics);
        // In this case, for any (i, j) values, Bitmap.GetPixel returns 0
        int rgb = b.GetPixel(i, j).ToArgb();

    (Posting this question again in a .NET-only context, removing other library dependencies.)
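    A minimal sketch of the usual System.Drawing pattern, assuming the drawing calls can be redirected: the Bitmap(width, height, graphics) constructor only copies the resolution (DPI) of the Graphics, not its contents, and a Graphics object has no pixel store of its own to read back. Creating the Bitmap first and drawing into it through Graphics.FromImage makes GetPixel return what was drawn; the sizes and drawing calls below are placeholders.

        using System.Drawing;

        int width = 200, height = 200;          // example values
        Bitmap b = new Bitmap(width, height);   // the bitmap owns the pixels
        using (Graphics g = Graphics.FromImage(b))
        {
            g.Clear(Color.White);
            // ... redirect the drawing calls that currently target the other Graphics here ...
            g.DrawRectangle(Pens.Black, 10, 10, 50, 50);
        }
        int argb = b.GetPixel(20, 20).ToArgb(); // now reflects what was drawn above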

    Read the article

  • How to boot with Intel GMA500 Poulsbo graphics

    - by Seyed Mohammad
    I have a Sony VAIO netbook with Intel GMA 500 (Poulsbo) graphics and I'm trying to boot the latest Ubuntu 12.04 Beta 2 from a bootable USB stick. According to this Ubuntu wiki, support for Intel GMA 500 Poulsbo graphics looks promising in Precise Beta 2 and should work out of the box. The wiki does mention a problem when booting from USB and states that restarting X with sudo service lightdm restart will bring up a functional graphical desktop, but that does nothing for me! Any help is highly appreciated.

    Read the article

  • Graphics on ubuntu 12.04 not working

    - by Rhavin
    I've got a new build. Specs: i5 3570K, Gigabyte Z77-D3H, using integrated graphics. I managed to install Ubuntu 12.04 with the alternate installer, and to get video with the nomodeset option on the GRUB screen. However, when I log in, I only get a 4:3 display option and I can't change it. Under the Settings > Details tab it recognizes the HD 4000 integrated graphics. Hoping someone can help!

    Read the article

  • Can I boot up a virtual machine natively?

    - by Anshul
    My question is: is it possible to run a virtual machine natively on your hardware if you have installed the proper drivers, etc.? In other words, can I use a VHD as a regular hard drive to boot from? The reason I want to do this is that I do both graphics-intensive and audio-intensive work, but my computer is not powerful enough to handle both at the same time, and I often install a bunch of audio programs that I don't want affecting the stability of my graphics programs. Basically I want sandboxing between the two sets of applications. So I tried running the graphics-intensive programs in a VirtualBox VM and the audio-intensive work natively (simply because it's a pain to route ASIO audio devices in and out of VirtualBox). This kind of works: the graphics-intensive stuff is tolerable, but still relatively slow, because it's running inside a VM. So my next idea was to just dual-boot and install the graphics and audio programs in separate partitions, but I frequently use them in tandem, so it wouldn't be practical to reboot my machine every time I need the other set of programs. But I could live with this scenario: if I need to do more audio-intensive work, I'll boot the audio partition and run the graphics programs in a VM, and when I'm working heavily on the graphics side, I'll boot the graphics partition as a regular OS directly on the hardware. Is this possible, for example by booting a VHD as a regular hard drive? Or by setting up dual-boot and, every time the audio partition shuts down, synchronizing the graphics VM's VHD with the native graphics partition? Is it practical, given the above scenario? And if it's not possible, barring buying another computer, can anyone suggest a best-of-all-worlds setup (the worlds being performance, sandboxing, and running in parallel) for the above scenario? Thanks in advance.

    Read the article

  • Mirroring a portion of the screen to an external display (in OSX)

    - by Adam
    I would like to write a program that can mirror a portion of the main display into a new window; ideally this new window could then be shown on an external monitor. I have seen a utility for a flight simulator that does this on a PC (a multifunction display extractor): MFDex http://trac2.assembla.com/lightnings..._x86_Setup.msi I have looked at screen magnifiers and VNC clients for ideas, but I think I need to write something from scratch. I have tried to do some reading on OS X programming, but where do I start in terms of gaining access to the display? I somehow need to extract the graphics from a particular program. Is it best to work near the final output stage (the individual pixels sent to the display) or somewhere nearer the window management stage? Any ideas or pointers would be much appreciated; I just need somewhere to start from. Regards,

    Read the article

  • Programming graphics and sound on PC - Total newbie questions, and lots of them!

    - by Russel
    Hello, This isn't exactly a programming question (or is it?), but I was wondering: how are graphics and sound processed from code and output by the PC? My guess for graphics: There is some reserved memory space somewhere that holds exactly enough room for one frame of graphics output for your monitor, i.e. 800 x 600 in 24-bit colour mode = 800x600x3 = ~1.4 MB of memory. Between each refresh, the program writes video data to this space; this action is completed before the monitor refresh. Assume a simple 2D game: the graphics data is stored in machine code as many bytes representing colour values. Depending on what the program(s) being run instruct the PC, the processor reads the appropriate data and writes it to the memory space. When it is time for the monitor to refresh, it reads from this memory space byte for byte and activates hardware depending on those values for each colour element of each pixel. All of this of course happens extremely fast, and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc. Here are my questions: a) How close is the above guess (the three steps)? b) How could one incorporate graphics in pure C++ code? I assume the practical thing everyone does is use a graphics library (SDL, OpenGL, etc.), but, for example, how do those libraries accomplish what they do? Would manual inclusion of graphics in pure C++ code (say, a 2D sprite) involve creating a two-dimensional array of bit values (or three-dimensional to include multiple RGB values per pixel)? Is this how it was done way back in the day? c) Also, continuing from above, do libraries such as SDL that use bitmaps actually just build the bitmap files into the machine code of the executable and use them in the same manner mentioned in question b above? d) In my hypothetical step 3 above, are there any registers involved? Like, could you write some byte value to some register to output a single colour of one byte on the screen? Or is it purely dedicated memory space (=RAM) plus hardware interaction? e) Finally, how is all of this done for sound? (I have no idea.)
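    A toy sketch of the per-pixel buffer described in the guess above, assuming a plain software framebuffer with a 3-bytes-per-pixel RGB layout. The dimensions, layout, and names are illustrative only; real drivers and GPUs manage this memory themselves, and the same structure translates directly into the C++ the question mentions.

        using System;

        static class ToyFramebuffer
        {
            const int Width = 800, Height = 600, BytesPerPixel = 3;
            static readonly byte[] Pixels = new byte[Width * Height * BytesPerPixel]; // ~1.4 MB, as estimated above

            static void SetPixel(int x, int y, byte r, byte g, byte b)
            {
                int offset = (y * Width + x) * BytesPerPixel; // row-major layout
                Pixels[offset]     = r;
                Pixels[offset + 1] = g;
                Pixels[offset + 2] = b;
            }

            static void Main()
            {
                // "Draw" a 10x10 red square at (100, 100); a 2D sprite is just a larger
                // block copy of stored colour values into this buffer.
                for (int y = 100; y < 110; y++)
                    for (int x = 100; x < 110; x++)
                        SetPixel(x, y, 255, 0, 0);
                // A real program would hand this buffer to a graphics API or driver each refresh.
            }
        }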

    Read the article

  • Plotting an Arc in Discrete Steps

    - by phobos51594
    Good afternoon. Background: My question relates to plotting an arbitrary arc in space using discrete steps. It is unique, however, in that I am not drawing to a canvas in the typical sense. The firmware I am designing is a G-code interpreter for a CNC mill that translates commands into stepper motor movements. Now, I have already found a similar question on this very site, but the methodology suggested (Bresenham's algorithm) appears to be incompatible with moving an object in space, as it relies on calculating only one octant of a circle which is then mirrored about the remaining axes of symmetry. Furthermore, the prescribed method of calculating an arc between two arbitrary angles relies on trigonometry (I am implementing this on a microcontroller and would like to avoid costly trig functions, if possible) and on simply not taking the steps that are out of range. Finally, the algorithm is only designed to work in one rotational direction (e.g. counterclockwise). Question: So, on to the actual question: does anyone know of a general-purpose algorithm that can be used to "draw" an arbitrary arc in discrete steps while still respecting angular direction (CW / CCW)? The final implementation will be done in C, but the language for the purpose of the question is irrelevant. Thank you in advance. References: S.O. post on drawing a simple circle using Bresenham's algorithm: "Drawing" an arc in discrete x-y steps. Wiki page describing Bresenham's algorithm for a circle: http://en.wikipedia.org/wiki/Midpoint_circle_algorithm G-code instructions to be implemented (see G2 and G3): http://linuxcnc.org/docs/html/gcode.html
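    One candidate approach, sketched under the assumption that a few trig calls per arc (never per step) are acceptable: compute the signed sweep angle once, divide it into small steps, and advance the point with an incremental rotation; clockwise vs. counterclockwise is just the sign of the sweep. The names and the fixed step count are illustrative, and the sketch is in C# for consistency with the other examples on this page, though it translates line for line into C.

        using System;

        static class ArcStepper
        {
            // Emits discrete points along an arc around (cx, cy), honouring CW/CCW.
            static void PlotArc(double cx, double cy, double radius,
                                double startAngle, double endAngle,
                                bool clockwise, int steps,
                                Action<double, double> emit)
            {
                double sweep = endAngle - startAngle;
                if (clockwise  && sweep > 0) sweep -= 2 * Math.PI; // force a negative sweep
                if (!clockwise && sweep < 0) sweep += 2 * Math.PI; // force a positive sweep

                double delta = sweep / steps;
                double cosD = Math.Cos(delta), sinD = Math.Sin(delta); // trig only in setup

                double x = radius * Math.Cos(startAngle); // start point relative to the centre
                double y = radius * Math.Sin(startAngle);

                for (int i = 0; i <= steps; i++)
                {
                    emit(cx + x, cy + y);            // quantise to motor steps outside this routine
                    double nx = x * cosD - y * sinD; // incremental 2x2 rotation by delta
                    double ny = x * sinD + y * cosD;
                    x = nx; y = ny;
                }
                // Floating-point drift accumulates slowly; firmware often re-normalises the
                // radius every so many steps or snaps the final point to the exact target.
            }
        }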

    Read the article

  • MyPaint is an Open-Source Graphics App for Digital Painters

    - by Asian Angel
    Are you looking for a terrific graphics app for original painting and artwork creation on your computer? Whether it is for you or the kids, MyPaint is an app you should definitely have on hand for when those artistic moods come along. For our example we chose to install MyPaint on Ubuntu 10.10; you can easily find it in the Ubuntu Software Center with a quick search. Once you have it installed, all that is left to do is decide whether you want to add additional brushes (link provided below) and then start having fun creating your next work of art. Here are some of MyPaint's wonderful features: it exists for several platforms (Linux, Windows, and Mac OS X), supports pressure-sensitive graphics tablets, offers extensive brush creation and configuration options, has an unlimited canvas (you never have to resize), provides basic layer support, and comes with a large brush collection, including charcoal and ink, to emulate real media. MyPaint is fun to use and can quickly become very addictive as you experiment during the creation process! Links: MyPaint Homepage; Download Additional Brushes for MyPaint; Download the GIMP Plugin for the OpenRaster File Format.

    Read the article

  • Rendering UIImage/CGImage into CGPDFContext results in... blankness!

    - by quixoto
    Hi all, I'm trying to take an image that I have in an image object and render it into a Core Graphics PDF context. This happens to be on an iPhone, but the question surely applies equally to desktop Quartz. The UIImage is a simple colour-on-white image at about 600x800 resolution. If I (say) turn it into a PNG file, that file looks exactly as expected, so the data is OK. Here's what I'm doing to generate the PDF:

        NSMutableData *outputData = [[NSMutableData alloc] init];
        CGDataConsumerRef dataConsumer = CGDataConsumerCreateWithCFData((CFMutableDataRef)outputData);
        CFMutableDictionaryRef attrDictionary = NULL;
        attrDictionary = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue(attrDictionary, kCGPDFContextTitle, @"My Awesome Document");
        CGContextRef pdfContext = CGPDFContextCreate(dataConsumer, NULL, attrDictionary);
        CFRelease(dataConsumer);
        CFRelease(attrDictionary);
        CGImageRef pageImage = [myUIImage CGImage];
        CGPDFContextBeginPage(pdfContext, NULL);
        CGContextDrawImage(pdfContext, CGRectMake(0, 0, [myUIImage size].width, [myUIImage size].height), pageImage);
        CGPDFContextEndPage(pdfContext);
        CGContextRelease(pdfContext);

    The resulting PDF, which ends up in outputData, seems like a valid PDF file (it opens correctly, and the document title is present in the metadata), but it consists of precisely one blank page. What am I doing wrong? Thanks.

    Read the article

  • How to systematically generate images from data?

    - by adamvickers
    I work for a performing arts nonprofit. We have seating charts for each of the theaters we work with; each seating chart shows the number of sections, the shape of each section, and the number of rows in each section. We'd like to create dynamic seating charts based on this info, and we'd like them to look and feel something like this: http://www.fansnap.com/tickets/177754-on. The tricky part is that we'd like to store all the info about each theater (the section names, shape/size of each section, and number of rows in each section) as data and then build a system that reads this data and uses it to create a dynamic map. I'm a life-long web developer, but I don't have any experience with a difficult graphics problem like this. I realize it's a complex problem and I don't expect anyone to give me a complete answer here, but I would love direction on where I should be looking for more info. Is what I'm describing possible? Does this sort of technique have a name? Where can I learn more about how to accomplish this? What software should I use? Any info would be helpful.
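    One illustrative direction (an assumption, not something named in the question): keep each venue as plain data and render it to a vector format such as SVG, which web browsers display natively and which scales cleanly. The record shape and the rectangles-only simplification below are invented for the sketch; real sections would need arbitrary polygons or curves.

        using System.Collections.Generic;
        using System.Text;

        // Hypothetical, simplified venue data: rectangular sections only.
        record Section(string Name, double X, double Y, double Width, double Height, int Rows);

        static class SeatingChart
        {
            // Turns the stored data into an SVG string that a web page can render directly.
            public static string ToSvg(IEnumerable<Section> sections, int width = 800, int height = 600)
            {
                var svg = new StringBuilder(
                    $"<svg xmlns='http://www.w3.org/2000/svg' width='{width}' height='{height}'>");
                foreach (var s in sections)
                {
                    svg.Append($"<rect x='{s.X}' y='{s.Y}' width='{s.Width}' height='{s.Height}' " +
                               "fill='none' stroke='black'/>");
                    double rowHeight = s.Height / s.Rows;   // evenly spaced rows within the section
                    for (int r = 1; r < s.Rows; r++)        // one separator line per row boundary
                        svg.Append($"<line x1='{s.X}' y1='{s.Y + r * rowHeight}' " +
                                   $"x2='{s.X + s.Width}' y2='{s.Y + r * rowHeight}' stroke='gray'/>");
                }
                return svg.Append("</svg>").ToString();
            }
        }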

    Read the article

  • Motherboard warning lights when plugging in a display port cord to graphics card?

    - by rllr
    Earlier today, my computer spontaneously shut itself off and refused to turn back on. I tested my PSU and it's operating fine. I unplugged everything and let it sit for a while, and it started to make a high-pitched coil whine/hiss. When I came back an hour later and plugged in only the power cord, it turned on without any issues. After some troubleshooting, I noticed my motherboard (Intel D975XBX2) has a red CPU LED and a VR LED that come on whenever I plug my monitor into my graphics card via DisplayPort. DVI does not cause a similar issue. I was running three monitors off the card, so I need both DVI ports and the DisplayPort working. Is it likely my graphics card needs to be replaced, or should I be looking elsewhere to resolve this issue?

    Read the article

  • How can I ensure my Samsung Series 7 is actually using the Radeon switchable graphics?

    - by patricksweeney
    I have a Samsung Series 7 with the Radeon 6750M switchable graphics. The ATI software lets you force programs to use the dedicated card. However, I'm not convinced it is actually ever using it, as the frame rates in some non-taxing games (Portal, TF2) are merely OK. To make matters worse, it looks like the ATI Catalyst Control Center has vanished from my laptop. To make it even worse, you can't download the driver and ATI CCC from ATI's site; you need to download it from Samsung, and the ZIP they provide is corrupted. How can I ensure my Samsung Series 7 is actually using the Radeon switchable graphics?

    Read the article

  • Low end dedicated GPU vs. integrated Intel graphics (for light CAD work)

    - by PaulJ
    I have been asked to spec a PC for an interior design business. They are going to do some AutoCAD work (but they won't be using massive datasets or anything), and also use Kitchen Draw, a program that has 3D visualization features and says, in its requirements, that "a recent NVidia or ATI card might be enough". Since the budget is very limited, I had originally picked a GeForce GT 610 card, but this card is so low-end that I'm left wondering whether it will be an improvement at all over the integrated Intel HD 2500 graphics that come with the CPU (I will be using an Ivy Bridge Intel i5). Most of the information I see around is for gaming, which isn't really relevant in my case. Basically, for the use case I've described (light 3D work), can one get away with a current integrated Intel HD graphics chipset? And will a low-end GPU like the GT 610 provide a noticeable improvement?

    Read the article

  • Display with intel integrated graphics, bitcoin mine with Radeon 6950

    - by karategeek6
    I'm on Ubuntu Linux 11.04 64-bit. I have an Intel i5 with integrated graphics and a Radeon 6950, with one monitor. I would like to run my display on the integrated graphics and run bitcoin mining on the 6950. I have bitcoin mining working when I use the 6950 for both display and mining. Every time I try to use the integrated graphics instead, OpenCL doesn't recognize my 6950. Using aticonfig --initial while using the integrated graphics for display breaks things, so I used the xorg.conf it created as a basis and tried to edit it manually. I really don't know what I'm doing, though. My last attempt is given below; the graphics ran off the integrated card, but the 6950 wasn't recognized. Any help would be greatly appreciated! xorg.conf:

        #Section "ServerLayout"
        #    Identifier "Intel Layout"
        #    Screen "Default Screen"
        #    Identifier "aticonfig Layout"
        #    Screen "aticonfig-Screen[0]-0"
        #    Screen 0 "aticonfig-Screen[0]-0" 0 0
        #EndSection

        Section "Module"
            Load "glx"
        EndSection

        # Intel
        Section "Device"
            Identifier "Intel Integrated Graphics"
            Driver     "intel"
            BusID      "PCI:0:2:0"
        EndSection

        Section "Monitor"
            Identifier "Default Monitor"
            Option     "VendorName" "Monitor Vendor"
            Option     "ModelName"  "Monitor Name"
            Option     "DPMS"       "true"
        EndSection

        Section "Screen"
            Identifier   "Default Screen"
            Device       "Intel Integrated Graphics"
            Monitor      "Default Monitor"
            DefaultDepth 24
        EndSection

        # ATI
        Section "Device"
            Identifier "aticonfig-Device[0]-0"
            Driver     "fglrx"
            BusID      "PCI:1:0:0"
        EndSection

        Section "Monitor"
            Identifier "aticonfig-Monitor[0]-0"
            Option     "VendorName" "ATI Proprietary Driver"
            Option     "ModelName"  "Generic Autodetecting Monitor"
            Option     "DPMS"       "true"
        EndSection

        Section "Screen"
            Identifier   "aticonfig-Screen[0]-0"
            Device       "aticonfig-Device[0]-0"
            Monitor      "aticonfig-Monitor[0]-0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth    24
            EndSubSection
        EndSection

    Read the article

  • dual-boot (win-xp/ubu12.04) graphics card for ubu-desktop/win-xp-games

    - by iole1
    For work I need to get a new, cheap graphics card for a dual-boot machine: Windows XP / Ubuntu 12.04 LTS. The only requirements I have are: it should work 'flawlessly' in Ubuntu (proprietary drivers are OK), and it should handle Guild Wars 2 and League of Legends in Windows XP (this is really the top priority, as we need to be able to play at work :) yes, I have a cool job). I know nothing about graphics cards (and it seems to be a jungle out there). From other questions here and some research on the web I think I'd like to go for an NVIDIA card. I've been trying to figure out which models meet the system requirements, but they seem to use different kinds of model numbers, so I'm none the wiser. tl;dr: will http://www.geforce.co.uk/hardware/desktop-gpus/geforce-gt-620-oem/specifications run Guild Wars 2 http://gamesystemrequirements.com/games.php?id=938 Or what is the lowest-end NVIDIA card that will run GW2 smoothly and work well in Ubuntu 12.04? Thanks!

    Read the article

  • XNA - Drawing 2D Primitives (Boxes) and Understanding Matrices in Computer Graphics

    - by MintyAnt
    I have two issues which I wish to solve by creating 2D primitives in XNA. In my game, I wish to have a "debug mode" which will draw a red box around all hitboxes in the game (red outline, transparent inside). This would allow us to see where the hitboxes are being drawn AND still have the sprite graphics visible. I also wish to further understand how matrices work in computer graphics. I have a basic theoretical grasp of how they work, but I really just want to apply some of my knowledge or find a good tutorial on it. To do this, I wish to draw my own 2D primitives (with Vertex3s) and apply different transformation matrices to them. I was trying to find a tutorial on drawing primitives using Direct3D, but most tutorials are only for C++ and just tell me to use XNA's SpriteBatch. I wish to have more control over my program than SpriteBatch alone gives. Any help on using Direct3D, or any other suggestions, would be greatly appreciated. Thank you.
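    For the first issue, a minimal sketch of a common SpriteBatch trick, assuming an ordinary XNA 4.0 Game: stretch a 1x1 white texture into four thin edge rectangles so only the red outline is drawn and the sprite underneath stays visible. This deliberately sidesteps custom vertices and matrices, so it covers only the debug-box part; the class and method names are illustrative.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public static class DebugDraw
        {
            static Texture2D pixel;

            // Call once, e.g. from Game.LoadContent().
            public static void Initialize(GraphicsDevice device)
            {
                pixel = new Texture2D(device, 1, 1);
                pixel.SetData(new[] { Color.White }); // a single white texel to tint and stretch
            }

            // Call between SpriteBatch.Begin() and End() for each hitbox.
            public static void HitboxOutline(SpriteBatch sb, Rectangle box, Color color, int thickness = 1)
            {
                sb.Draw(pixel, new Rectangle(box.Left, box.Top, box.Width, thickness), color);                // top
                sb.Draw(pixel, new Rectangle(box.Left, box.Bottom - thickness, box.Width, thickness), color); // bottom
                sb.Draw(pixel, new Rectangle(box.Left, box.Top, thickness, box.Height), color);               // left
                sb.Draw(pixel, new Rectangle(box.Right - thickness, box.Top, thickness, box.Height), color);  // right
            }
        }

        // Usage in Draw(): if (debugMode) DebugDraw.HitboxOutline(spriteBatch, hitbox, Color.Red);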

    Read the article
