Search Results

Search found 557 results on 23 pages for 'optimus prime'.


  • BumbleBee / getting Nvidia 310.19 Drivers to Work | 12.10

    - by Charles Mynard
    Asus U31JG-A1 laptop: Intel Core i3, Nvidia 415M / Intel HD Graphics (Optimus technology). Hello, I have been trying for about 3 weeks to get any kind of Nvidia driver to work, and am now determined to get the 310.19 driver working. I have tried numerous times; either nothing happened or my interface (menu bars, the top and left bars on the desktop, and the ability to close a window) would disappear. Apparently I'm not making the connection on how to install these properly. I have followed numerous other posts and websites and attempted "bumblebee" to no avail. I am wondering if anyone can write a step-by-step guide of commands that I need to run in a terminal to get this to work. I've had to reinstall 12.10, so if you could walk me through downloading and installing the drivers, I would greatly appreciate it. I barely know what I'm doing, and this is quite a turn-off for someone new to Ubuntu; I really want to enjoy it, but this is preventing me from committing. Thank you in advance, and I apologize for being so flustered and helpless, but I have run out of patience with this.

    Read the article

  • Acer aspire v3 771G ubuntu 13.04

    - by Jos
    Good day, I have this Acer and I have a lot of boot problems (I suspect Windows 8), and now I want to try Ubuntu, but when I use a USB stick to "try" Ubuntu I get a black screen after boot. I've read some of the forums and found something about NOMODESET; I have not tried this, as I don't know exactly what it does. I have also found this wiki entry: https://wiki.ubuntu.com/Bumblebee . I am by no means a programmer, and reading all those commands has always kept me off Linux, because I'm scared I will mess things up. Is there any way I can enable NOMODESET in the Ubuntu "trial", can I also include the Bumblebee features, and how will this affect my laptop's performance? Reading the Bumblebee entry, it seems to be something about Nvidia Optimus. I don't really care much about the power saving, but will it affect performance? I'm not a heavy PC gamer, but I like to do some gaming and streaming, also on a rather big TV, on which this laptop already shows its flaws, with some games not running properly on 65". If this doesn't work, or you advise me not to do this, what else can I do to fix Windows 8, or should I try some other Linux version? Thank you in advance.

    Read the article

  • Correct nvidia+intel graphics setup in 14.04

    - by Espressofa
    Just upgraded to 14.04 to try to fix some other issues. Now, something has gone wrong with my graphics. I have a Thinkpad T530 with Intel and Nvidia graphics cards. $ inxi -SGx System: Host: xyz Kernel: 3.13.0-24-generic x86_64 (64 bit, gcc: 4.8.2) Desktop: N/A Distro: Ubuntu 14.04 trusty Graphics: Card-1: Intel 3rd Gen Core processor Graphics Controller bus-ID: 00:02.0 Card-2: NVIDIA GF108M [NVS 5400M] bus-ID: 01:00.0 X.Org: 1.15.1 drivers: fbdev,vesa,intel,nouveau (unloaded: nvidia) Resolution: [email protected] GLX Renderer: N/A GLX Version: N/A Direct Rendering: N/A $ glxinfo name of display: :0 Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Error: couldn't find RGB GLX visual or fbconfig Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Error: couldn't find RGB GLX visual or fbconfig Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". I'm not sure what I did but now something is wrong with my graphics, as should be visible from the above commands. nvidia-detector says "none" as well. I used to have bumblebee but then some website said to remove it and now something's clearly wrong. What's the right way to set things up? Should I try to add bumblebee back? Here's what's installed now: $ dpkg --get-selections | grep nvidia nvidia-319 install nvidia-331 install nvidia-libopencl1-331 install nvidia-opencl-icd-331 install nvidia-prime install nvidia-settings install nvidia-settings-319 install

    Read the article

  • Euler Project Help (Problem 12) - Prime Factors and the like

    - by Richie_W
    I hate to have to ask, but I'm pretty stuck here. I need to test a sequence of numbers to find the first one which has over 500 factors: http://projecteuler.net/index.php?section=problems&id=12 At first I attempted to brute-force the answer (finding a number with 480 factors after a LONG time). I am now looking at determining the prime factors of a number and then using them to find all the other factors. I am currently at the stage where I can get an array of prime factors for any number I input, i.e. 300 has the prime factors 2 2 3 5 5. Using this array of prime factors I need to be able to calculate the remaining factors; this is the part I am stuck on. Basically, as I understand it, I need to calculate ALL possible combinations of the numbers in the array, i.e. 2 * 2, 2 * 2 * 3, 2 * 2 * 3 * 5, 2 * 3, 2 * 3 * 3, and so forth. But where it gets interesting is with things like 2 * 5 and 2 * 3 * 5, i.e. numbers which are not adjacent to each other in the array. I can't think of a way to code this in a generic fashion for an array of any length... I need help! P.S. I am working in Java. EDIT: My brute-force code, since it has been suggested that brute-forcing the problem will work and so there may be an error in my code:

        package euler.problem12;

        public class Solution {
            public static void main(String[] args) {
                int next = 1;
                int triangle = 0;
                int maxFactors = 0;
                while(true) {
                    triangle = triangle + next;
                    int factors = 1;
                    int max = (int) triangle / 2;
                    for(int i = 1; i <= max; ++i) {
                        if(triangle % i == 0) {
                            factors ++;
                        }
                    }
                    if(factors > maxFactors) {
                        maxFactors = factors;
                        System.out.println(triangle + "\t" + factors);
                    }
                    next++;
                }
            }
        }
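
    One hedged illustration of the step the question is stuck on: rather than enumerating combinations of the prime factors by hand, the divisor count follows directly from the exponents, and the divisors themselves can be grown one prime at a time. This is only a sketch in Python (the poster is working in Java); the hard-coded factor list for 300 is taken from the example in the question.

        from collections import Counter

        # For 300 = 2^2 * 3 * 5^2 the divisor count is (2+1)*(1+1)*(2+1) = 18,
        # so no combinations need to be generated just to count factors.
        def divisor_count(prime_factors):
            count = 1
            for exponent in Counter(prime_factors).values():
                count *= exponent + 1
            return count

        # If the actual divisors are wanted, extend a running list one prime at a
        # time; this naturally covers "non-adjacent" products such as 2*5 and 2*3*5.
        def divisors(prime_factors):
            divs = [1]
            for prime, exponent in Counter(prime_factors).items():
                divs = [d * prime ** e for d in divs for e in range(exponent + 1)]
            return sorted(divs)

        print(divisor_count([2, 2, 3, 5, 5]))  # 18
        print(divisors([2, 2, 3, 5, 5]))       # [1, 2, 3, 4, 5, ..., 300]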

    Read the article

  • Bumblebee [ERROR]Cannot access secondary GPU - error: [XORG]

    - by Lunchbox
    Though this may seem like a duplicate question, none of the suggestions I've seen have worked for me, however nearly all posters get good results. I'll start with hardware: Metabox W350ST notebook Intel Core i7 4700 16GB RAM GTX 765M (with Optimus) 128GB SSD 1TB SSHD My initial error output when trying to optirun a game is: [ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to initialize the NVIDIA GPU at PCI:1:0:0. Please [133.973920] [ERROR]Aborting because fallback start is disabled. If anything else is needed to troubleshoot this just let me know. Adding bumblebee.conf: # Configuration file for Bumblebee. Values should **not** be put between quotes ## Server options. Any change made in this section will need a server restart # to take effect. [bumblebeed] # The secondary Xorg server DISPLAY number VirtualDisplay=:8 # Should the unused Xorg server be kept running? Set this to true if waiting # for X to be ready is too long and don't need power management at all. KeepUnusedXServer=false # The name of the Bumbleblee server group name (GID name) ServerGroup=bumblebee # Card power state at exit. Set to false if the card shoud be ON when Bumblebee # server exits. TurnCardOffAtExit=false # The default behavior of '-f' option on optirun. If set to "true", '-f' will # be ignored. NoEcoModeOverride=false # The Driver used by Bumblebee server. If this value is not set (or empty), # auto-detection is performed. The available drivers are nvidia and nouveau # (See also the driver-specific sections below) Driver=nvidia # Directory with a dummy config file to pass as a -configdir to secondary X XorgConfDir=/etc/bumblebee/xorg.conf.d ## Client options. Will take effect on the next optirun executed. [optirun] # Acceleration/ rendering bridge, possible values are auto, virtualgl and # primus. Bridge=auto # The method used for VirtualGL to transport frames between X servers. # Possible values are proxy, jpeg, rgb, xv and yuv. VGLTransport=proxy # List of paths which are searched for the primus libGL.so.1 when using # the primus bridge PrimusLibraryPath=/usr/lib/x86_64-linux-gnu/primus:/usr/lib/i386-linux-gnu/primus # Should the program run under optirun even if Bumblebee server or nvidia card # is not available? AllowFallbackToIGC=false # Driver-specific settings are grouped under [driver-NAME]. The sections are # parsed if the Driver setting in [bumblebeed] is set to NAME (or if auto- # detection resolves to NAME). 
# PMMethod: method to use for saving power by disabling the nvidia card, valid # values are: auto - automatically detect which PM method to use # bbswitch - new in BB 3, recommended if available # switcheroo - vga_switcheroo method, use at your own risk # none - disable PM completely # https://github.com/Bumblebee-Project/Bumblebee/wiki/Comparison-of-PM-methods ## Section with nvidia driver specific options, only parsed if Driver=nvidia [driver-nvidia] # Module name to load, defaults to Driver if empty or unset KernelDriver=nvidia PMMethod=auto # colon-separated path to the nvidia libraries LibraryPath=/usr/lib/nvidia-current:/usr/lib32/nvidia-current # comma-separated path of the directory containing nvidia_drv.so and the # default Xorg modules path XorgModulePath=/usr/lib/nvidia-current/xorg,/usr/lib/xorg/modules XorgConfFile=/etc/bumblebee/xorg.conf.nvidia ## Section with nouveau driver specific options, only parsed if Driver=nouveau [driver-nouveau] KernelDriver=nouveau PMMethod=auto XorgConfFile=/etc/bumblebee/xorg.conf.nouveau DRIVER VERSION - Output of jockey-text -l: nvidia_304_updates - nvidia_304_updates (Proprietary, Enabled, Not in use)

    Read the article

  • To find first N prime numbers in python

    - by Rahul Tripathi
    Hi all, I am new to the programming world. I was just writing this code in Python to generate N prime numbers. The user should input the value of N, which is the total number of prime numbers to print out. I have written this code, but it doesn't produce the desired output. Instead it prints the prime numbers up to the Nth number. For example: the user enters N = 7. Desired output: 2, 3, 5, 7, 11, 13, 19. Actual output: 2, 3, 5, 7. Kindly advise.

        i=1
        x = int(input("Enter the number:"))
        for k in range (1, (x+1), 1):
            c=0
            for j in range (1, (i+1), 1):
                a = i%j
                if (a==0):
                    c = c+1
            if (c==2):
                print (i)
            else:
                k = k-1
            i=i+1
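
    As a hedged sketch of the counting change that gives the behaviour described above (not the poster's exact code), the outer loop can run until N primes have been found instead of running over the first N integers:

        # Keep testing candidates until n primes have been printed,
        # rather than testing exactly n candidates.
        n = int(input("Enter the number of primes to print: "))
        found = 0
        candidate = 2
        while found < n:
            if all(candidate % d != 0 for d in range(2, int(candidate ** 0.5) + 1)):
                print(candidate)
                found += 1
            candidate += 1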

    Read the article

  • LightDM will not start after stopping it

    - by Sweeters
    I am running Ubuntu 11.10 "Oneiric Ocelot", and in trying to install the nvidia CUDA developer drivers I switched to a virtual terminal (Ctrl-Alt-F5) and stopped lightdm (installation required that no X server instance be running) through sudo service lightdm stop. Re-starting lightdm with sudo service lightdm start did not work: A couple of * Starting [...] lines where displayed, but the process hanged. (I do not remember at which point, but I think it was * Starting System V runlevel compatibility. I manually rebooted my laptop, and ever since booting seems to hang, usually around the * Starting anac(h)ronistic cron [OK] log line (not consistently at that point, though). From that point on, I seem to be able to interact with my system only through a tty session (Ctrl-Alt-F1). I've tried purging and reinstalling both lightdm and gdm, as well as selecting both as the default display managers (through sudo dpkg-reconfigure [lightdm / gdm] or by manually editing /etc/X11/default-display-manager) through both apt-get and aptitude (that shouldn't make a difference anyway) after updating the packages, but the problem persists. Some of the responses I'm getting are the following: After running sudo dpkg-reconfigure lightdm (but not ... gdm) I get the following message: dpkg-maintscript-helper:warning: environment variable DPKG_MATINSCRIPT_NAME missing dpkg-maintscript-helper:warning: environment variable DPKG_MATINSCRIPT_PACKAGE missing After trying sudo service lightdm start or sudo start lightdm I get to see the boot loading screen again but nothing changes. If I go back to the tty shell I see lightdm start/running, process <num> but ps -e | grep lightdm gives no output. After trying sudo service gdm start or sudo starg gdm I get the gdm start/running, process <num> message, and gdm-binary is supposedly an active process, but all that happens is that the screen blinks a couple of times and nothing else. Other candidate solutions that I'd found on the web included running startx but when I try that I get an error output [...] Fatal server error: no screens found [...]. Moreover, I made sure that lightdm-gtk-greeter is installed but that did not help either. Please excuse my not including complete outputs/logs; I am writing this post from another computer and it's hard to manually copy the complete logs. Also, I've seen several posts that had to do with similar problems, but either there was no fix, or the one suggested did not work for me. In closing: Please help! I very much hope to avoid re-installing Ubuntu from scratch! :) Alex @mosi I did not manage to fix the NVIDIA kernel driver as per your instructions. I should perhaps mention that I'm on a Dell XPS15 laptop with an NVIDIA Optimus graphics card, and that I have bumblebee installed (which installs nvidia drivers during its installation, I believe). Issuing the mentioned commands I get the following: ~$uname -r 3.0.0-12-generic ~$lsmod | grep -i nvidia nvidia 11713772 0 ~$dmesg | grep -i nvidia [ 8.980041] nvidia: module license 'NVIDIA' taints kernel. 
[ 9.354860] nvidia 0000:01:00.0: power state changed by ACPI to D0 [ 9.354864] nvidia 0000:01:00.0: power state changed by ACPI to D0 [ 9.354868] nvidia 0000:01:00.0: enabling device (0006 -> 0007) [ 9.354873] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9.354879] nvidia 0000:01:00.0: setting latency timer to 64 [ 9.355052] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 280.13 Wed Jul 27 16:53:56 PDT 2011 Also, running aptitude search nvidia gives me the following: p nvidia-173 - NVIDIA binary Xorg driver, kernel module a p nvidia-173-dev - NVIDIA binary Xorg driver development file p nvidia-173-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-173-updates-dev - NVIDIA binary Xorg driver development file p nvidia-96 - NVIDIA binary Xorg driver, kernel module a p nvidia-96-dev - NVIDIA binary Xorg driver development file p nvidia-96-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-96-updates-dev - NVIDIA binary Xorg driver development file p nvidia-cg-toolkit - Cg Toolkit - GPU Shader Authoring Language p nvidia-common - Find obsolete NVIDIA drivers i nvidia-current - NVIDIA binary Xorg driver, kernel module a p nvidia-current-dev - NVIDIA binary Xorg driver development file c nvidia-current-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-current-updates-dev - NVIDIA binary Xorg driver development file i nvidia-settings - Tool of configuring the NVIDIA graphics dr p nvidia-settings-updates - Tool of configuring the NVIDIA graphics dr v nvidia-va-driver - v nvidia-va-driver - I've tried manually installing (sudo aptitude install <package>) packages nvidia-common and nvidia-settings-updates but to no avail. For example, sudo aptitude install nvidia-settings-updates returns the following log: Reading package lists... Building dependency tree... Reading state information... Reading extended state information... Initializing package states... Writing extended state information... No packages will be installed, upgraded, or removed. 0 packages upgraded, 0 newly installed, 0 to remove and 83 not upgraded. Need to get 0 B of archives. After unpacking 0 B will be used. Writing extended state information... Reading package lists... Building dependency tree... Reading state information... Reading extended state information... Initializing package states... Writing extended state information... The same happens with the Linux headers (i.e. I cannot seem to be able to install linux-headers-3.0.0-12-generic). 
The output of aptitude search linux-headers is as follows: v linux-headers - v linux-headers - v linux-headers-2.6 - i linux-headers-2.6.38-11 - Header files related to Linux kernel versi i linux-headers-2.6.38-11-generic - Linux kernel headers for version 2.6.38 on i A linux-headers-2.6.38-8 - Header files related to Linux kernel versi i A linux-headers-2.6.38-8-generic - Linux kernel headers for version 2.6.38 on v linux-headers-3 - v linux-headers-3.0 - v linux-headers-3.0 - i A linux-headers-3.0.0-12 - Header files related to Linux kernel versi p linux-headers-3.0.0-12-generic - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-generic- - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-server - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-virtual - Linux kernel headers for version 3.0.0 on p linux-headers-generic - Generic Linux kernel headers p linux-headers-generic-pae - Generic Linux kernel headers v linux-headers-lbm - v linux-headers-lbm - v linux-headers-lbm-2.6 - v linux-headers-lbm-2.6 - p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo p linux-headers-lbm-3.0.0-12-serv - Header files related to linux-backports-mo p linux-headers-server - Linux kernel headers on Server Equipment. p linux-headers-virtual - Linux kernel headers for virtual machines @heartsmagic I did try purging and reinstalling any nvidia driver packages, but it did not seem to make a difference, My xorg.conf file contains the following: # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" HorizSync 28.0 - 33.0 VertRefresh 43.0 - 72.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection

    Read the article

  • How much time should it take to find the sum of all prime numbers less than 2 million?

    - by Shahensha
    I was trying to solve this Project Euler question. I implemented the Sieve of Euler as a helper class in Java. It works pretty well for small numbers, but when I input 2 million as the limit it doesn't return the answer. I use the NetBeans IDE. I once waited many hours, but it still didn't print the answer. When I stopped running the code, it gave the following result: Java Result: 2147483647 BUILD SUCCESSFUL (total time: 2,097 minutes 43 seconds). This answer is incorrect; even after waiting for so much time, it isn't correct, while the same code returns correct answers for smaller limits. The Sieve of Euler has a very simple algorithm given at the bottom of this page. My implementation is this:

        package support;

        import java.util.ArrayList;
        import java.util.List;

        /**
         *
         * @author admin
         */
        public class SieveOfEuler {
            int upperLimit;
            List<Integer> primeNumbers;

            public SieveOfEuler(int upperLimit){
                this.upperLimit = upperLimit;
                primeNumbers = new ArrayList<Integer>();
                for(int i = 2 ; i <= upperLimit ; i++)
                    primeNumbers.add(i);
                generatePrimes();
            }

            private void generatePrimes(){
                int currentPrimeIndex = 0;
                int currentPrime = 2;
                while(currentPrime <= Math.sqrt(upperLimit)){
                    ArrayList<Integer> toBeRemoved = new ArrayList<Integer>();
                    for(int i = currentPrimeIndex ; i < primeNumbers.size() ; i++){
                        int multiplier = primeNumbers.get(i);
                        toBeRemoved.add(currentPrime * multiplier);
                    }
                    for(Integer i : toBeRemoved){
                        primeNumbers.remove(i);
                    }
                    currentPrimeIndex++;
                    currentPrime = primeNumbers.get(currentPrimeIndex);
                }
            }

            public List getPrimes(){
                return primeNumbers;
            }

            public void displayPrimes(){
                for(double i : primeNumbers)
                    System.out.println(i);
            }
        }

    I am perplexed! My questions are: 1) Why is it taking so much time? 2) Is there something wrong in what I am doing? Please suggest ways of improving my coding style if you find something wrong.
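
    The slowdown almost certainly comes from the repeated ArrayList.remove(Object) calls, each of which scans and shifts the list, so the sieve degrades to roughly quadratic work at a limit of 2,000,000. A flat boolean-array sieve avoids removals entirely; the rough sketch below is in Python rather than the poster's Java, but the same marking strategy carries over directly.

        # Sieve of Eratosthenes over a flat boolean array: mark composites in place
        # instead of removing them from a list, then sum the surviving indices.
        def sum_primes_below(limit):
            is_prime = [True] * limit
            is_prime[0] = is_prime[1] = False
            for p in range(2, int(limit ** 0.5) + 1):
                if is_prime[p]:
                    for multiple in range(p * p, limit, p):
                        is_prime[multiple] = False
            return sum(i for i, prime in enumerate(is_prime) if prime)

        print(sum_primes_below(2000000))  # Project Euler problem 10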

    Read the article

  • Aquatic Prime on 10.6?

    - by Kevin
    Hello, I have 2 applications that I use with Aquatic Prime. One is an application just with the Aquatic Prime framework, and the other is an application that I've been working on that has been incorporated with the Aquatic Prime framework. The first application I mentioned before has its SDK set to 10.5 with the architecture to 32-bit. The second application I made has its SDK that is set to 10.6 (which it has to since I need NSRunningApplication) and its architecture to 32-bit. Now my question is, whenever I debug my first application it works perfectly fine, but with my second it gives me this interrupted error right after Running. What's going wrong? I've tried doing some breakpoints like "objc-exception-throw" but all it says is stopped at breakpoint 1. I need 10.6 in my application, but I also need Aquatic Prime. Can anyone help me solve this problem? Ok here's the one from the stack trace which I have no idea how to read: 0x91cdff11 <+0000> push %ebp 0x91cdff12 <+0001> mov %esp,%ebp 0x91cdff14 <+0003> sub $0x28,%esp 0x91cdff17 <+0006> mov %ebx,-0xc(%ebp) 0x91cdff1a <+0009> mov %esi,-0x8(%ebp) 0x91cdff1d <+0012> mov %edi,-0x4(%ebp) 0x91cdff20 <+0015> call 0x91cdff25 <objc_exception_throw+20> 0x91cdff25 <+0020> pop %ebx 0x91cdff26 <+0021> mov 0x8(%ebp),%esi 0x91cdff29 <+0024> lea 0xe4cbb8b(%ebx),%edi 0x91cdff2f <+0030> mov 0x4(%edi),%eax 0x91cdff32 <+0033> test %eax,%eax 0x91cdff34 <+0035> jne 0x91cdff3b <objc_exception_throw+42> 0x91cdff36 <+0037> call 0x91ce4f53 <set_default_handlers> 0x91cdff3b <+0042> mov %esi,(%esp) 0x91cdff3e <+0045> nop 0x91cdff3f <+0046> nopl 0x0(%eax) 0x91cdff43 <+0050> mov %esi,(%esp) 0x91cdff46 <+0053> call *0x4(%edi) 0x91cdff49 <+0056> lea 0xfd44(%ebx),%eax 0x91cdff4f <+0062> mov %eax,(%esp) 0x91cdff52 <+0065> call 0x91ce4e1f <_objc_fatal> The the GDB tells me its all good: run [Switching to process 990] Running… 2009-10-14 15:20:33.233 ApplicationName[990:a0f] init Sincerely, Kevin

    Read the article

  • Finding a prime number after a given number

    - by avd
    How can I find the least prime number greater than a given number? For example, given 4, I need 5; given 7, I need 11. I would like to hear some ideas on the best algorithms for doing this. One method that I thought of was to generate prime numbers with the Sieve of Eratosthenes and then find the first prime after the given number.
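
    For numbers of the size in the examples, a hedged sketch that simply trial-divides upward from the given value is usually enough; a precomputed sieve only pays off when many queries are needed or an upper bound is known in advance.

        # Return the smallest prime strictly greater than n by trial division.
        def next_prime(n):
            def is_prime(m):
                if m < 2:
                    return False
                return all(m % d != 0 for d in range(2, int(m ** 0.5) + 1))
            candidate = n + 1
            while not is_prime(candidate):
                candidate += 1
            return candidate

        print(next_prime(4))   # 5
        print(next_prime(7))   # 11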

    Read the article

  • Must hashCode() return a prime number?

    - by subhashis
    The signature of the hashCode() method is public int hashCode(){ return x; }, where x must be a primitive int. Can anyone explain to me whether the number that hashCode() returns must be a prime number, an even number, etc., or whether there is no such specification? The reason I am asking this question is that in different IDEs the auto-generated code always seems to return a prime number, so I need to know why. Thanks in advance.

    Read the article

  • All non-prime factorings

    - by Tony Veijalainen
    Let's say we have a number's factors, for example 1260:

        >>> factors(1260)
        [2, 2, 3, 3, 5, 7]

    What would be the best way, in Python, to build every possible sub-product combination from these numbers, i.e. all factorings (not only the prime factoring) whose sum of factors is less than max_sum? If I take combinations of the prime factors, I have to re-factor the remaining part of the product, as I do not know the part that is not in the combination. Example results would be: [4, 3, 3, 5, 7] (one replacement) and [3, 6, 14, 5] (two replacements).
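
    One hedged way to avoid re-factoring the leftover part of the product is to recurse on the quotient directly. The sketch below generates every factoring of n into factors of at least 2 (in non-decreasing order, so each factoring appears once) and then filters by the sum bound; the concrete max_sum value is an assumption, since the question does not give one.

        # All ways to write n as an unordered product of factors >= 2.
        def factorings(n, min_factor=2):
            result = [[n]]                   # n by itself counts as the trivial factoring
            for d in range(min_factor, int(n ** 0.5) + 1):
                if n % d == 0:
                    for rest in factorings(n // d, d):
                        result.append([d] + rest)
            return result

        max_sum = 30                         # assumed bound for illustration
        for factoring in factorings(1260):
            if sum(factoring) < max_sum:
                print(factoring)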

    Read the article

  • Download Current WSJ.com Prime Rate

    - by Registered User
    I need to automatically download the current Wall Street Journal prime rate and load the data into my database. What is the best method for downloading this data automatically? I have come up with three possible solutions: 1) scrape an HTML web page from WSJ; 2) parse an RSS news feed from WSJ; 3) use some API from WSJ that I haven't found. Regarding solution 1, although I don't like it since it could easily break, it's the only one that I have worked out from end to end. It appears I can scrape the page with a WebRequest / WebResponse and read the text in this markup:

        <tr>
            <td style="text-align:left" class="colhead">&nbsp;</td>
            <td class="colhead">Latest</td>
            <td class="colhead">Wk ago</td>
            <td class="colhead">High</td>
            <td class="colhead">Low</td>
        </tr>
        <tr>
            <td class="text">U.S.</td>
            <td style="font-weight:bold;" class="num">3.25</td>
            <td class="num">3.25</td>
            <td class="num">3.25</td>
            <td class="num" style="border-right:0px">3.25</td>
        </tr>

    Regarding solution 2, although I can implement an RSS reader, I don't see a way to reliably anticipate the verbiage for changes in the prime rate, so I don't think it is as safe or reliable a way to get the data as solution 1. Regarding solution 3, I haven't found any published APIs for checking money rates like the prime rate. If anyone knows of a web service or other API for checking money rates, please let me know.
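
    For what it's worth, a rough Python outline of solution 1 follows (the poster is on .NET, so this is only a sketch of the approach, not a drop-in answer); the placeholder URL and the regular expression are assumptions based on the HTML fragment shown above, and the scrape will break if the page layout changes.

        import re
        import urllib.request

        RATES_URL = "https://example.com/money-rates"   # placeholder, not the real WSJ page

        def fetch_us_prime_rate(url=RATES_URL):
            html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
            # Grab the first "num" cell that follows the U.S. row label.
            match = re.search(
                r'<td class="text">U\.S\.</td>\s*<td[^>]*class="num">([\d.]+)</td>',
                html,
            )
            if match is None:
                raise ValueError("Prime rate row not found; the page layout may have changed.")
            return float(match.group(1))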

    Read the article

  • Find out 20th, 30th, nth prime number. (I'm getting 20th but not 30th?) [Python]

    - by gsin
    The question is to find the 1000th prime number. I wrote the following Python code for this. The problem is, I get the right answer for the 10th and 20th prime, but after that each increment of 10 leaves me one off the mark. I can't catch the bug here :(

        count=1      #to keep count of prime numbers
        primes=()    #tuple to hold primes
        candidate=3  #variable to test for primes
        while count<20:
            for x in range(2,candidate):
                if candidate%x==0:
                    candidate=candidate+2
                else :
                    pass
            primes=primes+(candidate,)
            candidate=candidate+2
            count=count+1
        print primes
        print "20th prime is ", primes[-1]

    In case you're wondering, count is initialised to 1 because I am not testing 2 as a prime number (I'm starting from 3), and candidate is incremented by 2 because only odd numbers can be prime. I know there are other ways of solving this problem, such as using the prime number theorem, but I want to know what's wrong with this approach. Also, if you have any optimisations in mind, please suggest them. Thank you.

    Read the article

  • Unable to boot Ubuntu 13.10 (nVidia GTX 770m and Intel HD 4600)

    - by Raziel Gonzalez
    Ever since I bought this laptop I've been trying to install Ubuntu on it. It came with W8 preinstalled. Up to this point, I've been able to boot in UEFI mode with a black screen. I can tell it's trying to use the nVidia card (there's a led on the computer, depending on the color you can tell which GPU is using) and if I press crtl+alt+F1 I can go to console mode. Taking this advantage I tried to install bumblebee and after a successful install the led that indicates which GPU is being used change, indicating that it switched to the Intel HD 4600 graphics. After the installation I tried to initiate the graphic interface (startx) with no success. Xorg.0.log shows the error: [ 3706.779] X.Org X Server 1.14.3 Release Date: 2013-09-12 [ 3706.782] X Protocol Version 11, Revision 0 [ 3706.783] Build Operating System: Linux 3.2.0-37-generic x86_64 Ubuntu [ 3706.783] Current Operating System: Linux ubuntu 3.11.0-12-generic #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013 x86_64 [ 3706.783] Kernel command line: BOOT_IMAGE=/casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper nomodeset -- [ 3706.785] Build Date: 15 October 2013 09:23:37AM [ 3706.786] xorg-server 2:1.14.3-3ubuntu2 (For technical support please see http://www.ubuntu.com/support) [ 3706.786] Current version of pixman: 0.30.2 [ 3706.788] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 3706.788] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 3706.791] (==) Log file: "/var/log/Xorg.0.log", Time: Sat Nov 2 12:28:52 2013 [ 3706.792] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 3706.792] (==) No Layout section. Using the first Screen section. [ 3706.792] (==) No screen section available. Using defaults. [ 3706.792] (**) |-->Screen "Default Screen Section" (0) [ 3706.792] (**) | |-->Monitor "<default monitor>" [ 3706.792] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 3706.792] (==) Automatically adding devices [ 3706.792] (==) Automatically enabling devices [ 3706.792] (==) Automatically adding GPU devices [ 3706.792] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 3706.792] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" [ 3706.792] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 3706.792] (II) Loader magic: 0x7ff680918d20 [ 3706.792] (II) Module ABI versions: [ 3706.792] X.Org ANSI C Emulation: 0.4 [ 3706.792] X.Org Video Driver: 14.1 [ 3706.792] X.Org XInput driver : 19.1 [ 3706.792] X.Org Server Extension : 7.0 [ 3706.793] (--) PCI:*(0:0:2:0) 8086:0416:1462:10e8 rev 6, Mem @ 0xf7400000/4194304, 0xb0000000/268435456, I/O @ 0x0000f000/64 [ 3706.793] (II) Open ACPI successful (/var/run/acpid.socket) [ 3706.794] Initializing built-in extension Generic Event Extension [ 3706.795] Initializing built-in extension SHAPE [ 3706.796] Initializing built-in extension MIT-SHM [ 3706.797] Initializing built-in extension XInputExtension [ 3706.797] Initializing built-in extension XTEST [ 3706.798] Initializing built-in extension BIG-REQUESTS [ 3706.799] Initializing built-in extension SYNC [ 3706.799] Initializing built-in extension XKEYBOARD [ 3706.800] Initializing built-in extension XC-MISC [ 3706.801] Initializing built-in extension SECURITY [ 3706.802] Initializing built-in extension XINERAMA [ 3706.802] Initializing built-in extension XFIXES [ 3706.803] Initializing built-in extension RENDER [ 3706.804] Initializing built-in extension RANDR [ 3706.804] Initializing built-in extension COMPOSITE [ 3706.805] Initializing built-in extension DAMAGE [ 3706.806] Initializing built-in extension MIT-SCREEN-SAVER [ 3706.806] Initializing built-in extension DOUBLE-BUFFER [ 3706.807] Initializing built-in extension RECORD [ 3706.807] Initializing built-in extension DPMS [ 3706.808] Initializing built-in extension X-Resource [ 3706.809] Initializing built-in extension XVideo [ 3706.809] Initializing built-in extension XVideo-MotionCompensation [ 3706.810] Initializing built-in extension SELinux [ 3706.811] Initializing built-in extension XFree86-VidModeExtension [ 3706.811] Initializing built-in extension XFree86-DGA [ 3706.812] Initializing built-in extension XFree86-DRI [ 3706.812] Initializing built-in extension DRI2 [ 3706.812] (II) "glx" will be loaded by default. [ 3706.812] (WW) "xmir" is not to be loaded by default. Skipping. 
[ 3706.812] (II) LoadModule: "dri2" [ 3706.812] (II) Module "dri2" already built-in [ 3706.812] (II) LoadModule: "glamoregl" [ 3706.813] (II) Loading /usr/lib/xorg/modules/libglamoregl.so [ 3706.813] (II) Module glamoregl: vendor="X.Org Foundation" [ 3706.813] compiled for 1.14.2.901, module version = 0.5.1 [ 3706.813] ABI class: X.Org ANSI C Emulation, version 0.4 [ 3706.813] (II) LoadModule: "glx" [ 3706.813] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 3706.813] (II) Module glx: vendor="X.Org Foundation" [ 3706.813] compiled for 1.14.3, module version = 1.0.0 [ 3706.813] ABI class: X.Org Server Extension, version 7.0 [ 3706.813] (==) AIGLX enabled [ 3706.814] Loading extension GLX [ 3706.814] (==) Matched intel as autoconfigured driver 0 [ 3706.814] (==) Matched vesa as autoconfigured driver 1 [ 3706.814] (==) Matched modesetting as autoconfigured driver 2 [ 3706.814] (==) Matched fbdev as autoconfigured driver 3 [ 3706.814] (==) Assigned the driver to the xf86ConfigLayout [ 3706.814] (II) LoadModule: "intel" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so [ 3706.814] (II) Module intel: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.3, module version = 2.99.904 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "vesa" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 3706.814] (II) Module vesa: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.1, module version = 2.3.2 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "modesetting" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 3706.814] (II) Module modesetting: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.1, module version = 0.8.0 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "fbdev" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 3706.815] (II) Module fbdev: vendor="X.Org Foundation" [ 3706.815] compiled for 1.14.1, module version = 0.4.3 [ 3706.815] Module class: X.Org Video Driver [ 3706.815] ABI class: X.Org Video Driver, version 14.1 [ 3706.815] (II) intel: Driver for Intel(R) Integrated Graphics Chipsets: i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G, 915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM, Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45, 4 Series, G45/G43, Q45/Q43, G41, B43, HD Graphics, HD Graphics 2000, HD Graphics 3000, HD Graphics 2500, HD Graphics 4000, HD Graphics P4000, HD Graphics 4600, HD Graphics 5000, HD Graphics P4600/P4700, Iris(TM) Graphics 5100, HD Graphics 4400, HD Graphics 4200, Iris(TM) Pro Graphics 5200 [ 3706.815] (II) VESA: driver for VESA chipsets: vesa [ 3706.815] (II) modesetting: Driver for Modesetting Kernel Drivers: kms [ 3706.815] (II) FBDEV: driver for framebuffer: fbdev [ 3706.815] (--) using VT number 7 [ 3706.819] (WW) Falling back to old probe method for modesetting [ 3706.819] (EE) open /dev/dri/card0: No such file or directory [ 3706.819] (WW) Falling back to old probe method for fbdev [ 3706.819] (II) Loading sub module "fbdevhw" [ 3706.819] (II) LoadModule: "fbdevhw" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 3706.819] (II) Module fbdevhw: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 0.0.2 [ 3706.819] ABI 
class: X.Org Video Driver, version 14.1 [ 3706.819] (II) Loading sub module "vbe" [ 3706.819] (II) LoadModule: "vbe" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libvbe.so [ 3706.819] (II) Module vbe: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 1.1.0 [ 3706.819] ABI class: X.Org Video Driver, version 14.1 [ 3706.819] (II) Loading sub module "int10" [ 3706.819] (II) LoadModule: "int10" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libint10.so [ 3706.819] (II) Module int10: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 1.0.0 [ 3706.819] ABI class: X.Org Video Driver, version 14.1 [ 3706.819] (II) VESA(0): initializing int10 [ 3706.820] (EE) VESA(0): V_BIOS address 0x0 out of range [ 3706.820] (II) UnloadModule: "vesa" [ 3706.820] (II) UnloadSubModule: "int10" [ 3706.820] (II) Unloading int10 [ 3706.820] (II) UnloadSubModule: "vbe" [ 3706.820] (II) Unloading vbe [ 3706.820] (EE) Screen(s) found, but none have a usable configuration. [ 3706.820] (EE) Fatal server error: [ 3706.820] (EE) no screens found(EE) [ 3706.820] (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help. [ 3706.820] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information. [ 3706.820] (EE) [ 3706.827] (EE) Server terminated with error (1). Closing log file. I also saved the dsmeg output to see if it can be of any help. In order to be able to get to this stage I had to boot with nomodeset option and removed quiet and splash. Anyone got this same error? Any guidance? I've tried other linux distros and so far the only one that is able to boot is Opensuse 12.3 without any issues (but only when I switch to legacy mode instead of UEFI).

    Read the article

  • Why doesn't Bumblebee work with Nvidia GT640M?

    - by Dickson Wong
    I've done a clean install of Ubuntu 12.04 from a liveCD on a Samsung Series 7 Chronos with a 3rd Gen i7 and the Nvidia GeForce GT 640M. I've followed this: (https://wiki.ubuntu.com/Bumblebee#Installation) to install bumblebee so I can switch to my discrete GPU. I have not used ironhide, or preivous bumblee since it's a clean install. When I use optirun, optirun says it can't initialize the GPU: [ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to initialize the NVIDIA GPU at PCI:1:0:0. Please [ERROR]Aborting because fallback start is disabled. I've looked at the troubleshooting page for bumblebee and I have the correct driver and do not think I have aspci disabled. Also, my keyboard becomes very unresponsive and my mouse skips and isn't smooth after optirun crashes. The only thing to fix this is a reboot. Here's my lspci | grep VGA output: 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation Device 0fd2 (rev ff) It seems Ubuntu can see my graphics card. I don't really know what's going on.

    Read the article

  • ERROR running Bumblebee on 13.10

    - by paul
    I'm trying to get Bumblebee working again after an upgrade to Saucy. Running software with Optirun gives the following output: optirun nvidia-settings [ 45.697126] [ERROR]Cannot access secondary GPU - error: [XORG] (EE) Failed to load /usr/lib/xorg/modules/libglamoregl.so: /usr/lib/xorg/modules/libglamoregl.so: undefined symbol: _glapi_tls_Context [ 45.697179] [ERROR]Aborting because fallback start is disabled. Does anyone know how to fix this? Thanks! :)

    Read the article

  • Bumblebee : Module bbswitch could not be loaded

    - by StepTNT
    I've upgraded to 12.04 and I had to switch from Ironhide to the latest version of Bumblebee. Now, when I try to run bumblebeed, I get this error: FATAL: Module bbswitch not found. [ERROR]Module bbswitch could not be loaded (timeout?) [WARN]No switching method available. The dedicated card will always be on. I don't really need to use the secondary VGA on Kubuntu, so I would like to find a way to definitely shut the discrete GPU down and avoid wasting battery. I can't disable it from the BIOS because I use it on Windows. My card is an nvidia 540M.

    Read the article

  • stuck with 640x480 resolution using bumblebee on 10.04

    - by pholz
    I just installed bumblebee on a Dell XPS L502X. It basically works: optirun glxspheres does exactly what it should. However, X is running with a resolution of 640x480. I can try to change settings with optirun nvidia-settings -c :8 (as suggested on the bumblebee wiki), but in the relevant section it only lists resolutions up to 640x480. In the /etc/bumblebee/xorg.conf.* files, there are no Monitor or Screen sections. Adding them doesn't change anything. How can I change the resolution?

    Read the article

  • OpenGL (glx) not working with ubuntu 12.10

    - by user26766
    It all started when I installed nvidia's own (download from website) driver. Uninstalling it and reverting back to nvidia-current didn't solve the problem, so I have been playing with this for a while. It seems glx support is missing, and my intel graphics is not responding. gnome loads only in fallback mode. Here are some outputs: glxinfo name of display: :0.0 Error: couldn't find RGB GLX visual or fbconfig glxgears Error: couldn't get an RGB, Double-buffered visual optirun glxgears works fine lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev ff) Here's the content of log file when just after running glxinfo: less /var/log/Xorg.0.log | grep gl [ 112.156] (II) LoadModule: "glx" [ 112.157] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 112.157] (II) Module glx: vendor="X.Org Foundation" [ 112.157] (II) UnloadModule: "glx" [ 112.157] (II) Unloading glx [ 112.157] (EE) Failed to load module "glx" (module requirement mismatch, 0) Some more info... lsmod Module Size Used by bbswitch 13612 0 pci_stub 12623 1 vboxpci 23195 0 vboxnetadp 25671 0 vboxnetflt 23480 0 vboxdrv 320372 3 vboxpci,vboxnetadp,vboxnetflt parport_pc 32689 0 ppdev 17074 0 bnep 18141 2 rfcomm 46620 12 binfmt_misc 17501 1 snd_hda_codec_hdmi 32049 1 snd_hda_codec_realtek 78147 1 joydev 17458 0 uvcvideo 76750 0 videobuf2_core 32852 1 uvcvideo videodev 120310 2 uvcvideo,videobuf2_core videobuf2_vmalloc 12861 1 uvcvideo videobuf2_memops 13405 1 videobuf2_vmalloc snd_hda_intel 33492 3 coretemp 13401 0 kvm_intel 132760 0 arc4 12530 2 snd_hda_codec 134213 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel kvm 414111 1 kvm_intel iwlwifi 386837 0 i915 520799 2 snd_hwdep 17699 1 snd_hda_codec snd_pcm 96668 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec psmouse 100389 0 drm_kms_helper 49113 1 i915 ghash_clmulni_intel 13221 0 snd_seq_midi 13325 0 snd_rawmidi 30513 1 snd_seq_midi btusb 22475 0 drm 288436 3 i915,drm_kms_helper serio_raw 13216 0 snd_seq_midi_event 14900 1 snd_seq_midi snd_seq 61555 2 snd_seq_midi,snd_seq_midi_event mac80211 535936 1 iwlwifi bluetooth 209249 22 bnep,rfcomm,btusb snd_timer 29426 2 snd_pcm,snd_seq snd_seq_device 14498 3 snd_seq_midi,snd_rawmidi,snd_seq snd 78921 16 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device cfg80211 206797 2 iwlwifi,mac80211 aesni_intel 51038 1 cryptd 20404 2 ghash_clmulni_intel,aesni_intel aes_x86_64 17256 1 aesni_intel dell_wmi 12682 0 sparse_keymap 13891 1 dell_wmi dcdbas 14439 0 i2c_algo_bit 13414 1 i915 microcode 22804 0 lpc_ich 17062 0 soundcore 15048 1 snd snd_page_alloc 18485 2 snd_hda_intel,snd_pcm video 19391 1 i915 mac_hid 13206 0 wmi 19071 1 dell_wmi mei 40691 0 lp 17760 0 parport 46346 3 parport_pc,ppdev,lp ahci 25721 3 libahci 31192 1 ahci atl1c 41102 0 how can I fix this? any ideas? Here is another thing I've tried: sudo apt-get purge nvidia* sudo reboot sudo apt-get install bumblebee bumblebee-nvidia didn't make any difference. The most relevant post I found on the web was this: http://ubuntuforums.org/archive/index.php/t-1722306.html Here it's explained that the problem is with the priority of shared libraries that are loaded for glxinfo. 
Here's what I get when I look up the libraries: linux-vdso.so.1 => (0x00007fff6bf8b000) libGL.so.1 => /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 (0x00007f22d6ccd000) libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f22d6993000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f22d65d3000) libglapi.so.0 => /usr/lib/x86_64-linux-gnu/libglapi.so.0 (0x00007f22d63ad000) libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f22d619b000) libXdamage.so.1 => /usr/lib/x86_64-linux-gnu/libXdamage.so.1 (0x00007f22d5f97000) libXfixes.so.3 => /usr/lib/x86_64-linux-gnu/libXfixes.so.3 (0x00007f22d5d91000) libX11-xcb.so.1 => /usr/lib/x86_64-linux-gnu/libX11-xcb.so.1 (0x00007f22d5b8f000) libxcb-glx.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-glx.so.0 (0x00007f22d5977000) libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f22d5759000) libXxf86vm.so.1 => /usr/lib/x86_64-linux-gnu/libXxf86vm.so.1 (0x00007f22d5553000) libdrm.so.2 => /usr/lib/x86_64-linux-gnu/libdrm.so.2 (0x00007f22d5346000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f22d5129000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f22d4f25000) /lib64/ld-linux-x86-64.so.2 (0x00007f22d6f54000) libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f22d4d20000) libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f22d4b1a000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f22d4911000) It seems "mesa" instead of nvidia-current is being used. However I didn't find any obvious link to /lib/nvidia-current in /etc/ld.so.conf (where the directories containing the shared libraries are listed). I don't know if there's anything missing or if this is normal. thanks, UPDATE: The problem was solved by updating to Ubuntu 13.04. It seems bumblebee was to blame, but I'm not sure what was going wrong...

    Read the article

  • nvidia graphics resolution problem

    - by Deepak Adhikari
    I am currently using Ubuntu 12.04 on an Acer Aspire TimelineX 3830TG with a 2GB Nvidia GeForce GT 540M graphics card. To enable my graphics card I followed these steps: 1) I activated nvidia_current and nvidia_current_updates from Additional Drivers; 2) sudo nvidia-xconfig; 3) then reboot. Following these steps I got the following errors: 1) my resolution is 640x480 (there is no 1366x768 option in Displays; previously there was 1366x768 before the nvidia-xconfig command was entered); 2) when I open nvidia-settings it shows me the following error: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run 'nvidia-xconfig' as root) and restart the X server." Problems that need to be solved: 1) change the resolution to 1366x768; 2) also, how can I check whether my nvidia graphics is working or not? Please, someone, help me solve these issues... I am seriously in need of my graphics card... I want my nvidia graphics card to work as smoothly as my intel graphics. I am not willing to use bumblebee. With regards, ubuntu user

    Read the article

  • Lenovo Y570 nVidia drivers are giving me problems. (GT 555)

    - by Joe
    So, I've tried the bumblebee "hack" listed here: https://github.com/Bumblebee-Project/bbswitch/tree/hack-lenovo . I copied every line below into the terminal; is that what I was supposed to do? I'm a Linux noob.

        git clone http://github.com/Bumblebee-Project/bbswitch.git -b hack-lenovo
        cd bbswitch
        mkdir /usr/src/acpi-handle-hack-0.0.1
        cp Makefile acpi-handle-hack.c /usr/src/acpi-handle-hack-0.0.1
        cp dkms/acpi-handle-hack.conf /usr/src/acpi-handle-hack-0.0.1/dkms.conf
        dkms add acpi-handle-hack/0.0.1
        dkms build acpi-handle-hack/0.0.1
        dkms install acpi-handle-hack/0.0.1
        echo acpi-handle-hack | sudo tee -a /etc/modules
        sudo update-initramfs -u

    (I had to use http:// instead of git:// on my university network.) It doesn't change anything; I get:

        joe@ubuntu:~$ optirun glxspheres
        [ 4447.830749] [ERROR]Cannot access secondary GPU - error: Could not load GPU driver
        [ 4447.830844] [ERROR]Aborting because fallback start is disabled.

    Read the article

  • Ubuntu 12.04.3 Graphics Issues: Broken Pipes, Reinstalled Xorg and Bumblebee

    - by user190488
    It seems I have a problem, and am only making it worse by following what I find online. I have a new Asus N550JV-D71 (not sure about the part after the dash, though I definitely know it includes 71). I decided to downgrade Windows 8 to 7, then dual boot Ubuntu 12.04 with it (there were issues with Windows 8, and I had a Windows 7 disk handy). It did work and, after installing Bumblebee in tty (because it wouldn't boot when it was first installed), it worked marvelously for a little less than a week. However, I restarted it last night and got the Could not write bytes: Broken pipes error. (I see it's a very common error, but I've looked at the majority of the suggested Similar Questions already.) I followed what I could find online, followed those instructions (making sure to not install any sort of graphics drivers other than what Bumblebee provides), and it just seems to go further and further downhill. I'm afraid I didn't write the exact steps to get to this point (it was late by the time I gave up the night before), but it involved reinstalling lightdm, xorg (and xserver?), and Bumblebee. I then changed the Bumblebee.conf file so that Device=nvidia. I'm pretty new to Linux in general (I've used it since 10.04, but I hadn't had issues up until this computer, so it let me stay a newbie), so I'm not exactly sure what log files to look at to find the errors to look up. However, I did look at lshw and noticed that displays was marked as unassigned. Also, if I try to start lightdm using the command line, it always stops at Stopping Mount network filesystems. I should note that there isn't an xorg.conf file, and no .Xauthority. I would really, really prefer not to reinstall 12.04 if possible. I managed to get grub to display only a short time ago, and I can't boot to the dvd drive unless I go into the BIOS settings and manually change the boot order (that was an issue from the beginning, before the Ubuntu install), and getting into those settings often means rebooting several times due to the fact that the window to get to it is extremely small. I have most of what I need backed up, however, in case it does get to that point. If I really have to, I can just use the latest Ubuntu version instead of the LTS, but the reason I chose 12.04 in the first place is because I need something stable-ish, and Windows isn't suitable to what I need to do. I should note that the reason I restarted last night in the first place was that it wasn't charging the battery, and the wifi kept on going out. Hardware: Nvidia GeForce GT 750M Intel HD graphics 4600

    Read the article

  • Unity won't load with proprietary drivers

    - by Nobita
    Running Ubuntu 11.04 for the first time and getting used to Unity, I decided to install the proprietary drivers for my Nvidia graphics card. The output of lspci | grep VGA is:

        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: nVidia Corporation Device 0df5 (rev a1)

    If I activate the driver that is "recommended", the next time I try to log in to a Unity session it just falls back to the classic session. How can that be happening? I attach a screenshot of my proprietary drivers screen:

    Read the article

  • Asus 1215n GPU driver/s don't give me a "full" OS experience

    - by AFD
    I'm used to not having manufacturer-specific drivers on my laptop when running a Linux OS, and that has always been fine - there have been adequate FOSS drivers for my needs, and it hasn't ruined any of my OS experience. When I bought an Asus 1215n, one of the upsides of the hardware seemed to be the switchable GPU that could give lots of performance or lots more battery life and would switch on the fly... with Windows, of course. It seems the Nvidia drivers are poor and people advise against installing them. I have some sort of workaround for vga_switcheroo (?), and the on-the-fly switching of the GPUs has turned into a manual one :( The worst bit, though (aside from shorter battery life), is the web experience with HTML5. If I visit Mozilla's Web O'Wonder site I'm told I don't have WebGL working due to driver issues. This really blows - is it possible that proprietary drivers can now ruin my web experience too?!

    Read the article
