Search Results

Search found 21717 results on 869 pages for 'setup versions'.

  • How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    - by TorakTu
    How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    WHAT DID WORK
    Installed the original AMD64 version of the Ubuntu 12.04 ISO image, burned a DVD and installed it, which gave me kernel 3.2.0-23 to begin with. Got 5.1 surround sound working. Got the ATI (now AMD) video drivers for my Radeon HD R6870 video card installed from AMD's website. fglrxinfo came up and reported as normal.

    THE PROBLEM
    Kernel 3.2.0.x kept locking up, so I tried higher kernel versions, but the ATI / AMD drivers do not install on any kernel above 3.2.0.x.

    WHAT I HAVE TRIED
    I have gone over this tutorial many times (https://help.ubuntu.com/community/BinaryDriverHowto/ATI) and it doesn't work on ANY kernel except 3.2.0.x. The ATI / AMD drivers work for 12.04 Precise with kernels 3.2.0-23 and -24, but the computer kept locking up. Although all my games would work, the lock-ups were random and constant. I looked all over the web for three days trying to find an answer, and the advice for the lock-up issue was simply to update the kernel. So I did. I have tried many kernels, all of them with no lock-ups, BUT the restricted AMD drivers from the AMD website will not install. And none of the open-source AMD drivers have EVER installed, no matter what kernel or version I tried.

    EXAMPLE OUTPUT OF 3D TYPE OF ERRORS
      javax.media.opengl.GLException: glXGetConfig failed: error code GLX_NO_EXTENSION
        at com.sun.opengl.impl.x11.X11GLDrawableFactory.glXGetConfig(X11GLDrawableFactory.java:651)
        at com.sun.opengl.impl.x11.X11GLDrawableFactory.xvi2GLCapabilities(X11GLDrawableFactory.java:350)
        at com.sun.opengl.impl.x11.X11GLDrawableFactory.chooseGraphicsConfiguration(X11GLDrawableFactory.java:174)
        at javax.media.opengl.GLCanvas.chooseGraphicsConfiguration(GLCanvas.java:520)
        at javax.media.opengl.GLCanvas.<init>(GLCanvas.java:131)
        at haven.HavenPanel.<init>(HavenPanel.java:68)
        at haven.HavenPanel.<init>(HavenPanel.java:78)
        at haven.MainFrame.<init>(MainFrame.java:182)
        at haven.MainFrame.main2(MainFrame.java:306)
        at haven.MainFrame.access$100(MainFrame.java:34)
        at haven.MainFrame$7.run(MainFrame.java:360)
        at java.lang.Thread.run(Thread.java:722)

    And of course this is what fglrxinfo shows:
      X Error of failed request:  BadRequest (invalid request code or no such operation)
        Major opcode of failed request:  139 (ATIFGLEXTENSION)
        Minor opcode of failed request:  66 ()
        Serial number of failed request:  13
        Current serial number in output stream:  13

    EDIT: I forgot to mention that I DID look at this post over the last few days and it did not help.
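
    If the driver only fails to build on newer kernels, one avenue worth trying (a sketch, not verified on this exact card, and the installer filename/version below is an example) is letting the Catalyst .run installer generate distribution packages, which compiles the fglrx module against the headers of the kernel you are actually running:

      # headers for the running kernel must be present first
      sudo apt-get install build-essential linux-headers-$(uname -r) dh-modaliases execstack
      # build Ubuntu packages from the AMD installer, then install them
      chmod +x amd-driver-installer-12.6-x86.x86_64.run
      sudo sh ./amd-driver-installer-12.6-x86.x86_64.run --buildpkg Ubuntu/precise
      sudo dpkg -i fglrx*.deb
      sudo aticonfig --initial -f

    If the packaged build fails, the make.log it points to usually names the exact kernel API mismatch, which is more actionable than the installer's generic error.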

  • OS X 10.8.3 + attempt to change VPN settings = no more VPN access

    - by nicole
    I am running Mountain Lion and had gotten very tired of re-entering my password at random times when using my school's VPN network (I don't know much about these, but the type is Cisco IPSec according to the setup instructions I followed a while back). In an attempt to make life easier, I followed the instructions here, but, alas, any attempt to connect with VPN was met with the message "A configuration error has occurred. Verify your settings and try connecting again" (or something along those lines.) I then tried to do the steps in the blog post in reverse and change everything back. Upon (supposedly) doing that, though, a new error message came when attempting to connect to VPN: "The negotiation with the VPN server failed. Verify the server address and try reconnecting." Now I have no idea what to do. Is there a way to reset all VPN-related things in my system so that I can follow my school's instructions and just start over?
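
    If the settings are now in an inconsistent state, one blunt way to start over (a sketch from memory of 10.8; back everything up and double-check the path before deleting anything) is to save the system network configuration, remove the broken VPN service in the GUI, and recreate it from the school's instructions:

      # back up the file where OS X stores network services, including VPN entries
      sudo cp /Library/Preferences/SystemConfiguration/preferences.plist \
              ~/Desktop/preferences.plist.bak
      # then: System Preferences > Network, select the Cisco IPSec service,
      # remove it with the "-" button, reboot, and re-add it from scratch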

  • Why are default spamassassin rules not being applied to emails we generate?

    - by Chance
    My company uses a standalone SpamAssassin install to test marketing emails; however, mail originating from us does not seem to run the full gamut of tests. For example, SpamAssassin has a default rule that flags messages containing the phrase "Dear [Something]", and it properly flags spam that I feed it. It does not, however, apply that same rule to in-house email I send it. Is it possible that SpamAssassin has white-listed us somehow, perhaps because the mail originates in the same domain as the server or receiver? I believe most of the recent SpamAssassin questions have been mine, so thanks for bearing with me as I figure this out! Chance
    EDIT: Details on our SA setup:
      - We are piping the emails into the command line with spamc -R < test_email.eml
      - Identical results testing as root or as a user; no user_prefs file
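
    One thing worth ruling out (an educated guess, not a confirmed diagnosis): SpamAssassin evaluates mail from hosts it considers trusted with a different posture, skipping network tests and adding ALL_TRUSTED's negative score, and mail injected from the server itself typically counts as trusted. A quick check:

      # run verbosely on the in-house sample and look for trust-related hits
      spamassassin -t -D < test_email.eml 2>&1 | grep -i trust
      # if ALL_TRUSTED fires, pin down the trust boundary in /etc/spamassassin/local.cf:
      #   trusted_networks 203.0.113.0/24     # example range; substitute your own
      #   internal_networks 203.0.113.0/24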

  • Running Android emulator inside a Virtualbox Vm

    - by sgargan
    I'm trying to set up a VM with a complete Android development stack (SDK, platforms, Eclipse, etc.) for a hackathon. I'm having real trouble getting the emulator to start in the VM. I realize that the emulator is essentially a VM itself inside the VirtualBox VM, and so is going to be slow, but it just hangs at the Android splash screen and never gets any further. Might there be something going on with the VM that is causing it to run so very slowly? Is there anything I can do to give the VM more CPU? I've tried setting the execution cap to 100% but it didn't help any. Anyone know what might be going on here, or have any ideas about how I might speed it up? Thanks, Steve.
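
    With the VM powered off, you can hand it more cores and memory from the host side (the VM name below is a placeholder):

      VBoxManage modifyvm "AndroidDev" --cpus 2 --memory 2048 --ioapic on
      # --ioapic on is required before VirtualBox will expose more than one CPU

    Also note that the ARM system image is fully software-emulated, with no hardware acceleration available inside a VirtualBox guest, so a first boot can legitimately take ten minutes or more; starting it with the boot animation disabled (emulator -avd <name> -no-boot-anim) at least makes it clearer whether it is progressing or truly wedged.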

  • Setting Linux UID on NFS volume from EMC NX4

    - by ethrbunny
    I have an EMC NX4 from which there are several CIFS shares with corresponding NFS mount points. The CIFS user IDs seem fine, but when viewed from Linux they are all 327xx numbers and can't be set from the file system (i.e. chown doesn't work: permission denied). On our other (older) EMC devices we used an MMC app to set the Linux UID for each user. I don't seem to have such an app on the 'Applications and Tools' CD for this new device. Is there some other method for setting these? Did I set up the system incorrectly?

  • What is the command to use to put your computer to sleep (not hibernate)?

    - by airrick
    I want to put my Windows PC (Win7) into a sleep state via the command line (so I can bind it to a macro button on the keyboard). The power button on the PC is set up to put the computer to sleep (but it's down on the floor and I'm too lazy to reach down), and it sleeps exactly how I want (hybrid mode, in case I lose power). The Sleep command on the shutdown menu also works. Most info I found says to use:
      %windir%\system32\rundll32.exe PowrProf.dll,SetSuspendState 0,1,0
    But this puts the computer in hibernate mode. I do have hibernate disabled, but am using hybrid sleep. So, what is the command to use to put your computer to sleep (not hibernate)?
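
    One workaround that sidesteps SetSuspendState entirely (a suggestion, not tested against this exact hybrid-sleep configuration): Sysinternals PsShutdown can request a suspend directly:

      rem -d = suspend (sleep) rather than hibernate, -t 0 = no countdown
      psshutdown.exe -d -t 0

    PsShutdown is a free download from the Sysinternals site, so it is easy to drop next to the keyboard macro and bind.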

  • design in agile process

    - by ying
    Recently I had an interview with the dev team at a company. The team uses agile + TDD. The code exercise implements a video rental store which generates a statement to calculate the total rental fee for each type of video (new release, children's, etc.) for a customer. The existing code uses objects like:
      - Statement, to generate the statement and calculate fees, where a big switch statement sits, using an enum to determine how to calculate the rental fee
      - Customer, which holds a list of rentals
      - a Movie base class and a derived class for each type of movie (NEW, CHILDREN, ACTION, etc.)
    The code originally doesn't compile, as the owner was assumed to have been hit by a bus. So here is what I did:
      - outlined an improvement over the object model to give each class better responsibility
      - used the strategy pattern to replace the switch statement and wired the strategies in config (see the sketch below)
    But the team says it's a waste of time, because there is no requirement for it; the UAT test suite works and is the only guideline that goes into architecture decisions. The underlying story is just to get the pricing feature out, and says nothing about how to do it. So the discussion focused on why time should be spent refactoring the switch statement. In my understanding, agile methodology doesn't mean zero design up front, and such code smells should be avoided from the beginning. Also, no unit/UAT test suite will detect such code smells; otherwise Sonar and FindBugs wouldn't exist. Here I want to ask:
      1. Is there such a thing as agile design in the agile methodology, just like agile documentation? How do you define agile design up front, and how do you know when enough is enough? In my understanding, a ballpark architecture and the data contracts among components should be defined before or when starting the project, not the details. Am I right?
      2. Can anyone explain what the team is really looking for in this kind of setup? Is it the design aspect or the agile aspect?
      3. How do you implement the minimum-viable-product concept in an agile process on a real-world project? Is it a must that you should feel embarrassed by your MVP?
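
    For context, a minimal sketch of the strategy refactoring under discussion; class names, method names and fee numbers are illustrative, not taken from the actual exercise:

      // Pricing strategy replacing the enum-driven switch (illustrative names)
      interface PricingStrategy {
          double feeFor(int daysRented);
      }

      class NewReleasePricing implements PricingStrategy {
          public double feeFor(int daysRented) { return daysRented * 3.0; }
      }

      class ChildrensPricing implements PricingStrategy {
          public double feeFor(int daysRented) {
              // flat fee covers the first 3 days, then a per-day charge
              return 1.5 + Math.max(0, daysRented - 3) * 1.5;
          }
      }

      class Movie {
          private final String title;
          private final PricingStrategy pricing; // injected, e.g. wired from config

          Movie(String title, PricingStrategy pricing) {
              this.title = title;
              this.pricing = pricing;
          }

          double rentalFee(int daysRented) {
              return pricing.feeFor(daysRented); // no switch, no enum
          }
      }

    Adding a movie category then means adding a class rather than editing a switch, which is the open/closed argument the team was weighing against "there is no requirement for it".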

  • Install wireless router with cable modem - need authentication server?

    - by Paul
    I've bought a wireless router which I'm installing with a Telstra BigPond cable modem for a friend. As part of the setup I've got to a screen requesting username / password / authentication server for the cable modem. They have contacted Telstra, who supplied the username and password and say that is all they need; they don't know anything about an authentication server. There are a couple of answers up on the Whirlpool forum, found through Google, but those answers are 4 years old:
      http://forums.whirlpool.net.au/forum-replies-archive.cfm/475258.html
      http://forums.whirlpool.net.au/forum-replies-archive.cfm/479615.html
    I haven't tried them yet, as I hoped to get actual answers before trundling over to my friend's house again. Can anyone suggest:
      1. How to get information from Telstra support? (I realise this question may be impossible to answer.)
      2. What is the authentication server for Telstra BigPond for a user in Sydney, Australia?
      3. Are those Whirlpool forum answers still valid?
    I guess if I don't get anything more here I'll try what it says on Whirlpool and see what happens.
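
    For what it's worth, BigPond cable connections of that era used a heartbeat login, and the open-source bpalogin client documents the fields a router's "BigPond" WAN type maps to. The server name below is the default those old configs commonly shipped with; treat it as a guess to confirm with Telstra, not a known-current value:

      # /etc/bpalogin.conf (sketch; values are examples)
      username   yourbigpondusername
      password   yourbigpondpassword
      authserver sm-server          # the "authentication server" the router asks about
      authdomain bigpond.net.au     # assumption; some setups leave this blank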

  • Add a netbook to an existing Windows XP home network

    - by GorillaSandwich
    I've got a home network set up with a couple of Windows XP computers. I'm now trying to add our new netbook to it - also running XP. (The goal is to share files and a printer.) I have run the Network Setup Wizard and made sure that the workgroup name is the same as the others, and have rebooted several times, but whenever I try to 'view workgroup computers,' the only one on it is the netbook. I have a Windows XP CD, but the netbook has no drive. The wizard has some options for floppy disks, but that's useless to me these days. What is this wizard actually trying to do, and can I do it manually? Surely it can't be this hard.
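
    The wizard mostly just sets the workgroup name, enables File and Printer Sharing, and opens the matching firewall ports, all of which can be checked and done by hand. A rough XP-era checklist from the command line:

      rem confirm the computer name and workgroup took effect
      net config workstation
      rem list machines the browser service can currently see
      net view
      rem test a direct connection, bypassing workgroup browsing (name is an example)
      net view \\DESKTOP-PC
      rem map the share once it answers (share name is an example)
      net use Z: \\DESKTOP-PC\SharedDocs

    If net view of a specific machine works while the bare net view stays empty, the problem is the Computer Browser service or NetBIOS over TCP/IP rather than your shares, which narrows things considerably.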

  • OpenLdap 2.4 on centos 6 doesn't listen on port 636

    - by Oliver Henriot
    I have an OpenLDAP 2.4 server on CentOS 6 whose config I copied from the OpenLDAP 2.3 servers I have running on CentOS 5 machines. On OpenLDAP 2.3, specifying TLSCACertificateFile, TLSCertificateFile and TLSCertificateKeyFile with correct values makes the server listen on port 636. This is not the case on the OpenLDAP 2.4 setup. I have configured it with loglevel -1 but have not seen any clue as to what might be wrong, and reading the OpenLDAP 2.4 manual doesn't indicate that any of the other TLS-related parameters are now mandatory. I don't think they are, because if I run the service manually with
      # /usr/sbin/slapd -u ldap -h "ldap:/// ldaps:/// ldapi:///"
    the server does listen on port 636 and I can query it using ldapsearch -H ldaps://myserver:636. Is there something I am missing to get the server to listen on port 636 without having to always launch it manually? Is this linked to CentOS 6 or OpenLDAP 2.4? Thank you. Cheers,
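
    On CentOS 6 the slapd init script builds that -h URL list from /etc/sysconfig/ldap rather than from the LDAP configuration itself, which would explain exactly this symptom (manual start works, service start doesn't). The likely fix, worth verifying against your init script:

      # /etc/sysconfig/ldap
      SLAPD_LDAPS=yes

      # then restart the service and confirm it is listening
      service slapd restart
      netstat -tlnp | grep 636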

  • The tale of how the PowerShell CmdLets got installed with Azure SDK 1.4

    - by Enrique Lima
    I installed the Azure SDK 1.4 while rebuilding my laptop, and then ran the installation for the Windows Azure Service Management (WASM) PowerShell CmdLets. I kicked off the installation script by locating the path to which the WASM PowerShell CmdLets were deployed and double-clicking the startHere command. This opens the WASM installation dialog; click Next, then Next again. Notice the red X next to Azure SDK 1.3; the problem is that I have SDK 1.4. Here is the workaround: go back to the location of the deployed WASM sources, then into the setup path, under scripts > dependencies > check. Locate the CheckAzureSDK.ps1 file, right-click it, and choose Edit. The ps1 file checks for a specific version of the Azure SDK, in this case version 1.3.11133.0038; we need it to check for version 1.4.20227.1419 instead. Save the ps1 file, go back to the open WASM install dialog, and click Rescan. This time it should pass; click Next. A command prompt window will appear; press any key. This completes the installation; click Close.
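
    The edit itself is a one-line version bump; in spirit it looks like the following (the actual variable name inside CheckAzureSDK.ps1 may differ, so treat this as a sketch rather than the file's verbatim contents):

      # CheckAzureSDK.ps1 (sketch; only the expected version string changes)
      # before:
      #   $requiredVersion = "1.3.11133.0038"
      # after:
      $requiredVersion = "1.4.20227.1419"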

  • How to get Wifi Working Properly - I am Noob

    - by user287853
    I'm a noob to Ubuntu, but not computers. I installed a full version of Ubuntu, version 12 or whatever it is. I run it on a machine that has Win7/Win8 on another hard drive. My wireless adapter is some tiny USB stick I got on eBay; it works great in Windows, but I can't get it to work in Ubuntu. More precisely, Ubuntu provides me a list (sometimes) of wireless networks in the area, and when I try to connect to mine it just keeps prompting me for the password, even though the one I use is correct. I looked over all the settings multiple times and don't believe there is anything in error regarding what it takes to connect to my network. So I thought maybe it is a driver issue, and came across NDIS. I thought I should try installing it, but I don't know how when I can't connect the Ubuntu machine to the Internet. I tried some commands to no avail. I have the Ubuntu installation disc, and it shows ndiswrapper common and utils .deb files in there. Can someone out there help me get this wireless set up so I can get online?
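
    ndiswrapper can be installed straight from the install disc and then wrapped around the adapter's Windows .inf driver, roughly like this (package paths vary by disc, and the .inf filename is whatever ships on the USB stick's Windows driver CD):

      # install the ndiswrapper packages from the mounted Ubuntu disc
      sudo dpkg -i /media/cdrom/pool/main/n/ndiswrapper/ndiswrapper-common_*.deb \
                   /media/cdrom/pool/main/n/ndiswrapper/ndiswrapper-utils-1.9_*.deb
      # wrap the Windows driver (filename is an example)
      sudo ndiswrapper -i /path/to/windows/driver/rtl8188.inf
      ndiswrapper -l                               # should report "hardware present"
      sudo modprobe ndiswrapper                    # load the module now
      echo ndiswrapper | sudo tee -a /etc/modules  # and at every boot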

  • virtual box upgrade

    - by Husni
    I upgraded VirtualBox from 4.1 to 4.2. Whenever I want to load my Win XP VDI, it gives me the following error:
      "Kernel driver not installed (rc=-1908). The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Please reinstall the kernel module by executing '/etc/init.d/vboxdrv setup' as root. If it is available in your distribution, you should install the DKMS package first. This package keeps track of Linux kernel changes and recompiles the vboxdrv kernel module if necessary."
    I ran the suggested step to reinstall the kernel module, and the log file is as follows (the same error is printed three times):
      Makefile:181: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop.
    I am still unable to re-run my Win XP VDI file. Anyone have a clue?
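
    That Makefile error means the build can't find kernel headers, so the vboxdrv module can't be recompiled for the running kernel. On Ubuntu/Debian-style systems the usual fix is (adjust package names for your distro):

      # install headers matching the running kernel, plus DKMS
      sudo apt-get install linux-headers-$(uname -r) dkms
      # rebuild and load the VirtualBox kernel module
      sudo /etc/init.d/vboxdrv setup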

  • Is it possible to combine two internet connections to increase performance?

    - by cornjuliox
    I've got a small home network: 3 PCs, plus a laptop or two when the relatives come to visit, connected to a single cable internet connection. Now, as soon as everyone starts using the 'net the performance starts to suffer, and if the load is heavy enough nobody can get anything done and everyone complains. At one point it was so bad that only one of us could use it at a time. While researching possible solutions to this problem, I heard of internet cafes that utilize 2 internet connections, possibly from different providers, with some sort of router that splits the traffic between them, online games going through one and web traffic through the other. Is this possible? What is the technical term for it, and can/should it be applied to a home network setup, or is there another solution to this problem?
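
    The usual term is load balancing (or multihoming) across two uplinks: no single download gets faster, but different connections can take different paths, which is exactly the games-on-one-line, web-on-the-other split described above. Dual-WAN home routers do this in firmware; on a Linux router the core of it is a multipath default route (addresses and interfaces below are placeholders):

      # eth0 -> ISP A (gateway 192.0.2.1), eth1 -> ISP B (gateway 198.51.100.1)
      ip route add default scope global \
          nexthop via 192.0.2.1 dev eth0 weight 1 \
          nexthop via 198.51.100.1 dev eth1 weight 1
      # traffic is then spread per-flow (not per-packet) across both uplinks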

  • Windows Server 2003 (as workstation) unable to write to Samba fileshares

    - by remyhorton
    The setup is a Samba fileserver under Linux, which I am trying to access from a Windows Server 2003 box that has been reconfigured as a workstation. I can log onto the fileshares and can copy/delete files, but trying to open a file and then write to it fails. Renaming files also fails, with an error about requiring a filename. Drag/dropping files onto XEmacs gives me a message about copying from the network zone, and once open the file is read-only. Any ideas what is wrong? I suspect it is a miscommunication of security details, as the folder security options are all unchecked (checking them has no effect). I know it is not a problem with Samba itself, as Windows 2000, Windows XP, and Nautilus (under Linux) can all access/edit fileshare files fine using the same userid/password. I am not using domain logins.
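
    The "network zone" message is the clue worth chasing (a hunch, not a confirmed fix): Windows Server 2003 ships with Internet Explorer Enhanced Security Configuration, which drops unlisted hosts into a restricted zone and can leave files opened from them read-only, something the XP and Windows 2000 clients never see. A quick test is to add the file server to the intranet list; the registry sketch below uses an example hostname, and the exact key (Domains vs. EscDomains) depends on whether ESC is enabled:

      rem mark the "file" scheme on host "fileserver" as Local intranet (zone 1)
      reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\EscDomains\fileserver" /v file /t REG_DWORD /d 1 /f

    Doing the same thing through Internet Options > Security > Local intranet > Sites is equivalent and easier to undo.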

  • Backups of Exchange 2007 SP3 using VSS are abnormally large

    - by Stew
    I have recently implemented Veeam Backup & Replication 6.0, and have noted that when backing up my Exchange server via incremental updates, it is transferring way more data than expected. The backup is incremental and set up to use VSS, and VSS is stable and healthy according to vssadmin. This is Exchange 2007 SP3 running on Windows Server 2008 R2; just last weekend I installed the latest rollup for Exchange. I thought the nightly incrementals were large, but perhaps my users really are sending that much mail, so I tested by taking one incremental backup, waiting 10 minutes, and taking a second. The second incremental backup transferred 5.8 GB of data. We as an organization are absolutely NOT putting 5.8 GB of data on the mail server every 10 minutes. Are there any other Veeam users who have seen something similar? Is my test faulted? Are there other considerations for VSS?
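
    Before assuming Veeam is at fault, two cheap checks: confirm which VSS writer serviced the job, and consider that Exchange 2007's scheduled online maintenance rewrites database pages in bulk, which an image-level incremental legitimately sees as changed blocks (a common explanation for outsized Exchange incrementals, though not a confirmed diagnosis here):

      rem confirm the Exchange writer is present, stable and without last errors
      vssadmin list writers
      rem see which shadow copies exist and when they were created
      vssadmin list shadows

    If the 10-minute test was run during the maintenance window, repeating it mid-day would separate real mail churn from maintenance churn.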

  • apt-get update stuck on "Waiting for Headers"

    - by crasic
    I'm setting up a Maverick server on a spare PC. The install completes fine and the system boots up into the shell. However, when I try to do an apt-get update, apt hangs on almost every entry with the message
      99% [Waiting for headers]
    and sometimes a figure of 96 B/s appears on the far right. The actual percentage it claims also varies. Searching around online gave a potential solution, using the option
      Acquire::http::Pipeline-Depth="0"
    This somewhat alleviates the problem, i.e. it stalls on every other entry with the same message as above. If you wait it out (the whole update took about 4 hours), the update still fails, as a good portion of the hits show an "unable to connect" or similar message, despite the fact that I can ping the servers from the PC just fine. The problem is also unrelated to the mirror used, since I've tried about a dozen mirrors with no success; I've even tried commenting out everything but the main entry in sources.list and it still refuses to update. The network connection is fine, since I can ping and wget (apt won't let me install lynx until I run a successful update) just fine. I've also reinstalled the distro with no luck. The only thing weird about the setup is that the PC is connecting to the internet through my Windows laptop with ICS configured properly but, as I've said before, the network connection is fine.
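
    Given that plain ping and wget work, the ICS NAT mangling persistent HTTP connections is a plausible culprit, and pipelining is exactly what tickles it. Making the workaround permanent is harmless and rules it out cleanly (the filename is arbitrary; any file under apt.conf.d is read):

      # /etc/apt/apt.conf.d/99fixbadproxy
      Acquire::http::Pipeline-Depth "0";
      Acquire::http::No-Cache "true";

    If stalls persist even then, fetching a few Packages.gz URLs from the same mirror with wget would show whether the problem is apt-specific or affects any sustained HTTP transfer through the ICS box.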

  • Dovecot, POP3 and Gmail

    - by Eric J.
    I set up Postfix and Dovecot on a new Ubuntu box following these directions. From a client machine, I validated that POP3 seems to be working:
      telnet mydomain.com 110
      +OK Dovecot ready.
      USER [email protected]
      +OK
      PASS mypassword
      +OK Logged in.
      quit
      +OK Logging out.
    However, when trying to configure Gmail on the same client to retrieve email via POP3, I get the error "Server denied POP3 access for the given username and password. Server returned error: Login failed." I carefully confirmed that Gmail is configured to use the same POP server, port, username and password I used when checking the connection with telnet. What could be causing Gmail to get a "Login failed" message?
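
    One difference between the two tests that is easy to miss (a likely explanation, worth checking against your config): by default Dovecot refuses plaintext authentication from non-local clients when disable_plaintext_auth is enabled, while connections from localhost and nearby networks are exempt. Your telnet session came from a nearby host; Gmail connects from Google's servers and would be turned away at login. The relevant knobs:

      # /etc/dovecot/dovecot.conf (sketch)
      # either allow plaintext auth on port 110 (weak: passwords cross the
      # internet unencrypted)...
      disable_plaintext_auth = no
      # ...or, better, leave it set to yes, enable SSL with a certificate,
      # and point Gmail at POP3S on port 995 with SSL ticked.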

  • Tumblr custom domain not redirecting properly

    - by Manic
    I decided to host my blog at Tumblr, using their custom domain setup (http://blog.smokingfishgames.com/ instead of http://smokingfishgames.tumblr.com). However, it's been 72 hours and I'm still getting spotty redirection. It works some of the time: I go and see the page and blog, and it's all fine. However, it occasionally just stops working and redirects back to my web host, which is a directory with nothing but a single file called BUGGER.html (which I stuck in to make sure that it was my web host and not some empty Tumblr directory). Clearing the Chrome DNS cache makes the problem go away, for a while; after a few minutes, or an hour, or however long, I'll start seeing BUGGER.html again. I clear the cache and, poof, the blog shows up. The thing that's curious to me is that when I clear the cache and get BUGGER.html again (which happens occasionally), I can look at my Chrome DNS cache and see
      assets.tumblr.com            UNSPECIFIED
      blog.smokingfishgames.com    UNSPECIFIED
      www.tumblr.com               UNSPECIFIED
    (IP addresses and expiration times omitted for brevity's sake; if they're important I'm sure I can replicate the issue.) This implies, to me anyway, that my browser is reaching Tumblr but getting bounced back to my web host. Any reason why this would be happening, or is this a normal symptom of DNS propagation? If it is a problem, should I be bothering Tumblr or my host with it, or is this something I can fix myself?
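
    dig can show what the world actually resolves the name to, independent of Chrome's cache (the nameserver below is a placeholder for your DNS host's; the correct Tumblr target value is whatever their custom-domain instructions currently specify):

      # what does public DNS currently return for the blog?
      dig blog.smokingfishgames.com +short
      # ask the authoritative nameserver directly, bypassing every cache
      dig @ns1.yourdnshost.example blog.smokingfishgames.com A
      # two different answers alternating usually means an old A/CNAME record
      # still exists alongside the new one and should be deleted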

  • mailman not relaying email to external address

    - by gozzilli
    I have a setup of Mailman with Postfix on an Ubuntu 12.04 server. My problem is that mailing list emails are not forwarded to email addresses external to my institution. However:
      - the initial welcome email is received by everyone, internally and externally;
      - in fact, a simple email from the command line with mail is successfully sent to anyone;
      - after that, mailing list emails are only forwarded to internal addresses.
    The domain name I'm using for the server is not that of my institution, who is hosting the server. Here is my main.cf:
      myorigin = sub.myinstitution.tld
      mynetworks = 127.0.0.0/8 xxx.xxx.xxx.xxx/16  # this is my institution's IP range
      relayhost = smtp.myinstitution.tld
      inet_interfaces = loopback-only
      local_transport = error:local delivery is disabled
      virtual_alias_maps = hash:/etc/postfix/virtual
      smtpd_recipient_restrictions = permit_mynetworks
      myhostname = mywebsite.tld
      mydestination = $myhostname, localhost.$mydomain, localhost
    I also found these two links on Server Fault and the Ubuntu forums, but neither of these solutions seems to do the trick for me. Any help would be much appreciated.
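
    The Postfix log normally states exactly why the external copies die (relay denied, sender rejected by the relayhost, etc.), which beats guessing at main.cf; one hypothesis worth checking is the institution's smarthost refusing to relay list mail because the envelope sender uses your non-institution domain. Standard diagnostics:

      # recent delivery verdicts for outgoing mail
      grep -E 'status=(bounced|deferred|sent)' /var/log/mail.log | tail -20
      # follow one message end-to-end by its queue ID (replace A1B2C3D4E5)
      grep A1B2C3D4E5 /var/log/mail.log
      # see what is stuck in the queue, and the reason recorded for each item
      postqueue -p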

  • Linux QoS: bulk data transmission during idle times

    - by syneticon-dj
    How would I do a QoS setup where a certain low-priority data stream would get up to X Mbps of bandwidth, but only if the current total bandwidth (of all streams/classes) on this interface does not exceed X? At the same time, other data streams / classes must not be limited to X. The use case is an ISP billing the traffic by calculating the bandwidth average over 5-minute intervals and billing the maximum. I would like to keep the maximum usage to a minimum (i.e. quench the bulk transfer while the interface is busy) but get the data through during idle/low-traffic times. Looking at the frequently used classful schedulers CBQ, HTB and HFSC, I cannot see a straightforward way to accomplish this.
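
    HTB gets close: give the bulk class a near-zero guaranteed rate, a ceiling of X, and the lowest priority, so it only ever borrows bandwidth the other classes are not using. The catch is that the "unlimited" class has to live under the same root, so its rate/ceil must be set at line rate to leave it effectively uncapped. A sketch with X = 100 Mbit on a gigabit link (the interface and the port used to classify the bulk stream are examples):

      # root: full line rate; unclassified traffic goes to the high-priority class
      tc qdisc add dev eth0 root handle 1: htb default 10
      tc class add dev eth0 parent 1:  classid 1:1  htb rate 1000mbit
      # normal traffic: may take the whole link, highest priority
      tc class add dev eth0 parent 1:1 classid 1:10 htb rate 900mbit ceil 1000mbit prio 0
      # bulk: almost no guaranteed rate, capped at X, lowest priority
      tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 100mbit prio 7
      # steer the bulk stream into 1:20 by destination port (example: rsync, 873)
      tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
          match ip dport 873 0xffff flowid 1:20

    This approximates the stated rule rather than implementing it exactly: HTB enforces "bulk gets at most X, and only from spare capacity", but it cannot literally condition the bulk class on total interface throughput staying under X.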

  • lighttpd silently stops logging

    - by Max Cantor
    I'm on a Slicehost 256MB VPS with Ubuntu 9.04 (Jaunty). lighttpd is the only web server process running; it listens on port 80. My lighttpd.conf can be found here. I'm using Ubuntu's default logrotate setup for lighty. At seemingly random times, lighttpd will stop logging. It is not correlated with log rotation--that is, the errors do not occur when logrotate kicks in. What happens is, I will verify that the server is serving files by hitting a URL with my browser, and I will verify that it is not logging by checking access.log and seeing that the GET request I just made is not there. Using init.d to restart the process starts logging again, without truncating or rotating the log file. That is, new requests will be logged at the end of the existing access.log file. There are no cron jobs running on this box. Any ideas?
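
    A classic cause of a daemon "silently" ceasing to log is the access log being rotated or unlinked while the process keeps its old file descriptor, so writes land in a deleted inode; even though the timing here doesn't look correlated with logrotate, it is cheap to rule out while the server is in the broken state:

      # does lighttpd's fd still point at the live access.log?
      ls -l /proc/$(pidof lighttpd)/fd | grep -i log
      # a trailing "(deleted)" means it is writing into a removed file
      lsof -p $(pidof lighttpd) | grep access.log

    If that is what's happening, the logrotate postrotate action for lighttpd is not signalling the daemon to reopen its logs, which the init.d restart then fixes as a side effect.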

  • Deactivating website in ISPConfig shows another site

    - by Mattias
    A long time ago, one of our clients set up a subdomain pointing to our IP address. We added a website (Sites > Website > Add new website) that points to one of our servers. The project is now closed and the client wants us to remove the content. When we deactivate this site (by unticking Active), it automatically defaults to another website we have in our list (!?). So, because the client is still pointing to our IP, when entering project.client.com another client's project shows up by default. How is this possible? Any suggestions? I can of course give you more details when you tell me what details you need. Thanks
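
    This is standard name-based virtual hosting behaviour rather than ISPConfig misbehaving: when a request's Host header matches no enabled vhost, Apache falls back to the first vhost it loaded for that IP/port. You can confirm which site is the accidental default and give unmatched hostnames somewhere harmless to land:

      # list parsed vhosts; the "default server" line shows the fallback site
      apache2ctl -S
      # a common fix is a catch-all vhost that sorts first (e.g. 000-default)
      # and serves only a placeholder page for any unmatched hostname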

  • Redundant APC UPS units, single server set up

    - by Sholom
    Hi all. We have a very simple setup and are looking for a very simple (reliable) solution. Setup: one Dell box with redundant power supplies running Windows 2003, plugged into two separate APC Smart-UPS 1500 units (USB, no smart cards) on two separate circuits. Solution required:
      IF (UPS1 = Low) AND (UPS2 = Low) THEN shut down gracefully
      ELSE do nothing!!
    apcupsd only allows for one instance (and therefore one UPS) in a Windows environment. PowerChute can't do this without using APC smart cards, which means utilizing our switch, but the switch does not have redundant power supplies, so it will only live for as long as one of the two UPS units. And no, I don't have the budget to buy two smart cards plus a switch with redundancy ;)
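
    If the single-instance limit can be dodged, e.g. by running the second apcupsd instance on any other machine on the LAN (apcupsd publishes UPS status over its NIS port, 3551 by default), the shutdown rule itself is only a few lines against apcaccess. A sketch, assuming both instances are reachable from the Dell box; the second host's address is an example:

      @echo off
      rem query both UPS units over apcupsd's NIS port
      apcaccess status 127.0.0.1:3551 | findstr ONBATT > nul
      set UPS1=%errorlevel%
      apcaccess status 192.168.1.10:3551 | findstr ONBATT > nul
      set UPS2=%errorlevel%
      rem findstr returns errorlevel 0 when ONBATT was found
      if %UPS1%==0 if %UPS2%==0 shutdown -s -t 60 -c "Both UPS units on battery"

    Scheduled every minute, this implements the IF-AND-THEN rule above without smart cards or the switch.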

  • When to use Nginx PHP Fast CGI with a TCP socket instead of a UNIX socket?

    - by user64204
    I've followed this guide to set up PHP in FastCGI mode with Nginx. The guide describes 2 ways of doing it: a TCP socket and a UNIX socket. I've run some ApacheBench tests on my local machine, each repeated multiple times to get better average statistics, and here are the results:
      $ ab -c 200 -n 100000 http://....
      APACHE:              1800 req/sec
      NGINX (TCP socket):  2500 req/sec
      NGINX (UNIX socket): 15000 req/sec
    As far as I understand, there is overhead with using a TCP socket rather than a UNIX socket, hence the better performance with the latter. However, I was not expecting such a performance difference given that the TCP socket is on localhost, and would therefore like to ask the following question: given the huge performance gain from using a UNIX socket, what are the configuration scenarios where it would make sense to use a TCP socket instead?
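
    The short answer: a UNIX socket exists only on the local filesystem, so a TCP socket becomes necessary the moment the FastCGI backend is not on the same machine, whether that's a dedicated PHP server or several backends load-balanced behind an upstream block. The two configurations side by side (paths and addresses are examples; use one or the other, not both):

      # alternative A: local backend on a UNIX socket, no TCP overhead
      location ~ \.php$ {
          fastcgi_pass unix:/var/run/php-fastcgi.sock;
          include fastcgi_params;
      }

      # alternative B: remote or load-balanced backends, where TCP is the only option
      upstream php_backends {
          server 10.0.0.11:9000;
          server 10.0.0.12:9000;
      }
      location ~ \.php$ {
          fastcgi_pass php_backends;
          include fastcgi_params;
      }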
