Search Results

Search found 32520 results on 1301 pages for 'local machine'.

Page 480/1301 | < Previous Page | 476 477 478 479 480 481 482 483 484 485 486 487  | Next Page >

  • Patch management on multiple systems

    - by Pierre
    I'm in charge of auditing the security configuration of an important farm of Unix servers. So far I have come up with a way to assess the basic configuration, but not the installed updates. The problem is that I just can't trust the package management tools on those machines: some of them have not synced with their repository for a long time (so I can't simply run "yum check-update" on Red Hat, for example), and some of the servers are not even connected to the internet and use a company repository. Another problem is that I have multiple target systems: AIX, Debian, CentOS/Red Hat, etc., so the versions differ (AIX) and the tools available differ as well. And, last but not least, I can't install anything on the target systems.

    So I need a script to retrieve the information and either process it directly or save it so it can be processed later on a server (which may well run a different distribution than the one the information was retrieved from). The best ideas I could come up with were to either retrieve the list of installed packages on each machine (dpkg -l on Debian, for example) and process it on a dedicated server (directly parsing the "Packages" file of the Debian repositories), though the problem remains the same for AIX and Red Hat, or to use Nessus scripts to assess vulnerabilities in the installed packages, which I find a bit dirty. Does anyone know a better or more efficient way of doing this?

    P.S.: I already took the time to review some answers to similar problems; unfortunately Chef, Puppet and the like don't meet the requirements I have to meet.

    Edit: Long story short, I need the list of missing updates on a Unix system, just like MBSA on Windows. I'm not authorized to install anything on these systems as they are not mine; all I have are scripting languages. Thanks.
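
    For example, a minimal collection script along the lines described above might look like this (the commands are the standard per-platform package listers; the output path and the later processing step are placeholders):

        #!/bin/sh
        # Dump the installed-package inventory to a plain text file so it can be
        # copied off and compared against repository metadata on another server.
        host=$(uname -n)
        out="/tmp/pkg-inventory-$host.txt"
        case "$(uname -s)" in
          AIX)
            lslpp -Lc > "$out" ;;                 # colon-separated fileset list
          Linux)
            if command -v dpkg-query >/dev/null 2>&1; then
              dpkg-query -W -f '${Package} ${Version}\n' > "$out"
            elif command -v rpm >/dev/null 2>&1; then
              rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n' > "$out"
            fi ;;
        esac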

  • What games work well on MacBook Pro (i7/GeForce GT 330M) within VMWare Fusion?

    - by webworm
    I have a 15" MacBook Pro (2.66 i7 with 8 GB RAM) with the GeForce GT 330M 512 MB graphics card. I use it primarily for development (Mac/Web/Windows), though I would like to play the occasional game with my son, who uses a desktop PC system at home. I prefer to use VMWare Fusion for virtualization rather than BootCamp for a number of reasons:

    - Heat/fan issues with the i7 under BootCamp
    - I prefer to retain the virtual machine as a single file rather than a dedicated partition (easier to move and back up)
    - I have heard that Windows support for the GeForce GT 330 in BootCamp is not all that good.

    So that being said, I was wondering what sort of games I would be able to play within the Fusion environment running Windows 7. I have 8 GB RAM and usually dedicate 4 GB to the virtual machine. I don't expect to be able to play the latest FPS games such as Battlefield: Bad Company 2 or Call of Duty; rather I am looking at games such as Total War II, Civilization IV, Supreme Commander, and other RTS-type games. I should mention the native screen resolution of my MacBook Pro is 1680x1050, which is what I would most likely be running the VM at (fullscreen). Thank you for any advice.

  • Strange network connectivity problem

    - by Marc
    Here is my network connectivity:

        cable modem | |(WAN) wrt54g (default gateway, 192.168.1.1) -- earth |(LAN) | Simple Switch1 | | | | | SimpleSwitch2- neptune | | | | mars mercury | |- venus | |- laptop | saturn (Windows AD DC)

    SimpleSwitch2 was hanging off the wrt54g; I moved it to SW1 during troubleshooting. Nothing described below was any different. earth is connected via wireless to the wrt54g. I can ping from laptop to mars, neptune and mercury. I can ping from earth to venus, saturn and laptop. However, pinging mars, mercury or neptune from earth gives the following result:

        Pinging mars.XXX.XXX [192.168.1.105] with 32 bytes of data:
        Reply from 192.168.1.122: Destination host unreachable.
        Reply from 192.168.1.122: Destination host unreachable.
        Reply from 192.168.1.122: Destination host unreachable.
        Reply from 192.168.1.122: Destination host unreachable.

        Ping statistics for 192.168.1.105:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    .122 is the address of the machine from which I am pinging. earth is a Vista machine; Windows firewall is off. saturn is my DNS and DHCP server. Can anyone give me any ideas what the h*ll is going on? Clearly the topology is a factor. And yes, I am a space geek.

  • MySQL command appends '@localhost' to username

    - by Mikee
    I just can't seem to figure this one out. I want to use the command line to connect to a MySQL database residing on another server. I went ahead and created the username and password for the user. I have also granted all privileges on that user for that database. When using the command: mysql -h <hostname> -u <username> -p, I get the following error: ERROR 1045 (28000): Access denied for user '<username>'@'<local_machine_hostname>' (using password: YES) The problem is that it keeps appending the current machine's hostname into the username. Obviously, that user@<local_machine_hostname> is not correct. It doesn't matter what I type. For instance, if I type: mysql -h <hostname> -u '<username>'@'<hostname>' -p It does the same, only in the error output, it says: Access denied for user '<username>@<hostname>'@'<local_machine_hostname>' Is there a setting in a configuration file which is allowing this to happen? It's really quite annoying. I need to set up a tikiwiki server, and it cannot connect because during the step where you set up mysql, it keeps appending the local machine's hostname to the mysql login name.
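
    For context on the error format: MySQL identifies an account as 'user'@'host', and the host in the message is always the client machine the connection comes from, so the server is reporting where you connected from rather than mangling the login name. The usual fix is to create or grant the account for that client host (or '%') on the server side; a sketch with placeholder names:

        mysql -h dbhost.example.com -u root -p -e \
          "GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'%' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"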

  • Restrict Computer or Users from Internet but allow access to intranet and Windows Update / ePO?

    - by MoSiAc
    So this may be impossible, but I've been asked to try and find something about it, and so far nothing I have found is possible. I need to restrict specific machines or user accounts from regular Internet access but let them have access to the intranet portion of our network. I do not have Active Directory control, nor does anyone at my local workplace (corporate control is in a different state). I have tried going through IPsec and doing this per local machine, but that system seems to have been removed from the images that are installed on these machines, so that is out. So far the only other option I can think of is assigning the machines a specific IP address and removing their gateway access. This would probably work, but the machines need to be able to receive updates that are being pushed to them through ePO and LanDesk. I would really like to do this at the user level, because then if I need to do tech work on the machine and need internet access I can get to it, but a "special" user could log in and not be able to get to anything.
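
    If the no-default-gateway idea turns out to be workable, one way to keep ePO/LanDesk reachable is a static address with no gateway plus persistent routes to just the subnets you need. A rough sketch, assuming the intranet lives on 10.0.0.0/8 and the local router is 192.168.1.1 (both placeholders):

        rem Routes to internal networks only; with no default gateway configured,
        rem everything else has no path out.
        route -p add 10.0.0.0 mask 255.0.0.0 192.168.1.1
        rem Subnet of the ePO/LanDesk servers (placeholder address):
        route -p add 192.168.10.0 mask 255.255.255.0 192.168.1.1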

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mystery for me. I usually use passwordless RSA authentication to log into my remote *nix servers with ssh and sftp, and I've never had any problem until now. I cannot connect to an Ubuntu 9.10 machine:

        user@myclient$ ssh -i .ssh/Ganymede_key [email protected]
        [...]
        debug1: Host 'ganymede.server.com' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:14
        debug2: bits set: 494/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: .ssh/Ganymede_key (0xb96a0ef8)
        debug2: key: .ssh/Ganymede_key ((nil))
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: .ssh/Ganymede_key
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Trying private key: .ssh/Ganymede_key
        debug1: read PEM private key done: type RSA
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1

    Then it falls back to password authentication. If I disable password authentication on the remote machine, my connection attempt just fails with a "Permission denied (publickey)." state. Same thing for sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm with a Filezilla sftp session instead:

        12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key"
        12:08:00 Trace: Offer of public key accepted, trying to authenticate using it.
        12:08:01 Trace: Access granted
        12:08:01 Trace: Opened channel for session
        12:08:01 Trace: Started a shell/command
        12:08:01 Status: Connected to ganymede.server.com
        12:08:02 Trace: CSftpControlSocket::ConnectParseResponse()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Retrieving directory listing...
        12:08:02 Trace: CSftpControlSocket::SendNextCommand()
        12:08:02 Trace: CSftpControlSocket::ChangeDirSend()
        12:08:02 Command: pwd
        12:08:02 Response: Current directory is: "/root"
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0)
        12:08:02 Trace: CSftpControlSocket::ListSubcommandResult()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Directory listing successful

    Any thoughts? M
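
    Since the CLI client and Filezilla are pointed at different key files, one quick check is whether they really hold the same key, and what sshd itself says when it refuses the offer. A diagnostic sketch (the log path is the Ubuntu default; the point is just to confirm the public key installed on the server matches the key the CLI client offers):

        # On the client: print the public key that pairs with the CLI private key
        ssh-keygen -y -f ~/.ssh/Ganymede_key
        # On the server: is that exact line present for the account you log in as?
        cat ~/.ssh/authorized_keys
        # And watch sshd's own explanation while retrying the login
        tail -f /var/log/auth.log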

  • Upstart Script on Centos 6

    - by MarcusMaximus
    I'm trying to create an upstart script to run a python script on startup. In theory it looks simple enough, but I just can't seem to get it to work. I'm using a skeleton script I found here and altered:

        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
            exec su nonrootuser -c "python /usr/local/scripts/script.py"
        end script

    The test script I want it to run is currently a simple python script that runs without any issue when run from a terminal:

        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range (10000):
                message = "shotgunUpstartTest " , i , time.asctime() , " - Username: " , os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()

    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The upstart script /etc/init.d/pythonupstart.conf has permissions 755.
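
    Two things stand out and are easy to test (both are assumptions about this setup, not certainties): on CentOS 6, Upstart only reads job files from /etc/init, not /etc/init.d, and "expect fork" can confuse Upstart if su plus python end up forking a different number of times than it expects. A quick way to try the job from its expected location:

        # Upstart on CentOS 6 looks in /etc/init/*.conf
        cp /etc/init.d/pythonupstart.conf /etc/init/pythonupstart.conf
        initctl reload-configuration
        start pythonupstart
        status pythonupstart          # if the tracked PID looks wrong, try removing "expect fork"
        tail /var/log/messages        # Upstart reports job state changes via syslog here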

  • MySQL Not Turning On

    - by Shalin Shah
    I have an Amazon EC2 instance running the Amazon Linux AMI, and it's a micro instance. I wanted to install Django onto my server, so I entered these commands:

        wget http://www.mlsite.net/blog/wp-content/uploads/2008/11/go
        wget http://www.mlsite.net/blog/wp-content/uploads/2008/11/django.conf
        chmod 744 go
        ./go

    After I was done, I ran sudo service httpd restart and sudo service mysqld restart, and this is what came up for mysqld:

        Stopping mysqld: [ OK ]
        MySQL Daemon failed to start.
        Starting mysqld: [FAILED]

    So I deleted the django files /usr/local/python2.6.8/site-packages/django_registration.egg and I tried finding the error. I found out that in my /etc/my.cnf the socket is set to socket=/var/lock/subsys/mysql.sock, so I went to /var/lock/subsys/ and there was no mysql.sock. I tried creating one using vim but it still didn't work. Then I checked the error log and it said:

        Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    So I am pretty much lost right now. I know it has something to do with mysql.sock. If you might know a reason why this happened, could you please let me know? I have a wordpress site on my server, so I kind of need MySQL to work. Thanks!
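
    The error message and my.cnf quoted above point at two different socket paths, so one thing worth checking is that the server and the clients agree on a single, writable location. A diagnostic sketch (the paths are the ones mentioned in the question plus common Amazon Linux defaults, so treat them as assumptions):

        # Which socket paths are configured, and what does mysqld itself complain about?
        grep -n socket /etc/my.cnf
        sudo tail -n 50 /var/log/mysqld.log
        # The socket file is created by mysqld on startup; the directory just has to
        # exist and be writable by the mysql user (creating the .sock by hand won't help)
        sudo mkdir -p /var/run/mysqld
        sudo chown mysql:mysql /var/run/mysqld
        sudo service mysqld start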

  • Mapped network drive missing from My Computer and Explorer

    - by matt wilkie
    On a Windows XP Pro SP3 machine, one network drive refuses to show up in My Computer or Explorer. The missing drive letter is G:, if that matters. Other mappings work fine, and other profiles on the same machine have no problem mapping G:. I can access G: just fine by typing it into the address bar or in a CMD shell. I've used TweakUI to toggle hide/show G: with no difference; TweakUI says G: should be visible. I've logged off and on between toggles to make sure the settings take effect. I've looked at the reg key [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer] and made sure it's zero'd. [insert ref link here] We've limped along with this broken setup for some time, just working around it, but some applications do not allow typing in a path when choosing a place to save files, and it's reached the point where it's intolerable. So, does anyone have any idea why XP won't show this drive letter, or how to fix it?
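
    For what it's worth, the value under that Explorer policies key that hides drive letters is NoDrives (a bitmask in which bit 6 corresponds to G:), and it can live under HKCU or HKLM. A quick way to see what both locations actually contain (this is only a check, not a fix):

        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives
        reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives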

  • Gigabit LAN not working on ASUS M2N-MX

    - by chmod
    Today I replaced my Fast Ethernet switch with a newly bought gigabit switch (DGS-1008A). All computers in my house report a connection speed of 1 Gbps except for one. The computer that is not working is an ASUS M2N-MX, which has an onboard gigabit NIC; see the ASUS page for confirmation: http://www.asus.com/Motherboards/AMD_AM2/M2NMX/ Here is some info on the machine:

        OS: Windows 7 Ultimate SP1 64bit
        BIOS version: 1004 (latest)
        Driver: installed via Windows Update (latest from Windows Update)
        Windows Update: fully updated

    The machine was reformatted 3 days ago, so it's pretty clean: no junk, no viruses, etc. Cable: Amp CAT5E, 5 meters. In Device Manager, the name of the NIC is "NVIDIA nForce 10/100/1000 Mbps Ethernet". What I have tried:

    - I tried to install the driver provided on the ASUS website, but there isn't one for Windows 7 64 or Vista 64.
    - I tried to install the latest nForce340/6100 package, downloaded from the NVIDIA website. However, the LAN driver refuses to install; it complains that I already have the best driver installed.
    - I looked in the properties, Advanced tab, Speed/Duplex setting, in an attempt to force it to run at 1000 Mbps, but there is no 1000 Mbps choice, only 10 and 100 Mbps.
    - I changed the CAT5E cable (used one from another computer that is running gigabit without problems).

    Does anyone have this issue or know how to solve it? Thanks

  • Using Windows 8 as a webserver

    - by Jason
    I have a few hobby websites that I currently host on CentOS 6: Apache, mail serving, PHP, MySQL, nothing special. In the past I used Windows XP to do this same task, for years, and I was OK. I switched to Linux and for the last few years it has been such a pain: updates break things, and certain apps only support certain distros unless you compile from source. It prevents me from working on my hobby sites more because I am always fixing something. With Windows I locked it down, I ran a hardware firewall and packet analyser, kept up on updates and A/V, and never had a problem. I don't allow RDC from outside the local LAN, no FTP open, and I run OpenSSH on an obscure port. I am considering switching to Windows 8 (since it is a cheaper license now than Windows 7) and running Apache, hMailServer, PHP, and MySQL, just like my CentOS install. My questions: I am not familiar with Windows 8; can the above be done like on XP, with no new security restrictions or the OS preventing it? The machine is an Athlon 64-bit X2 with 32GB of RAM; will Windows 8 see all of the RAM? Technically the machine came with Windows 7, and there is a serial number on it, but I am sure I wiped away the Windows 7 recovery partition when I switched to Linux...

  • nginx redirect proxy

    - by andrew
    I have a web app running on an nginx server on local IP 192.168.0.30:80. I have this in my /etc/hosts: 127.0.0.1 w.myapp.in. If someone accesses my app using a "w" subdomain, it shows a WebDAV interface, otherwise it runs normally (for example, if someone calls http://myapp.in it goes into the app, and http://w.myapp.in goes into the WebDAV interface; this is done within the app, nginx has nothing to do with it). Because I don't have DNS or anything like that, users must access the app by IP. A problem appears if someone wants to access the WebDAV interface, because you cannot reach the app by a subdomain (unless you write a line in your local hosts file, which is not a solution). A possible solution: if it's possible to set up the nginx server so that if someone calls http://192.168.0.30 (on port 80) it goes normally into the app, but if a user tries to access, say, http://192.168.0.30:81 (another defined port), it redirects internally to w.myapp.in and the app sees the subdomain. Given the app, can this be done? If yes, what should I put in the nginx config file? And if you guys think of a better solution, I'm open to any.
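
    The usual way to express the "port 81 means the w subdomain" idea is a second server block that proxies to the existing one while rewriting the Host header; whether it works here depends on the app only looking at Host to pick the WebDAV interface, which is an assumption. A sketch, written as a drop-in config file:

        cat > /etc/nginx/conf.d/webdav-on-81.conf <<'EOF'
        server {
            listen 81;
            location / {
                proxy_pass http://127.0.0.1:80;
                proxy_set_header Host w.myapp.in;   # the app sees the "w" subdomain
            }
        }
        EOF
        nginx -s reload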

  • How to create a static IP on Windows Server 2008 R2 so I can access the server remotely

    - by Aesir
    I have just purchased an HP ProLiant N40L which I intend to use as a NAS, a learning tool, and just in general something to mess around with. As a student, via the Microsoft DreamSpark program I can get a free copy of Windows Server 2008 R2, which I am using as the OS. So that I can remote to the box from outside of my local network, and so that I can stream media from it to my PS3, I have read that I need to give the server a static IP and use port forwarding to forward to this IP so I can remote in. Is this correct? I am not really sure how to do this, or whether I need to make these changes in the router configuration, in the OS, or both. I am a novice when it comes to networking, but most resources for Windows Server 2008 R2 seem to assume a fair amount of experience already. I realise that using this particular OS may seem like overkill for what I currently wish to do with it (stream content to other devices and back up files), but as I can get a copy for free it seems sensible.

    Edit: From reading the answers posted, I feel I should give more information. I have now tried to add a static IP address using my router configuration settings. I used the getmac command to get the MAC address of the server. My ISP is Virgin Media, and I have gone to the LAN IP section and added an IP address to the DHCP Reservation Lease Info. I can now use Remote Desktop Connection internally to remote to the server (so I am assuming assigning this IP has worked). How do I configure this on the OS as well? I am also unsure how I would remote to this machine from outside of my local network.
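
    On the OS side, the address the router now reserves can also be set statically on the server's network adapter; the values below are illustrative and should match whatever the router hands out (adapter name, address, mask, gateway, DNS). Remote access from outside then comes down to forwarding TCP 3389 on the router to that address and enabling Remote Desktop on the server:

        netsh interface ip set address name="Local Area Connection" static 192.168.0.50 255.255.255.0 192.168.0.1 1
        netsh interface ip set dns name="Local Area Connection" static 192.168.0.1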

  • Intel SSD + Win 7: after crash, cannot repair, cannot reinstall

    - by Ori
    I have a Lenovo W520. After I bought it I took out the old HDD (no longer with me) and replaced it with an Intel SSD. It worked perfectly for a year or so; today my system froze and, after waiting for some time, I did a hard reset. It wasn't able to boot anymore at all. I never see any messages from Windows; it only loads the Intel boot utility that suggests picking one of 3 devices to boot, and my drive is listed there but nothing happens. I don't have the recovery tools from Lenovo since I moved to another country. I got a Win 7 CD from a friend (it came with his laptop), and if the BIOS is set to AHCI it doesn't see my SSD; in compatible mode it sees it, but format is not available and partition creation gives me a 8007045 error. I tried diskpart: in compatible mode it sees my disk but doesn't do recover or clean all, and the Win 7 disc tools don't do anything if I try a boot fix. I am OK with erasing it, but I seem not to be able to. I just need the machine to work ASAP; all my files are on external drives so I don't care about formatting. Please help! I've been given a very old machine by a friend so I am able to browse the internet... it is under XP...

  • Backup Picasa 'people' tags data

    - by pelms
    OK, so I've spent a fair amount of time putting names to faces in Picasa 3.5, but in a few days (hopefully) my copy of Windows 7 should arrive and I'll need to reinstall Windows. So, does anyone know what I need to back up so that I don't have to re-enter all those name tags? N.B. I'm on Windows 7 RC and know that I don't have to do a clean reinstall, but I would prefer to.

    Outcome: I clean installed Windows 7 and downloaded and installed Picasa. Unfortunately, the download link on the UK Picasa homepage still pointed to Picasa 3.0 (rather than 3.5), which doesn't have face recognition. This scanned my photo folders and overwrote the picasa.ini files along with the people information :¬( Fortunately I'd backed up the photos before installing Win 7, so after uninstalling Picasa 3.0 (along with its database), restoring the photos from backup, and installing Picasa 3.5, I finally got my face names back.

    Extra: Google has now posted advice on how to migrate to Windows 7 and keep your Picasa database, meaning that it will not need to rescan your photos and will retain all information about them, including name tags. They have a method for upgrading and for a clean install of Win 7. Basically you need to back up:

        C:\Users\%username%\AppData\Local\Google\Picasa2
        C:\Users\%username%\AppData\Local\Google\Picasa2Albums

  • Setting Remote Desktop to allow IPv6 connections

    - by Garrett
    Setup: Basically I have 3 machines (2 virtual and 1 physical) that I would like to be able to RDP in to from outside my NAT (a router). The VMs are Windows 7 and Windows XP, both fully patched with Teredo installed and working, both running in VirtualBox (their host also has Teredo working, though I'm not sure if that matters). They both have bridged network adapters with promiscuous mode enabled. The physical machine is Windows 7 fully patched with an HFS server running on it and a dynamic DNS set up for my public IPv4 address and port forwarded. It also has Teredo installed and working. Symptoms: According to http://test-ipv6.com/ all 3 have public IPv6 addresses, and they can all connect to http://ipv6.google.com/. I can ping the XP VM from the host it's running on but I cannot ping it from any other machine. Also, I cannot ping either of the other machines from anywhere. I cannot connect to any of them over RDP from IPv6, however I can connect to all of them through IPv4. Any ideas what is going wrong?

  • Transferring files to a server IP and port

    - by Mason
    I need to transfer files from my local computer running Windows 7 to a server running Linux. I access the server with PuTTY through ssh at a specific IPv4 address and port number. I've attempted using the pscp command from my local computer but was denied access by the server: "Fatal: Network error: Connection refused".

        c:\>pscp test.csv userid@**IPv4_Addres***:Port# /path/destination_file_name

    Either the server blocks all pscp attempts from unauthorized users (most likely my laptop included) or I used the command incorrectly. If you have experience using this command: where exactly will the file get transferred to? I'm assuming that the destination path starts at my home directory on the server. Also, if you have any other alternative methods of transferring the files, let me know.

    Update 1: I have also tried using WinSCP, however I got permission denied for that as well; it looks like the server will not let me upload or save files.

    Solved: I had a complete lapse of memory and forgot about sudo (spent too much time with scripts the last 2 months), so I was able to change the permissions to allow external editing. Thanks for all the help guys!
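
    For reference, pscp takes the remote port as a -P flag rather than as part of the address, and the remote path goes after the colon (a relative path is resolved against the login user's home directory on the server). A sketch with placeholder values:

        pscp -P 2222 test.csv [email protected]:uploads/test.csv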

  • Why can't I get out of display mirror mode?

    - by Roy Smith
    I've been running Ubuntu (10.04.1 LTS, 64-bit) for a while and just replaced my hardware with a faster machine with an ATI Radeon HD 5700 video card. I've got twin 1920 x 1080 displays. I downloaded the latest driver (ati-driver-installer-10-9-x86.x86_64.run) from the ATI web site and installed that. I've gone through a few rounds of playing with /etc/X11/xorg.conf, and can't get things right. At the moment, it's in display mirroring mode, and I can't figure out how to get it out of mirror mode. If I run Monitor Preferences, there's a "Same image in all monitors" checkbox. If I uncheck that, the little preview window switches to show two monitors. When I click Apply, it asks me to log out and log back in again. When I do that, I'm right back to mirrored mode. What's really weird is that I'm currently running a copy of xorg.conf from a coworker's machine. He's got identical hardware, and his display works fine. So, I'm inclined to think there's something else going on other than the conf file. Any ideas what might be wrong?

  • Max. Temp. on Intel Burn Test for Stock Dell Precision T3500

    - by HK1
    I'm troubleshooting an issue on a Dell Precision T3500. As part of my troubleshooting I've decided to try running a stress test using the Intel Burn Test software. This machine is a stock configuration with 12GB of RAM and a Xeon W3670 processor (nothing overclocked). When I run IBT using the standard mode, SpeedFan reports a processor temperature in excess of 80C. I've seen numbers as high as 90C, but even at that temperature the machine does not become unstable or crash. However, it seems way too high. This processor has a TCase of 67.9C according to Intel's website, and I'm guessing that means I'm in the danger zone any time I go over that temperature. I've checked the cooling system and everything looks fine. I even took out the heat sink and reinstalled it with new thermal compound; this did not appear to make the problem better or worse. Is there a discrepancy somewhere in the way temperatures are measured or displayed? I've also tried using HWMonitor from CPUID and it reports the same temperatures. Should I just let the standard test run and disregard the temperature outputs?

  • Is there a clean way to obtain exclusive access to a physical partition under Windows?

    - by zneak
    Hey guys, I'm trying, under Windows 7, to run a virtual machine with VMWare Player from an OS installed on a physical partition. However, when I boot the virtual machine, VMWare Player says that it couldn't access the physical drive for writing. This seems to be a generally acknowledged problem in the VMWare community, as Windows Vista introduced a compelling new security feature that makes it impossible to write to a raw drive without obtaining exclusive access to it first. I have googled the issue and found a few workarounds. However, the clean ones seem to only work on whole physical disks, and not on partitions. So I would be left with the dirty solution. In short, it meddles with the MBR to erase any trace of the partitions to use, makes Windows forget about them, then restores the MBR so we can launch the VM. I'm not sure I want to do that. Is there a way to let VMWare acquire exclusive access to the partition without requiring me to nuke it away? What I'd be looking for, I suppose, is a way to put just partitions offline instead of whole physical drives.

  • fglrx-legacy-driver not seeing Radeon HD 4650 AGP

    - by Rocket Hazmat
    I am running Debian Squeeze on an old Dell Dimension 8300 box. It has an AGP Radeon HD 4650 card. I use this machine to mine bitcoins, and today I noticed that the machine had rebooted! My precious uptime! Anyway, my miner wouldn't start, so I figured might as well update my graphics driver, maybe that would fix the issue. I went to amd.com and downloaded the newest driver (12.6 legacy), but after installing it, aticonfig gave an error: aticonfig: No supported adapters detected I uninstalled the driver and figured I'd try to install it from apt. AMD has dropped support for the HD 4000 series in fglrx, forcing me to use fglrx-legacy-driver (currently only in experimental). In order to install this, I had to update libc6 (and some other important packages, like gcc), I had to use their wheezy versions. I finally got fglrx-legacy-driver installed, but I still got: aticonfig: No supported adapters detected Why isn't the driver finding my video card? I have a hunch it has something to do with the fact that it's an AGP video card. Here is the output of lspci -v (why does it say Kernel driver in use: fglrx_pci?): 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV730 Pro AGP [Radeon HD 4600 Series] (prog-if 00 [VGA controller]) Subsystem: Advanced Micro Devices [AMD] nee ATI Device 0028 Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 16 Memory at e0000000 (32-bit, prefetchable) [size=256M] I/O ports at de00 [size=256] Memory at fe9f0000 (32-bit, non-prefetchable) [size=64K] Expansion ROM at fea00000 [disabled] [size=128K] Capabilities: [50] Power Management version 3 Capabilities: [58] AGP version 3.0 Kernel driver in use: fglrx_pci EDIT: fglrx 12.4 seems to work. Thing is, since I am on kernel 3.2, I need to apply this patch to common/lib/modules/fglrx/build_mod/firegl_public.c. I thought ATI dropped support for the 4xxx series after 12.4. Why doesn't 12.6 legacy work?

  • All network devices freezing when Airport Extreme Base Station is connected. Any ideas?

    - by Jon
    I've been troubleshooting this issue for a while, and through a series of events have it narrowed down to my AirPort Extreme base station. I like this router, since I'm able to connect to IPv6 sites without any insane configuration (my alternate router is too old and doesn't support v6). My question is: has anyone else had this issue, and if so, how is it resolved? If not, can you recommend a good IPv6 router? Here is how I came to the conclusion that it is the router. Devices: Xbox 360, HTC Incredible, home-built machine running FreeBSD, home-built machine running Ubuntu 10.04. 1.) Noticed freezing on the Ubuntu box. 2.) Noticed freezing on the Xbox 360. 3.) Noticed freezing on the HTC Incredible (only when connected to my network wirelessly). The above all happened at random times throughout the past few weeks. Over the last few days, I was playing Xbox and noticed that the Xbox and Ubuntu machines both froze. I picked up my phone, and it was also frozen. I reset all devices, power-cycled my router, and all was fine again. About two hours later, it happened again (I was playing Forza III; the Xbox froze; I went to the Ubuntu box and it was frozen; unfortunately, the HTC phone was not connected wirelessly, and the FreeBSD box was turned off). I can't even begin to imagine what a router could be doing to freeze devices with such differing hardware/software/OS, and I feel absurd for coming to this conclusion, but I have nothing else. I hooked up my archaic Netgear router and have had no problems since. :(

  • TEMP environment variable occasionally set incorrectly

    - by Roger Lipscombe
    Occasionally, I find my TEMP and TMP environment variables set to C:\Windows\TEMP. They should be set to %USERPROFILE%\AppData\Local\Temp, and are configured correctly in System Properties. This manifests itself as error messages like the following: ---> System.InvalidOperationException: Unable to generate a temporary class (result=1). error CS2001: Source file 'C:\Windows\TEMP\gb_pz65v.0.cs' could not be found error CS2008: No inputs specified ...which occurs in various .NET applications (in particular Visual Studio 2010 or SQL Server Management Studio). Alternatively, SQL Server Management Studio will report: Value cannot be null. Parameter name: viewInfo (Microsoft.SqlServer.Management.SqlStudio.Explorer) If I run PowerShell elevated, then $env:TEMP is set correctly. If I run PowerShell non-elevated, then it's not. I believe that it should be set correctly in both cases. If not, it's the wrong way round. The same is true for CMD.EXE. Rebooting fixes it, temporarily, until something breaks it again. Presumably something loaded into Explorer.exe is messing with its environment variables, but what? The values in the registry are correct, even while this is happening: HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment has TEMP = %SYSTEMROOT%\Temp HKCU\Environment has TEMP = %USERPROFILE%\AppData\Local\Temp By setting a breakpoint on shell32!RegenerateUserEnvironment, I'm able to trap it when it happens, but I still don't know why explorer.exe is reading the wrong environment variables. I can reproduce it consistently by broadcasting a WM_SETTINGCHANGE message (I wrote a one-line C++ program to do this). Watching the activity in Process Monitor shows that explorer.exe doesn't even look at HKCU\Environment. What is going on?

  • Replacing HD in a MacOS 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have 2 Mac OS X servers available for file sharing (quad Xeons, 2GB RAM, both 10.6.8). No. 1 is an Open Directory master with 50+ user accounts; No. 2 has only 2 local accounts (/local/Default) and looks to the OD master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have 3 internal HDs: the boot volume with only the server OS and minimal apps, a 'DataShare' HD (500GB), and a backup drive (500GB). After upgrading the DataShare HD in server No. 2 from a small internal drive (500GB) to a larger-capacity (2TB) drive, users are unable to connect to shares on server No. 2; they get the error "There are no shares available or you are not allowed to access them on the server". The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (it keeps all ownership data, UIDs, permissions, last edit date and time). Everything booted up OK, with no indication there was any issue (paths to the sharepoint look good). Notes during troubleshooting:

    - Server 1 is operating perfectly; all users can access shares and authenticate etc.
    - I've checked that the SACL (Server Access Control List) settings are OK.
    - On Server 2, in the Server Admin app, I can see all the shares listed OK. The paths seem valid, and I can disable/re-enable the shares with no errors.
    - On Server 2, Workgroup Manager lists all the accounts from the OD master in the LDAP directory view. All seems fine from here.

    Basically everything looks normal, but no file shares on Server 2 can be accessed by regular users.

  • DPM server 2010 Attach agent error : administrator privileges missing?

    - by Michael
    I'm hoping you will be able to help me out with this little problem I'm having. I installed DPM 2010 in our test environment to test backups of Exchange 2010 servers. The environment includes 1 DC, 2 Exchange Server 2010 machines, and 1 DPM 2010 server, all running as Microsoft Server 2008 R2 virtual machines on Hyper-V hosts. The problem goes like this:

    1. I tried to install the agents from the DPM server GUI, which failed saying I didn't have the correct permissions.
    2. So then I tried the manual installation using the commands from the Microsoft site: http://technet.microsoft.com/en-us/library/bb870935.aspx
    3. The agent installation worked, but when I get to attaching the agents to the DPM server it still gives me the error saying that the specified account does not have administrator rights.
    4. I tried the domain admin, users who are domain admin + local admin, and single local admins.
    5. I have turned off the Windows firewall and made sure all the services are running.

    So now I'm out of ideas and really need help; the agent attach to the DPM server is the last thing holding me back from deploying everything to the production site. Any help would be really appreciated.
