Search Results

Search found 5907 results on 237 pages for 'dist upgrade'.


  • Enable re-attached mouse/keyboard via ssh?!

    - by aidan
    I had Ubuntu 9.10 x64 Desktop installed on a nettop I have (that I normally run headless), and yesterday I decided to take the plunge and update to 10.04. So, I plugged in a screen and USB mouse/keyboard, booted up and set to work. It was 1am, and it was telling me it had 3hrs left to install all the new packages, so I unplugged the screen and USB mouse/keyboard, left the box running, and went to bed. This evening, I plugged it all back in again to check progress. It's asking if I want to remove obsolete packages. I do, but neither the mouse nor keyboard work! I can access the box via SSH like I normally do; is there any way I can re-enable the keyboard from there? I'm reluctant to restart the box (via ssh) mid-way through such a complicated upgrade. Thanks for any help!

    lsusb (with wireless mouse/keyboard receiver unplugged):

        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    lsusb (with wireless mouse/keyboard receiver attached):

        Bus 004 Device 005: ID 045e:005f Microsoft Corp. Wireless MultiMedia Keyboard
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
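
    A minimal thing to try over SSH, assuming the usbhid driver is a loadable module (as on stock Ubuntu kernels); this reloads the HID layer so the receiver gets re-probed, but it is a sketch, not a confirmed fix:

        # reload the USB HID driver; ssh is unaffected since it does not use USB
        sudo modprobe -r usbhid && sudo modprobe usbhid

        # if usbhid is built in, unbind/rebind the interface in sysfs instead;
        # "4-1:1.0" is a hypothetical address - list the directory to find yours
        ls /sys/bus/usb/drivers/usbhid/
        echo -n '4-1:1.0' | sudo tee /sys/bus/usb/drivers/usbhid/unbind
        echo -n '4-1:1.0' | sudo tee /sys/bus/usb/drivers/usbhid/bind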


  • Ubuntu 10.04 RGBA (transparent theme) bug

    - by Nrew
    I tried to follow this tutorial from ghacks.net, but I ended up with a bug. Every time I try to change the desktop background or the theme, it opens up lots of folders and then closes them again, so I cannot do anything when I try to change the background or reset the theme to the default. Here is the tutorial. And I can't even shut down my machine now, please help.

    EDIT: I tried executing these commands in the terminal:

        sudo add-apt-repository ppa:erik-b-andersen/rgba-gtk
        sudo apt-get update && sudo apt-get upgrade
        sudo apt-get install gnome-color-chooser gtk2-module-rgba
        sudo apt-get install murrine-themes

    It installed GNOME Color Chooser, and it works: the windows became transparent. But I can't change back to the default theme and I can't set a background. Here's the screenshot from when I try to change something in the preferences. It's just okay if I can't change the theme, but it won't let me change the background either. Or maybe the background is changed, but it's transparent just like the other windows; you can see that in the taskbar (or whatever it's called in Ubuntu) - it's full. What can I do to at least make the background visible?
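
    A possible way out while the appearance dialog is broken, assuming GNOME 2 on 10.04: set the theme and wallpaper gconf keys directly from a terminal. The theme name and wallpaper path below are the stock 10.04 defaults; treat them as examples.

        # reset GTK and window-border themes to the Lucid default
        gconftool-2 --type string --set /desktop/gnome/interface/gtk_theme "Ambiance"
        gconftool-2 --type string --set /apps/metacity/general/theme "Ambiance"
        # point the background at the default wallpaper
        gconftool-2 --type string --set /desktop/gnome/background/picture_filename \
            "/usr/share/backgrounds/warty-final-ubuntu.png"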


  • Which is the fastest way to move 1 petabyte from one storage to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something that I should solve by myself, but as you will see it's a bit difficult. A small description:

    Now:
    - Storage = 1 PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10 GigE network (Lustre 1.6)
    - 100 compute nodes with 2x InfiniBand
    - 1 InfiniBand switch with 36 ports

    After:
    - Storage = previous storage + another 1 PB using DDN S2A 990 or LSI E5400 (still to decide) (Lustre 2.0)
    - 8 OSS, 10 GigE network
    - 100 compute nodes with 2x InfiniBand

    Previous experience: transferred 120 TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, big problem here: using big mathematical equations, I conclude that we are going to need 1 month to transfer the data from one side to the other. During this time the researchers will have to step back, and I'm personally not happy with this. I mention the InfiniBand connections because I think there may be a chance to use 18 compute nodes (18 * 2 IB = 36 ports) to transfer the data from one storage to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but assuming it doesn't just burn up, it should be faster than using 10GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well, so there is no need to go through 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks.

    Note 1: Zoredache, we can divide it into two blocks: (A) 600 TB and (B) 400 TB. The idea is to move (A) to the new storage, which is Lustre 2.0 formatted, then reformat where (A) was with Lustre 2.0, move (B) to this Lustre 2.0 block, and extend it with the space where (B) was. This way we will end up with (A) and (B) on separate filesystems, with 1 PB each.
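
    A rough sketch of the node-sharding idea, not the author's method: give each IB-connected node a disjoint set of top-level directories and run the same tar pipe in parallel, one stream per assignment. Hostnames and paths are assumptions.

        #!/bin/bash
        # fan the copy out over 18 compute nodes, round-robin by subtree
        # (each node may run several streams; cap with xargs -P if needed)
        NODES=($(printf 'node%02d ' $(seq 1 18)))   # hypothetical hostnames
        i=0
        for dir in /old/*/; do
            node=${NODES[$((i % ${#NODES[@]}))]}
            name=$(basename "$dir")
            ssh "$node" "tar -C /old --record-size 2048 -b 2048 -cf - '$name' \
                | tar -C /new --record-size 2048 -b 2048 -xf -" &
            i=$((i + 1))
        done
        wait   # block until every per-directory stream finishes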


  • Libvirt / QEMU Machine Fails and Refuses Restart Because of Memory Allocation Errors

    - by Elmar Weber
    I'm having a problem with libvirt. On a system restart all virtual machines (VMs) are started without a problem and keep running. Then at some point in time a set of machines shuts down, according to their logs. When I try to restart a machine, I get an error that the memory allocation failed, although more than enough memory is free:

        server ~ # free
                     total       used       free     shared    buffers     cached
        Mem:      16176648   16025476     151172          0     285432     950300
        -/+ buffers/cache:   14789744    1386904
        Swap:            0          0          0

        server ~ # virsh start zimbra
        error: Failed to start domain zimbra
        error: Unable to read from monitor: Connection reset by peer

        server ~ # tail -n 4 /var/log/libvirt/qemu/zimbra.log
        LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 3072 -smp 2,sockets=2,cores=1,threads=1 -name zimbra -uuid d05ddb7a-83c4-a77b-d8bc-a322648520cf -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/zimbra.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:a9:ad,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 192.168.1.2:25 -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
        char device redirected to /dev/pts/2
        Failed to allocate 3221225472 B: Cannot allocate memory
        2012-07-06 08:42:56.076+0000: shutting down

        server ~ # uname -a
        Linux server 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The system is an Ubuntu 12.04 server. The problem seems to have started since the last restart, which was due to a number of package upgrades and a kernel upgrade. I tried booting with the previous kernel; the problem persists. I was not able to pinpoint an exact event when the machines fail, but they do it at nearly the same time. The last time, a duplicity job was running; this was not always the case, however. Any suggestions on how to debug this? Best regards, elm
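
    One observation worth checking first: in the free output above, the "-/+ buffers/cache" line shows only about 1.4 GB actually available (the 950 MB of cache plus 285 MB of buffers is all there is to reclaim), while qemu is asking for 3221225472 B, i.e. 3 GB, with no swap to fall back on. So the allocation failure is at least plausible, and the real question is what grew to hold the other ~14 GB. A hedged diagnostic sketch (run as root, as in the prompts above):

        # largest resident processes - other qemu guests ballooning up?
        ps aux --sort=-rss | head -n 10
        # kernel-side breakdown (slab, page tables, committed memory)
        grep -E 'Slab|PageTables|Committed_AS|AnonPages' /proc/meminfo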


  • Exchange 2010 DAG + VMware HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (a two-machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMware virtualization environment (three Dell R710s with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs combined with a virtualization high-availability solution (see links below). I would like to utilize the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMware HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover, or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VMs, will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs iSCSI, so I would prefer to continue utilizing that. Thanks for any input!

    http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html (take a look at the second image under "Not Supported")

    http://technet.microsoft.com/en-us/library/aa996719.aspx - "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."


  • Computer turns itself on after any off mode

    - by Patrick
    Whenever I shut down my computer, or put it in sleep/hibernate, it turns back on after two seconds. It doesn't POST, it just powers on and then idles. To actually turn it off, I switch off the PSU. The problem is, now, whenever I switch the PSU on and try to boot, it doesn't always turn on. It takes a good amount of flicking the PSU switch on and off before the motherboard lights up. So far I've determined the things it's not:

    - It's not caused by the mouse or network waking up the computer. I've been able to go into hibernate for the past year, and all "wake on X" settings in the BIOS are disabled.
    - It's not a scheduled task waking up the computer at a given hour; it occurs every single time.
    - It's not due to an upgrade or new installation, since I haven't done either in a very long time.

    I'm sure it's a hardware issue. So I'd like to know: is my PSU dead, or the motherboard? The PSU is an Antec Earthwatts 600W, the motherboard is an Asus P5Q-E, both one year old.


  • Trying to run a CodeIgniter app on custom PHP

    - by hamstar
    I have a CodeIgniter app that I deployed to a server with PHP 5.2, and my dev box has 5.3, so some stuff doesn't work anymore. I didn't want to upgrade the server's PHP and risk the other app on the server having issues. Anyway, I compiled a custom PHP and added the following to a single .conf file, /etc/httpd/conf.d/zcid.conf, alongside all the other conf files:

        <VirtualHost *:80>
            DocumentRoot /var/www/cid/app
            ServerName sub.example.co.nz
        </VirtualHost>

        <Directory "/var/www/cid/app">
            authtype Basic
            authname "oh dear how did this get here i am no good with computer"
            authuserfile /path/to/auth
            require valid-user
            RewriteEngine on
            RewriteCond $1 !^(index\.php|robots\.txt|createEvent\.php|/cgi-bin)
            RewriteRule ^(.*)$ /index.php/$1 [L]
            AddHandler custom-php .php
            Action custom-php /cgi-bin/php53.cgi
        </Directory>

    In /var/www/cid/app I have the cgi-bin folder and the php53.cgi that I copied from /usr/local/php53/bin/php-cgi. But now when I navigate to the subdomain it says:

        The requested URL /cgi-bin/php53.cgi/index.php/ was not found on this server.

    And if I try to browse to /cgi-bin it says (as it is supposed to?):

        You don't have permission to access /cgi-bin/ on this server.

    Quite confused now. Anyone know what to do here? Thanks :)
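
    A guess at the missing piece, under the assumption that nothing else maps /cgi-bin/: the Action URL is never tied to the directory that actually holds php53.cgi, and that directory is not allowed to execute CGI. A minimal sketch to add to the same conf file (paths follow the layout described above):

        # map the /cgi-bin/ URL prefix onto the real wrapper directory
        ScriptAlias /cgi-bin/ /var/www/cid/app/cgi-bin/

        <Directory "/var/www/cid/app/cgi-bin">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    The wrapper itself also has to be executable by the Apache user (e.g. chmod 755 php53.cgi).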


  • Guest can't access host Windows network share

    - by Asteroza
    Hi folks, I've recently run into a strange problem after upgrading to VMware Player 3. Certain virtual machines (currently an XP and a Vista VM) seem to have lost the ability to access the host's (XP) network shared folders (SMB). Both VMs use bridged networking, and their firewalls are up. The host firewall is up. Host and guests use DHCP, and all OSes are workgroup-connected. About the Vista VM I am not completely sure, but the XP VM did have access to the host's network shared folders after the Player upgrade. Then today it wouldn't work: network path can't be found. Now here's the weird part. The host's network shared folders can be accessed properly by other PCs on the network (and as far as I know, no settings have been changed). The host is pingable from the guests, and name resolution works. The guests can access network shares on other PCs in the network, and access the internet. My Network Places shows the host PC, but double-clicking on it takes a long time before it finally times out with an error. Doing a Wireshark packet capture, the guest is sending out the protocol negotiation and the host is sending a response, but after that the guest behaves like it didn't receive anything and is doing TCP retransmissions. Anybody have any idea what could be wrong? Yes, I know I can drag and drop files or set up the special VMware shared folders, but I want to access the host just like any other network-accessible shared folder. It just seems really odd that any other computer works, just not between the guest and host.


  • Jenkins CI - Cannot allocate memory

    - by Programmieraffe
    I tested Jenkins CI successfully on Ubuntu 10.04 (under VMware Fusion) on my local computer. Now I want to install and use it on my virtual server at Hosteurope. The basic installation was no problem, but now I have problems with my build project. After pulling a Mercurial update from a repository, ant is invoked and throws the following error in my build project:

        Buildfile: /var/lib/jenkins/workspace/concrete5-seed-clean/build.xml
        [property] java.io.IOException: Cannot run program "/usr/bin/env": java.io.IOException: error=12, Cannot allocate memory

    There is a known problem with heap size on virtual servers at Hosteurope (http://faq.hosteurope.de/index.php?cpid=13918), so I tried to set the heap size manually:

        # for ant
        export ANT_OPTS="-Xms512m -Xmx512m"

        # jenkins: edited /etc/default/jenkins, added the line
        JAVA_ARGS="-Xms512m -Xmx512m"
        # restarted jenkins via /etc/init.d/jenkins restart

    After setting this for ant, the command "ant -diagnostics" runs through and does not cause an error, but the error still occurs when I try to build the project.

    Server details (http://www.hosteurope.de/produkt/Virtual-Server-Linux-L): Ubuntu 10.04 LTS, RAM: 1 GB / dynamic 2 GB.

    My questions:
    - Is 1 GB enough for Jenkins, or do I have to upgrade the server?
    - Is this error caused by ant or by Jenkins?

    Update: I got it running with the ant options -Xmx128m -Xms128m, but sometimes the error occurs again (this freaks me out, because I cannot reproduce it by now :/ ). Help much appreciated! Cheers, Matthias
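
    For context (a common cause, not a confirmed diagnosis): error=12 on "Cannot run program" typically happens when the JVM fork()s to launch /usr/bin/env and the kernel refuses to commit a second copy of the JVM's address space. On a swapless 1 GB VPS a 512 MB heap makes that refusal likely, which would also explain why -Xmx128m mostly works. Two hedged workarounds, if the virtualization layer permits them:

        # 1) relax overcommit so fork() from a large JVM can succeed
        sysctl vm.overcommit_memory=1     # add to /etc/sysctl.conf to persist

        # 2) or give the kernel headroom with a small swap file
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        chmod 600 /swapfile
        mkswap /swapfile && swapon /swapfile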


  • Update BIOS on Sun Fire X4150 server

    - by Massimo
    I have some Sun Fire X4150 servers with a very old BIOS release (1ADQW015), which seems to have some compatibility problems with VMware ESX Server 3.5 and Windows 2008 R2 virtual machines, so I want to update the BIOS on them. The problem: according to this page, if your servers run ELOM (mine do), you first need to update to the latest ELOM release, then to the interim transition release, then finally you can update to the latest one. OK, I'm willing to do that... but it looks like Sun (now Oracle) will happily let you download the latest firmware DVD (3.3.0), but it will not let you download the transition release (2.0) if you don't have a support contract. Well, I actually don't care at all about the servers' management controllers (we don't even use them), so upgrading from ELOM to ILOM is totally irrelevant to me; but I need to update the servers' BIOS. So my question is: can I update the servers' BIOS to the latest version without doing the full ELOM-to-ILOM migration, or will this not work (or even make the servers unusable)? Do BIOS versions and SP ones need to be matched, or can one be updated without bothering with the other?

    Bonus question: if this whole ELOM-to-ILOM thing actually is needed in order to update the BIOS, can that 2.0 CD-ROM be obtained without having a support contract with Sun/Oracle (which we are definitely not going to sign, that being quite old hardware)?

    Update: I tried upgrading only the BIOS on one of the servers, and it didn't boot anymore. So it really looks like a full firmware upgrade is needed, and the management controller and BIOS versions should be kept in sync. So... where can I find that *&!£%$% 2.0 CD-ROM? Or at least the transition firmware that can be found on it?


  • What is the best way to connect 3 switches with a router?

    - by Carlos Morales
    Hello everyone, I'm trying to rebuild the network at my work, and I was wondering what the best way is to connect three switches and a router. The router has 4 ports, so I thought I'd connect 2 switches to the router (each switch connected with 2 cables to the router) and then connect the third switch to one of the others with two cables. So it's like this: two cables from switch 1 to the router, two cables from switch 2 to the router, and two cables from switch 3 to switch 1 or 2. So my questions are:

    - Is it better to connect the router to each switch with one cable, or the more cables the better?
    - If I connect switch 3 to switch 1 or 2, is it better to connect it with one cable, or do you get better performance with more cables?

    If I'm wrong and there is a better or more efficient way to connect them, please let me know. The router is a Netgear RP114 (I'll upgrade it to a SonicWALL NSA 240), switch 1 is a Netgear GS748T, switch 2 is a Cisco Catalyst 2924-XL and switch 3 is a D-Link DGS-1024D. Thank you very much
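
    One caution worth sketching out: two plain parallel cables between two devices form a loop, so they only help if both ends either run spanning tree or bundle the ports into one logical link. Of the gear listed, the Catalyst 2924-XL can bundle ports with Fast EtherChannel (the pre-LACP syntax below is from memory, so verify it against the 2900XL documentation), the GS748T supports static LAGs through its web interface, and the unmanaged DGS-1024D can do neither, so it should get a single uplink. The RP114's built-in 4-port switch cannot bundle links either, so one cable per switch to the router is the safe choice.

        ! hypothetical 2924-XL config: bundle Fa0/1-2 into one channel
        interface FastEthernet0/1
         port group 1
        interface FastEthernet0/2
         port group 1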


  • Is it worth hiring a hacker to perform some penetration testing on my servers?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration for us. In the past, we've already commissioned two penetration tests from independent companies specialized in this area (Dionach and GSS). We've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information, and found almost nothing*... While it feels comfortable to think our system is perfectly secure (and it was surely comfortable to show those reports to our clients when they performed their due diligence work), I have a hard time believing that we've achieved a perfectly secure system, especially considering that we have no security specialist in our company (security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!). If hackers can break into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could break into our small business, right? Does someone have any experience in hiring an "ethical hacker"? How do you find one? How much would it cost?

    *The only recommendation they made was to upgrade our remote desktop protocols on two Windows servers, which they were able to access because we gave them the correct non-standard port and whitelisted their IP.


  • Weird PCI bug: lots of missed packets, or data comes in "bursts"

    - by Thomas O
    I have an ABIT KN9 motherboard. It has one PCI-e x16 slot, three PCI-e x4 slots and two legacy PCI. My problem is with the legacy PCI (which I shall just call "PCI"). I currently have an Nvidia GeForce 8600 GT (a low-end card) installed in the x16 slot and a TV card in PCI #1; the x4 slots are unused, as is PCI #2. I plan to upgrade the graphics card soon; the current card was spare. I sometimes install a USB expander in PCI #2, but it causes a lot of problems - see below. The problem occurs under Linux (Ubuntu 10.10, Linux 2.6.35-22-generic), but probably under all operating systems (I have not yet been able to test Windows, but I suspect it will do the same, as the problems occur on the BIOS/POST side too; e.g. when using a USB keyboard on the expander, the keyboard will not work at all). PCI has an enormous delay, and packets arrive in large chunks. For example, when using the USB expander, my USB mouse lags and jumps in large steps every second or so, while using the motherboard USB does not present this problem. My TV card will only do one or two frames per second, and the program (xawtv) usually times out and crashes. In dmesg, I'm getting messages like:

        bttv0: timeout: drop=74, irq=154/100476, risc=31f6256c, bits: VSYNC HSYNC OFLOW RISCI

    for my TV card, and similar timeout issues for my USB expander with a mouse. I received the motherboard, processor and RAM second-hand and have only just got around to building it, so I don't know if this problem has always existed, or if it's a result of my setup. If anyone has any hints or solutions it would be appreciated - this is kind of a show-stopper for me.
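
    A hedged thing to check, not a confirmed diagnosis: "bursty" legacy-PCI behaviour with bttv timeouts is sometimes caused by a too-small PCI latency timer on the card, which limits how long it can hold the bus per transaction. The slot address below is hypothetical; find yours with the first command.

        lspci | grep -i brooktree        # locate the bttv TV card, e.g. 05:07.0
        sudo lspci -v -s 05:07.0         # "latency" line shows the current timer
        sudo setpci -s 05:07.0 latency_timer=f8   # raise it (hex) and retest xawtv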


  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using the Xen hypervisor) for typical hosting workloads (mail, web, a couple of VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes.

    Since we're an all-Dell shop, I've done some research and found the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts to a single storage unit, and I'm wondering if there is any clear advantage of one over the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option). Both setups should let us implement live VM migration, since all hosts can access all the LUNs at the same time, together with some shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage).

    One difference I can see is that the DAS solution is actually limited to 4 hosts, while the iSCSI one should be able to grow to more hosts (adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (get rid of any local storage). On the other hand, I would have thought the SAS one to be cheaper, but it seems like an MD3200 actually costs a little less than an MD3200i (?).

    (Please note: I've used Dell gear in my examples since that's what we're looking for, but I assume the same goes for other vendors.) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.


  • IIS8 ASP.NET State Service remote connection failure

    - by maxisam
    Recently we upgraded our web server to Windows Server 2012 with IIS8. We have this issue when users try to connect to the ASP.NET state service on this web server remotely. It always pops up:

        Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.

    In IIS7 / 7.5 we use the same approach and it works fine: as long as the state service is running and the firewall is set properly, we don't have any problem. However, in IIS8 it doesn't work (we even turned off the firewall to test it). Thanks for helping.
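
    For reference, a sketch of the standard state-service checklist on the new server (not IIS8-specific, and no guarantee it covers this case); it enables the registry value named in the error message, restarts the service, and opens the service's default port 42424:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters ^
            /v AllowRemoteConnection /t REG_DWORD /d 1 /f
        net stop aspnet_state & net start aspnet_state
        netsh advfirewall firewall add rule name="ASP.NET State Service" ^
            dir=in action=allow protocol=TCP localport=42424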


  • Netgear FVS336G: appropriate solution for today's small businesses?

    - by bwerks
    Hey all, I've been looking into routers to facilitate a VPN solution for a small business. While the Netgear FVS336G looks good on paper, it appears to have some fairly crippling setbacks that drag down what appears to be some great hardware. First off, the unit has been around for a couple of years now, perhaps from before 64-bit operating systems were as common as they are now, and complaints are everywhere claiming that SSL or IPsec (or both) VPN connections will not work with 64-bit operating systems. However, most of these complaints mention only Vista, which makes me think that these problems could potentially have been solved since then. Unfortunately, Netgear's support forums seem to be incredibly private, policed by some troll named jmizuguchi who just closes down public posts in order to marshal them into the private ones. Danger, Will Robinson. Apparently their firmware upgrade process is a nightmare too, but that's beside the point. My question is this: has anyone configured a Netgear FVS336G to operate in a Server 2008 (or R2) / Windows 7 64-bit network? If so, is it possible to use the Microsoft VPN client, or are third-party clients still required? If this thing has just failed the test of time, is there a feature-comparable unit that I've missed, at anywhere near the same price range? Thanks!


  • HP ProLiant DL360 disk diagnostic issue

    - by user1039384
    We recently got two used drives (15000) and installed them in our HP ProLiant DL360 G5 server. We created a RAID 1 and used the HP SmartStart CD to perform diagnostics. Interestingly, the Diagnostic tab immediately fails on logical drive testing, saying Disk 1 should be replaced, while the Test tab successfully runs all the complete tests on both disks and does not find any issue. In the meantime, when booting into ESXi 5, vSphere periodically shows Disk 1 as Unknown and the logical drive as in recovery; this happens every 5-10 minutes. Here is the log from the HP SmartStart diagnostics:

        1 - Device, Test: Logical Drive 1, Storage Controller in Slot 0
        1 - Description: The controller has reported a critical error in the drive error log.
        1 - Recommended Repair: This drive should be replaced.
        1 - Failed Count: 44
        1 - Error code: F157

    There is also another error log record (see below):

        2 - Device, Test: test_components/libstorage.so ID
        2 - Description: An unexpected exception occurred while performing an operation.
            Exception message: CISS_StatusHandler::evaluate: commandStatus = 4 (INVALID);
            hexdump of CISS_ErrorInfo:
            00000000: __ __ 04 __ 20 __ __ __ __ __ __ __ __ __ __ __  .... ... ........
            00000010: __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __  ........ ........
            00000020: __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __  ........ ........
            Device: Hard Drive 2, Storage Controller in Slot 0
            Property name: Bad Target Count
        2 - Recommended Repair: Reboot or restart Insight Diagnostics. Retry the test. If the problem persists, upgrade to the latest version of Insight Diagnostics.
        2 - Failed Count: 48
        2 - Error Code: F62

    Note that rebooting didn't help and I was running the latest diagnostic software version. Anyone have a clue? Is this a real disk issue? BTW, the controller is a Smart Array E200i. Thanks in advance


  • No clue about high load average in top

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 16 CPUs, running the Amazon AMI. Machine details: 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform, I/O performance: high, API name: c1.xlarge. One of the several machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load: CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st (see below), and about 1.5 GB of memory is free. Any idea what it could be, or where else we can check? Many thanks for the help.

        # top
        top - 07:57:42 up 4:18, 1 user, load average: 1.36, 1.45, 1.47
        Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
        Cpu(s): 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7120092k total, 5644920k used, 1475172k free, 532888k buffers
        Swap: 0k total, 0k used, 0k free, 3463936k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         1557 mysql  20  0 1829m 374m 6448 S 14.3  5.4 11:15.09 mysqld
         6655 apache 20  0  416m  49m 3744 S  9.3  0.7  0:04.85 httpd
        27683 apache 20  0  421m  54m 3708 S  9.0  0.8  0:00.99 httpd
         6682 apache 20  0  424m  57m 3788 S  8.3  0.8  0:03.81 httpd
        16816 apache 20  0  419m  51m 3760 S  4.3  0.7  0:04.09 httpd
        22182 apache 20  0  417m  50m 3756 S  1.7  0.7  0:06.34 httpd
          219 root   20  0     0    0    0 S  0.3  0.0  0:00.34 kworker/7:1
          699 root   20  0     0    0    0 S  0.3  0.0  0:00.40 kworker/3:1
            1 root   20  0 19376 1508 1212 S  0.0  0.0  0:00.29 init
            2 root   20  0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
            3 root   20  0     0    0    0 S  0.0  0.0  0:00.71 ksoftirqd/0
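
    One avenue to check (an assumption about the cause, not a diagnosis): load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, so a box can sit at a load of ~1.4 with 94% idle CPU if something is stuck waiting on I/O. A quick sketch:

        # list tasks in uninterruptible sleep and what they are waiting on
        ps -eo state,pid,user,wchan:30,cmd | awk '$1 ~ /^D/'
        # if the same process keeps appearing, its wait channel (wchan)
        # usually points at the blocking subsystem (disk, network fs, ...)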


  • RTL8192SU + RTL8191E Linux Issue Installing Driver

    - by s32ialx
    OK, I've read tons of forums about getting the onboard RTL8191E and the USB RTL8192SU working (the "U" means USB); both are 802.11n chips and I have both. The machine is a Toshiba L500D-00T that came pre-installed with Windows Vista x64 Home Premium, and I have obtained the free Windows 7 x64 Home Premium upgrade. The onboard wifi card is terrible and can't hold a stable connection for more than 20 minutes in Windows, but the USB one is amazing. Now, the problem: I tried both Ubuntu and Mandriva with no resolution. The onboard card is detected and actually SHOWS that it's there, but no wireless networks are detected, so it claims no SSIDs are broadcasting, which I know is a lie, since I'm running a 2Wire Bell DSL modem with built-in wifi and a Linksys WRT54G with DD-WRT firmware, and both are broadcasting fine. Why don't I use the USB one? In the new Mandriva Linux Control Center 2010.0 it shows up under Other/Unknown as "RTL8191S WLAN Adapter", and in the right pane this shows up:

        Identification
          Vendor: Realtek
          Description: RTL8191S WLAN Adapter
          Media class:
        Connection
          Bus: USB
          Bus PCI #: 1
          PCI device #: 5
          Vendor ID: 0x0bda
          Device ID: 0x8172
          Sub vendor ID: 0x0000
          Sub device ID: 0x0000
        Misc
          Module: rtl819xU

    In the hardware device manager in Mandriva it shows up as unknown, but shows that it's Realtek and that it's an 8192 chipset - with no option for a driver install. And when I do a make in a terminal I get this error and have no clue what it means:

        [root@John-PC rtl8192se_linux_2.6.0010.1020.2009_64bit]# make
        make: *** /lib/modules/2.6.31.12-desktop-3mnb/build: No such file or directory.  Stop.
        make: *** [all] Error 2
        [root@John-PC rtl8192se_linux_2.6.0010.1020.2009_64bit]#

    Any help appreciated. I'm currently running Mandriva Spring 2010 Free.
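
    A hedged reading of that make error: /lib/modules/<kernel>/build is a symlink to the kernel's build tree, so the driver can't compile because the headers/devel package for the running kernel isn't installed. On Mandriva the package name is something like the following (an assumption - check with urpmq):

        # install the devel package matching the running -desktop kernel
        urpmi kernel-desktop-devel-latest
        # then rebuild the vendor driver
        make && make install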


  • Looking for a host-based network monitoring solution

    - by Ole Martin Handeland
    Hi all!

    Problem: So, my hosting company has a network usage graph for my dedicated server. It seems that one day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was.

    Question: So my question is: does anyone know of any host-based solution for monitoring network usage that would tell me the client's IP address and the port/service he/she used?

    What I don't want: I'm just guessing that someone will suggest I use Nagios, Munin, Zabbix, Cacti or MRTG - I've also looked at those, but a graph of network usage will not give me the answers I'm looking for. :-)

    Almost there: I've already looked at a lot of monitoring solutions, and I've tried ntop (http://www.ntop.org/), darkstat (http://unix4lyfe.org/darkstat/) and others. Darkstat just didn't give me the answers; although it lists a lot of statistics and I could list the clients, it doesn't show me the network usage for a particular period. Ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not history. I could run apt-get upgrade and download a whole bunch of software, and not see it in the log afterwards.
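
    One low-tech sketch for "answers after the fact" (an approach, not a tool recommendation): keep a rolling capture of packet headers so a future spike can be attributed to a client IP and port. Interface, sizes and paths below are assumptions; headers only (-s 96), so disk use stays bounded.

        mkdir -p /var/log/pcap
        # rotate through 100 x 10 MB files of packet headers
        tcpdump -i eth0 -s 96 -w /var/log/pcap/trace -C 10 -W 100 -Z root &
        # later: summarize talkers across the rotated files during the spike
        for f in /var/log/pcap/trace*; do tcpdump -nr "$f"; done \
            | awk '{print $3, ">", $5}' | sort | uniq -c | sort -rn | head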


  • Windows 7 - system error 5 problem

    - by ianhobson
    My wife has just had a new computer for Christmas (with an upgrade from Vista to Windows 7), and has joined the home network. We are using a mix of Windows XP and Ubuntu boxes linked via a switch. We are all in the same workgroup (no domain). Internet access, DHCP and DNS are provided by an SME Server box that thinks it is a domain controller (although we are not using a domain). I need to run a script to back up my wife's machine (venus). In the past the script creates a share on a machine with lots of space (leda), and then executes the line:

        PSEXEC \\venus -u admin -p adminpassword -c -f d:\Progs\snapshot.exe C: \\leda\Venus\C-drive.SNA

    With the wife's old XP machine, this would run the Sysinternals utility, copy snapshot.exe to her machine and run it, which would then back up her C: drive to the share on leda. I cannot get this to work with Windows 7, nor can I connect to the C$ share on her machine; that gives me a permissions error (system error 5). The admin account is a full admin account, and yes, I do know the password. The ordinary shares on her machine work fine! I guess I'm missing something that Microsoft built into Windows 7 - but what? The machine is running Windows 7 Business, with Windows Firewall, AVG antivirus, and all the crap-ware you get with a new PC removed.

    Thanks
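
    A plausible culprit (hedged, but it matches the symptoms): on Windows 7 machines in a workgroup, UAC's "remote restrictions" strip administrative rights from remote logons with local accounts, so C$ and PsExec fail with error 5 even though the password is right. The documented opt-out is a registry value, set on the Windows 7 machine and followed by a reboot:

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System ^
            /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f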


  • How do I fix a super slow MacBook?

    - by MakingScienceFictionFact
    I'm running a black MacBook 4,1: Intel Core 2 Duo @ 2.4 GHz, 2 GB RAM, 250 GB hard disk drive, 800 MHz bus speed. It's about three years old and in excellent shape externally; I treat this thing like a baby. It used to run awesome, but now it's super slow at everything. I get the spinning pizza of death constantly. It takes a long time to boot up or load any program, even Safari and iTunes; iPhoto is terribly slow, and the Internet doesn't work properly. It reminds me of a buggy PC. I've formatted it and re-installed Mac OS X 10.6 (with all updates), and I've done the disk repair process. As an iOS developer this is driving me crazy, but luckily I have an iMac to work on during the day, which is fast. I'm ready to format it again, but that didn't work last time. After the last format, I copied back files from an external drive, so maybe the offending files were hidden in there somewhere. Here are the hard disk drive and RAM specifications (it is upgrade-able to 4 GB of RAM):

    - Hard disk drive: Fujitsu Mobile MHY2250BH, a 250 GB standard hard disk drive; burst transfer rate 150 MB/s; 5400 RPM with an 8 MB buffer.
    - RAM: two sticks of 1 GB DDR2 SDRAM, 667 MHz.


  • How to burn a data DVD in Windows XP

    - by SabreWolfy
    I am trying to burn a data DVD (DVD+R) in Windows XP SP3 on a Dell desktop computer. The computer has a licensed copy of Nero 6.3. Nero indicates that an update to version 6.6 is available, but following the link provided redirects me to the Nero website to purchase the upgrade; I'm not interested in doing this. After creating a project in Nero 6.3, inserting a blank DVD+R and trying to start burning the data DVD, Nero tells me to insert an appropriate disc into the drive: it does not seem to detect the blank DVD+R. I downloaded InfraRecorder and cdrtfe from SourceForge. Neither of these programs worked either; they both indicated that I should insert the correct media, with cdrtfe saying there is no disc in the drive. I tried another blank DVD+R with the same result. I inserted a CD-R containing data into the drive and Windows read this CD-R without a problem, so I have no reason to believe that the drive is faulty. I am aware that Windows XP itself is not able to burn DVDs. However, it seems that three third-party software programs are not able to burn a data DVD in Windows XP. The specifications provided in Nero indicate that DVD+R is compatible with the drive. How can I burn a backup data DVD in Windows XP?


  • Using my old PC as a web/file server?

    - by Garrett
    I have an old desktop computer that I've been trying to sell for AGES. I guess nobody is looking for computers, because it was advertised at a dirt-cheap price on craigslist, local papers, etc. Anyway, I was wondering if it would be worth setting it up as a home file server, a web dev server (I have a web host for actual production use), and maybe a host for a few server applications (e.g. Ventrilo). The computer is actually an old Dell that I cannibalized after the motherboard was destroyed by lightning, so it has fairly new parts in it. The specs are:

    - P4 3.4 GHz w/ HT and Arctic Cooling Freezer 7
    - 3 GB DDR2-533 RAM
    - 80 GB HDD (I will upgrade the hard drive if it's even worth using as a server)
    - basic DVD-ROM
    - 430 W Thermaltake PSU (it might be important to note that it is only 60% efficient)
    - ATI Radeon X600 256 MB
    - Antec 300 case

    It's not a really beefy machine; I just can't see giving it away or putting it in the corner to collect dust. I have Windows Server 2008 R2 Standard, and I am confident in my skills with most Linux operating systems. I'd also use it to tinker with when I learn new things in my server admin classes (I'm finishing my 2nd year in college at the moment, so I'm still learning). Also, my house is quite old and the electrical wiring is pretty poor (it MIGHT be up to code; then again, where I live most people don't even know what regulations are, let alone how to spell it...). Would it be safe to leave it running all day, and is it going to run up my electric bill because of the PSU efficiency? I only have 5 Mbit cable internet, but I won't be running very bandwidth-intensive services on it, so that should be OK. I should elaborate on why I am concerned about the power: the circuits should be fine, but I'm more concerned about fire hazard. What is the likelihood that the server could cause an electrical fire? Again, thank you all for the feedback!
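
    A rough back-of-the-envelope on the electric bill (the draw figure is an assumption, not a measurement): if the components pull 70-80 W at idle-to-light load, a ~60%-efficient PSU turns that into roughly 125 W at the wall. 0.125 kW x 24 h x 30 days ≈ 90 kWh a month, or about $10 at $0.11/kWh; the poor efficiency accounts for roughly a third of that. As for fire risk, a properly fused PSU on a sound circuit is designed to fail safe; old house wiring is a far more likely ignition source than the server itself.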

