Search Results

Search found 3147 results on 126 pages for 'debian wheezy'.


  • Understanding the Linux boot process, subsystem initialization, & udev rules?

    - by quack quixote
    I'm creating udev rules for automounting external drives on a headless server, much as Gnome-VFS does automounting during a user session. I'm concerned with the rules' behavior at boot time. There's a good chance one of these drives will be connected during a boot, and I'd prefer any connected drives get mounted in the right place. The drives might be either USB or FireWire, and they are mounted from a shell script fired off by udev on detecting an "add" event. Here are my questions:
    1. When udev runs the mount for these devices at boot, will the system be ready to mount them? Or will the script get triggered too early?
    2. If it's too early, what's a good way for a script to tell that the system isn't ready yet (so it can sleep a while before checking again)?
    3. The udev rule matches ACTION=="add". Does this event even fire at system boot?
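    A minimal sketch of the kind of rule and helper in question; the rules-file name, device match, and mount point are illustrative rather than taken from the post, and note that udev may kill long-running RUN tasks, so a real script might need to detach itself:
        cat > /etc/udev/rules.d/99-automount.rules <<'EOF'
        # Fire a helper whenever a new block-device partition appears.
        ACTION=="add", KERNEL=="sd[a-z][0-9]", SUBSYSTEM=="block", RUN+="/usr/local/sbin/automount.sh %k"
        EOF
        cat > /usr/local/sbin/automount.sh <<'EOF'
        #!/bin/sh
        # Crude boot-readiness check: wait until the root filesystem is writable.
        while ! touch /var/run/.automount-probe 2>/dev/null; do sleep 2; done
        rm -f /var/run/.automount-probe
        mkdir -p "/media/$1" && mount "/dev/$1" "/media/$1"
        EOF
        chmod +x /usr/local/sbin/automount.sh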

    Read the article

  • catch-22 with apt-get

    - by Mark J Seger
    I'd recently installed a package and discovered my scp stopped working. After removing and installing some things I got it fixed, but then I started getting errors from apt-get:
        dpkg: error: configuration error: /etc/dpkg/dpkg.cfg.d/multiarch:1: unknown option 'foreign-architecture'
    so I commented it out and thought that fixed the problem, until I discovered the Chrome icon in my launcher had turned into a ? and Chrome no longer worked. I tried to reinstall it and got the apt-get error:
        ambiguous package name 'libglib2.0-0' with more than one installed instance
    If I try to remove libglib2.0-0 I get:
        The following packages have unmet dependencies:
         iceape-browser : Depends: iceape but it is not going to be installed
         iceape-chatzilla : Depends: iceape (>= 2.7.11) but it is not going to be installed
                            Depends: iceape (<= 2.7.11-1.1~) but it is not going to be installed
        E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
    So then I tried to remove iceape-browser, and it complained about the two instances of libglib2.0-0. In fact virtually any command I issue does the same thing, so I don't know what I can do to untangle things.
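    The "more than one installed instance" message usually means two architectures of the same package are installed at once. One hedged way out of this kind of loop, assuming the stray copy is the i386 one (check first):
        # See which architectures of the package are installed
        dpkg -l | grep libglib2.0-0
        # Remove the foreign-architecture copy explicitly, bypassing apt's resolver
        sudo dpkg --remove --force-depends libglib2.0-0:i386
        # Let apt repair whatever is left dangling
        sudo apt-get -f install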

    Read the article

  • How secure are third-party Ubuntu (APT) repository mirrors?

    - by bakytn
    Hello! We run a local Ubuntu mirror to save a lot of traffic (our external traffic is not free), so whenever I apt-get install a program it comes from that repository. The question is: can the mirror operators basically substitute any package with their own? Is it 100% at my own risk, so that I could be hacked easily on any apt-get upgrade, apt-get install or apt-get dist-upgrade? For example with the very basic packages like telnet, or any other.
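    For context: APT verifies the GPG signature on each repository's Release file against keys in the local keyring, and package checksums chain up to that signature, so a mirror cannot silently alter packages without apt warning you (unless you override the warnings). A hedged sketch of checking by hand what apt checks automatically (the mirror URL is a placeholder):
        # List the archive keys apt trusts
        apt-key list
        # Fetch a suite's Release file and its detached signature, then verify
        wget "http://your.mirror/ubuntu/dists/$(lsb_release -sc)/Release"
        wget "http://your.mirror/ubuntu/dists/$(lsb_release -sc)/Release.gpg"
        gpg --verify Release.gpg Release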

    Read the article

  • aptitude: list all previously recommended packages

    - by casper
    Sometimes when installing a package, aptitude recommends several other packages. Is there a way to show all previously recommended packages for all installed packages? Thanks in advance. Casper
    Edit: Thanks for the replies so far. I already tried:
        aptitude show ~i | grep '^Recommends' | cut -d ' ' -f 2-
    That's mostly OK, but it also gives back things like:
        console-setup | console-data (>= 2002.12.04dbs-1)
    I want an easy way to install all missing recommended packages, so
        aptitude install console-setup | console-data (>= 2002.12.04dbs-1)
    won't work ;-) Is there a way to do this without manually checking all entries?
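    A hedged sketch building on the pipeline above: strip version constraints, keep only the first alternative of each "a | b" group, and feed the result to a simulated install (drop -s once the output looks right; wrapped Recommends lines may need extra handling):
        aptitude show '~i' \
          | sed -n 's/^Recommends: //p' \
          | tr ',' '\n' \
          | sed -e 's/|.*//' -e 's/([^)]*)//' -e 's/^ *//' -e 's/ *$//' \
          | sort -u \
          | xargs -r apt-get -s install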

    Read the article

  • changing timezone with dpkg-reconfigure tzdata and debconf-set-selections

    - by Carlos Campderrós
    I want to set up a script that automatically changes the timezone on a machine (running Ubuntu 11.10) and also sets the right values in the debconf database. I've tried the following, but it does not work (at the end, the current timezone remains unchanged, and if I run the dpkg-reconfigure tzdata command manually, the selected values are indeed the old ones):
        #!/bin/sh -e
        echo "tzdata tzdata/Areas select Europe" | debconf-set-selections
        echo "tzdata tzdata/Zones/Europe select Madrid" | debconf-set-selections
        echo "tzdata tzdata/Zones/America select " | debconf-set-selections
        dpkg-reconfigure -f noninteractive tzdata
    So for now I'm doing it by messing with the files /etc/localtime and /etc/timezone directly, but I'd much prefer the dpkg-reconfigure and debconf way.
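    One likely explanation, worth verifying against the tzdata postinst on the machine: in noninteractive mode the package reads the current /etc/timezone back into debconf, clobbering preseeded answers. A sketch that sets the file first and then lets dpkg-reconfigure sync debconf and /etc/localtime from it:
        #!/bin/sh -e
        echo "Europe/Madrid" > /etc/timezone
        dpkg-reconfigure -f noninteractive tzdata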

    Read the article

  • Jenkins shell command isn't executing

    - by Dmitro
    In a Jenkins project I added a command to execute:
        rm /var/www/ru.liveyurist.ru/tmp/*
    But when I build the project I get this error:
        Started by user anonymous
        Building in workspace /var/www/ru.myproject.ru
        Updating https://subversion.assembla.com/svn/myproject/trunk
        At revision 1168
        no change for https://subversion.assembla.com/svn/liveexpert/trunk since the previous build
        [ru.myproject.ru] $ /bin/sh -xe /tmp/hudson7189633355149866134.sh
        FATAL: command execution failed
        java.io.IOException: Cannot run program "/bin/sh" (in directory "/var/www/ru.myproject.ru"): java.io.IOException: error=12, Cannot allocate memory
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
            at hudson.Proc$LocalProc.<init>(Proc.java:244)
            at hudson.Proc$LocalProc.<init>(Proc.java:216)
            at hudson.Launcher$LocalLauncher.launch(Launcher.java:709)
            at hudson.Launcher$ProcStarter.start(Launcher.java:338)
            at hudson.Launcher$ProcStarter.join(Launcher.java:345)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
            at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:703)
            at hudson.model.Build$RunnerImpl.build(Build.java:178)
            at hudson.model.Build$RunnerImpl.doRun(Build.java:139)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:473)
            at hudson.model.Run.run(Run.java:1410)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:238)
        Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
            at java.lang.ProcessImpl.start(ProcessImpl.java:81)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
            ... 16 more
        Build step 'Execute shell' marked build as failure
        Finished: FAILURE
    I started Jenkins as the root user. Please advise: what could be the reason for this error?
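    error=12 is ENOMEM from fork(): the kernel refuses to momentarily duplicate the large Jenkins JVM's address space just to exec /bin/sh. A common hedged workaround is to add swap so the overcommit accounting has headroom (the size is illustrative):
        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        # make it permanent
        echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
    Alternatively, relaxing the accounting with sysctl vm.overcommit_memory=1 can help; both approaches are worth testing rather than taking as given.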

    Read the article

  • Aircrack-ng, is it illegal?

    - by Jasen
    I just found out about this Linux-based network toolset called aircrack-ng, and it's pretty interesting, so I'm learning how to use it. But it just occurred to me that, like a lot of Linux programs, it's developed in foreign countries with different laws. I did a Google search, and the only relevant thing that pops up is a question on another site. So my question is: is it illegal in America, specifically Ohio, to use this program? I mean, I know it has a legitimate purpose, but so does hemp. Also, sorry if this is the wrong place for this question, but I'm not exactly sure where it should go.

    Read the article

  • Mindtouch broke my Apache2 virtual host configuration.

    - by grenade
    I installed MindTouch using the instructions here and it seems to have broken my virtual host configuration. I have several domains running off the same Apache instance, and this was working fine, but now all my domain names resolve to the virtual host where MindTouch was installed. So MindTouch made all my domain names point to the new MindTouch instance. Grrr! I use Debian's default virtual host mechanisms (sites-enabled, etc.). Does anyone know what Apache directive MindTouch is using to ruin my vhost setup? I've scoured all the conf files and there is nothing obvious in apache2.conf or httpd.conf that would cause this behaviour. Did it create a symlink somewhere that I should destroy? I should add that I have already uninstalled the MindTouch packages, but Apache persists in redirecting all domains to the first one mentioned in the sites-enabled folder.
        thini:~# apache2ctl -S
        [Wed Jan 05 13:39:11 2011] [warn] NameVirtualHost *:80 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:* www.openancestry.org (/etc/apache2/sites-enabled/openancestry.org:1)
        *:* www.pragmantra.com (/etc/apache2/sites-enabled/pragmantra.com:1)
        *:* services.pragmantra.com (/etc/apache2/sites-enabled/services.pragmantra.com:1)
        *:* www.subversionreports.com (/etc/apache2/sites-enabled/subversionreports.com:1)
        *:* www.thijssen.ch (/etc/apache2/sites-enabled/thijssen.ch:1)
        Syntax OK
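    The apache2ctl output above gives a hint: the vhosts register as *:* while NameVirtualHost is declared for *:80 ("has no VirtualHosts"), so name-based matching never engages and Apache falls back to the first vhost for every request. A hedged check and fix, with the file names illustrative:
        grep -r "VirtualHost" /etc/apache2/sites-enabled/ /etc/apache2/ports.conf
        # Each site should open with a port matching the NameVirtualHost line:
        #   NameVirtualHost *:80     (typically in ports.conf)
        #   <VirtualHost *:80>       (each file in sites-enabled)
        sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload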

    Read the article

  • Bacula vs. BackupPC [closed]

    - by ujjain
    I have been googling about the differences between them:
    - Bacula has lots of roles
    - BackupPC is easier to configure
    - Bacula works with an agent rather than rsync (great for Windows backups)
    It seems that Bacula is most often compared to Amanda, though, while BackupPC seems a perfectly lovely and popular backup solution too. I currently back up my servers with rsnapshot, but I am looking for a professional, scalable solution that could also back up 50 hosts without problems, preferably one that can offer bare-metal restores for my Linux servers. I am not looking to reinstall the exact same version of Plesk, the software, etc.
    Update: Since this ranks high in Google, I found a good article: http://www.serverfocus.org/backuppc-vs-bacula-vs-amanda. I personally think that BackupPC is good for smaller environments, but Bacula, despite the high learning curve, is better for environments that require scaling.

    Read the article

  • What is /etc/apache2/sites-available used for and is it necessary?

    - by Mariane
    I have 3 sites, each with a specific IP, running on Apache2 (up-to-date Ubuntu). To put a site online, I just created a file in /etc/apache2/sites-enabled, and in this file I told Apache which directory was the root directory for the site and which IP it should correspond to. So I have:
        000-default
        001-www.lapf.eu
        002-www.felkin.info
        003-www.seidhr.fr
    in this directory. My first site, lapf, suddenly lost contact with its database after the domain name was transferred from another registrar to the registrar that is also hosting the site's data. Then I did an update, reinstalled mysql-server and mysql-common, and did I-have-forgotten-what to reinstall the locales (utf8 and such) which had vanished for some reason. This fixed my first site. Now I've noticed that the other 2 sites are offline: pointing a browser at them just hangs until timeout. They used to work, and their domain names did not move; they are still registered at the same place. The files are still in /etc/apache2/sites-enabled. I noticed another directory, /etc/apache2/sites-available, with just default and default-ssl in it. Why are there 2 directories, sites-enabled and sites-available? Should I copy the files from "sites-enabled" into "sites-available"? Or should I put a modified version of each in "sites-available"? Output of "apache2ctl -S":
        VirtualHost configuration:
        92.243.20.169:80 Charlotte (/etc/apache2/sites-enabled/001-www.lapf.eu:1)
        92.243.21.141:80 xvm-21-141.ghst.net (/etc/apache2/sites-enabled/002-www.felkin.info:1)
        92.243.4.114:80 xvm-4-114.ghst.net (/etc/apache2/sites-enabled/003-www.seidhr.fr:1)
        wildcard NameVirtualHosts and _default_ servers:
        *:80 is a NameVirtualHost
        default server Charlotte (/etc/apache2/sites-enabled/000-default:1)
        port 80 namevhost Charlotte (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK
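    For reference, the Debian/Ubuntu convention is the other way around: sites-available holds the actual vhost files, and sites-enabled holds symlinks to whichever of them should be live, managed with a2ensite/a2dissite. A hedged way to migrate one site to that layout:
        sudo mv /etc/apache2/sites-enabled/001-www.lapf.eu /etc/apache2/sites-available/
        sudo a2ensite 001-www.lapf.eu      # recreates the symlink in sites-enabled
        sudo apache2ctl configtest && sudo service apache2 reload
    Plain files placed directly in sites-enabled do still work, though, so the layout alone should not explain the two dead sites.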

    Read the article

  • Can I create an SSH user which can access only certain directory?

    - by RiMMER
    I have a virtual private server which I can connect to over SSH with my root account, which can obviously execute any Linux command and access the whole disk. I would like to create another user account which would also be able to access this server using SSH, but only within a certain directory, for example /var/www/example.com/. For example, imagine this user has a huge error.log file (500 MB) located in /var/www/example.com/logs/error.log. When accessing this file over FTP, the user has to download 500 MB just to view the last lines of the log, but I'd like him to be able to execute something like:
        tail error.log
    So I need him to be able to access the server using SSH, but I don't want to grant him access to the rest of the server. How can I do this?
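    A hedged sketch using OpenSSH's built-in chroot support (requires OpenSSH 4.9 or newer; the username is illustrative). Note the chroot target and every directory above it must be root-owned and not group/world-writable, and a full shell needs its binaries, such as tail, copied into the jail; restricting the user to internal-sftp instead avoids that:
        sudo adduser logviewer
        sudo tee -a /etc/ssh/sshd_config <<'EOF'
        Match User logviewer
            ChrootDirectory /var/www/example.com
            AllowTcpForwarding no
        EOF
        sudo /etc/init.d/ssh restart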

    Read the article

  • DKIM-filter: no signature data

    - by Vineet Sharma
    I have installed dkim-filter on Postfix after reading this tutorial: http://www.unibia.com/unibianet/systems-networking/how-setup-domainkeys-identified-mail-dkim-postfix-and-ubuntu-server. My email now has a DKIM signature, but it still lands in the SPAM folder. Here is the header:
        Received-SPF: neutral (google.com: 69.164.193.167 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=69.164.193.167;
        Authentication-Results: mx.google.com; spf=neutral (google.com: 69.164.193.167 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]; dkim=hardfail (test mode) [email protected]
        Received: from promote.a2labs.in (localhost [127.0.0.1]) by promote.a2labs.in (Postfix) with ESMTPA id 34858530E8 for <[email protected]>; Mon, 28 Feb 2011 12:23:07 +0530 (IST)
        DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=a2labs.in; s=mail; t=1298875987;
            bh=bo+H1VYPIHMja2u7i1lnzr4k/j4Pe8iSf79bVw94XpI=;
            h=To:Subject:Message-ID:Date:From:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding;
            b=nhTdlnUwo0iUJ92ycQzKSRjw5Pfya0DJcJrAc8Mr2hIv8OLpgzBCzdOMWTGqR5nuUmAzgCGYBhYAM2XZwVxo9JG/iz7oYKysmNQnskFx0TRyW3UOkDWcfHcPnCL6Y7fGzZWinmsyjsg47k+mKZg/e8jqlwTAMOPYKkt5pBz7SM0=
    Also my mail.err file shows:
        Feb 28 12:17:03 ivineet dkim-filter[32181]: 1F788530E1: no signature data
        Feb 28 12:18:02 ivineet dkim-filter[32181]: 432BA530E2: no signature data
    How can I fix it?
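    The telling part is dkim=hardfail (test mode): the signature is present but does not verify against the published key. A hedged first check is that the DNS record for the selector named in the header (s=mail, d=a2labs.in) actually carries the matching public key:
        dig +short mail._domainkey.a2labs.in TXT
    The output should be a v=DKIM1 record whose p= value pairs with the private key dkim-filter signs with; a mismatched key, a truncated record, or a t=y test flag left in place would all be consistent with the symptoms above.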

    Read the article

  • SVN: Error validating server certificate for svn hook linux

    - by Dr Casper Black
    Hi, I managed to set up an SVN (over SSL) server and a TortoiseSVN client on Windows, and made a post-commit hook for a test project. The post-commit hook updates the web directory so the PHP app can run with the newest version. It all works when done over the shell. The only problem is that when I commit the changes from the client on Windows, the change is committed but the hook throws post-commit hook failed (exit code 1) with output:
        Error validating server certificate for 'https://SERVER_IP:443':
         - The certificate is not issued by a trusted authority. Use the
           fingerprint to validate the certificate manually!
         - The certificate hostname does not match.
        Certificate information:
         - Hostname: DEVSRVR
         - Valid: from Fri, 28 Jan 2011 09:22:45 GMT until Sat, 28 Jan 2012 09:22:45 GMT
         - Issuer: PHP, SS, SS, SRB
         - Fingerprint: 5f:d0:50:d6:dd:a6:d4:64:a5:ac:3a:4b:7c:7d:33:e3:75:dd:23:9f
        (R)eject, accept (t)emporarily or accept (p)ermanently?
        svn: OPTIONS of 'https://SERVER_IP/svn/myproject/trunk': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://SERVER_IP)
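    The hook runs as the account the server runs as, and that account has never accepted the self-signed certificate, so svn prompts and dies. Two hedged fixes: accept the certificate once as that account, or make the hook's update non-interactive (--trust-server-cert needs svn 1.6 or newer; www-data is a guess at the account):
        # One-off: accept permanently as the account that runs the hook
        sudo -u www-data svn list https://SERVER_IP/svn/myproject/trunk   # answer 'p' at the prompt
        # Or inside the hook itself:
        svn update /path/to/webdir --non-interactive --trust-server-cert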

    Read the article

  • Upgrade from Etch to Lenny, server no longer reboots

    - by Jack
    Hi, I just upgraded my (remote) webserver from Etch to Lenny, and no real issues occurred except that I cannot reboot the server (I can only start it in a rescue mode). I recall that an MBR change was performed during the upgrade, and I am wondering if that could be the reason for this failure. Would someone know how I can revert this MBR change? I have already switched back to my previous Linux kernel, and it still cannot reboot. I would appreciate your help. Thanks, Jack
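    If the boot sector is indeed the problem, rewriting it from the rescue environment is the usual recovery path. A hedged sketch, assuming the root filesystem is /dev/sda1 and GRUB legacy (which Lenny shipped); device names depend on the rescue system:
        mount /dev/sda1 /mnt
        mount --bind /dev /mnt/dev
        mount --bind /proc /mnt/proc
        chroot /mnt grub-install /dev/sda
        chroot /mnt update-grub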

    Read the article

  • OOM-killer called every now and then

    - by SpyrosP
    Hello there, I have a dedicated server where I've installed apache2 as well as Rails Passenger. Although I have 2 GB of RAM and most of the time about 1.5 GB is free, there are random times when I lose SSH and general connectivity because the oom-killer is killing processes. I suppose there is a memory leak, but I cannot find out where it comes from. The oom-killer kills apache2, mysql, Passenger, whatever. Yesterday I ran "grep -c oom-killer /var/log/syslog" and got 57 occurrences! It seems that something is seriously eating the memory. Once I reboot, everything comes back to normal. I suspect it could be related to Passenger, but I'm still trying to figure it out. Can you think of anything else, or do you have anything to suggest that would make the leak identification easier? (I was even thinking of writing a bash script, to be run by cron every five minutes.)
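    Along the lines of the cron idea, a hedged sketch that snapshots the top memory consumers every five minutes, so the post-mortem after the next OOM event shows what was growing (paths are illustrative):
        cat > /usr/local/sbin/memlog.sh <<'EOF'
        #!/bin/sh
        # Append a timestamped snapshot of overall memory and the 10 biggest RSS hogs.
        { date; free -m; ps aux --sort=-rss | head -n 11; echo; } >> /var/log/memlog
        EOF
        chmod +x /usr/local/sbin/memlog.sh
        echo '*/5 * * * * root /usr/local/sbin/memlog.sh' >> /etc/crontab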

    Read the article

  • Create .gitconfig for chrooted users

    - by Vincent LITUR
    I have several chrooted users on my server, and I want git to work for specific users. I'm stuck at the command:
        git config --global user.name "user_name"
    Running this while connected as the user, I get this error:
        error: could not lock config file /home/username/.gitconfig: Permission denied
    I tried to create the file as root and then chmod 755 and chown username .gitconfig, but I still get the error. Is there a way to do this?
    Edit: this question answers mine: http://stackoverflow.com/questions/17908386/unable-to-create-gitconfig-file-for-user
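    Worth noting: git first writes a temporary /home/username/.gitconfig.lock, so it is the home directory itself, as seen from inside the chroot, that must be writable by the user; chowning the file alone doesn't help. A hedged check from inside the jail:
        su - username
        ls -ld "$HOME"                        # the user (by uid) should own it, with write permission
        touch "$HOME/.probe" && rm "$HOME/.probe"
    If the uid shown doesn't match the user, the chroot's /etc/passwd and the directory's numeric owner have likely drifted apart.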

    Read the article

  • Apache, ISPConfig & Roundcube alias

    - by Jay Zus
    I'm using ISPConfig to set up all the websites on my server, but I also like to fiddle with the files myself to see how they work. As you guessed: yes, I've broken something. I can't access the webmail set up by default on the server under the alias /webmail (I access it via http://xxx.xxx.xx.xx/webmail). Firefox tells me "The page isn't redirecting properly: Firefox has detected that the server is redirecting the request for this address in a way that will never complete." So I cleaned up my vhost files, and one of my websites now works as intended; I think the problem comes from my default.vhost. Here's its content:
        <Directory /var/www/>
            AllowOverride None
            Order Deny,Allow
            Deny from all
        </Directory>
        <VirtualHost *:80>
            DocumentRoot /var/www/
            ServerAdmin [email protected]
            <Directory /var/www/>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
    This isn't a lot, and I can't really see what's wrong with it; all I know is that it isn't the one that came with ISPConfig, and I can't find an original one. Here's the roundcube.conf that Apache loads:
        # Those aliases do not work properly with several hosts on your apache server
        # Uncomment them to use it or adapt them to your configuration
        # Alias /roundcube/program/js/tiny_mce/ /usr/share/tinymce/www/
        # Alias /roundcube /var/lib/roundcube
        Alias /webmail /var/lib/roundcube/
        # Access to tinymce files
        <Directory "/usr/share/tinymce/www/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
        <Directory /var/lib/roundcube/>
            Options +FollowSymLinks
            # This is needed to parse /var/lib/roundcube/.htaccess. See its
            # content before setting AllowOverride to None.
            AllowOverride All
            order allow,deny
            allow from all
        </Directory>
        # Protecting basic directories:
        <Directory /var/lib/roundcube/config>
            Options -FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/lib/roundcube/temp>
            Options -FollowSymLinks
            AllowOverride None
            Order allow,deny
            Deny from all
        </Directory>
        <Directory /var/lib/roundcube/logs>
            Options -FollowSymLinks
            AllowOverride None
            Order allow,deny
            Deny from all
        </Directory>
    I didn't touch that file, but I guess it has something to do with the problem; I just can't find why it doesn't work.
    EDIT: these are the errors in my Apache log:
        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 540 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
        (five more identical 302 lines follow)
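    A 302 loop on /webmail is usually easier to see from the command line than from the browser. A hedged way to watch where each hop points (-I asks for headers only, -L follows redirects):
        curl -sIL http://xxx.xxx.xx.xx/webmail/ | grep -i '^\(HTTP\|Location\)'
    If every Location header points back at /webmail/, a likely suspect is the rewrite rules in /var/lib/roundcube/.htaccess (enabled by AllowOverride All above) interacting with the hand-edited default vhost; diffing against an untouched ISPConfig default.vhost from another installation would be worth a try.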

    Read the article

  • Is there any way to limit my Internet connection on a per-program basis?

    - by Igoru
    My Linux connection is completely unrestricted. I live in Brazil, where I can only get 1 Mbit/s. Yes, I know it's sad, but that's not the point. Every time I'm updating my Ubuntu 9.04 or downloading something, it eats all my bandwidth. While update-manager is downloading the packages, I can see in the netspeed applet on my panel that incoming traffic goes to 110 kB/s, and then my Emesene suddenly gets disconnected and I can't browse. As you can imagine, I can't use my Internet connection again until the packages are all downloaded or I cancel the update in the middle. As I said, the same thing happens when I'm downloading something else, but less intrusively. The question is: is there any way to limit the APT/download traffic so that I can still use my other Internet services, or to reserve some bandwidth for ordinary browsing (like we have on Windows; I forget that thing's name, it's like "something packages")?
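    Two hedged options worth trying: APT has a built-in download rate cap (check apt.conf(5) on your version for Acquire::http::Dl-Limit), and trickle can wrap most other programs. Rates below are illustrative, in kB/s:
        # Cap this one upgrade at roughly half the line's capacity
        sudo apt-get -o Acquire::http::Dl-Limit=60 upgrade
        # Or make it permanent:
        echo 'Acquire::http::Dl-Limit "60";' | sudo tee /etc/apt/apt.conf.d/75download-limit
        # For arbitrary programs, trickle shapes one process at a time:
        sudo apt-get install trickle
        trickle -d 60 -u 20 wget http://example.com/big.iso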

    Read the article

  • Not getting gigabit from a gigabit link?

    - by marcusw
    I just upgraded my LAN to gigabit. This is what netperf has to say about it.
    Before:
        marcus@lt:~$ netperf -H 192.168.1.1
        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : demo
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec
        87380  16384   16384    10.02      94.13
    After:
        marcus@lt:~$ netperf -H 192.168.1.1
        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : demo
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec
        87380  16384   16384    10.01     339.15
    Only 340 Mbps? What's up with that? Background info: I'm connecting through a gigabit switch to a SheevaPlug. I have Cat5e wiring in the walls and the run is maybe 30 feet. If you're not familiar with netperf, it has a tendency to give very stable results and never lie.
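    A couple of hedged checks before blaming the wiring (interface name and buffer sizes are illustrative): confirm both ends actually negotiated 1000 Mbit/s, and retest with larger socket buffers, since the 87380-byte receive buffer above can itself cap throughput:
        sudo ethtool eth0 | grep -i speed            # expect "Speed: 1000Mb/s" on both machines
        netperf -H 192.168.1.1 -- -s 256K -S 256K    # test-specific local/remote socket buffer sizes
    Also note the endpoint: a SheevaPlug's CPU is often the bottleneck well below line rate, so ~340 Mbit/s may simply be all that little ARM box can push.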

    Read the article

  • www-data mkdir - Permission denied after update

    - by user788721
    I updated my server from Lenny to Squeeze (with ISPConfig) and got everything back in place except for a permissions problem: if I use mkdir from PHP I get "Permission denied". whoami returns www-data. File owners are the same as before (for example web54:client4), and the same goes for directories. The FTP user works fine, and creates and edits files with the same ownership (web54:client4). I don't understand why it doesn't work from PHP, and I have no idea where to look now. Thanks for your help, Francois
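    One hedged line of investigation: after the upgrade, PHP may be executing as www-data (mod_php) where it previously ran as the site user (suphp/FastCGI), so directories owned web54:client4 without group/other write access would now refuse mkdir. Worth checking (the path is illustrative, and the second line assumes PHP's posix extension is available):
        ls -ld /var/www/clients/client4/web54/web     # look at owner, group and mode
        php -r 'var_dump(posix_getpwuid(posix_geteuid()));'   # who PHP really runs as
    If the handler changed, re-enabling suphp/FastCGI for the site in ISPConfig, or loosening group permissions, are the two obvious fixes.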

    Read the article

  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    I'm running Ubuntu 10.04. My RAID-1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!):
        dimmer@paimon:~$ cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid1 sdc1[2] sdb1[1]
              1953513408 blocks [2/1] [_U]
              [====>................]  recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec
        unused devices: <none>
    even though I have set the kernel variables to reasonably quick values:
        dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min
        1000000
        dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max
        100000000
    I am using two 2.0 TB Western Digital hard disks, a WDC WD20EARS-00M and a WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned:
        dimmer@paimon:/sys$ sudo parted /dev/sdb
        GNU Parted 2.2
        Using /dev/sdb
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) p
        Model: ATA WDC WD20EARS-00M (scsi)
        Disk /dev/sdb: 2000GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  2000GB  2000GB  ext4
        (parted) unit s
        (parted) p
        Number  Start  End          Size         File system  Name  Flags
         1      2048s  3907028991s  3907026944s  ext4
        (parted) q
        dimmer@paimon:/sys$ sudo parted /dev/sdc
        GNU Parted 2.2
        Using /dev/sdc
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) p
        Model: ATA WDC WD20EARS-00J (scsi)
        Disk /dev/sdc: 2000GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt
        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  2000GB  2000GB  ext4
    I am beginning to think that I have a hardware problem; otherwise I can't imagine why the mdadm restore should be so slow. I benchmarked /dev/sdc using Ubuntu's Disk Utility GUI and the results looked normal, so I know sdc can write faster than this. I also had the same problem on a similar WD drive that I RMA'd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although no SMART values show them yet. Any ideas? Thanks.
    As requested, output of top sorted by CPU usage (notice there is ~0 CPU usage; iowait is also zero, which seems strange):
        top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30
        Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 3096304k total, 1482164k used, 1614140k free, 617672k buffers
        Swap: 1526132k total, 0k used, 1526132k free, 535416k cached
          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+   COMMAND
           45 root     20   0     0    0    0 S    0  0.0  2:17.02  scsi_eh_0
            1 root     20   0  2808 1752 1204 S    0  0.1  0:00.46  init
            2 root     20   0     0    0    0 S    0  0.0  0:00.00  kthreadd
            3 root     RT   0     0    0    0 S    0  0.0  0:00.02  migration/0
            4 root     20   0     0    0    0 S    0  0.0  0:00.17  ksoftirqd/0
            5 root     RT   0     0    0    0 S    0  0.0  0:00.00  watchdog/0
            6 root     RT   0     0    0    0 S    0  0.0  0:00.02  migration/1
        ...
    dmesg errors, definitely looking like hardware:
        [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [202884.007015] ata5.00: failed command: FLUSH CACHE EXT
        [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
        [202884.013730]          res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout)
        [202884.033667] ata5.00: status: { DRDY }
        [202884.040329] ata5: hard resetting link
        [202889.400050] ata5: link is slow to respond, please be patient (ready=0)
        [202894.048087] ata5: COMRESET failed (errno=-16)
        [202894.054663] ata5: hard resetting link
        [202899.412049] ata5: link is slow to respond, please be patient (ready=0)
        [202904.060107] ata5: COMRESET failed (errno=-16)
        [202904.066646] ata5: hard resetting link
        [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [202905.849178] ata5.00: configured for UDMA/133
        [202905.849188] ata5: EH complete
        [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [203899.007096] ata5.00: failed command: IDENTIFY DEVICE
        [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
        [203899.013843]          res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout)
        [203899.041232] ata5.00: status: { DRDY }
        [203899.048133] ata5: hard resetting link
        [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [203899.826062] ata5.00: configured for UDMA/133
        [203899.826079] ata5: EH complete
        [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [204375.007421] ata5.00: failed command: IDENTIFY DEVICE
        [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
        [204375.014800]          res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)
        [204375.044374] ata5.00: status: { DRDY }
        [204375.051842] ata5: hard resetting link
        [204380.408049] ata5: link is slow to respond, please be patient (ready=0)
        [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [204384.449938] ata5.00: configured for UDMA/133
        [204384.449955] ata5: EH complete
        [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [204395.988140] ata5.00: failed command: IDENTIFY DEVICE
        [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
        [204395.988149]          res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)
        [204395.988151] ata5.00: status: { DRDY }
        [204395.988156] ata5: hard resetting link
        [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [204399.330487] ata5.00: configured for UDMA/133
        [204399.330503] ata5: EH complete
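    The timeouts and repeated link resets on ata5 do point at hardware: drive, cable, or controller port. Hedged next steps: run a SMART self-test on the member the resets refer to, and reseat or swap the SATA cable before condemning the disk (package and device names assume Ubuntu defaults):
        sudo apt-get install smartmontools
        sudo smartctl -a /dev/sdc        # check UDMA_CRC_Error_Count and Reallocated_Sector_Ct
        sudo smartctl -t long /dev/sdc   # results appear in smartctl -a once the test finishes
    A rising CRC error count usually implicates the cable or connector rather than the platters.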

    Read the article

  • Best Postfix spam RBL policy weight daemon?

    - by TRS-80
    I just heard about policyd-weight, so I did an apt-cache search policyd, which returns three options:
        policyd-weight
        postfix-policyd
        postfwd
    Which one is the best, and do you have any tips on setting them up? Our current setup is whitelister plus postgrey to greylist RBL'd hosts, then fail2ban them for 10 minutes if they have 10 failures, followed by content filtering (Kaspersky Anti-Spam). The content filtering is pretty good, but there's still a lot of spam that gets through the RBL greylisting.
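    Whichever daemon wins, they all hook into Postfix the same way, via a check_policy_service entry. A hedged sketch for policyd-weight (12525 is its usual default port; verify against /etc/policyd-weight.conf, and merge the restrictions line with whatever is already configured):
        sudo apt-get install policyd-weight
        sudo postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, check_policy_service inet:127.0.0.1:12525'
        sudo /etc/init.d/postfix reload
    Keeping check_policy_service after reject_unauth_destination is the usual advice, so the policy daemon can never accidentally permit relaying.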

    Read the article

  • Added autossh in rc.local, but the dynamic port forwarding won't work

    - by rankjie
    I am using Raspbian on my newly arrived Raspberry Pi, and decided to make it my own proxy server. Now I need to set up an SSH tunnel from the Pi to my Linode server and make it start automatically with the system. What I did: I added this line to /etc/rc.local:
        autossh -f theRemoteServer -N -D 5555 -L 1234:localhost:22
    After I reboot, I found that I can't use localhost:5555 as a SOCKS proxy. When I type ps -A | grep ssh, I can see autossh and ssh running:
        pi@raspberrypi ~ $ ps -A | grep ssh
        2018 ?        00:00:00 sshd
        2116 ?        00:00:00 autossh
        2119 ?        00:00:00 sshd
        2195 ?        00:00:00 sshd
        3173 ?        00:00:00 ssh
    (I've installed autossh, and the command works if I type it manually. I use passwordless key auth, so I don't have to enter a password.) Much appreciated, and sorry for my poor English.
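    A hedged explanation: rc.local runs as root, and root's SSH has neither pi's private key nor the server in its known_hosts, so the tunnel process exists but never authenticates. Running the same command as the user whose keys already work is a common fix:
        su - pi -c 'autossh -M 0 -f -N -D 5555 theRemoteServer'
    (-M 0 disables autossh's monitor port; pair it with -o ServerAliveInterval=30 -o ServerAliveCountMax=3 so dead links are still detected and restarted.)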

    Read the article
