Search Results

Search found 297 results on 12 pages for 'crontab'.

Page 9 of 12

  • rm failing inside cron script

    - by Nicholas
    I have a cron job calling a bash script which runs fine, except for one line inside it that is supposed to remove all files in a directory. The result of this line is always 'no such file or directory', even though I have verified (many times) that there are files in that directory. The line in question is simply: rm /dir1/dir2/dir3/* The script works fine when run manually in a terminal, so it must be something about how cron runs it. I've tried giving 'dir3' and all the files inside it every permission possible, so it shouldn't be a permission problem (the directory and files are also owned by the user). I've tried specifying 'SHELL=/bin/bash' inside the crontab. There is no sticky bit set and there is no alias on the rm command. Interestingly, changing the 'rm' command to 'ls' gives the same negative result (unless you remove the trailing '*', and then it works). What am I missing here?
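
    A minimal sketch of a crontab-plus-script setup that avoids the usual cron pitfalls (default /bin/sh, empty PATH, relative paths). The schedule, log path and cleanup script name are illustrative; the target directory is the asker's:

        # crontab sketch -- make the shell and PATH explicit
        SHELL=/bin/bash
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        30 2 * * * /home/user/cleanup.sh >> /tmp/cleanup.log 2>&1

        # /home/user/cleanup.sh (hypothetical)
        #!/bin/bash
        # expand the glob explicitly so an empty match never becomes a literal '*'
        shopt -s nullglob
        files=(/dir1/dir2/dir3/*)
        (( ${#files[@]} )) && rm -- "${files[@]}"

    Redirecting the job's output to a log file also captures the exact error cron sees, which usually pinpoints whether the path or the shell is at fault.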

    Read the article

  • cron not even sending local mail to /var/mail/

    - by Yang
    I'm using a very plain Ubuntu Server 9.04, and cron isn't delivering any mail to my /var/mail/USER (the file hasn't even been created). Here's my full crontab:

        # m h dom mon dow command
        15 * * * * $HOME/.cron/sync-bookmarks.bash

    If I add

        # m h dom mon dow command
        15 * * * * $HOME/.cron/sync-bookmarks.bash >& /tmp/log

    then I see the stdout and stderr in /tmp/log. I'm not (yet) interested in actual remote email delivery, just local delivery to the mail spool file. Why isn't mail working? Thanks in advance for any tips.
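
    One thing worth checking, sketched below: cron only hands job output to a local mail transfer agent, and a stock Ubuntu server may not have one installed at all, in which case nothing ever reaches the spool. The package choice and the MAILTO value are assumptions, not facts from the question:

        # is any MTA present to accept cron's mail?
        which sendmail || dpkg -l postfix exim4 2>/dev/null
        # if not, a local-only MTA is enough for spool delivery
        sudo apt-get install postfix        # choose "Local only" when prompted
        # an explicit recipient in the crontab doesn't hurt (cron mails the
        # job owner by default)
        MAILTO=yang
        15 * * * * $HOME/.cron/sync-bookmarks.bash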

    Read the article

  • How to use HTML markup tags inside Bash script

    - by CONtext
    I have a crontab entry and a simple bash script that emails me every so often with PHP, NGINX and MySQL errors from their log files. This is a simplified example:

        # /home/user/status.sh
        EMAIL="[email protected]"
        PHP_ERROR=`tail -5 /var/log/php-fpm/error.log`
        NGINX_ERROR=`tail -5 /var/log/nginx/error.log`
        MYSQL_ERROR=`tail /var/log/mysqld.log`
        DISK_SPACE=`df -h`
        echo "
        Today's server report:
        ==================================
        DISK_SPACE: $DISK_SPACE
        ---------------------------------
        MEMORY_USAGE: $MEMORY_USAGE
        -----------------------------------
        NGINX ERROR: $NGINX_ERROR
        -----------------------------------
        PHP ERRORS: $PHP_ERROR
        ------------------------------------
        MYSQL_ERRORS: $MYSQL_ERROR
        -------------------------------------
        " | mail -s "Server reports" $EMAIL

    I know this is very basic usage, but as you can see I am trying to separate the errors, and none of the HTML tags (including \n) have any effect. So my question is: is it possible to use HTML tags to format the text, and if not, what are the alternatives?
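
    As a sketch of one common alternative (not from the original post): mail(1) sends plain text by default, so HTML only renders if the message carries MIME headers. Piping a hand-built message to sendmail does that; the recipient address below is a hypothetical placeholder:

        EMAIL="user@example.com"              # hypothetical recipient
        {
          echo "To: $EMAIL"
          echo "Subject: Server reports"
          echo "MIME-Version: 1.0"
          echo "Content-Type: text/html; charset=UTF-8"
          echo
          echo "<h2>Today's server report</h2>"
          echo "<h3>Disk space</h3><pre>$(df -h)</pre>"
          echo "<h3>NGINX errors</h3><pre>$(tail -5 /var/log/nginx/error.log)</pre>"
          echo "<h3>PHP errors</h3><pre>$(tail -5 /var/log/php-fpm/error.log)</pre>"
        } | sendmail -t

    The plain-text alternative is to keep mail -s and rely on real newlines inside the quoted string (as the original script already does) rather than \n escapes, which echo does not interpret without -e.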

    Read the article

  • Linux: Schedule command to run once after reboot (RunOnce equivalent)

    - by Christopher Parker
    I'd like to schedule a command to run after reboot on a Linux box. I know how to make a command run consistently after every reboot with a @reboot crontab entry; however, I only want the command to run once. After it runs, it should be removed from the queue of commands to run. I'm essentially looking for a Linux equivalent of RunOnce in the Windows world. In case it matters:

        $ uname -a
        Linux devbox 2.6.27.19-5-default #1 SMP 2009-02-28 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
        $ bash --version
        GNU bash, version 3.2.48(1)-release (x86_64-suse-linux-gnu)
        Copyright (C) 2007 Free Software Foundation, Inc.
        $ cat /etc/SuSE-release
        SUSE Linux Enterprise Server 11 (x86_64)
        VERSION = 11
        PATCHLEVEL = 0

    Is there an easy, scriptable way to do this?
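
    A self-disarming @reboot job is one minimal sketch of that behaviour; the script path and flag file are hypothetical, and this is only one of several possible approaches:

        # crontab entry (stays in place, but the work happens only once):
        # @reboot /usr/local/bin/runonce.sh

        #!/bin/bash
        # runonce.sh -- run the real command exactly once, then disarm itself
        FLAG=/var/tmp/runonce.done
        if [ ! -e "$FLAG" ]; then
            /path/to/real-command && touch "$FLAG"
        fi

    An alternative sketch is to have the script delete its own crontab line after a successful run (crontab -l | grep -v runonce.sh | crontab -), which matches the "removed from the queue" wording more literally.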

    Read the article

  • Debian server doesn't free memory after backup

    - by stan31337
    I have a production server running Debian 6.0.6 Squeeze:

        # uname -a
        Linux debsrv 2.6.32-5-xen-amd64 #1 SMP Sun Sep 23 13:49:30 UTC 2012 x86_64 GNU/Linux

    Every day cron executes a backup script as root:

        # crontab -e
        0 5 * * * /root/sites_backup.sh > /dev/null 2>&1

        # /root/sites_backup.sh
        #!/bin/bash
        str=`date +%Y-%m-%d-%H-%M-%S`
        tar pzcf /home/backups/sites/mysite-$str.tar.gz /var/sites/mysite/public_html/www
        mysqldump -u mysite -pmypass mysite | gzip -9 > /home/backups/sites/mysite-$str.sql.gz
        cd /home/backups/sites/
        sha512sum mysite-$str* > /home/backups/sites/mysite-$str.tar.gz.DIGESTS
        cd ~

    Everything works perfectly, but I notice that Munin's memory graph shows an increase in cache and buffers after the backup. Then I download the backup files and delete them, and after the deletion Munin's memory graph returns cache and buffers to the state they were in before the backup. Here's the Munin graph: unfortunately I don't have enough rep to add an image here, so here's a link:
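
    For what it's worth, the pattern described is exactly what the Linux page cache does: reading the site tree and writing the tar/dump leaves reclaimable cache behind, which the kernel frees as soon as something else needs the memory. A quick sketch for confirming that it is cache rather than a leak:

        # compare reclaimable cache before and after the backup run
        free -m
        grep -E 'Cached|Buffers' /proc/meminfo
        # to demonstrate the point (not recommended as a routine step), the
        # cache can be dropped by hand and the graph should fall back at once
        sync && echo 3 > /proc/sys/vm/drop_caches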

    Read the article

  • Unix shell script to monitor a process [on hold]

    - by SIJAR
    I have to make sure that one process on the server never goes down, so I'm thinking cron vs. daemon. Please give me a good example of a Unix shell script that runs as a daemon process. I'm trying to avoid the nonsense of permission issues with crontab, and there isn't much good material on the web about this. Will this daemon process start automatically after a server/system restart? If not, how can I achieve that?
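
    A minimal watchdog sketch along those lines (the monitored binary name is hypothetical). Whether it starts at boot depends on wiring it into the distribution's init system or a @reboot crontab entry, which is an assumption left open here:

        #!/bin/bash
        # watchdog.sh -- keep one process alive; intended to be started once
        # at boot (init script or "@reboot /usr/local/bin/watchdog.sh" in cron)
        while true; do
            if ! pgrep -x myserver > /dev/null; then
                /usr/local/bin/myserver &      # relaunch the monitored process
            fi
            sleep 30
        done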

    Read the article

  • hudson/jenkins: help needed to get started with customization work

    - by user64204
    I would like to customize Jenkins by adding links to the left-hand side panel and using the pages behind those links to serve some custom content in place of the jobs/views table displayed by default. I managed to add links to the sidebar using the sidebar-links plugin. Now I'm trying to see how to replace the content of the <td id="main-panel"> element with some custom content. The custom content is generated by some PHP scripts which ideally should be called by Jenkins every time the custom pages are requested, though if that is too complicated I can either generate static content for Jenkins to serve by calling my PHP scripts from a crontab, or see whether the calls to the PHP scripts can be made by Apache itself before the page requests are handed to Jenkins. I'm not sure writing a plugin is the best way to proceed, and I would like to hear your thoughts on how I should implement this.

    Read the article

  • Make Windows Task Scheduler alert me on fail

    - by acidzombie24
    I have an automated script that pulls backups from my website to my local computer. Once my server was down; another time I accidentally moved my script. How do I make Windows Task Scheduler tell me when the script fails (or doesn't run / isn't found)? I don't care whether it's a prompt, an email or something that appears on my desktop; I just want to be notified when something goes wrong. On my server, crontab emails me about errors, which is great. I want something like that on my Windows 7 local computer.

    Read the article

  • Schedule EC2 instances

    - by mattcodes
    I want to be able to schedule some simple EC2 EBS-backed instances (already configured) to start at 8am and stop at 4pm, since that is the only time I use my integration server. Is there a simple service (paid or not) that I can use to handle this? All I've found so far is to buy a cheap VPS at Linode or somewhere, install the EC2 tools and schedule the calls via crontab, but what a PITA that is. On the other end is something enterprisey like RightScale, which is not my idea of simple.
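
    For reference, the VPS-plus-crontab route the asker dismisses is only a couple of lines with the classic EC2 API tools; the instance ID below is a made-up placeholder:

        # assumes JAVA_HOME, EC2_HOME, EC2_PRIVATE_KEY and EC2_CERT are set in
        # the crontab environment; instance ID is a placeholder
        0 8  * * 1-5 ec2-start-instances i-12345678
        0 16 * * 1-5 ec2-stop-instances  i-12345678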

    Read the article

  • Linux script to find time difference and send an email if need

    - by Gnanam
    Hi, I'm not an expert at writing shell scripts, but I'm looking for a very specific solution. OS: CentOS release 5.2 (Final). I have a standalone Java process which keeps writing (all System.out.println output) to a log file. For some unknown reason this Java process stops working at some point on my server, and eventually the log writing stops as well. I want a script which compares the last-modified date and time of the log file with the current date and time on the server. If the difference exceeds 5 minutes, I want to send an email immediately to my recipient list; that way I'll know when the Java process has stopped working. I'll put this script in crontab and run it every minute, so that the whole process is automated. Log file location: /usr/local/logs/standalone.log
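
    A sketch of that check (the recipient address is a placeholder; stat -c %Y is the GNU coreutils way to read a file's mtime in epoch seconds, which is available on CentOS):

        #!/bin/bash
        # check_standalone.sh -- intended to run from cron every minute
        LOG=/usr/local/logs/standalone.log
        THRESHOLD=300                                   # 5 minutes in seconds
        age=$(( $(date +%s) - $(stat -c %Y "$LOG") ))
        if [ "$age" -gt "$THRESHOLD" ]; then
            echo "standalone.log last modified ${age}s ago" \
                | mail -s "Java standalone appears to have stopped" admin@example.com
        fi

    Paired with a crontab entry such as * * * * * /usr/local/bin/check_standalone.sh, this gives the one-minute polling the asker describes.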

    Read the article

  • Successful su for user by root in /var/log/auth.log

    - by grs
    I have these sorts of entries in my /var/log/auth.log:

        Apr  3 12:32:23 machine_name su[1521]: Successful su for user1 by root
        Apr  3 12:32:23 machine_name su[1654]: Successful su for user2 by root
        Apr  3 12:32:24 machine_name su[1772]: Successful su for user3 by root

    The situation: all users are real accounts in /etc/passwd; none of the users has its own crontab; all of those users logged in to the machine some time earlier via SSH or NoMachine, anywhere from a few minutes to a few hours before; no cron jobs are scheduled to run at that time, and anacron is removed. I can see similar entries on other days and at other times. The common factor is that the users are logged in when the entries appear; they do not appear during login, but some time afterwards. This machine has a similar setup to a few others, but it is the only one where I see these entries. What causes them? Thanks

    Read the article

  • "ssh_exchange_identification: Connection closed by remote host lost connection" when running cron job

    - by grautur
    I have a Ruby script that connects to a remote machine via ssh and executes a command. The script runs fine when I just run it in my terminal. In my crontab I have

        1 * * * * /bin/bash -l -c 'ruby myfile.rb'

    and if I run /bin/bash -l -c 'ruby myfile.rb' by hand, everything executes fine. But when cron itself executes the job, I get an "ssh_exchange_identification: Connection closed by remote host" error. What's the cause of this? How do I fix it?

    Read the article

  • Cron Jobs unable to deliver email error report

    - by root
    I am sure I have the right syntax, yet I am still unable to receive report emails at my address. My OS is CentOS 6.4. My crontab is:

        MAILTO="[email protected]"
        * * * * * /usr/bin/php5 /home/myusername/public_html/cron.php /post/find_submit_test/1/

    The address [email protected] works fine, and I tested sendmail from SSH, which also works fine, but the cron reports are never delivered. I checked WHM for notification settings and couldn't find anything relevant there. Please advise me how to fix this. Thanks
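
    Two quick checks worth running in this situation, sketched with the usual CentOS log locations (an assumption about this particular box):

        grep CRON /var/log/cron | tail    # did cron actually run the job and hand off mail?
        tail -50 /var/log/maillog         # did the local MTA accept, defer or bounce it?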

    Read the article

  • Hourly CRON task running more frequently than one hour

    - by Justin
    I have a cron task that calls a special PHP script via wget. Here is the crontab entry:

        0 * * * * wget http://www....

    It works perfectly for several days, running on the hour. However, after a few days the cron job starts to be called several times an hour. I have never seen cron drift like this, so I imagine it can't really be a cron issue; however, the logs of the script that is called clearly show it running several times an hour. Server details: Ubuntu Lucid, Apache, MySQL, PHP5; the time shown at the command line is correct; the server is set up to sync with an NTP server. In order for the script to run it must be passed a unique 50-character hash key in the URL, so the script isn't being called from any other source accidentally. What might cause cron to drift like this?
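
    A sketch of the places a duplicate of that entry could be hiding, which is the usual explanation when an hourly job fires more often than scheduled (none of these locations are confirmed by the question itself):

        crontab -l                              # this user's crontab
        sudo ls /var/spool/cron/crontabs/       # other users' crontabs
        cat /etc/crontab
        ls /etc/cron.d/ /etc/cron.hourly/       # system-wide entries and hourly scripts
        grep CRON /var/log/syslog | grep wget   # timestamps of every actual invocation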

    Read the article

  • What does "every two minutes" mean in cron?

    - by Ambrose
    I've got two scripts in cron set to run every two minutes with */2. The thing is, they're out of step: one runs at minutes 1, 3, 5, 7, 9, etc. and the other at 0, 2, 4, 6, 8. This is not a mission-critical problem, but it means I've got two status reports, one slightly staler than the other. What does cron do exactly? Does it run the first one in crontab order and wait for it to finish before running the second? Is there any way I can make them run at the same time, or as close together as possible?
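
    Two sketches of ways to force the reports into step (the script paths are placeholders): chain them in a single entry so ordering is guaranteed, or give both the same explicit minute list so the schedules cannot diverge:

        # option 1: one entry, strict ordering
        */2 * * * * /path/to/first.sh && /path/to/second.sh

        # option 2: identical explicit minute lists
        0-58/2 * * * * /path/to/first.sh
        0-58/2 * * * * /path/to/second.sh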

    Read the article

  • Bash script getting automatically deleted from Ubuntu 12.04 Server?

    - by Kris Anderson
    I'm running a bash script on an Ubuntu 12.04 server through cron. The script works fine for a few weeks (it runs daily backups of websites and MySQL databases and copies them to Amazon S3). However, twice now I've noticed that the backups stopped happening, and both times the backup script (backupscript.sh) located in my home folder was no longer there. No one else has access to this server, so nothing was changed manually and no one deleted the file by mistake. The cron job (in /etc/crontab) still references the script, but the script itself has disappeared. What could cause this to happen? Does Ubuntu delete the script if it runs into some sort of error?
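
    Cron itself never deletes the scripts it runs; something else is unlinking the file. One way to catch the culprit is an audit watch, sketched here on the assumption that auditd can be installed (the home directory path is illustrative):

        sudo apt-get install auditd
        # watch the file for writes, attribute changes and deletion
        sudo auditctl -w /home/username/backupscript.sh -p wa -k backup-script
        # after it disappears again, see which process touched it
        sudo ausearch -k backup-script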

    Read the article

  • aws s3 works with script but not on cron

    - by user3800017
    Guys, my first post! Hope it's not the last. I have a bunch of servers on the AWS EC2 platform. I made a simple script to back up my custom logs to their S3 storage bucket. The problem is that the script works fine by hand, but when I add it to the crontab the script executes except for the s3 sync/mv part! Here is my code:

        NOW=$(date "+%b_%d_%Y")
        MY_HOSTNAME=`uname -n`
        mv /opt/req/req* /opt/req/bkup/
        mv /opt/response/res* /opt/req/bkup/
        cd /opt/req/bkup/
        tar -cvf ${MY_HOSTNAME}_req_bkup_${NOW}.tar re*
        rm *.txt
        aws s3 mv /opt/req/bkup/* s3://req
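
    When only the aws step fails under cron, the usual culprits are cron's minimal PATH (the aws binary often lives in /usr/local/bin) and credentials that are only configured for the interactive user. A sketch with both made explicit; the binary path and home directory are assumptions:

        # in the crontab, before the job line:
        PATH=/usr/local/bin:/usr/bin:/bin

        # in the script: absolute path to the CLI, HOME set so ~/.aws is found;
        # note that the glob form "aws s3 mv /opt/req/bkup/*" expands to many
        # arguments, which aws s3 mv does not accept (it takes one source and
        # one destination), whereas --recursive moves the whole directory
        export HOME=/home/ec2-user
        /usr/local/bin/aws s3 mv /opt/req/bkup/ s3://req --recursive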

    Read the article

  • Privileged command as part of cronjob

    - by user42756
    Hi, I'm facing a weird problem on a Unix-based machine. Here is the story: I have a personal username/password on the machine with limited privileges. Whenever I need to execute certain commands I have to switch user with the su command and then run them normally. Now I need to add a cron job that uses such privileged commands, so I added it to the crontab of the user I su to, in order to have access to those commands. Strangely, these commands fail when run as a cron job, although when I execute them directly from the shell (after su) they work seamlessly. Why does this happen? Why do these commands not work as part of cron jobs? Thank you
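
    The usual difference is the environment: cron gives the job a minimal, non-login environment (short PATH, no profile), while an su shell carries the target user's full environment. One workaround sketch, with a hypothetical script path, schedule and log file, is to start the cron command through a login shell and capture its output:

        0 3 * * * /bin/bash -l -c '/usr/local/bin/privileged-task.sh' >> /var/tmp/privileged-task.log 2>&1

    The log file then shows the exact error the commands produce under cron, which normally points at a missing PATH entry or environment variable.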

    Read the article

  • How to find what is written to filesystem under linux

    - by bardiir
    How can I find out which processes write to a specific disk over time? In my particular case I have a little home server running 24/7, and I added a script to the crontab to spin down all drives that are not in use (no change in /proc/diskstats for 15 minutes). But my system disk never spins down. I suspect the logs, but it's probably not only the logs writing to the filesystem on the system disk, and I don't want to go to the trouble of moving the log files elsewhere only to find out the disk still doesn't spin down and there's nothing I can do about it.
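
    Two standard ways to answer "who is writing?", sketched below; both need root, and block_dump is a kernel knob that temporarily logs every block write to the kernel log:

        iotop -obt                           # live, timestamped list of processes doing I/O
        echo 1 > /proc/sys/vm/block_dump     # start logging block writes
        dmesg | grep -i write | tail -20     # which process wrote to which device
        echo 0 > /proc/sys/vm/block_dump     # switch the logging off again
        # note: pause syslog while block_dump is on, or its own log writes
        # show up in the results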

    Read the article

  • Do best-practices say to restrict the usage of /var to sudoers?

    - by NewAlexandria
    I wrote a package and would like to use /var to persist some data; the data I'm storing could perhaps even be thought of as an addition to /var/db. The pattern I observe is that files in /var/db and its surroundings are owned by root. The primary (intended) use of the package is filtering cron jobs, meaning you would need permission to edit the crontab anyway. Should I presume a sudo install of the package? Should I have the package gracefully degrade to a /usr subdirectory, and if so, which one? If I 'opinionate' that any non-sudo install requires a configrc (with paths), where should the package look (presuming a shared-host environment) for that config file? Incidentally, this package is a Ruby gem, and you can find it here.

    Read the article

  • Avoid putty ssh terminal to crash when disconnecting from server

    - by JBoy
    I connect via SSH to a remote 'live' server where I have some bash scripts automated via the crontab. When an error happens in one of the automation scripts on the server, the connection to the server is killed. That is fine with me, but the problem is that PuTTY then closes the entire window, which is behaviour I don't want. I have looked all over the web, but unfortunately the PuTTY site does not have a support page, and I found nothing. Under PuTTY's options I have gone through all the menus, expanding every section, but I still can't find the right setting; I would expect it to be under Window > Behaviour. Do you have any advice? Thanks

    Read the article

  • can I use @reboot in cron.d files?

    - by fschwiet
    I want to run a job with cron on reboot as a particular user. I have been able to do this successfully by using crontab to write to /var/spool/cron/crontabs/username with something like:

        @reboot ./run.sh >>~/tracefile 2>&1

    However, I want to use /etc/cron.d/filename. Cron jobs in that file require an extra column to indicate which user they run as, so I use:

        @reboot wwwuser ./run.sh >>~/tracefile 2>&1

    This doesn't seem to work. Should I be able to use @reboot with a username in a cron.d file?
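
    @reboot is accepted in /etc/cron.d on Vixie-cron-style systems, but the relative ./run.sh and the ~ are fragile there because cron starts the job in / with a minimal environment. A sketch of the same entry with everything spelled out (the file name and home directory are assumptions):

        # /etc/cron.d/runjob
        SHELL=/bin/bash
        @reboot wwwuser /home/wwwuser/run.sh >> /home/wwwuser/tracefile 2>&1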

    Read the article

  • How to autostart service in ANY linux?

    - by user329115
    I need to install the dumbest possible service (a binary) and have it reliably run as the current user at boot (or login, whatever) on as many platforms as possible (of the aging point-of-sale type). The app monitors archives generated by another app in the user session. Startup alternatives considered:

        - init.d
        - @reboot in crontab
        - a .desktop file in ~/.config/autostart
        - a myriad of other solutions, including .profile and .bashrc

    All of the above break down at some point. The problems stem from not wanting to run as root (I want the generated files to be user-accessible) and from not having a way to reliably get the current user name under sudo on all platforms. Ideally not even sudo can be assumed to be available. Hey, I just want to run something at boot and I have "root" power to do so; Windows gets the job done easily enough. This isn't rocket science, is it?
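
    Of the options listed, a @reboot entry in the target user's own crontab is the one that needs neither root nor sudo once installed. A sketch of installing it non-interactively, run as that user (the binary path is a placeholder):

        # append a @reboot line to the user's existing crontab, creating one if needed
        ( crontab -l 2>/dev/null; echo '@reboot /opt/monitor/monitord' ) | crontab -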

    Read the article

  • starting a command automatically [closed]

    - by aaa
    I'm playing with Debian on a Raspberry Pi. I have a 3G dongle modem on the system, and there is software called sakis3g to connect to the internet. I want the system to connect to the internet automatically every time it starts; it takes about 30-40 seconds to get connected. I copied sakis3g to the /sbin folder. The command has to be run as root: sakis3g connect parameters blah blah blah. I tried putting it in /etc/rc.local and rebooted the system, but no luck. I also tried putting it in the crontab as: @reboot sakis3g connect parameters blah blah blah. What am I missing here?
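
    One sketch of an /etc/rc.local fragment that often helps here: give the dongle a moment to register, call the binary by absolute path, background the call so it cannot block the boot, and log the output so failures are visible. The connection parameters stay as the asker's placeholder, and the log path is illustrative:

        ( sleep 30 && /sbin/sakis3g connect parameters blah blah blah ) >> /var/log/sakis3g-boot.log 2>&1 &
        exit 0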

    Read the article

  • (date) script for freenas

    - by malgaboy
    My web server is running FreeNAS 8.0.4. Currently in /etc/crontab there is this daily task:

        find /mnt/vol1/1 -type f -mtime +16 -exec rm -R {} \;

    This lets me delete files older than 16 days; in fact, every night a new file is added to the /mnt/vol1/1 folder. But now I want a script that checks the folder and keeps only the two newest files, without taking the age in days into consideration. Thanks in advance (sorry for my bad English).
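
    A minimal sketch of that rotation, assuming the nightly files have no newlines or leading/trailing spaces in their names (true of typical dated backup files):

        # keep only the two most recently modified files in the folder
        cd /mnt/vol1/1 && ls -t | tail -n +3 | while read -r f; do rm -f -- "$f"; done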

    Read the article
