Search Results

Search found 6550 results on 262 pages for 'john sh'.


  • tail -f and then exit on matching string

    - by Patrick
    I am trying to configure a startup script which will start Tomcat, monitor catalina.out for the string "Server startup", and then run another process. I have been trying various combinations of tail -f with grep and awk, but haven't got anything working yet. The main issue seems to be forcing the tail to die after grep or awk has matched the string. I have simplified it to the following test case. test.sh is listed below:

        #!/bin/sh
        rm -f child.out
        ./child.sh > child.out &
        tail -f child.out | grep -q B

    child.sh is listed below:

        #!/bin/sh
        echo A
        sleep 20
        echo B
        echo C
        sleep 40
        echo D

    The behavior I am seeing is that grep exits after 20 seconds, but the tail takes a further 40 seconds to die. I understand why this is happening: tail only notices that the pipe is gone when it writes to it, which only happens when data gets appended to the file. This is compounded by the fact that tail seems to be buffering the data and outputting the B and C lines as a single write (I confirmed this with strace). I have attempted to fix that with solutions I found elsewhere, such as the unbuffer command, but that didn't help. Anybody got any ideas for how to get this working the way I expect? Or ideas for waiting for a successful Tomcat start (I have thought about waiting on a TCP port to know it has started, but suspect that would become more complex than what I am trying now)? I have managed to get it working with awk doing a "killall tail" on match, but I am not happy with that solution. Note I am trying to get this to work on RHEL4.
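
    One way to sidestep the lingering tail entirely is to poll the file with grep in a loop; a minimal sketch (the interval and the follow-on command are illustrative):

        #!/bin/sh
        # poll catalina.out until the marker appears; no tail process to clean up
        while ! grep -q "Server startup" catalina.out; do
            sleep 2
        done
        echo "Tomcat is up"    # run the follow-on process here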


  • ubuntu apache subdomains pointing to main domain

    - by Suhail Thakur
    I have an Ubuntu server with Apache set up. The main domain on the server is a subdomain, app.example.com, which is working fine. Now if I set up john.app.example.com, it also displays the web page of app.example.com. The DocumentRoot for john.app.example.com is different, yet it still shows the page for app.example.com. How can I resolve this, so that john.app.example.com displays the pages that are in its own DocumentRoot?
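
    When no <VirtualHost> block matches the requested hostname, Apache serves the first (default) vhost, which would produce exactly this behaviour. A minimal name-based vhost sketch, assuming the Debian/Ubuntu apache2 layout (paths, port, and file name are assumptions):

        # /etc/apache2/sites-available/john.app.example.com
        <VirtualHost *:80>
            ServerName john.app.example.com
            DocumentRoot /var/www/john
        </VirtualHost>

    Enable it with a2ensite john.app.example.com and reload Apache; on Apache 2.2, NameVirtualHost *:80 must also be declared once for name-based matching to happen at all.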


  • Copying email with qmail and Plesk

    - by Greg
    I need to keep a copy of all outgoing and incoming email (for a single domain if possible) using qmail or Plesk. I can't recompile qmail, so qmailtap is out of the question, as is setting QUEUE_EXTRA in extra.h. I'm pretty sure it should be possible with Plesk's mailmng utility, aka Mail Handlers, but I'm having trouble getting them to work. I've registered 2 hooks:

    incoming hook:

        ./mailmng --add-handler --handler-name=incoming --recipient-domain=example.com \
            --executable=/xxx/incoming.sh --context=/xxx/incoming/ --hook=before-local

    incoming.sh:

        #!/bin/bash
        # The email is passed on stdin - grab it to a variable
        e=`cat -`
        # $1 = context (/xxx/incoming)
        # $3 = recipient ([email protected])
        # Create /xxx/incoming/[email protected]
        mkdir -p $1$3
        # Save the email to /xxx/incoming/[email protected]/0123456789.txt
        echo "$e" > $1$3/`date +%s%N`.txt
        # Echo PASS to stderr
        echo 'PASS' >&2
        # Echo the email to stdout
        echo "$e"

    outgoing hook:

        ./mailmng --add-handler --handler-name=outgoing --sender-domain=holidaysplease.com \
            --executable=/xxx/outgoing.sh --context=/xxx/outgoing/ --hook=before-remote

    The outgoing.sh file is the same as incoming.sh, except with $3 (recipient) replaced by $2 (sender). The incoming hook does work, but saves 2 copies of each email - one before and one after SpamAssassin has run. The outgoing hook doesn't seem to get called at all. So finally, my questions are: How can I make the incoming hook save only a single copy (preferably after SpamAssassin has run)? How can I get the outgoing hook to work?
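
    For the duplicate-copy problem, one approach that stays entirely inside incoming.sh (a sketch - it assumes nothing Plesk-specific beyond the arguments already used above) is to name the saved file after the Message-ID, so whichever invocation runs second finds the copy already on disk and skips it:

        #!/bin/bash
        e=`cat -`
        # derive a stable file name from the Message-ID header, falling back to
        # a hash of the whole message if the header is missing
        msgid=`printf '%s\n' "$e" | grep -im1 '^message-id:'`
        [ -z "$msgid" ] && msgid="$e"
        key=`printf '%s' "$msgid" | md5sum | cut -d' ' -f1`
        mkdir -p "$1$3"
        # save only if this message hasn't been stored yet
        [ -e "$1$3/$key.txt" ] || printf '%s\n' "$e" > "$1$3/$key.txt"
        echo 'PASS' >&2
        printf '%s\n' "$e"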


  • How to add LDAP user to existing local group in RHEL?

    - by Highway of Life
    I'm attempting to add some of our LDAP users to a locally defined group on our RHEL server, but I get an error stating that the LDAP user is not found in /etc/passwd. What would be the best way to allow LDAP users to be added to local groups? My feeling is that this must be done manually. I could edit /etc/group and add the LDAP user to the group's member list. Would that be ideal?

        [server]# id apache
        uid=409(apache) gid=409(apache) groups=409(apache) context=user_u:system_r:unconfined_t:s0
        [server]# id john.doe
        uid=11389(john.doe) gid=6097(ABC_Corporate_US) groups=6097(ABC_Corporate_US) context=user_u:system_r:unconfined_t:s0
        [server]# /usr/sbin/usermod -a -G apache john.doe
        usermod: john.doe not found in /etc/passwd

    OS: RHEL (Red Hat Enterprise Linux Server release 5.3 (Tikanga)). Note: updating the OS on this machine is not an option.
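
    usermod(8) only edits local accounts, but group membership itself can live in /etc/group even when the user comes from LDAP, so editing the group entry is a legitimate fix. Two sketches of the same change:

        # interactively, with proper locking - change "apache:x:409:" to
        # "apache:x:409:john.doe":
        vigr
        # or non-interactively; gpasswd resolves the user through NSS, so an
        # LDAP user should be accepted:
        gpasswd -a john.doe apache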


  • mac cron can't use the shell correctly

    - by carneades
    I've set up cron to run a simple hello-world shell script, but it's giving me an error that Google isn't helping me resolve. I've got to be missing something really simple! Here's my crontab:

        [email protected]
        SHELL=/bin/bash
        30 * * * * * $HOME/hello.sh

    Here's hello.sh:

        #!/bin/bash
        echo HELLO WORLD!

    I get this error email:

        /bin/bash: 555: command not found

    I have tried setting SHELL to /bin/sh, but it makes no difference; I still get an analogous error message.
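
    Worth noting: a user crontab takes five time fields, so the sixth * above is read as the start of the command and then glob-expanded by the shell, which is how an arbitrary name like 555 can end up being executed. A corrected five-field entry:

        [email protected]
        SHELL=/bin/bash
        30 * * * * $HOME/hello.sh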


  • Archiving mails with postfix: how to filter mails?

    - by Tronic
    I want to implement the following scenario: we use a Postfix mail server. To archive all old and new mail, I want to set up a second Postfix instance on our file server and create a single mailbox, "archive". Every mail then gets forwarded automatically as a BCC to this mailbox. Now I want to create different folders in a Maildir structure and have the server move each mail to the right subfolder of the mailbox based on its sender or recipient. E.g. when we get a mail for one of our employees named "John Doe" at john[email protected], the mail should be moved to "Inbox/John Doe Incoming"; likewise, when John Doe sends a mail, the folder would be "Inbox/John Doe Outgoing". How can I implement this filtering behaviour? I have heard of Procmail and Maildrop. Which of the two would you prefer? Which is easier to configure? Are there any out-of-the-box solutions? Thanks in advance!
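
    Either tool can do this at delivery time; a minimal procmail sketch (the address and folder names are illustrative, and Maildir destinations end with a slash):

        # ~/.procmailrc for the "archive" mailbox
        MAILDIR=$HOME/Maildir
        # mail addressed to John Doe
        :0
        * ^(To|Cc):.*john\.doe@example\.com
        .Inbox.John-Doe-Incoming/
        # mail sent by John Doe
        :0
        * ^From:.*john\.doe@example\.com
        .Inbox.John-Doe-Outgoing/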


  • Both servers running keepalived become master

    - by pcent
    After a network failure, both servers running keepalived become MASTER, and when the network is reestablished, both keep the MASTER state. What could be causing this? Edit: one more piece of information that might be relevant - each server has two NICs. Here is the virtual instance configuration:

        vrrp_instance VGAPP {
            interface eth0
            virtual_router_id 61
            state BACKUP
            nopreempt
            priority 50
            advert_int 3
            virtual_ipaddress {
                10.26.57.61/24
            }
            track_interface {
                eth0
            }
            track_script {
                jboss_check
                #tomcat_check
                #interface_check
                #interface_check02
            }
            notify_master "/opt/keepalived/scripts/set_state.sh MASTER"
            notify_backup "/opt/keepalived/scripts/set_state.sh BACKUP"
            notify_fault "/opt/keepalived/scripts/set_state.sh FAULT"
            notify_stop "/opt/keepalived/scripts/set_state.sh STOPPED"
        }
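
    A persistent dual-master split usually means the VRRP advertisements themselves are not getting through: a node only drops back to BACKUP once it sees a peer's adverts again. A quick check on each node (VRRP is IP protocol 112, and eth0 matches the config above):

        tcpdump -ni eth0 ip proto 112

    If neither node sees the other's advertisements after the network returns, the place to look is multicast filtering or a firewall between the two, not the keepalived configuration.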


  • Strange strace and setuid behaviour: permission denied under strace, but not when run normally.

    - by Autopulated
    This is related to this question. I have a script (fix-permissions.sh) that fixes some file permissions:

        #!/bin/bash
        sudo chown -R person:group /path/
        sudo chmod -R g+rw /path/

    And a small C program to run this, which is setuid'd:

        #include "sys/types.h"
        #include "unistd.h"
        int main(){
            setuid(geteuid());
            return system("/path/fix-permissions.sh");
        }

    Directory:

        -rwsr-xr-x 1 root root 7228 Feb 19 17:33 fix-permissions
        -rwx--x--x 1 root root  112 Feb 19 13:38 fix-permissions.sh

    If I do this, everything seems fine, and the permissions do get correctly fixed:

        james $ sudo su someone-else
        someone-else $ ./fix-permissions

    But if I use strace, I get:

        someone-else $ strace ./fix-permissions
        /bin/bash: /path/fix-permissions.sh: Permission denied

    It's interesting to note that I get the same permission-denied error with an identical setup (permissions, C program) but a different script, even when not using strace. Is this some kind of heuristic magic behaviour in setuid that I'm uncovering? How should I figure out what's going on? The system is Ubuntu 10.04.2 LTS, Linux 2.6.32.26-kvm-i386-20101122 #1 SMP.
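
    Two documented behaviours would combine to produce exactly this, no magic required: tracing a binary as a non-root user makes the kernel ignore its setuid bit, so under strace the wrapper keeps the caller's uid; and executing a #! script requires the interpreter to read the file, which mode --x alone does not allow. A sketch for confirming both (paths as above):

        # without strace the wrapper runs as root, and bash can read the script;
        # under strace it runs as someone-else, for whom the script is --x only:
        su someone-else -c 'cat /path/fix-permissions.sh'    # Permission denied
        # granting read access to "other" should remove the strace-only failure:
        chmod o+r /path/fix-permissions.sh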


  • placing shell script under systemd control

    - by Calvin Cheng
    Assuming I have a shell script like this:

        #!/bin/sh
        # cherrypy_server.sh
        PROCESSES=10
        THREADS=1        # threads per process
        BASE_PORT=3035   # the first port used
        # you need to make the PIDFILE dir and insure it has the right permissions
        PIDFILE="/var/run/cherrypy/myproject.pid"
        WORKDIR=`dirname "$0"`
        cd "$WORKDIR"

        cp_start_proc() {
            N=$1
            P=$(( $BASE_PORT + $N - 1 ))
            ./manage.py runcpserver daemonize=1 port=$P pidfile="$PIDFILE-$N" threads=$THREADS request_queue_size=0 verbose=0
        }

        cp_start() {
            for N in `seq 1 $PROCESSES`; do
                cp_start_proc $N
            done
        }

        cp_stop_proc() {
            N=$1
            #[ -f "$PIDFILE-$N" ] && kill `cat "$PIDFILE-$N"`
            [ -f "$PIDFILE-$N" ] && ./manage.py runcpserver pidfile="$PIDFILE-$N" stop
            rm -f "$PIDFILE-$N"
        }

        cp_stop() {
            for N in `seq 1 $PROCESSES`; do
                cp_stop_proc $N
            done
        }

        cp_restart_proc() {
            N=$1
            cp_stop_proc $N
            #sleep 1
            cp_start_proc $N
        }

        cp_restart() {
            for N in `seq 1 $PROCESSES`; do
                cp_restart_proc $N
            done
        }

        case "$1" in
            "start") cp_start ;;
            "stop") cp_stop ;;
            "restart") cp_restart ;;
            *) "$@" ;;
        esac

    From this script we can essentially do 3 things: start the CherryPy server by calling ./cherrypy_server.sh start, stop it with ./cherrypy_server.sh stop, and restart it with ./cherrypy_server.sh restart. How would I place this shell script under systemd's control as a cherrypy.service file (with the obvious goal of having systemd start up the CherryPy server when the machine has been rebooted)? Reference systemd service file example here - https://wiki.archlinux.org/index.php/Systemd#Using_service_file
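
    A minimal unit sketch for a script like this (the install path is an assumption). Type=oneshot with RemainAfterExit=yes fits here because the script spawns several self-daemonizing processes rather than one main PID that systemd could supervise:

        # /etc/systemd/system/cherrypy.service
        [Unit]
        Description=CherryPy server pool
        After=network.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/opt/myproject/cherrypy_server.sh start
        ExecStop=/opt/myproject/cherrypy_server.sh stop

        [Install]
        WantedBy=multi-user.target

    After installing the file, systemctl enable cherrypy.service makes it start at boot.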


  • Command does not execute in crontab while command itself works just fine

    - by fuzzybee
    I have this script from Colin Johnson on GitHub - https://github.com/colinbjohnson/aws-missing-tools/tree/master/ec2-automate-backup It seems great. I have modified it to send email to myself every time an EBS snapshot is created or deleted. The following works like a charm:

        ec2-automate-backup.sh -v "vol-myvolumeid" -k 3

    However, it does not execute at all as part of my crontab (I didn't receive any emails):

        #some command that got commented out
        */5 * * * * ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;
        * * * * * date >> /root/logs/crontab.log;
        */5 * * * * date >> /root/logs/crontab2.log

    Please note that the 2nd and 3rd entries execute just fine, as I can see the date and time in the log files. What could I have missed here? The full ec2-automate-backup.sh is as follows:

        #!/bin/bash -
        # Author: Colin Johnson / [email protected]
        # Date: 2012-09-24
        # Version 0.1
        # License Type: GNU GENERAL PUBLIC LICENSE, Version 3
        #
        #confirms that executables required for succesful script execution are available
        prerequisite_check()
        {
            for prerequisite in basename ec2-create-snapshot ec2-create-tags ec2-describe-snapshots ec2-delete-snapshot date
            do
                #use of "hash" chosen as it is a shell builtin and will add programs to hash table, possibly speeding execution. Use of type also considered - open to suggestions.
                hash $prerequisite &> /dev/null
                if [[ $? == 1 ]] #has exits with exit status of 70, executable was not found
                then
                    echo "In order to use `basename $0`, the executable \"$prerequisite\" must be installed." 1>&2 | mailx -s "Error happened 0" [email protected] ; exit 70
                fi
            done
        }

        #get_EBS_List gets a list of available EBS instances depending upon the selection_method of EBS selection that is provided by user input
        get_EBS_List()
        {
            case $selection_method in
                volumeid)
                    if [[ -z $volumeid ]]
                    then
                        echo "The selection method \"volumeid\" (which is $app_name's default selection_method of operation or requested by using the -s volumeid parameter) requires a volumeid (-v volumeid) for operation. Correct usage is as follows: \"-v vol-6d6a0527\",\"-s volumeid -v vol-6d6a0527\" or \"-v \"vol-6d6a0527 vol-636a0112\"\" if multiple volumes are to be selected." 1>&2 | mailx -s "Error happened 1" [email protected] ; exit 64
                    fi
                    ebs_selection_string="$volumeid"
                    ;;
                tag)
                    if [[ -z $tag ]]
                    then
                        echo "The selected selection_method \"tag\" (-s tag) requires a valid tag (-t key=value) for operation. Correct usage is as follows: \"-s tag -t backup=true\" or \"-s tag -t Name=my_tag.\"" 1>&2 | mailx -s "Error happened 2" [email protected] ; exit 64
                    fi
                    ebs_selection_string="--filter tag:$tag"
                    ;;
                *)
                    echo "If you specify a selection_method (-s selection_method) for selecting EBS volumes you must select either \"volumeid\" (-s volumeid) or \"tag\" (-s tag)." 1>&2 | mailx -s "Error happened 3" [email protected] ; exit 64
                    ;;
            esac
            #creates a list of all ebs volumes that match the selection string from above
            ebs_backup_list_complete=`ec2-describe-volumes --show-empty-fields --region $region $ebs_selection_string 2>&1`
            #takes the output of the previous command
            ebs_backup_list_result=`echo $?`
            if [[ $ebs_backup_list_result -gt 0 ]]
            then
                echo -e "An error occured when running ec2-describe-volumes. The error returned is below:\n$ebs_backup_list_complete" 1>&2 | mailx -s "Error happened 4" [email protected] ; exit 70
            fi
            ebs_backup_list=`echo "$ebs_backup_list_complete" | grep ^VOLUME | cut -f 2`
            #code to right will output list of EBS volumes to be backed up:
            echo -e "Now outputting ebs_backup_list:\n$ebs_backup_list"
        }

        create_EBS_Snapshot_Tags()
        {
            #snapshot tags holds all tags that need to be applied to a given snapshot - by aggregating tags we ensure that ec2-create-tags is called only onece
            snapshot_tags=""
            #if $name_tag_create is true then append ec2ab_${ebs_selected}_$date_current to the variable $snapshot_tags
            if $name_tag_create
            then
                ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2`
                snapshot_tags="$snapshot_tags --tag Name=ec2ab_${ebs_selected}_$date_current"
            fi
            #if $purge_after_days is true, then append $purge_after_date to the variable $snapshot_tags
            if [[ -n $purge_after_days ]]
            then
                snapshot_tags="$snapshot_tags --tag PurgeAfter=$purge_after_date --tag PurgeAllow=true"
            fi
            #if $snapshot_tags is not zero length then set the tag on the snapshot using ec2-create-tags
            if [[ -n $snapshot_tags ]]
            then
                echo "Tagging Snapshot $ec2_snapshot_resource_id with the following Tags:"
                ec2-create-tags $ec2_snapshot_resource_id --region $region $snapshot_tags
                #echo "Snapshot tags successfully created" | mailx -s "Snapshot tags successfully created" [email protected]
            fi
        }

        date_command_get()
        {
            #finds full path to date binary
            date_binary_full_path=`which date`
            #command below is used to determine if date binary is gnu, macosx or other
            date_binary_file_result=`file -b $date_binary_full_path`
            case $date_binary_file_result in
                "Mach-O 64-bit executable x86_64") date_binary="macosx" ;;
                "ELF 64-bit LSB executable, x86-64, version 1 (SYSV)"*) date_binary="gnu" ;;
                *) date_binary="unknown" ;;
            esac
            #based on the installed date binary the case statement below will determine the method to use to determine "purge_after_days" in the future
            case $date_binary in
                gnu) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
                macosx) date_command="date -v+${purge_after_days}d -u +%Y-%m-%d" ;;
                unknown) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
                *) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
            esac
        }

        purge_EBS_Snapshots()
        {
            #snapshot_tag_list is a string that contains all snapshots with either the key PurgeAllow or PurgeAfter set
            snapshot_tag_list=`ec2-describe-tags --show-empty-fields --region $region --filter resource-type=snapshot --filter key=PurgeAllow,PurgeAfter`
            #snapshot_purge_allowed is a list of all snapshot_ids with PurgeAllow=true
            snapshot_purge_allowed=`echo "$snapshot_tag_list" | grep .*PurgeAllow'\t'true | cut -f 3`
            for snapshot_id_evaluated in $snapshot_purge_allowed
            do
                #gets the "PurgeAfter" date which is in UTC with YYYY-MM-DD format (or %Y-%m-%d)
                purge_after_date=`echo "$snapshot_tag_list" | grep .*$snapshot_id_evaluated'\t'PurgeAfter.* | cut -f 5`
                #if purge_after_date is not set then we have a problem. Need to alter user.
                if [[ -z $purge_after_date ]]
                #Alerts user to the fact that a Snapshot was found with PurgeAllow=true but with no PurgeAfter date.
                then
                    echo "A Snapshot with the Snapshot ID $snapshot_id_evaluated has the tag \"PurgeAllow=true\" but does not have a \"PurgeAfter=YYYY-MM-DD\" date. $app_name is unable to determine if $snapshot_id_evaluated should be purged." 1>&2 | mailx -s "Error happened 5" [email protected]
                else
                    #convert both the date_current and purge_after_date into epoch time to allow for comparison
                    date_current_epoch=`date -j -f "%Y-%m-%d" "$date_current" "+%s"`
                    purge_after_date_epoch=`date -j -f "%Y-%m-%d" "$purge_after_date" "+%s"`
                    #perform compparison - if $purge_after_date_epoch is a lower number than $date_current_epoch than the PurgeAfter date is earlier than the current date - and the snapshot can be safely removed
                    if [[ $purge_after_date_epoch < $date_current_epoch ]]
                    then
                        echo "The snapshot \"$snapshot_id_evaluated\" with the Purge After date of $purge_after_date will be deleted."
                        ec2-delete-snapshot --region $region $snapshot_id_evaluated
                        echo "Old snapshots successfully deleted for $volumeid" | mailx -s "Old snapshots successfully deleted for $volumeid" [email protected]
                    fi
                fi
            done
        }

        #calls prerequisitecheck function to ensure that all executables required for script execution are available
        prerequisite_check

        app_name=`basename $0`
        #sets defaults
        selection_method="volumeid"
        region="ap-southeast-1"
        #date_binary allows a user to set the "date" binary that is installed on their system and, therefore, the options that will be given to the date binary to perform date calculations
        date_binary=""
        #sets the "Name" tag set for a snapshot to false - using "Name" requires that ec2-create-tags be called in addition to ec2-create-snapshot
        name_tag_create=false
        #sets the Purge Snapshot feature to false - this feature will eventually allow the removal of snapshots that have a "PurgeAfter" tag that is earlier than current date
        purge_snapshots=false
        #handles options processing
        while getopts :s:r:v:t:k:pn opt
        do
            case $opt in
                s) selection_method="$OPTARG" ;;
                r) region="$OPTARG" ;;
                v) volumeid="$OPTARG" ;;
                t) tag="$OPTARG" ;;
                k) purge_after_days="$OPTARG" ;;
                n) name_tag_create=true ;;
                p) purge_snapshots=true ;;
                *) echo "Error with Options Input. Cause of failure is most likely that an unsupported parameter was passed or a parameter was passed without a corresponding option." 1>&2 ; exit 64 ;;
            esac
        done

        #sets date variable
        date_current=`date -u +%Y-%m-%d`
        #sets the PurgeAfter tag to the number of days that a snapshot should be retained
        if [[ -n $purge_after_days ]]
        then
            #if the date_binary is not set, call the date_command_get function
            if [[ -z $date_binary ]]
            then
                date_command_get
            fi
            purge_after_date=`$date_command`
            echo "Snapshots taken by $app_name will be eligible for purging after the following date: $purge_after_date."
        fi

        #get_EBS_List gets a list of EBS instances for which a snapshot is desired. The list of EBS instances depends upon the selection_method that is provided by user input
        get_EBS_List

        #the loop below is called once for each volume in $ebs_backup_list - the currently selected EBS volume is passed in as "ebs_selected"
        for ebs_selected in $ebs_backup_list
        do
            ec2_snapshot_description="ec2ab_${ebs_selected}_$date_current"
            ec2_create_snapshot_result=`ec2-create-snapshot --region $region -d $ec2_snapshot_description $ebs_selected 2>&1`
            if [[ $? != 0 ]]
            then
                echo -e "An error occured when running ec2-create-snapshot. The error returned is below:\n$ec2_create_snapshot_result" 1>&2 ; exit 70
            else
                ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2`
                echo "Snapshots successfully created for volume $volumeid" | mailx -s "Snapshots successfully created for $volumeid" [email protected]
            fi
            create_EBS_Snapshot_Tags
        done

        #if purge_snapshots is true, then run purge_EBS_Snapshots function
        if $purge_snapshots
        then
            echo "Snapshot Purging is Starting Now."
            purge_EBS_Snapshots
        fi

    cron log:

        Oct 23 10:24:01 ip-10-130-153-227 CROND[28214]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:24:01 ip-10-130-153-227 CROND[28215]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:25:01 ip-10-130-153-227 CROND[28228]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:25:01 ip-10-130-153-227 CROND[28229]: (root) CMD (date >> /root/logs/crontab2.log)
        Oct 23 10:26:01 ip-10-130-153-227 CROND[28239]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:27:01 ip-10-130-153-227 CROND[28247]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:27:01 ip-10-130-153-227 CROND[28248]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:28:01 ip-10-130-153-227 CROND[28263]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:29:01 ip-10-130-153-227 CROND[28275]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:30:01 ip-10-130-153-227 CROND[28292]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:30:01 ip-10-130-153-227 CROND[28293]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:30:01 ip-10-130-153-227 CROND[28294]: (root) CMD (date >> /root/logs/crontab2.log)
        Oct 23 10:31:01 ip-10-130-153-227 CROND[28312]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:32:01 ip-10-130-153-227 CROND[28319]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:33:01 ip-10-130-153-227 CROND[28325]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:33:01 ip-10-130-153-227 CROND[28324]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
        Oct 23 10:34:01 ip-10-130-153-227 CROND[28345]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:35:01 ip-10-130-153-227 CROND[28362]: (root) CMD (date >> /root/logs/crontab.log;)
        Oct 23 10:35:01 ip-10-130-153-227 CROND[28363]: (root) CMD (date >> /root/logs/crontab2.log)

    Mails to root:

        From [email protected] Tue Oct 23 06:00:01 2012
        Return-Path: <[email protected]>
        Date: Tue, 23 Oct 2012 06:00:01 GMT
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <root@ip-10-130-153-227> root ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3
        Content-Type: text/plain; charset=UTF-8
        Auto-Submitted: auto-generated
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>
        Status: R

        /bin/sh: root: command not found
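
    Two details in that output point at the crontab line itself rather than the script. The mailed command is "root ec2-automate-backup.sh ..." and the failure is "/bin/sh: root: command not found" - which is what happens when a system-crontab-style user field ("root") ends up in a crontab format that doesn't take one, so cron hands "root" to the shell as the command. The X-Cron-Env headers also show cron's PATH is only /usr/bin:/bin, which won't find the script by bare name. A sketch of an equivalent /etc/cron.d entry (the install path is an assumption, and any environment the EC2 tools need still has to be set inside the script):

        # /etc/cron.d/ec2-backup - system crontab format: min hour dom mon dow user command
        */5 * * * * root /root/aws-missing-tools/ec2-automate-backup/ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3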


  • Managing Linux Directory Permissions & SFTP

    - by Dizzle
    Good morning; I have a RHEL 5.7 web server configured to allow SSH/SFTP only for specific groups. I'd like content managers to upload content to their respective directories and have that content inherit the user/group ownership of the directory, regardless of upload method or application. For example: John is in group "web" for SSH/SFTP rights and "finance" for directory permissions, and uploads to directory "webstuff" via SFTP. Directory "webstuff" has permissions of "2760" (rwxrws---) and ownership of "apache:finance". If John uploads an update to an existing file in "webstuff", the ownership of the file stays at "apache:finance". If John uploads a new file to "webstuff", the ownership of the file is "john:finance". My desire is to have any file John uploads to "webstuff" change to the directory's owner. I've tried with setuid and setgid both set, but the user ownership didn't take. I've seen mentions on ServerFault of using ACLs, or a chrooted jail for SFTP, but I have yet to configure and test them, and I don't know if they're a viable solution (they could be, I just don't know because I've never done either). Any thoughts and assistance would be greatly appreciated.
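
    One kernel fact frames the options here: Linux ignores the setuid bit on directories, so owner inheritance cannot be expressed with mode bits at all - the setgid bit is why the group side already works. Two hedged workarounds (paths are assumptions):

        # a default ACL guarantees the group full rights on new files, which
        # often makes the owner irrelevant in practice:
        setfacl -d -m g:finance:rwx /var/www/webstuff
        # or normalize ownership on a schedule - blunt, but simple (cron entry):
        */5 * * * * root chown -R apache:finance /var/www/webstuff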


  • check file revision through http only

    - by romant
    If the svn repo is exposed to the users through, say, http://svn, and there's a file called script.sh, is there a way to get the latest revision number of script.sh by means of just HTTP access? Something along the lines of http://svn/rev?script.sh? Thank you.
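
    Since the repository is served over HTTP by mod_dav_svn, the server speaks WebDAV/DeltaV, so plain HTTP can request version properties directly; a sketch (whether the server exposes the version-name property this way is an assumption worth verifying, and the URL is illustrative):

        curl -s -X PROPFIND -H 'Depth: 0' --data \
            '<?xml version="1.0"?><propfind xmlns="DAV:"><prop><version-name/></prop></propfind>' \
            http://svn/script.sh

    The version-name element in the XML response carries the revision that last changed the file.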



  • Server setup scripts, patches and migrations

    - by Ben Swinburne
    I have written some scripts which I use to configure various servers in a uniform way. Each time I deploy a server I run the relevant scripts so that I know they're all configured the same. I then have some patch scripts - changes to the originals which I can run to ensure that modifications to the original setup are applied to each server. E.g.:

    disable.sh - disables SELinux etc. to ensure the other scripts all run correctly
    general.sh - Jailkit, AV, repos, RKHunter, security tweaks, uninstalling unused bits, etc.
    web.sh - installs and configures Apache2
    001_update_nr_licence_key.sh - updates a licence key for a piece of software which has changed since its install in general.sh

    I can run the first 3 without a problem, but when it comes to running patches I am a bit stuck. Is there a sensible way of doing this with some software? My current thought is to write the server's role (web or db, for example) to a log file, along with the name of each patch that has run; a script could then iterate through a folder, find all patches for that role which it has not yet run, and execute them (a sketch of that idea follows below). This seems a bit long-winded, however. Could someone advise me on the best way to keep my servers uniform?
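
    For reference, the run-each-patch-once idea might look like this (a sketch; the role file and directory layout are assumptions, and configuration-management tools cover the same ground more robustly):

        #!/bin/sh
        role=$(cat /etc/server-role)              # e.g. "web" or "db"
        done_log=/var/lib/patches/applied.log
        mkdir -p "$(dirname "$done_log")"
        touch "$done_log"
        for p in /opt/patches/"$role"/*.sh; do
            name=$(basename "$p")
            # skip patches already recorded as applied
            grep -qx "$name" "$done_log" && continue
            sh "$p" && echo "$name" >> "$done_log"
        done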


  • Subversion hooks no longer running

    - by Chris Lieb
    I don't know when this started happening, but, for some reason, none of my Subversion hooks are running anymore. I am running Subversion 1.6.9 on a Gentoo Linux machine which has had working hooks in the past. I am running Subversion through the svn_dav module for Apache 2.2. I modified the hook scripts that I use to write into a file in /tmp owned by apache:apache whenever they are executed, but after making a commit, there is nothing in the file that should have been written to. The scripts are executable and owned by apache:apache, so I don't think that is the issue. Here is one of my test scripts (post-commit.sh) that isn't getting executed:

        #!/bin/sh
        /bin/echo post-commit >> /tmp/z_test
        exit 0

    After running a commit, I expect both the pre-commit.sh and post-commit.sh hooks to run, but neither of them appears to be writing into the desired file (/tmp/z_test). What's going on?
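
    One thing worth double-checking given those filenames: on Unix, Subversion invokes hooks by exact name with no extension, so files called pre-commit.sh and post-commit.sh are never run. A sketch (the repository path is an assumption):

        mv /var/svn/repo/hooks/post-commit.sh /var/svn/repo/hooks/post-commit
        chmod +x /var/svn/repo/hooks/post-commit
        # hooks also run with an empty environment, so use absolute paths inside them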


  • Correct MySQL username/password, but getting Access Denied error when run from script

    - by Nick
    I'm currently trying to run the following command from within a shell script:

        /usr/bin/mysql -u username -ppassword -h localhost database

    It works perfectly fine when executed manually, but not from within a script. When I try to execute a script that contains that command, I get the following error:

        ERROR 1045 (28000) at line 3: Access denied for user 'username'@'localhost' (using password: YES)

    I literally copied and pasted the working command into the script. Why the error? As a side note: the ultimate intent is to run the script with cron. EDIT: Here is a stripped-down version of the script that I'm trying to run. You can ignore most of it up until the point where it connects to MySQL around line 19.

        #!/bin/sh
        #Run download script to download product data
        cd /home/dir/Scripts/Linux
        /bin/sh script1.sh
        #Run import script to import product data to MySQL
        cd /home/dir/Mysql
        /bin/sh script2.sh
        #Download inventory stats spreadsheet and rename it
        cd /home/dir
        /usr/bin/wget http://www.url.com/file1.txt
        mv file1.txt sheet1.csv
        #Remove existing export spreadsheet
        rm /tmp/sheet2.csv
        #Run MySQL queries in "here document" format
        /usr/bin/mysql -u username -ppassword -h localhost database << EOF
        --Drop old inventory stats table
        truncate table table_name1;
        --Load new inventory stats into table
        Load data local infile '/home/dir/sheet1.csv' into table table_name1
        fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
        --MySQL queries to combine product data and inventory stats here
        --Export combined data in spreadsheet format
        group by p.value into outfile '/tmp/sheet2.csv'
        fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
        EOF

    EDIT 2: After some more testing, the issue is with the << EOF at the end of the command, which is there for the "here document". When it's removed, the command works fine. The problem is that I need << EOF there so that the MySQL queries will run.
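
    Two things inside that here document are worth checking (a sketch, not a confirmed diagnosis): MySQL only treats -- as a comment opener when whitespace follows it, so a line like --Drop is sent to the server as part of a statement; and LOAD DATA LOCAL INFILE needs the client's local-infile option - without it, some setups fall back to requiring the FILE privilege and answer with exactly this ERROR 1045:

        /usr/bin/mysql --local-infile=1 -u username -ppassword -h localhost database << EOF
        -- Drop old inventory stats table (note the space after --)
        truncate table table_name1;
        -- Load new inventory stats into table
        Load data local infile '/home/dir/sheet1.csv' into table table_name1
        fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
        EOF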


  • [Ubuntu] df total size is not correct compared with the size of the disk

    - by John John
    I'm running Ubuntu Squeeze, and on one of the partitions df is showing the total size as 335G:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sdb              335G  225G   94G  71% /mnt

    However, in the past it was showing 360GB (which is the actual size):

        fdisk -l /dev/sdb
        Disk /dev/sdb: 365.0 GB, 365041287168 bytes

    lsof +L1 does not return anything (and anyway, if that were the case, the total space should not be affected). On this partition I'm writing (and deleting) a lot of files. This has happened before in the past, but the problem solved itself.
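
    Part of the gap is pure units - fdisk reports decimal gigabytes, while df's G is GiB - and a filesystem also spends a few GiB on inode tables and its journal. A quick sanity check (a sketch):

        # the partition size expressed in GiB, which is what df calls "G":
        echo '365041287168 / 1024^3' | bc    # = 339
        # compare the filesystem's own view of its size:
        tune2fs -l /dev/sdb | grep -i 'block count'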


  • Hadoop initscript askes password

    - by Ramesh
    I have installed Hadoop on my Ubuntu 12.04 single node. I am trying to execute an init script to make Hadoop run on startup, but it asks for a password every time I execute it.

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides: hadoop services
        # Required-Start: $network
        # Required-Stop: $network
        # Default-Start: 2 3 4 5
        # Default-Stop: 0 1 6
        # Description: Hadoop services
        # Short-Description: Enable Hadoop services including hdfs
        ### END INIT INFO
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HADOOP_BIN=/home/naveen/softwares/hadoop-1.0.3/bin
        NAME=hadoop
        DESC=hadoop
        USER=naveen
        ROTATE_SUFFIX=
        test -x $HADOOP_BIN || exit 0
        RETVAL=0
        set -e
        cd /

        start_hadoop () {
            set +e
            su $USER -s /bin/sh -c $HADOOP_BIN/start-all.sh > /var/log/hadoop/startup_log
            case "$?" in
                0)
                    echo SUCCESS
                    RETVAL=0
                    ;;
                1)
                    echo TIMEOUT - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
                *)
                    echo FAILED - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
            esac
            set -e
        }

        stop_hadoop () {
            set +e
            if [ $RETVAL = 0 ] ; then
                su $USER -s /bin/sh -c $HADOOP_BIN/stop-all.sh > /var/log/hadoop/shutdown_log
                RETVAL=$?
                if [ $RETVAL != 0 ] ; then
                    echo FAILED - check /var/log/hadoop/shutdown_log
                fi
            else
                echo No nodes running
                RETVAL=0
            fi
            set -e
        }

        restart_hadoop() {
            stop_hadoop
            start_hadoop
        }

        case "$1" in
            start)
                echo -n "Starting $DESC: "
                start_hadoop
                echo "$NAME."
                ;;
            stop)
                echo -n "Stopping $DESC: "
                stop_hadoop
                echo "$NAME."
                ;;
            force-reload|restart)
                echo -n "Restarting $DESC: "
                restart_hadoop
                echo "$NAME."
                ;;
            *)
                echo "Usage: $0 {start|stop|restart|force-reload}" >&2
                RETVAL=1
                ;;
        esac
        exit $RETVAL

    Please tell me how to run Hadoop without entering a password.
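
    Two different password prompts are possible here, so it is worth identifying which one appears: su asks for naveen's password whenever the script is run by a non-root user (run it as root and su is silent), and Hadoop's start-all.sh/stop-all.sh reach the daemons over ssh, so the hadoop user needs key-based ssh to localhost. A single-node sketch for the latter:

        # as the hadoop user (naveen):
        ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys
        ssh localhost true    # should now succeed without a password prompt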


  • recommended way to collect email notifications from crond in Arch Linux

    - by nponeccop
    Arch Linux doesn't have sendmail installed by default, so I get the following messages in my syslog:

        Sep 15 13:16:01 zorro crond[18497]: mailing cron output for user collectors sh cronjob.sh
        Sep 15 13:16:01 zorro crond[18497]: unable to exec /usr/sbin/sendmail: cron output for user collectors sh cronjob.sh to /dev/null

    What is the recommended way to fix this default behaviour so the actual messages are sent? heirloom-mailx is installed and capable of sending email messages using SMTP. Is it possible for crond to use mailx to send notifications? Is there any drop-in replacement for sendmail that sends using mailx? Sendmail is not even in the repositories.
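
    Two common routes, sketched below with the caveat that the package names and the crond flag should be verified against current Arch packages: install a small sendmail-compatible shim such as msmtp, or tell cronie to use a different mail command.

        # route 1: msmtp-mta provides a sendmail(8)-compatible binary that
        # delivers over SMTP
        pacman -S msmtp msmtp-mta
        # route 2: cronie's crond accepts -m to name the command used for
        # mailing job output, e.g. in the crond service arguments:
        crond -m '/usr/bin/mailx -s "cron output" root'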


  • Sarg report error

    - by amyassin
    I have a proxy server that runs Ubuntu Server 11.10 with Squid 2.7.STABLE9. I installed sarg (version 2.3.1 Sep-18-2010) with an ordinary apt-get install to generate reports, and added a cron job to generate a report of the day every 5 minutes (overwriting the one from 5 minutes before):

        */5 * * * * /root/proxy_report.sh

    where /root/proxy_report.sh is:

        #!/bin/bash
        /usr/bin/sarg -nd `date +"%d/%m/%Y"` > /dev/null 2>&1

    I added another cron job to generate a full report every hour at :32 (so as not to collide with the 5-minute job):

        */32 * * * * /root/proxy_report_full.sh

    where /root/proxy_report_full.sh is:

        #!/bin/bash
        /usr/bin/sarg -n > /dev/null 2>&1

    And I added a small script to /etc/rc.local, run at startup, to remove yesterday's full report (the full report ending in yesterday that won't be overwritten by today's full report):

        /usr/bin/rm_yesterday.sh &>> /var/log/rm_yesterday

    where /usr/bin/rm_yesterday.sh is:

        #!/bin/bash
        find /var/www/sarg/ | grep `date -d Apr1 +"%Y%b%d"`-* | grep -v `date +"%Y%b%d"` | xargs rm -rf

    * Apr1 is the starting date of the proxy...
    ** I've placed it in /usr/bin so it's available early at startup...

    That arrangement went OK for about a month and a half, except for one time when I noticed some errors and reports weren't generated; I fixed that by adding an offset (the two minutes in :32 for the second cron job). But then it stopped generating reports altogether. Trying to generate one manually gives the following error:

        root@proxy-server:~# sarg -n
        SARG: getword_atoll loop detected after 3 bytes.
        SARG: Line="154 192.168.10.40 TCP_MISS/200 39 CONNECT www.google.com"
        SARG: Record="154 192.168.10.40 TCP_MISS/200 39 CONNECT www.google.com"
        SARG: searching for 'x2f'
        SARG: getword backtrace:
        SARG: 1:sarg() [0x8050a4a]
        SARG: 2:sarg() [0x8050c8b]
        SARG: 3:sarg() [0x804fc2e]
        SARG: 4:/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x129113]
        SARG: 5:sarg() [0x80501c9]
        SARG: Maybe you have a broken date in your /var/log/squid/access.log file

    When I looked at the /var/log/squid/ folder, I noticed that it contains some rotated logs:

        root@proxy-server:~# ls /var/log/squid/
        access.log  access.log.1  cache.log  cache.log.1  store.log  store.log.1

    So maybe sarg installed logrotate with it? Or does it come with standard Ubuntu? I don't remember installing it manually. The question is: What could've gone wrong? Does it have something to do with rotating the log? How can I trace the error and start generating reports again?
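
    The record sarg chokes on really is truncated - it has six fields where Squid's native log format has ten - which fits a log damaged around rotation. A sketch for confirming that and getting reports flowing again (sarg's -l option names the input log; verify it against your build):

        # show malformed records:
        awk 'NF < 10' /var/log/squid/access.log | head
        # run sarg against a filtered copy:
        awk 'NF >= 10' /var/log/squid/access.log > /tmp/access.clean.log
        sarg -n -l /tmp/access.clean.log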


  • rsync osx to linux

    - by Nick
    I did a backup to a remote NFS folder with rsync, from a Mac to a remote Debian box. The final backup is 58GB less than the original, yet rsync says that everything was OK and there is nothing to update.

        Macintosh:/Volumes/Data1 root# du -sh Produccion/
        319G    Produccion/
        root@Disketera:/mnt/soho_storage/samba/shares# du -sh Produccion/
        260G    Produccion/

    Can I trust rsync? I'm using

        rsync -av --stats /Volumes/Data1/Produccion/ /mnt/red/

    (/mnt/red is my samba mountpoint). Some differing folders:

        root@Disketera:/mnt/soho_storage/samba/shares/Produccion/tiposok# du -sh *
        0       IndoSanBol
        0       IndoSans-Bold
        0       IndoSans-Italic
        0       IndoSans-Light
        0       IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        12K     TCL IndoSans_mac

        Macintosh:/Volumes/Data1/Produccion/tiposok root# du -sh *
        36K     IndoSanBol
        40K     IndoSans-Bold
        36K     IndoSans-Italic
        36K     IndoSans-Light
        36K     IndoSans-Regular
        40K     PalatinoLTStd-Black.otf
        40K     PalatinoLTStd-BlackItalic.otf
        40K     PalatinoLTStd-Bold.otf
        44K     PalatinoLTStd-BoldItalic.otf
        44K     PalatinoLTStd-Italic.otf
        40K     PalatinoLTStd-Light.otf
        40K     PalatinoLTStd-LightItalic.otf
        40K     PalatinoLTStd-Medium.otf
        40K     PalatinoLTStd-MediumItalic.otf
        56K     PalatinoLTStd-Roman.otf
        160K    TCL IndoSans_mac
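
    The 0-byte entries are the giveaway: old-style Mac fonts keep their data in the HFS+ resource fork, which a plain rsync does not carry to a non-HFS filesystem. Apple's bundled rsync can preserve forks and extended attributes as AppleDouble files with -E (a sketch; note -E here is Apple's flag in their patched rsync 2.6.9, not the -E of stock rsync 3.x):

        /usr/bin/rsync -avE --stats /Volumes/Data1/Produccion/ /mnt/red/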


  • How to solve "command not found" on Ubuntu bash shell?

    - by redcoder
    I am very new to Ubuntu and am struggling to find the proper command to install Zend Server on my Ubuntu 9.10 server. After downloading Zend Server and extracting it, I try to run this command:

        install_zs.sh 5.3

    This is the ls of the extracted archive:

        install_zs.sh  README  upgrade_zs_php.sh  zend.deb.repo  zend.rpm.repo

    But it says "command not found". Any idea?
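
    The shell only searches the directories on $PATH, and the current directory is not one of them, so a script in the unpacked folder has to be invoked by an explicit path:

        cd /path/to/extracted/archive    # wherever the files listed above live
        chmod +x install_zs.sh           # only if the execute bit didn't survive extraction
        ./install_zs.sh 5.3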

