Search Results

Search found 5228 results on 210 pages for 'bash alias'.

Page 82/210 | < Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >

  • Find does not work in Expect Send command

    - by Sharjeel Sayed
    I run this bash command to display the contents of somefile.cf in a Weblogic domain directory:

        find $(/usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed -e 's/weblogic.policy//' -e 's/security\///' -e 's/dep\///' | awk -F'/' '{print "/"$2"/"$3"/"$4"/somefile.cf"}' | sort | uniq) 2> /dev/null -exec ls {} \; -exec cat {} \;

    I tried incorporating this in an Expect script and escaped the special characters that would throw an error in Expect, but it's still not working:

        send "echo ; echo 'Weblogic somefile.cf:' ; find \$(/usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print \$2}' | sed -e 's/weblogic.policy//' -e 's/security\\///' -e 's/dep\\///' | awk -F'/' '{print "/"\$2"/"\$3"/"\$4"/somefile.cf"}' | sort | uniq) 2> /dev/null -exec ls {} \\; -exec cat {} \\; ; echo\r"

    I guess it needs some more escaping of special characters, or I didn't escape the existing ones correctly. Any help would be appreciated.
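
    One hedged way to sidestep most of the escaping is Tcl's brace quoting, since Expect performs no substitution inside braces; a minimal sketch with the pipeline abbreviated (the full command above would be pasted in verbatim):

        # braces suppress $-, \- and bracket substitution, so the spawned shell
        # sees the command exactly as written; the \r still needs a quoted send
        send {find $(/usr/ucb/ps auwwx | grep weblogic | awk -F'=' '{print $2}') 2> /dev/null -exec cat {} \;}
        send "\r"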

  • Processing a log to fix a malformed IP address ?.?.?.x

    - by skymook
    I would like to replace the first character 'x' with the number '7' on every line of a log file using a shell script. Example of the log file:

        216.129.119.x [01/Mar/2010:00:25:20 +0100] "GET /etc/....
        74.131.77.x [01/Mar/2010:00:25:37 +0100] "GET /etc/....
        222.168.17.x [01/Mar/2010:00:27:10 +0100] "GET /etc/....

    My humble beginnings:

        #!/bin/bash
        echo Starting script...
        cd /Users/me/logs/
        gzip -d /Users/me/logs/access.log.gz
        echo Files unzipped...
        echo I'm totally lost here to process the log file and save it back to hd...
        exit 0

    Why is the log file IP malformed like this? My web provider (1and1) has decided not to store IP addresses, so they have replaced the last number with the character 'x'. They told me it was a new requirement by 'law'. I personally think that is bs, but that would take us off topic. I want to process these log files with AWStats, so I need an IP address that is not malformed. I want to replace the x with a 7, like so:

        216.129.119.7 [01/Mar/2010:00:25:20 +0100] "GET /etc/....
        74.131.77.7 [01/Mar/2010:00:25:37 +0100] "GET /etc/....
        222.168.17.7 [01/Mar/2010:00:27:10 +0100] "GET /etc/....

    Not perfect, I know, but at least I can process the files, and I can still gain a lot of useful information like country, number of visitors, etc. The log files are 200MB each, so I thought a shell script is the way to go because I can run it rapidly on my MacBook Pro locally. Unfortunately, I know very little about shell scripting, and my JavaScript skills are not going to cut it this time. I appreciate your help.
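
    A minimal sed sketch, assuming the 'x' is always the anonymized fourth octet at the start of each line (filenames are hypothetical; the BRE syntax works with the BSD sed on a MacBook as well as GNU sed):

        # stream-decompress, rewrite ".x " as ".7 ", write a clean copy
        gzip -dc access.log.gz |
            sed 's/^\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\)\.x /\1.7 /' > access.log

    Streaming with gzip -dc also avoids unpacking the 200MB files on disk first.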

  • Data in linux FIFO seems lost

    - by Utoah
    Hi, I have a bash script which wants to do some work in parallel. I did this by putting each job in a subshell which is run in the background. To keep the number of jobs running simultaneously under some limit, I first put some lines in a FIFO; then, just before forking a subshell, the parent script is required to read a line from this FIFO. Only after it gets a line can it fork the subshell. Up to now, everything works fine. But when I tried to read a line from the FIFO in the subshell, it seems that only one subshell can get a line, even if there are apparently more lines in the FIFO. So I wonder why the other subshell(s) cannot read a line even when there are more lines in the FIFO. My testing code looks something like this:

        #!/bin/sh
        fifo_path="/tmp/fy_u_test2.fifo"
        mkfifo $fifo_path
        # open fifo for r/w at fd 6
        exec 6<>$fifo_path
        process_num=5
        # put $process_num lines in the FIFO
        for ((i=0; i<${process_num}; i++)); do
            echo "$i"
        done >&6
        delay_some(){
            local index="$1"
            echo "This is what u can see. $index \n"
            sleep 20
        }
        # In each iteration, try to read 2 lines from the FIFO: one from this
        # shell, the other from the subshell
        for i in 1 2
        do
            date >> /tmp/fy_date
            # If a line can be read from the FIFO, run a subshell in bk; otherwise, block.
            read -u6
            echo " $$ Read --- $REPLY --- from 6 \n" >> /tmp/fy_date
            {
                delay_some $i
                # Try to read a line from the FIFO
                read -u6
                echo " $$ This is in child # $i, read --- $REPLY --- from 6 \n" >> /tmp/fy_date
            } &
        done

    And the output file /tmp/fy_date has content of:

        Mon Apr 26 16:02:18 CST 2010
         32561 Read --- 0 --- from 6 \n
        Mon Apr 26 16:02:18 CST 2010
         32561 Read --- 1 --- from 6 \n
         32561 This is in child # 1, read --- 2 --- from 6 \n
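
    A hedged sketch of a FIFO-as-semaphore pattern that sidesteps the question of which process reads: only the parent ever takes a token before forking, and each child writes its token back when it finishes, so the FIFO's line count is exactly the number of free slots:

        #!/bin/bash
        sem=/tmp/sem.$$.fifo
        mkfifo "$sem"
        exec 6<>"$sem"
        for i in 1 2 3; do echo >&6; done   # 3 tokens = at most 3 parallel jobs
        for job in a b c d e f; do
            read -u6                        # take a token; blocks when none left
            {
                sleep 2; echo "job $job done"
                echo >&6                    # hand the token back
            } &
        done
        wait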

  • How to print "Hello, world!" (in every possible way)

    - by Attila Oláh
    Here's what I'm trying to do. 1 language (Python < 3):

        print "Hello, world!"

    2 languages (Python < 3 & Windows shell, aka .bat file):

        rem="""
        echo "Hello, world!"
        exit
        """
        print "Hello, world!"

    The next step could be something like bash. Since the above one raises an exception there, I tried to make it not raise exceptions, like this:

        rem="""
        echo "Hello, world!"
        exit
        """
        exit=""
        exit
        print "Hello, world!"

    The only issue is, of course, that it won't print the hello world. And I really want it to print that hello world for me. Can anyone help with this? Also, any other language would do, just don't break the previous ones (i.e. the answer still has to be valid Python code and print out the nice hello world greeting when run with Python). Any ideas are welcome. I'm making this a community wiki, so feel free to add ideas to the list.
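
    A hedged sketch of a known sh/Python polyglot: sh parses the first line as the no-op command ":", runs the echo, and exits before the closing quotes, while Python sees the first four lines as one harmless triple-quoted string (the .bat leg would still need the rem trick from above):

        """:"
        echo "Hello, world!"
        exit
        """
        print "Hello, world!"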

  • how do I paste text to a line by line text filter like awk, without having stdin echo to the screen?

    - by Barton Chittenden
    I have a text in an email on a Windows box that looks something like this:

        100 some random text
        101 some more random text
        102 lots of random text, all different
        103 lots of random text, all the same

    I want to extract the numbers, i.e. the first word on each line. I've got a terminal running bash open on my Linux box. If these were in a text file, I would do this:

        awk '{print $1}' mytextfile.txt

    I would like to paste these in and get my numbers out, without creating a temp file. My naive first attempt looked like this:

        $ awk '{print $1}'
        100 some random text
        100
        101 some more random text
        101
        102 lots of random text, all different
        103 lots of random text, all the same
        102
        103

    The buffering of stdin and stdout makes a hash of this. I wouldn't mind if stdin all printed first, followed by all of stdout; this is what would happen if I were to paste into 'sort', for example, but awk and sed are a different story. A little more thought gave me this: open two terminals, create a FIFO file, read from the FIFO on one terminal, write to it on the other. This does in fact work, but I'm lazy; I don't want to open a second terminal. Is there a way in the shell to hide the text echoed to the screen when I'm passing it in to a pipe, so that I paste this:

        100 some random text
        101 some more random text
        102 lots of random text, all different
        103 lots of random text, all the same

    but see this?

        $ awk '{print $1}'
        100
        101
        102
        103
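
    A couple of hedged sketches: with a quoted here-document, bash collects everything you paste and only runs awk after the terminating EOF line, so all input prints first and all output follows; alternatively, read straight from the X clipboard so nothing is echoed at all (assumes xclip is installed):

        # paste, press Enter, then type EOF on its own line
        awk '{print $1}' <<'EOF'
        100 some random text
        101 some more random text
        EOF

        # or skip pasting into the terminal entirely
        xclip -selection clipboard -o | awk '{print $1}'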

  • Parsing Strings ( .crt files )

    - by user1661521
    Base knowledge: I have a .crt file (certification authority file). It is composed of many fields, but the one line that sums up this question is this:

        Certificate:
            ...(a lot of stuff before)...
            Subject: C=US, ST=Maryland, L=Pasadena, O=Brent Baccala, OU=FreeSoft, CN=www.freesoft.org/[email protected]
            Subject Public Key Info:
            ...(a lot of stuff after)

    I need to parse the file to populate a .csv file, and I have that done. The problem I need help with: I need to get the field CN=www.freesoft.org, but when I get a CN value containing a lot of slashes, my parsing errors out. For example, with a raw string like:

        CN=foo/bar/the/hell/emailAddress=blablabla

    I need only foo/bar/the/hell. For a moment I had that in the correct column, but when there is no emailAddress, something fails in my parsing, and in my CN .csv column, instead of |CN| foo/bar/the/hell, I get |CN| OU=FreeSoft, foo/bar/the/hell. This is my code doing the CN parsing:

        #!/bin/bash
        subject_line=$(echo $cert | grep -o "Subject:.*Subject Public Key Info")
        cn=$(echo $subject_line | grep -o "CN=.*")
        if [ $(echo $cn | grep -c ".*email.*") -gt 0 ]; then
            end_cn=$(echo $cn | grep -b -o emailAddress)
            end_cn_idx=$(echo $end_cn | grep -o .*:)
            final_end_cn=${end_cn_idx:0:-1}
            common_name=${cn:3:$final_end_cn-4}
            echo $common_name
        else
            end_cn=$(echo $cn | grep -b -o "Subject Public Key Info")
            end_cn_idx=$(echo $end_cn | grep -o .*:)
            final_end_cn=${end_cn_idx:0:-1}
            common_name=${cn:3:$final_end_cn-5}
            echo $common_name
        fi
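
    A hedged alternative sketch: let openssl isolate the subject instead of slicing byte offsets by hand, then cut the CN out with sed (assumes a PEM certificate and that emailAddress, when present, is the RDN following CN, as in the examples above):

        subject=$(openssl x509 -in cert.crt -noout -subject)
        # strip everything up to CN=, then drop a trailing /emailAddress=... if any
        cn=$(printf '%s\n' "$subject" | sed -e 's|.*CN=||' -e 's|/emailAddress=.*||')
        echo "$cn"   # -> foo/bar/the/hell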

  • Cleaning up a sparsebundle with a script

    - by nickg
    I'm using Time Machine to back up some servers to a sparse disk image bundle, and I'd like a script to clean up the old backups and re-size the image once space has been freed up. I'm fairly certain the data is protected, because if I delete the old backups by right-clicking, I have to enter a password to be able to delete them. To allow my script to delete them, I've been running it as root. For some reason it just won't run, and for every file it tries to delete I get:

        rm: /file/: Operation not permitted

    Here is my script:

        #!/bin/bash
        for server in servername; do
            /usr/bin/hdiutil attach -mountpoint /path/to/mountpoint /path/to/sparsebundle/$server.sparsebundle/;
            /bin/sleep 10;
            /usr/bin/find /path/to/mountpoint -type d -mtime +7 -exec /bin/rm -rf {} \;
            /usr/bin/hdiutil unmount /path/to/mountpoint;
            /bin/sleep 10;
            /usr/bin/hdiutil compact /path/to/sparsebundle/$server.sparsebundle/;
        done
        exit;

    One of the problems I thought was causing this was that it needed a mountpoint specified, since the default mount is to /Volumes/Time\ Machine\ Backups/; that's why I created a mountpoint. I also thought it was trying to delete the files too quickly after mounting, before the image was actually mounted; that's why I added the sleep. I've also tried using the -delete option for find instead of -exec, but it didn't make any difference. Any help on this would be greatly appreciated, because I'm out of ideas as to why this won't work.
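
    A hedged guess at the cause: Time Machine protects Backups.backupdb with ACLs that rm honors even for root, and the sanctioned removal tool is tmutil (OS X 10.7 and later; the path layout below assumes the usual hostname/date structure and is hypothetical here):

        # delete completed backups older than 7 days via tmutil instead of rm
        /usr/bin/find /path/to/mountpoint/Backups.backupdb/*/ -mindepth 1 -maxdepth 1 \
            -type d -mtime +7 -print0 | xargs -0 -n1 tmutil delete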

  • How can I reorder an mbox file chronologically?

    - by Joshxtothe4
    Hello, I have a single spool mbox file that was created with Evolution, containing a selection of emails that I wish to print. My problem is that the emails are not placed into the mbox file chronologically. I would like to know the best way to order the messages from first to last using bash, perl or python: by date received for messages addressed to me, and by date sent for messages sent by me. Would it perhaps be easier to use maildir files or such? The emails currently exist in this format:

        From [email protected] Fri Aug 12 09:34:09 2005
        Message-ID: <[email protected]>
        Date: Fri, 12 Aug 2005 09:34:09 +0900
        From: me <[email protected]>
        User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716)
        X-Accept-Language: en-us, en
        MIME-Version: 1.0
        To: someone <[email protected]>
        Subject: Re: (no subject)
        References: <[email protected]>
        In-Reply-To: <[email protected]>
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed
        Content-Transfer-Encoding: 8bit
        Status: RO
        X-Status:
        X-Keywords:
        X-UID: 371
        X-Evolution-Source: imap://[email protected]/
        X-Evolution: 00000002-0010

        Hey the actual content of the email

        someone wrote:
        > lines of quoted text

    I am wondering if there is a way to use this information to easily reorganize the file, perhaps with perl or such.
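
    A minimal Python sketch using the standard-library mailbox module, sorting on each message's Date header (filenames are hypothetical; assumes every message carries a parseable Date):

        import mailbox
        from email.utils import parsedate_tz, mktime_tz

        src = mailbox.mbox('input.mbox')
        # sort messages by the timestamp in their Date header
        msgs = sorted(src, key=lambda m: mktime_tz(parsedate_tz(m['Date'])))

        dst = mailbox.mbox('sorted.mbox')
        for m in msgs:
            dst.add(m)
        dst.flush()

    Distinguishing received from sent mail could key off the From/To headers instead, but the Date header is usually close enough for printing order.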

  • Extract a specific string from a curl'd result

    - by allentown
    Given this curl command:

        curl --user-agent "fogent" --silent -o page.html "http://www.google.com/search?q=insansiate"

    (The spelling is intentionally incorrect.) I want to grab the suggestion as my result. I want to be able to either grep into the page.html file, perhaps with grep -oE, or pipe it right from curl and never store a file. The result should be 'instantiate'. I need only the word 'instantiate', or the phrase, whatever Google is auto-correcting. Here is the basic HTML that is returned:

        <span class=spell style="color:#cc0000">Did you mean: </span><a href="/search?hl=en&amp;ie=UTF-8&amp;&amp;sa=X&amp;ei=VEMUTMDqGoOINraK3NwL&amp;ved=0CB0QBSgA&amp;q=instantiate&amp;spell=1"class=spell><b><i>instantiate</i></b></a>&nbsp;&nbsp;<span class=std>Top 2 results shown</span>

    So perhaps I should match from/to of the string below, which I hope is unique enough to cover all my bases:

        class=spell><b><i>instantiate</i></b></a>&nbsp;&nbsp;

    I keep running into issues with greedy grep; perhaps I should run it through an HTML prettify tool first to get a line break or 50 in there. I don't know of any simple way to do so in bash, which is what I would ideally like this to be in. I really don't want to deal with firing up Perl and making sure I have the correct module. Any suggestions? Thank you.
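
    A hedged sketch that keys off the class=spell><b><i> marker shown above (fragile screen-scraping: it assumes Google's markup keeps that exact shape):

        curl --user-agent "fogent" --silent "http://www.google.com/search?q=insansiate" |
            grep -o 'class=spell><b><i>[^<]*' |
            sed 's/.*<i>//'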

  • Trap SIGPIPE when trying to write without reader

    - by Matt
    I am trying to implement a named-pipe communication solution in bash between two processes. The first process runs a script which echoes something into a named pipe:

        send(){
            echo 'something' > $NAMEDPIPE
        }

    And the second script is supposed to read the named pipe via another script which contains:

        while true; do
            if read line < $NAMEDPIPE; then
                someCommands
            fi
        done

    Note that the named pipe has been previously created using the traditional command mkfifo $NAMEDPIPE. My problem is that the reader script is not always running, so if the writer script tries to write to the named pipe, it stays blocked until a reader opens the pipe. I want to avoid this behavior, and a solution would be to trap a SIGPIPE signal. Indeed, according to man 7 signal, SIGPIPE is supposed to be sent when a process tries to write to a pipe with no reader. So I changed my send function to:

        send(){
            trap 'echo "SIGPIPE received"' SIGPIPE
            echo 'something' > $NAMEDPIPE
        }

    But when I run the script, it stays blocked, and no "SIGPIPE received" appears... Am I mistaken about the signal mechanism, or is there a better solution to my problem? Thank you for your help.
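
    A hedged sketch of what is probably happening: no SIGPIPE fires because the blocking occurs in open(), before any write. Opening a FIFO for writing blocks until a reader opens the other end; SIGPIPE is only raised when a write is attempted after a reader that once existed has gone away. One workaround is to bound the attempt with a timeout (assumes GNU coreutils' timeout):

        send(){
            # give up if no reader shows up within 1 second
            if ! timeout 1 sh -c "echo 'something' > $NAMEDPIPE"; then
                echo "no reader on $NAMEDPIPE, message dropped" >&2
            fi
        }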

  • ps: Clean way to only get parent processes?

    - by shkschneider
    I use ps ef and ps rf a lot. Here is a sample output for ps rf:

          PID TTY      STAT   TIME COMMAND
         3476 pts/0    S      0:00 su ...
         3477 pts/0    S      0:02  \_ bash
         8062 pts/0    T      1:16      \_ emacs -nw ...
        15733 pts/0    R+     0:00      \_ ps xf
        15237 ?        S      0:00 uwsgi ...
        15293 ?        S      0:00  \_ uwsgi ...
        15294 ?        S      0:00  \_ uwsgi ...

    Today I needed to retrieve only the master process of uwsgi in a script (so I want 15237, but not 15293 nor 15294). So far I have tried some ps rf | grep -v ' \_ '... but I would like a cleaner way. I also came across another solution from unix.com's forums:

        ps xf | sed '1d' | while read pid tty stat time command; do
            [ -n "$(echo $command | egrep '^uwsgi')" ] && echo $pid
        done

    But that's still a lot of pipes and ugly tricks. Is there really no ps option or cleaner trick (maybe using awk) to accomplish that?
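
    A couple of hedged alternatives, assuming procps tools: pgrep -o prints only the oldest matching process, which in a master/worker layout is normally the master; ps -C with custom columns gives pid/ppid pairs if you would rather verify the parentage yourself:

        pgrep -o -x uwsgi
        # or: list uwsgi pids with their parents and keep those whose
        # parent is not itself a uwsgi process
        ps -C uwsgi -o pid=,ppid=,comm=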

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as the means of deploying files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc.). It's working well, but the project is starting to get big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed and deploy only the modified files (the backup is fine as it is for now). I'm using Mercurial as a VCS, so my idea is to somehow request the changed files between two revisions, iterate over them, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and, from the output, carve out the changed files with grep/sed/etc. Two problems:

    1. I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here: if I try to parse the output of hg log, the variables will undergo word-splitting and all kinds of transformations.
    2. hg log doesn't tell me whether a file was modified/added/deleted. Differentiating between modified and deleted files would be the least I need.

    So, what would be the correct way to do this? I'm using yafc as an FTP client, in case it's needed, but I'm willing to switch.
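
    A hedged sketch using hg status instead of hg log, since status reports per-file M/A/R flags between two revisions and -n0 emits bare, NUL-terminated names that survive any word-splitting (the upload/delete commands are placeholders for the yafc calls):

        # files to upload: modified or added between the revisions
        hg status --rev "$REV1" --rev "$REV2" --modified --added -n0 |
            xargs -0 -r -I{} echo "upload {}"
        # files to delete on the server: removed between the revisions
        hg status --rev "$REV1" --rev "$REV2" --removed -n0 |
            xargs -0 -r -I{} echo "delete {}"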

  • Makefile; mirroring a growing tree through a process

    - by Martineau
    I would like to periodically mirror a growing tree, say from $in to $out, running each file through a process in between (saving only the file header). As in:

        #!/bin/bash
        in=./segd
        out=./db
        for f in `find $in -name "*.segd"`; do
            # Deduce output (dir + name)
            d=`dirname $f | perl -pe 's!'$in'!'$out'!'`
            n=`basename $f | perl -pe 's!$!_hdr!'`
            if [ ! -e $d/$n ]; then
                [ ! -d $d ] && mkdir -p $d
                printf "From %s now build %s\n" $f "$d/$n"
                # Do something, whatever. For example:
                dd if=$f bs=32 count=1 conv=swab 2>/dev/null | od -x > $d/$n
            fi
        done

    That is about fair. However, to be more robust and better synchronized (say, if a source file changes), I would like to use a Makefile, as in:

        HDR := $(patsubst ./segd/%.segd,./db/%.segd_hdr,$(wildcard ./segd/*.segd))
        all: ${HDR}
        db/%.segd_hdr: ./segd/%.segd
                echo "Doing"
                dd if=$< bs=32 count=1 conv=swab 2>/dev/null | od -x > $@

    My problem: I cannot get this Makefile to "dive" more deeply into the source ./segd tree. How can we do it, and is there a way? Many thanks for your kind recommendations.

    PS: The idea will be to later rsync the (smaller) destination tree over a sat connection.
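
    A hedged sketch: $(wildcard) does not recurse, so hand the directory walk to find via $(shell) and let the pattern stem carry the subdirectory path (GNU make assumed; recipe lines must be tab-indented):

        SRC := $(shell find ./segd -name '*.segd')
        HDR := $(patsubst ./segd/%.segd,./db/%.segd_hdr,$(SRC))

        all: $(HDR)

        db/%.segd_hdr: segd/%.segd
        	@mkdir -p $(dir $@)
        	dd if=$< bs=32 count=1 conv=swab 2>/dev/null | od -x > $@

    In GNU make the % stem may contain slashes, so db/a/b.segd_hdr is rebuilt from segd/a/b.segd, and mkdir -p creates the mirrored directory on demand.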

  • Preventing a child process (HandbrakeCLI) from causing the parent script to exit

    - by Chris
    I have a batch conversion script to turn .mkvs of various dimensions into iPod/iPhone-sized .mp4s, cropping/scaling to suit. Determining the original dimensions, required crop, and output file all works fine. However, on successful completion of the first conversion, HandbrakeCLI causes the parent script to exit. Why would this be? And how can I stop it? The code as it currently stands:

        #!/bin/bash
        find . -name "*.mkv" | while read FILE
        do
            # What would the output file be?
            DST=../Touch/$(dirname "$FILE")
            MKV=$(basename "$FILE")
            MP4=${MKV%%.mkv}.mp4
            # If it already exists, don't overwrite it
            if [ -e "$DST/$MP4" ]
            then
                echo "NOT overwriting $DST/$MP4"
            else
                # Stuff to determine dimensions/cropping removed for brevity
                HandbrakeCLI --preset "iPhone & iPod Touch" --vb 900 --crop $crop -i "$FILE" -o "$DST/$MP4" > /dev/null 2>&1
                if [ $? != 0 ]
                then
                    echo "$FILE had problems" >> errors.log
                fi
            fi
        done

    I have additionally tried it with a trap, but that didn't change the behaviour (although the last trap did fire):

        trap "echo Handbrake SIGINT-d" SIGINT
        trap "echo Handbrake SIGTERM-d" SIGTERM
        trap "echo Handbrake EXIT-d" EXIT
        trap "echo Handbrake 0-d" 0
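
    A hedged explanation and fix: inside the loop, HandbrakeCLI's stdin is the pipe carrying the file list from find, so it swallows the remaining filenames and the while read loop ends after one file. Giving it an empty stdin avoids that:

        # redirect stdin so HandbrakeCLI cannot consume the find output
        HandbrakeCLI --preset "iPhone & iPod Touch" --vb 900 --crop $crop \
            -i "$FILE" -o "$DST/$MP4" < /dev/null > /dev/null 2>&1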

  • Optimize Duplicate Detection

    - by Dave Jarvis
    Background: this is an optimization problem. Oracle Forms XML files have elements such as:

        <Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />

    where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:

        sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
        sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql

    I wrote a script to generate a list of exact duplicates using a brute-force approach. The problem: there are 37,497 files to compare against each other, and it takes 8 minutes to compare one file against all the others. Logically, if A = B and A = C, then there is no need to check if B = C. So the problem is: how do you eliminate the redundant comparisons? The script will complete in approximately 208 days. The comparison script is as follows:

        #!/bin/bash
        echo Loading directory ...
        for i in $(find sql/ -type f -name \*.sql); do
            echo Comparing $i ...
            for j in $(find sql/ -type f -name \*.sql); do
                if [ "$i" = "$j" ]; then continue; fi
                # Case insensitive compare, ignore spaces
                diff -IEbwBaq $i $j > /dev/null
                # 0 = no difference (i.e., duplicate code)
                if [ $? = 0 ]; then
                    echo $i :: $j >> clones.txt
                fi
            done
        done

    The question: how would you optimize the script so that checking for cloned code is a few orders of magnitude faster? System constraints: a quad-core CPU with an SSD; trying to avoid cloud services if possible. The system is a Windows-based machine with Cygwin installed -- algorithms or solutions in other languages are welcome. Thank you!
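
    A hedged sketch of the standard fix: reduce each file to a canonical form once, hash it, and group equal hashes, one linear pass instead of O(n^2) diffs (lower-casing and stripping whitespace approximates the diff -Ebw normalization; GNU md5sum and uniq from Cygwin's coreutils assumed):

        find sql/ -type f -name '*.sql' | while IFS= read -r f; do
            hash=$(tr -d '[:space:]' < "$f" | tr '[:upper:]' '[:lower:]' | md5sum | cut -d' ' -f1)
            printf '%s %s\n' "$hash" "$f"
        done | sort | uniq -w32 --all-repeated=separate > clones.txt

    Each blank-line-separated group in clones.txt shares identical normalized content, so only files within a group would ever need a confirming diff.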

  • ServerAlias * problem

    - by nyrox
    I have 3 websites on a dedicated server (with CentOS and the Plesk control panel), and one of these websites must have ServerAlias *. When I try this in httpd.include, the other two websites also resolve to mastersite.com, but I don't want this. I solved it with a dedicated IP, but now I want to do it with one IP:

        <VirtualHost xx.xx.xx.xx:80>
            ServerName mastersite.com:80
            ServerAlias *
            UseCanonicalName Off
            SuexecUserGroup ..
            ..
            ..

    Sorry for my English.
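
    A hedged sketch of the usual cure: with name-based virtual hosts, Apache serves the first <VirtualHost> whose ServerName/ServerAlias matches the Host header, so the catch-all has to be ordered after the two specific sites (domain names below are placeholders):

        NameVirtualHost xx.xx.xx.xx:80

        <VirtualHost xx.xx.xx.xx:80>
            ServerName site2.example.com
        </VirtualHost>

        <VirtualHost xx.xx.xx.xx:80>
            ServerName site3.example.com
        </VirtualHost>

        # last: everything that matched nothing above lands here
        <VirtualHost xx.xx.xx.xx:80>
            ServerName mastersite.com
            ServerAlias *
        </VirtualHost>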

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands and write all output (STDOUT and STDERR) into a single logfile:

        { command1 && command2 && command3 ; } > logfile.log 2>&1

    Here is what I want to do with the output of these commands:

    1. STDERR and STDOUT for all commands goes to a logfile, in case I need it later -- I usually won't look in here unless there are problems.
    2. Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
    3. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:

        { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected]

    The problem I run into is that STDERR loses context during I/O redirection. A 2>&1 will convert STDERR into STDOUT, and therefore I cannot view errors separately if I do 2> error.log. Here are a couple juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag:

        { ./configure && make --keep-going && make install ; } > build.log 2>&1

    Or here's a simple (and perhaps sloppy) build-and-deploy script, which will keep going in the event of an error:

        { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1

    I think what I want involves some sort of Bash I/O redirection, but I can't figure this out.
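
    A hedged sketch of one common answer, using bash process substitution so stderr lands in both places while the exit status stays intact (lines may interleave slightly in the logfile, since two writers append to it):

        { command1 && command2 && command3 ; } > logfile.log 2> >(tee -a logfile.log >&2) \
            || mailx -s "There was an error" [email protected] < logfile.log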

  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation, but I cannot manage to make it work. Here's a sample:

        #!/bin/bash
        TRUE=1
        FALSE=0
        ############### Added for testing
        log4bash_LOG_ENABLED=$TRUE
        log4bash_rootLogger=$TRACE,f,s
        log4bash_appender_f=file
        log4bash_appender_f_dir=$(pwd)
        log4bash_appender_f_file=test.log
        log4bash_appender_f_roll_format=%Y%m
        log4bash_appender_f_roll=$TRUE
        log4bash_appender_f_maxBackupIndex=10
        ####################################

        log4bash_abs(){
            if [ "${1:0:1}" == "." ]; then
                builtin echo ${rootDir}/${1}
            else
                builtin echo ${1}
            fi
        }

        log4bash_check_app_dir(){
            if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then
                dir=$(log4bash_abs $1)
                if [ ! -d ${dir} ]; then
                    # log a separation line
                    mkdir $dir
                fi
            fi
        }

        # Delete old log files
        # $1 Log directory
        # $2 Log filename
        # $3 Log filename suffix
        # $4 Max backup index
        log4bash_delete_old_files(){
            ##### Added for testing
            builtin echo "Running log4bash_delete_old_files $@" >&2
            #####
            if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then
                local directory=$(log4bash_abs $1)
                local filename=$2
                local maxBackupIndex=$4
                local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g')
                local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt)
                local fileCnt=$(builtin echo -e "${logFileList}" | wc -l)
                local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex))
                local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g'))
                ##### Added for testing
                builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" >&2
                #####
                if [ ${fileToDeleteCnt} -gt 0 ]; then
                    for f in "${fileToDelete[@]}"; do
                        #### Added for testing
                        builtin echo "Removing file ${f}" >&2
                        ####
                        builtin eval rm -f ${f}
                    done
                fi
            fi
        }

        # Appender
        # $1 Log directory
        # $2 Log file
        # $3 Log file roll ?
        # $4 Appender Name
        log4bash_filename(){
            builtin echo "Running log4bash_filename $@" >&2
            local format
            local filename
            log4bash_check_app_dir "${1}"
            if [ ${3} -eq 1 ]; then
                local formatProp=${4}_roll_format
                format=${!formatProp}
                if [ -z ${format} ]; then
                    format=$log4bash_appender_file_format
                fi
                local suffix=.`date "+${format}"`
                filename=${1}/${2}${suffix}
                # Old log files deletion
                local previousFilenameVar=int_${4}_file_previous
                local maxBackupIndexVar=${4}_maxBackupIndex
                if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then
                    builtin eval export $previousFilenameVar=$filename
                    log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}"
                else
                    builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}"
                fi
            else
                filename=${1}/${2}
            fi
            builtin echo $filename
        }

        ######################## Added for testing
        filename_caller(){
            builtin echo "filename_caller Call $1"
            output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" ))
            builtin echo ${output}
        }

        #### Previous logs generation
        for i in {1101..1120}; do
            file="${log4bash_appender_f_file}.2012${i:2:3}"
            builtin echo "${file} $i"
            touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file
        done

        for i in {1..4}; do
            filename_caller $i
        done

    I expect the log4bash_filename function to step into the following if only when the calculated log filename is different from the previous one:

        if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then

    For this scenario to apply, I'd need ${!previousFilenameVar} to be correctly set, but that's not the case, so log4bash_filename steps into this if all the time, which is really not necessary... It looks like the issue is due to the following line not working properly:

        builtin eval export $previousFilenameVar=$filename

    I have some theories to explain why: in the original code, functions are declared and exported as readonly, which makes them unable to modify global variables. I removed the readonly declarations in the above sample, but the problem persists. Function calls are performed in $(), which should make them run in separate shell instances, so variables modified there are not exported to the main shell. But I cannot manage to find a workaround to this issue... Any help is appreciated, thanks in advance!
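
    The second theory is most likely the right one; a minimal demonstration that $( ) command substitution runs in a subshell, so any export made inside it dies with that subshell (the state-file name below is hypothetical):

        x=old
        y=$(x=new; echo "$x")
        echo "$y"   # prints: new
        echo "$x"   # prints: old -- the assignment never reached the parent

        # one workaround: persist the previous filename outside the process
        builtin echo "$filename" > "/tmp/log4bash_${4}_previous"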

  • Django-admin.py not working (-bash:django-admin.py: command not found)

    - by Diego
    I'm having trouble getting django-admin.py to work... it's in this first location: /Users/mycomp/bin/, but I think I need it in another location for the terminal to recognize it, no? Noob, please help. Thanks!!

        my-computer:~/Django-1.1.1 mycomp$ sudo ln -s /Users/mycomp/bin/django-admin.py /Users/mycomp/django-1.1.1/django-admin.py
        Password:
        ln: /Users/mycomp/django-1.1.1/django-admin.py: File exists
        my-computer:~/Django-1.1.1 mycomp$ django-admin.py --version
        -bash: django-admin.py: command not found
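
    A hedged sketch: the shell only finds django-admin.py via $PATH, so rather than symlinking it into the Django source tree, make it executable and put ~/bin on the PATH (file locations from the question; the profile line is the usual convention):

        chmod +x /Users/mycomp/bin/django-admin.py
        echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bash_profile
        source ~/.bash_profile
        django-admin.py --version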

  • how to use a bash function defined in your .bashrc with find -exec

    - by sharrajesh
    My .bashrc has the following function:

        function myfile {
            file $1
        }
        export -f myfile

    It works fine when I call it directly:

        rajesh@rajesh-desktop:~$ myfile out.ogv
        out.ogv: Ogg data, Skeleton v3.0

    It does not work when I try to invoke it through find -exec:

        rajesh@rajesh-desktop:~$ find ./ -name *.ogv -exec myfile {} \;
        find: `myfile': No such file or directory

    Is there a way to call bash script functions with find -exec? Any help is greatly appreciated. Thanks, sharrajesh
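
    The usual explanation, with a hedged fix: find execs its -exec argument directly rather than through a shell, so shell functions are invisible to it; launching bash yourself lets the exported function through:

        # export -f myfile (already done above) makes the function visible to
        # the child bash; the "_" fills $0 so the filename lands in $1
        find . -name '*.ogv' -exec bash -c 'myfile "$1"' _ {} \;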

  • how to pass parameters to a linux bash shell

    - by chun
    Hi, I have a Linux bash script 'myshell' and I want it to read two dates as parameters, e.g.:

        myshell date1 date2

    I am a Java programmer, but I don't know how to write a shell script to get this done. The rest of the script is like this:

        sed "s/$date1/$date2/g" wlacd_stat.xml > tmp.xml
        mv tmp.xml wlacd_stat.xml

    Thanks
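
    A minimal sketch: the arguments arrive in the positional parameters $1 and $2 inside the script:

        #!/bin/bash
        # usage: myshell date1 date2
        date1=$1
        date2=$2
        sed "s/$date1/$date2/g" wlacd_stat.xml > tmp.xml
        mv tmp.xml wlacd_stat.xml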
