Search Results

Search found 7607 results on 305 pages for 'bash profile'.

  • Filtering Filenames with bash

    - by Stefan Liebenberg
    I have a directory full of log files of the form ${name}.log.${year}${month}${day}, so they look like this:

      logs/
      production.log.20100314
      production.log.20100321
      production.log.20100328
      production.log.20100403
      production.log.20100410
      ...
      production.log.20100314
      production.log.old

    I'd like to use a bash script to filter out all the logs older than X months and dump them into *.log.old. Something like this (pseudocode):

      X=6   # months
      LIST=*.log.*
      for file in LIST; do
        is_older = file_is_older_than_months( ${file}, ${X} )
        if is_older; then
          cat ${c} >> production.log.old
          rm ${c}
        fi
      done

    How can I get all the files older than X months? And how can I keep the *.log.old file out of the LIST variable? Thank you, Stefan
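
    One possible approach (a sketch, not from the post; it assumes GNU date's -d option): since the date is embedded in each filename as YYYYMMDD, it can be compared numerically against a cutoff stamp, and the glob production.log.2* naturally leaves production.log.old out of the loop.

      #!/bin/bash
      # Cutoff stamp for "six months ago" (GNU date assumed).
      cutoff=$(date -d "-6 months" +%Y%m%d)
      for file in logs/production.log.2*; do   # matches only dated logs, never *.log.old
        stamp=${file##*.}                      # strip up to the last dot -> YYYYMMDD
        if [ "$stamp" -lt "$cutoff" ]; then
          cat "$file" >> logs/production.log.old
          rm "$file"
        fi
      done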

  • Importing Python module from the Bash

    - by Morlock
    I am launching a Python script from the command line (bash) under Linux. I need to open Python, import a module, and then have lines of code interpreted. The console must then remain in Python (not quit it). How do I do that? I have tried an alias like this one:

      alias program="cd /home/myname/programs/; python; import module; line_of_code"

    But this only starts Python; the commands are never executed (no module import, no line of code run). What is the proper way of doing this, given that I need Python to stay open (not quit) after the code is executed? Many thanks.
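
    A sketch of one common answer (not from the post itself): python's -i flag runs the code passed with -c and then drops into the interactive interpreter instead of exiting. The module and code names below are the asker's placeholders.

      # -i = stay interactive after the -c code runs.
      alias program='cd /home/myname/programs && python -i -c "import module; line_of_code"'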

  • SIMPLE BASH Programming.

    - by atif089
    I am a newbie to bash, so please don't mind my stupid questions; I have not been able to find any good sources to learn from. I want to create a script that displays a file's name and its size. This is what the code looks like:

      filename=$1
      if [ -f $filename ]; then
        filesize=`du -b $1`
        echo "The name of file is $1"
        echo "Its size is $filesize"
      else
        echo "The file specified does not exist"
      fi

    The output is like this:

      $ ./filesize.sh aa
      The name of file is aa
      Its size is 88 aa

    But in the last line I don't want to show the name of the file. How do I do that? I want to do the same thing using wc as well.
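
    A sketch of two possible fixes (my assumptions, not from the post): du prints the size and the name separated by a tab, so cut can keep only the first field; wc -c prints just the count when it reads from stdin, because it never sees a filename.

      filesize=$(du -b "$1" | cut -f1)   # keep only the size column
      filesize=$(wc -c < "$1")           # wc variant: stdin carries no name to print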

  • List the names of existing directories from .tgz file in a bash variable

    - by Tom
    I would like to find all the directories that are in a .tgz file and that already exist on the system, and put the result in a bash variable. I have tried this:

      EXISTING=`for f in \`tar tzf $ARCHIVE\`; do if [ -d "/tmp/unpacked-data/\$f" ]; then echo \$f; fi; done`

    with no luck. If I echo the value of $f before the if in the loop, I get all the files, i.e. this works:

      EXISTING=`for f in \`tar tzf $ARCHIVE\`; do echo \$f; done`

    Can someone tell me why the \$f doesn't work in the if statement? Thanks, Tom
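
    One way to sidestep the escaping problem entirely (a sketch, my assumption rather than the accepted answer): $( ) substitutions nest without any backslash escaping, so the loop can be written plainly.

      EXISTING=$(for f in $(tar tzf "$ARCHIVE"); do
        [ -d "/tmp/unpacked-data/$f" ] && echo "$f"
      done)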

  • Remove first element from $@ in bash

    - by Herms
    I'm writing a bash script that needs to loop over the arguments passed into the script. However, the first argument shouldn't be looped over, and instead needs to be checked before the loop. If I didn't have to remove that first element I could just do:

      for item in "$@"; do
        # process item
      done

    I could modify the loop to check if it's in its first iteration and change the behavior, but that seems way too hackish. There's got to be a simple way to extract the first argument and then loop over the rest, but I wasn't able to find it.
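
    The idiomatic answer (a sketch; the helper names are hypothetical): save $1 and shift it away, after which "$@" holds only the remaining arguments.

      first=$1
      shift                    # drop the first positional parameter
      check_first "$first"     # hypothetical pre-loop check
      for item in "$@"; do
        process "$item"        # hypothetical per-item work
      done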

  • Sed not working inside bash script

    - by Isabelle
    Hello. I believe this may be a simple question, but I've looked everywhere and tried some workarounds, and I still haven't solved the problem. I have to replace a character inside a file, and I can do it easily from the command line:

      sed -e 's/pattern1/pattern2/g' full_path_to_file/file

    But when I use the same line inside a bash script, the substitution never happens, and I don't get an error message, just the file contents without the substitution:

      #!/bin/sh
      VAR1="patter1"
      VAR2="patter2"
      VAR3="full_path_to_file"
      sed -e 's/${VAR1}/${VAR2}/g' ${VAR3}

    Any help would be appreciated. Thank you very much for your time.
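
    The likely fix (a sketch, assuming the standard cause): single quotes stop the shell from expanding ${VAR1} and ${VAR2}, so sed searches for the literal string ${VAR1}. Double quotes let the variables expand before sed sees the expression.

      sed -e "s/${VAR1}/${VAR2}/g" "${VAR3}"
      # Note: sed prints to stdout; add -i to edit the file in place (GNU sed).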

  • Listing time every second as a Bash script

    - by Caleb
    Hello all, first time here, as I've finally started to learn programming. I'm just trying to print the time in nanoseconds every second, and I have this:

      #!/usr/bin/env bash
      while true; do date=(date +%N); echo $date; sleep 1; done

    Now, that simply prints the word "date" over and over, which isn't what I want. My learning has been rather messy, so I hope you'll excuse me if this is really simple. Also, I did manage to find this, which worked at the prompt:

      while true; do date +%N; sleep 1; done

    But that obviously doesn't work as a script.
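
    The probable bug (a sketch of my reading, not the original answer): date=(date +%N) builds an array whose elements are the words "date" and "+%N"; command substitution needs $( ).

      #!/usr/bin/env bash
      while true; do
        now=$(date +%N)   # $( ) runs the command; bare ( ) builds an array
        echo "$now"
        sleep 1
      done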

  • Use a grepped file as an included source in bash

    - by Andrew
    I'm on a shared webhost where I don't have permission to edit the global bash configuration file at /etc/bashrc. Unfortunately there is one line in the global file, mesg y, which puts the terminal in tty mode and makes scp and similar commands unavailable. My local ~/.bashrc includes the global file as a source, like so:

      # Source global definitions
      if [ -f /etc/bashrc ]; then
        . /etc/bashrc
      fi

    My current workaround uses grep to write the global file, sans offending line, into a local file, and uses that as the source:

      # Source global definitions
      if [ -f /etc/bashrc ]; then
        grep -v mesg /etc/bashrc > ~/.bash_global
        . ~/.bash_global
      fi

    Is there a way to include a grepped file like this without the intermediate step of creating an actual file? Something like this?

      . grep -v mesg /etc/bashrc > ~/.bash_global
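
    A sketch of one bash-specific answer (my assumption): process substitution gives grep's output a file-like name that the source builtin can read, so no temporary file is needed.

      if [ -f /etc/bashrc ]; then
        . <(grep -v mesg /etc/bashrc)   # <( ) expands to a /dev/fd/N path
      fi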

  • cURL: from PHP to BASH

    - by flienteen
    Hi. I've never done any curl before, so I am in need of some help. Here is the PHP:

      <?php
      $ch = curl_init();
      $data = array(
        'uptype' => 'file',
        'file'   => '@' . $argv[1],
      );
      curl_setopt($ch, CURLOPT_URL, 'http://my_site_ex/up.php');
      curl_setopt($ch, CURLOPT_POST, 1);
      curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
      curl_exec($ch);
      curl_close($ch);
      ?>

    How do I make the same script in bash?
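
    A sketch of the equivalent command line (assuming the same URL and field names as the PHP): curl's -F option sends multipart/form-data, and @ before a value uploads the named file, just as it does in CURLOPT_POSTFIELDS.

      #!/bin/bash
      # $1 is the file to upload, mirroring $argv[1] in the PHP version.
      curl -F "uptype=file" -F "file=@$1" http://my_site_ex/up.php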

  • mkdir error in bash script

    - by Don
    Hi, the following is a fragment of a bash script that I'm running under cygwin on Windows:

      deployDir=/cygdrive/c/Temp/deploy
      timestamp=`date +%Y-%m-%d_%H:%M:%S`
      deployDir=${deployDir}/$timestamp
      if [ ! -d "$deployDir" ]; then
        echo "making dir $deployDir"
        mkdir -p $deploydir
      fi

    This produces output such as:

      making dir /cygdrive/c/Temp/deploy/2010-04-30_11:47:58
      mkdir: missing operand
      Try `mkdir --help' for more information.

    However, if I run the mkdir command with /cygdrive/c/Temp/deploy/2010-04-30_11:47:58 directly on the command line, it succeeds. Why does the same command not work in the script? Thanks, Don
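
    The likely cause (a sketch of my reading, not the accepted answer): shell variables are case-sensitive, so $deploydir is empty while $deployDir holds the path, which is exactly why mkdir sees no operand.

      mkdir -p "$deployDir"   # match the case used above; quote against spaces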

  • bash command history update before execution of command

    - by Jon
    Hi, bash's command history is great, especially when adding history -a to PROMPT_COMMAND. However, I'm wondering if there is a way to log each command to a file as soon as the Return key is pressed, i.e. before the command starts rather than on its completion (the PROMPT_COMMAND approach only saves the command once the prompt appears again). I read about auditing programs like snoopy and session recorders like script, but they already seem too complex for the simple requirement I have. I suppose that script, with its logging of command output turned off, would already point in the right direction, but isn't there a quicker way to solve this problem? Thanks, Jon
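
    One lightweight possibility (a sketch, my assumption; the log file path is made up): bash runs a DEBUG trap before each simple command, and $BASH_COMMAND holds the command about to execute, so the trap can append it to a file before it runs.

      # Log every command just before it executes (interactive shells).
      trap 'echo "$(date "+%F %T") $BASH_COMMAND" >> ~/.bash_prelog' DEBUG   # hypothetical log path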

  • Automatic exit from bash shell script on error

    - by radman
    Hi, I've been writing some shell scripts, and I would find it useful if there were a way to halt execution of a script if any of its commands failed. See below for an example:

      #!/bin/bash
      cd some_dir
      ./configure --some-flags
      make
      make install

    So in this case, if the script can't change to the indicated directory, then it certainly shouldn't run ./configure afterwards. Now I'm well aware that I could have an if check for each command (which I think is a hopeless solution), but is there a global setting to make the script exit if one of the commands fails?
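
    The standard answer (a sketch; set -e is the usual mechanism, though it has well-known corner cases around conditionals and pipelines): enable errexit at the top of the script, and any command that exits non-zero aborts the run.

      #!/bin/bash
      set -e            # exit immediately if any command fails
      cd some_dir
      ./configure --some-flags
      make
      make install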

  • Parsing getopts in bash

    - by ABach
    I've got a bash function that I'm trying to use getopts with, and I'm having some trouble. The function is designed to be called by itself (getch), with an optional -s flag (getch -s), or with an optional string argument afterward (so getch master and getch -s master are both valid). The snippet below is where my problem lies; it isn't the entire function, but it's what I'm focusing on:

      getch() {
        if [ "$#" -gt 2 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
          echo "Usage: $0 [-s] [branch-name]" >&2
          return 1
        fi
        while getopts "s" opt; do
          echo $opt   # This line is here to test how many times we go through the loop
          case $opt in
            s)
              squash=true
              shift
              ;;
            *)
              ;;
          esac
        done
      }

    The getch -s master case is where the strangeness happens. The above should print s once, but instead I get this:

      [user@host:git-repositories/temp]$ getch -s master
      s
      s
      [user@host:git-repositories/temp]$

    Why is it parsing the -s opt twice?
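
    A sketch of the usual explanation (my assumption, not the accepted answer): shifting inside the getopts loop moves the arguments underneath OPTIND, so getopts re-reads -s; and because OPTIND persists between calls in the same shell, a function should also declare it local. Do a single shift after the loop instead.

      getch() {
        local OPTIND opt squash   # fresh OPTIND for every call
        while getopts "s" opt; do
          case $opt in
            s) squash=true ;;
          esac
        done
        shift $((OPTIND - 1))     # drop all parsed options at once
        local branch=$1           # the optional branch-name argument
      }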

  • Bash or python for changing spacing in files

    - by Werner
    Hi, I have a set of 10000 files. In all of them, the second line looks like:

      AAA 3.429 3.84

    so there is just one space (a requirement) between AAA and the two other columns. The rest of the lines in each file are completely different and correspond to 10 columns of numbers. Randomly, in around 20% of the files, and due to some errors, one gets

      BBB  3.429 3.84

    so now there are two spaces between the first and second columns. This is a big error, so I need to fix it, changing from 2 spaces to 1 in the files where the error occurs. The first approach I thought of was to write a bash script that, for each file, reads the 3 values of the second line and then prints them with just one space, doing this for all the files. I wonder what you think about this approach, and whether you could suggest something better, in bash, python, or some other approach. Thanks
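
    A simpler route (a sketch, assuming GNU sed and that only line 2 needs fixing): address line 2 directly and squeeze any run of spaces down to one, editing each file in place, with no need to parse the three values at all.

      for f in *.txt; do              # hypothetical glob for the 10000 files
        sed -i '2s/  */ /g' "$f"      # line 2 only: collapse multiple spaces to one
      done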

  • Compare output of program to correct program using bash script, without using text files

    - by Doug
    I've been trying to compare the output of a program against known-correct output in a bash script, without piping the program's output to a file and then running diff on it and a correct-output file. I've tried setting variables to the output and the correct output, and I believe that part has been successful, but I can't get the string comparison to work correctly. I may be wrong about the variable setting, so it could be that. What I've been writing:

      TEST=`./convert testdata.txt < somesampledata.txt`
      CORRECT="some correct output"
      if [ "$TEST"!="$CORRECT" ]; then
        echo "failed"
      fi
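
    The likely bug (a sketch of my reading): without spaces around !=, the bracket expression collapses into a single non-empty string, which [ always treats as true. Separating the operator from its operands makes it a real comparison.

      if [ "$TEST" != "$CORRECT" ]; then   # spaces around != are required
        echo "failed"
      fi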

  • Bash Script using Grep to search for a pattern in a file

    - by atif089
    I am writing a bash script to search for a pattern in a file using grep, and I am clueless as to why it isn't working. This is the program:

      echo "Enter file name...";
      read fname;
      echo "Enter the search pattern";
      read pattern
      if [ -f $fname ]; then
        result=`grep -i '$pattern' $fname`
        echo $result;
      fi

    Or is there a different approach to doing this? Thanks
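
    The same single-quote pitfall as in the sed question above (a sketch of my reading): '$pattern' is passed to grep literally; double quotes let the shell substitute the variable before grep sees it.

      result=$(grep -i "$pattern" "$fname")   # quote both so they expand safely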

  • Bash script function return value problem

    - by Eedoh
    Hi to all. Can anyone help me return the correct value from a bash script function? Here's my function, which should return the first (and only) line of the file passed as an argument:

      LOG_FILE_CREATION_TIME() {
        return_value=`awk 'NR==1' $1`
        return return_value
      }

    And here's my call of that function in the other script:

      LOG_FILE_CREATION_TIME "logfile"
      timestamp=$?
      echo "Timestamp = $timestamp"

    I always get some random values with this code. If, for example, there's a value of 62772031 in the logfile, I get Timestamp = 255 as the output. For some other values in the file, I get other random values as the return value, never the correct one. Any ideas?
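
    The usual explanation (a sketch, my assumption): return carries only an exit status in the range 0-255, so large values get mangled; data should go out on stdout and be captured with command substitution.

      LOG_FILE_CREATION_TIME() {
        awk 'NR==1' "$1"   # print the first line instead of "returning" it
      }
      timestamp=$(LOG_FILE_CREATION_TIME "logfile")
      echo "Timestamp = $timestamp"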

  • Bash variable kills script execution

    - by Kyle Terry
    Sorry if this is better suited to Server Fault, but I think it leans more toward the programming side of things. I have some code going into /etc/rc.local that detects which type of touch-screen monitor is plugged in and swaps out xorg.conf before launching X. Here is a small snippet:

      CURRENT_MONITOR=`ls /dev/usb | grep 'egalax_touch\|quanta_touch'`
      case $CURRENT_MONITOR in
        '')
          CURRENT_MONITOR='none'
          ;;
      esac

    If one of those two touch screens is plugged in, it works just fine. If any other monitor is plugged in, it stops at the CURRENT_MONITOR assignment. For testing I touched two files, one before creating CURRENT_MONITOR and one after, and only the file touched before is created. I'm not a bash programmer, so this might be something very obvious.
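
    A plausible diagnosis (a sketch, assuming rc.local runs under sh -e, as it does on some distributions): grep exits non-zero when nothing matches, and under errexit a failing command substitution in an assignment aborts the script. Appending || true absorbs the failure.

      CURRENT_MONITOR=$(ls /dev/usb | grep 'egalax_touch\|quanta_touch' || true)
      CURRENT_MONITOR=${CURRENT_MONITOR:-none}   # default when nothing matched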

  • Intersection of two lists in Bash

    - by User1
    I'm trying to write a simple script that will list the contents found in both of two lists. To simplify, let's use ls as an example. Imagine "one" and "two" are directories:

      one=`ls one`
      two=`ls two`
      intersection $one $two

    I'm still quite green in bash, so feel free to correct how I am doing this. I just need some command that will print out all files that exist in both "one" and "two". You might call this the intersection of "one" and "two".
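
    A sketch of one standard answer (my assumption; intersection above is the asker's made-up placeholder): comm -12 prints only the lines common to two sorted inputs, and process substitution feeds it the two listings.

      comm -12 <(ls one | sort) <(ls two | sort)   # names present in both directories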

  • Parallelize Bash Script

    - by thelsdj
    Let's say I have a loop in bash:

      for foo in `some-command`
      do
        do-something $foo
      done

    do-something is CPU-bound, and I have a nice shiny 4-core processor. I'd like to be able to run up to 4 do-somethings at once. The naive approach seems to be:

      for foo in `some-command`
      do
        do-something $foo &
      done

    This will run all the do-somethings at once, but there are a couple of downsides, mainly that do-something may also have some significant I/O which, performed all at once, might slow things down a bit. The other problem is that this code block returns immediately, so there's no way to do other work once all the do-somethings are finished. How would you write this loop so there are always X do-somethings running at once?
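
    One compact possibility (a sketch, assuming GNU xargs and that do-something takes a single argument): xargs -P caps the number of concurrent processes and, unlike the backgrounding loop, does not return until every job has finished.

      some-command | xargs -n 1 -P 4 do-something   # at most 4 in flight at a time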

  • bash testing a group of directories for existence

    - by Jim Jones
    I have documents stored in a file system which includes "daily" directories, e.g. 20050610. In a bash script I want to list the files in a month's worth of these directories, so I'm running a find command:

      find <path>/200506* -type f >> jun2005.lst

    I would like to check that this set of directories is not empty before executing the find command. However, if I use

      if [ -d 200506* ]

    I get a "too many arguments" error. How can I get around this?
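
    A sketch of one way around it (my assumption): expand the glob onto the positional parameters with set --; if nothing matched, the pattern is left unexpanded and the -d test fails, while a real match makes $1 an existing directory.

      set -- "$path"/200506*            # $path stands in for the elided <path>
      if [ -d "$1" ]; then
        find "$@" -type f >> jun2005.lst
      fi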

  • bash find xargs grep only single occurence

    - by keftebub
    Hi. Maybe this is a bit strange, and maybe there are other tools to do this, but, well... I am using the following classic bash command to find all files which contain some string:

      find . -type f | xargs grep "something"

    I have a great number of files, at multiple depths. The first occurrence of "something" is enough for me, but find continues searching, and takes a long time to complete the rest of the files. What I would like is some kind of "feedback" from grep back to find, so that find could stop searching for more files. Is such a thing possible? Thank you
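
    One way to get that feedback (a sketch, assuming GNU find): let find itself run grep -q on each file and stop at the first hit, since -quit ends the search as soon as the preceding test succeeds.

      # Print the first file containing the string, then stop walking the tree.
      find . -type f -exec grep -q "something" {} \; -print -quit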

  • Removing final bash script argument

    - by ctuffli
    I'm trying to write a script that searches a directory for files and greps for a pattern. It's something similar to the below, except the find expression is much more complicated (it excludes particular directories and files):

      #!/bin/bash
      if [ -d "${!#}" ]
      then
        path=${!#}
      else
        path="."
      fi
      find $path -print0 | xargs -0 grep "$@"

    Obviously, the above doesn't work, because "$@" still contains the path. I've tried variants that build up an argument list by iterating over all the arguments to exclude the path, such as

      args=${@%$path}
      find $path -print0 | xargs -0 grep "$path"

    or

      whitespace="[[:space:]]"
      args=""
      for i in "${@%$path}"
      do
        # handle the NULL case
        if [ ! "$i" ]
        then
          continue
        # quote any arguments containing white-space
        elif [[ $i =~ $whitespace ]]
        then
          args="$args \"$i\""
        else
          args="$args $i"
        fi
      done
      find $path -print0 | xargs -0 grep --color "$args"

    but these fail with quoted input. For example:

      # ./find.sh -i "some quoted string"
      grep: quoted: No such file or directory
      grep: string: No such file or directory

    Note that if $@ doesn't contain the path, the first script does do what I want.
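
    A sketch of a quoting-safe answer (my assumption): instead of flattening the arguments into a single string, slice the positional parameters with "${@:1:$#-1}", which keeps each argument as its own word, quoted strings included.

      #!/bin/bash
      if [ -d "${!#}" ]; then
        path=${!#}
        set -- "${@:1:$#-1}"   # drop the trailing path, preserving word boundaries
      else
        path="."
      fi
      find "$path" -type f -print0 | xargs -0 grep "$@"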

  • Bash script to (more or less) reliably check if the internet is up

    - by João Portela
    I need a bash script, to be put in a cron job, that checks every minute whether the internet is up. This is how I did it:

      #!/bin/sh
      host1=google.com
      host2=wikipedia.org
      curr_date=`date +"%Y%m%d%H%M"`
      echo -n "${curr_date};"
      ((ping -w5 -c3 $host1 || ping -w5 -c3 $host2) > /dev/null 2>&1) && echo "up" || (echo "down" && exit 1)

    How would you do it? Which hosts would you ping? Thanks in advance.
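
    One variation worth considering (a sketch, my assumption): ICMP is blocked on some networks, so an HTTP fetch can be a more realistic probe; curl's -f fails on HTTP errors, -s keeps it quiet, and -m bounds the wait.

      if curl -fs -m 10 http://www.google.com > /dev/null; then
        echo "up"
      else
        echo "down"
      fi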
