Search Results

Search found 24505 results on 981 pages for 'bash script'.


  • how to escape white space in bash loop list

    - by MCS
    I have a bash shell script that loops through all child directories (but not files) of a certain directory. The problem is that some of the directory names contain spaces. Here are the contents of my test directory: $ls -F test Baltimore/ Cherry Hill/ Edison/ New York City/ Philadelphia/ cities.txt And the code that loops through the directories: for f in `find test/* -type d`; do echo $f done Here's the output: test/Baltimore test/Cherry Hill test/Edison test/New York City test/Philadelphia Cherry Hill and New York City are treated as 2 or 3 separate entries. I tried quoting the filenames, like so: for f in `find test/* -type d | sed -e 's/^/\"/' | sed -e 's/$/\"/'`; do echo $f done but to no avail. There's got to be a simple way to do this. Any ideas? The answers below are great. But to make this more complicated - I don't always want to use the directories listed in my test directory. Sometimes I want to pass in the directory names as command-line parameters instead. I took Charles' suggestion of setting the IFS and came up with the following: dirlist="${@}" ( [[ -z "$dirlist" ]] && dirlist=`find test -mindepth 1 -type d` && IFS=$'\n' for d in $dirlist; do echo $d done ) and this works just fine unless there are spaces in the command line arguments (even if those arguments are quoted). For example, calling the script like this: test.sh "Cherry Hill" "New York City" produces the following output: Cherry Hill New York City Again, I know there must be a way to do this - I just don't know what it is...
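
    One common fix, not from the original post, is to avoid word-splitting entirely by reading find's output one line at a time; a sketch, assuming directory names contain no embedded newlines:

        #!/bin/bash
        # read each directory name as a whole line instead of word-splitting
        find test -mindepth 1 -type d | while IFS= read -r d; do
            echo "$d"
        done

        # for command-line parameters, "$@" preserves quoted arguments with spaces
        for d in "$@"; do
            echo "$d"
        done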


  • Problem with non blocking fifo in bash

    - by timdel
    Hi! I'm running a few Team Fortress 2 servers and I want to write a little management script. Basically the TF2 servers are a fg process which provides a server console, so I can start the server, type status and get an answer from it: ***@purple:~/tf2$ ./start_server_testing Auto detecting CPU Using AMD Optimised binary. Server will auto-restart if there is a crash. Console initialized. [bla bla bla] Connection to Steam servers successful. VAC secure mode is activated. status hostname: Team Fortress version : 1.0.6.1/15 3883 secure udp/ip : ***.***.133.31:27600 map : ctf_2fort at: 0 x, 0 y, 0 z players : 0 (2 max) # userid name uniqueid connected ping loss state adr Great, now I want to create a script which sends the command sm_reloadadmins to all my servers. The best way I found to do this is using a fifo named pipe. Now what I want is to have this pipe read-only and non-blocking to the server process, so I can write into the pipe and the server executes it, but still I want to write via the console on the server, so if I switch back to the fg process of the server and I type status I want an answer printed. I tried this (assuming serverfifo was created with mkfifo serverfifo): ./start_server_testing < serverfifo Not working, the server won't start until something is written to the pipe. ./start_server_testing <> serverfifo That's actually working pretty well: I can see the console output of the server and I can write to the fifo and the server executes the commands, but I can't write via console to the server anymore. Also, if I write 'exit' to the pipe (which should end the server) and I'm running it in a screen, the screen window is getting killed for some reason (why?). I only need the server to read the fifo without blocking AND all my keyboard input on the server itself should be sent to the server AND all server output should be written to the console. Is that possible? If yes, how?
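
    A widely used alternative, offered here as a sketch rather than a fix for the FIFO itself: run the server inside GNU screen and inject console commands with screen's stuff command (the session name tf2 is an assumption):

        # start the server in a detached screen session
        screen -dmS tf2 ./start_server_testing

        # send a console command, followed by Enter, to the running server
        screen -S tf2 -X stuff $'sm_reloadadmins\n'

        # attach when you want to type on the console yourself
        screen -r tf2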


  • Editing history in bash

    - by nameanyone
    In bash, when I go back in history, edit some command and run it, this edited command is appended to history and the original one is left intact. But every once in a while I somehow manage to affect the original command, i.e. my edit replaces the original command back in history. I can't put my finger on how this happens. Can someone explain? My goal is to avoid this, so any edit to a previous command always gets appended to history and never replaces the original.
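
    This usually happens because readline keeps in-place edits made to recalled history lines. A sketch of the setting that reverts such edits when a command is accepted (assuming a readline/bash recent enough to support it):

        # in ~/.inputrc
        set revert-all-at-newline on

        # or in ~/.bashrc
        bind 'set revert-all-at-newline on'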


  • Bash Shell Script: Nested Select Statements

    - by CCG121
    I have a script with a select statement that goes to multiple sub-select statements; however, once there, I cannot seem to figure out how to get it to go back to the main menu. Also, if possible, I would like it to re-list the options. #!/bin/bash PS3='Option = ' MAINOPTIONS="Apache Postfix Dovecot All Quit" APACHEOPTIONS="Restart Start Stop Status" POSTFIXOPTIONS="Restart Start Stop Status" DOVECOTOPTIONS="Restart Start Stop Status" select opt in $MAINOPTIONS; do if [ "$opt" = "Quit" ]; then echo Now Exiting exit elif [ "$opt" = "Apache" ]; then select opt in $APACHEOPTIONS; do if [ "$opt" = "Restart" ]; then sudo /etc/init.d/apache2 restart elif [ "$opt" = "Start" ]; then sudo /etc/init.d/apache2 start elif [ "$opt" = "Stop" ]; then sudo /etc/init.d/apache2 stop elif [ "$opt" = "Status" ]; then sudo /etc/init.d/apache2 status fi done elif [ "$opt" = "Postfix" ]; then select opt in $POSTFIXOPTIONS; do if [ "$opt" = "Restart" ]; then sudo /etc/init.d/postfix restart elif [ "$opt" = "Start" ]; then sudo /etc/init.d/postfix start elif [ "$opt" = "Stop" ]; then sudo /etc/init.d/postfix stop elif [ "$opt" = "Status" ]; then sudo /etc/init.d/postfix status fi done elif [ "$opt" = "Dovecot" ]; then select opt in $DOVECOTOPTIONS; do if [ "$opt" = "Restart" ]; then sudo /etc/init.d/dovecot restart elif [ "$opt" = "Start" ]; then sudo /etc/init.d/dovecot start elif [ "$opt" = "Stop" ]; then sudo /etc/init.d/dovecot stop elif [ "$opt" = "Status" ]; then sudo /etc/init.d/dovecot status fi done elif [ "$opt" = "All" ]; then sudo /etc/init.d/apache2 restart sudo /etc/init.d/postfix restart sudo /etc/init.d/dovecot restart fi done
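
    A sketch of the usual pattern (not from the post): give each inner menu a Back entry that breaks out of the inner select, so control returns to the outer loop; pressing Enter on an empty line makes select print its option list again.

        select opt in $APACHEOPTIONS "Back"; do
            case $opt in
                Restart) sudo /etc/init.d/apache2 restart ;;
                Start)   sudo /etc/init.d/apache2 start ;;
                Stop)    sudo /etc/init.d/apache2 stop ;;
                Status)  sudo /etc/init.d/apache2 status ;;
                Back)    break ;;   # return to the main select
            esac
        done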


  • Mac OS X bash prompt bug?

    - by Memo
    I am trying to set my bash prompt to display the time and current directory in bold: export PS1="\[\e[1m\][\A] \w \$ \[\e[0m\]" This does apparently work, but when I use the command history (ctrl-r), after finding the command I was searching for and pressing enter, this line is not displayed correctly. Here is an example: [21:58] ~/Wyona/svn-repos/zwischengas $ (reverse-i-search)`ta': tail -F logs/log4j-cnode1.log becomes, after pressing enter: [21:58] ~/Wyona/svn-repos/zwischengas $ -F logs/log4j-cnode1.log Of course, this is not "really" a problem, since the command does work correctly, but it is still annoying. Does anybody know why this happens? And, more importantly, how to prevent/fix it? Thanks, Memo


  • Problem executing bash file

    - by sandelius
    Hi there! I've run into a problem while learning to combine .sh files and PHP. I've created a file test.sh and in that file I call a PHP file called test.php. If I double-click on the .sh file it runs perfectly, but when I try to run it from the terminal I get "command not found". I'm in the exact folder as my .sh file but it won't work. Here's my test.sh: #!/bin/bash LIB=${0/%cli/} exec php -q ${LIB}test.php one two three exit; When I double-click on the test.sh file it returns the argv array like it's supposed to. But why can't I run it from the terminal?
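
    The usual explanation, stated here as an assumption about this setup: the current directory is not in $PATH, so the script must be invoked with an explicit path (and needs the execute bit):

        chmod +x test.sh    # make it executable once
        ./test.sh           # run it with an explicit path
        bash test.sh        # or hand it to bash directly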


  • Bash: Extract Range with Regular Expression (maybe sed?)

    - by sixtyfootersdude
    I have a file that is similar to this: <many lines of stuff> SUMMARY: <some lines of stuff> END OF SUMMARY I want to extract just the stuff between SUMMARY and END OF SUMMARY. I suspect I can do this with sed but I am not sure how. I know I can modify the stuff in between with this: sed "/SUMMARY/,/END OF SUMMARY/ s/replace/with/" fileName (but I am not sure how to just extract it). I am using Bash on Solaris.
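
    A sketch of one common answer, which should work with the stock sed on Solaris: suppress default output with -n and print only the addressed range.

        sed -n '/SUMMARY/,/END OF SUMMARY/p' fileName

        # drop the marker lines themselves, keeping only what is between them
        sed -n '/SUMMARY/,/END OF SUMMARY/p' fileName | sed '1d;$d'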


  • Filtering Filenames with bash

    - by Stefan Liebenberg
    I have a directory full of log files in the form ${name}.log.${year}${month}${day} such that they look like this: logs/ production.log.20100314 production.log.20100321 production.log.20100328 production.log.20100403 production.log.20100410 ... production.log.20100314 production.log.old I'd like to use a bash script to filter out all the logs older than X months and dump them into *.log.old: X=6 #months LIST=*.log.*; for file in LIST; do is_older = file_is_older_than_months( ${file}, ${X} ); if is_older; then cat ${c} >> production.log.old; rm ${c}; fi done; How can I get all the files older than X months? And how can I avoid having the *.log.old file included in the LIST variable? Thank you, Stefan
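
    A sketch of one way to do this (assumes GNU date for the relative-date arithmetic; not from the original post): compare the YYYYMMDD suffix of each filename against a cutoff, which also skips production.log.old automatically.

        #!/bin/bash
        months=6
        cutoff=$(date -d "-${months} months" +%Y%m%d)   # GNU date

        for file in production.log.*; do
            suffix=${file##*.}
            # skip production.log.old and anything without an 8-digit date suffix
            [[ $suffix =~ ^[0-9]{8}$ ]] || continue
            if [ "$suffix" -lt "$cutoff" ]; then
                cat "$file" >> production.log.old && rm "$file"
            fi
        done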


  • SIMPLE BASH Programming.

    - by atif089
    I am a newbie to BASH so please don't mind my stupid questions; I have not been able to find any good sources to learn from. I want to create a script to display a file's name and its size. This is what the code looks like: filename=$1 if [ -f $filename ]; then filesize=`du -b $1` echo "The name of file is $1" echo "Its size is $filesize" else echo "The file specified doesnot exists" fi The output is like this: $ ./filesize.sh aa The name of file is aa Its size is 88 aa But in the last line I don't want to show the name of the file. How do I do that? I want to do the same thing using wc as well.
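
    A sketch of how to keep only the size field (du -b is a GNU extension; wc -c is shown as an alternative):

        filesize=$(du -b "$filename" | cut -f1)   # keep only the first, tab-separated field
        echo "Its size is $filesize"

        # or count the bytes directly
        filesize=$(wc -c < "$filename")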


  • List the names of existing directories from .tgz file in a bash variable

    - by Tom
    I would like to find all the directories that are in a .tgz file and that already exist on the system and put the result in a bash variable. I have tried this: EXISTING=`for f in \`tar tzf $ARCHIVE\`; do if [ -d "/tmp/unpacked-data/\$f" ]; then echo \$f; fi; done` with no luck. If I echo the value of $f before the if in the loop, I get all the files, i.e. this works: EXISTING=`for f in \`tar tzf $ARCHIVE\`; do echo \$f; done` Can someone tell me why the \$f doesn't work in the if statement? Thanks, Tom
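
    A sketch that sidesteps the nested-backtick escaping by piping tar into a while-read loop inside $( ) (assumes archive entries without embedded newlines):

        EXISTING=$(
            tar tzf "$ARCHIVE" | while IFS= read -r f; do
                [ -d "/tmp/unpacked-data/$f" ] && echo "$f"
            done
        )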


  • Sed not working inside bash script

    - by Isabelle
    Hello. I believe this may be a simple question, but I've looked everywhere and tried some workarounds, but I still haven't solved the problem. Problem description: I have to replace a character inside a file and I can do it easily using the command line: sed -e 's/pattern1/pattern2/g' full_path_to_file/file But when I use the same line inside a bash script I can't seem to replace it, and I don't get an error message, just the file contents without the substitution. #!/bin/sh VAR1="patter1" VAR2="patter2" VAR3="full_path_to_file" sed -e 's/${VAR1}/${VAR2}/g' ${VAR3} Any help would be appreciated. Thank you very much for your time.
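
    The usual cause is that single quotes stop the shell from expanding ${VAR1} and ${VAR2}, so sed looks for the literal text ${VAR1}. A sketch with double quotes, assuming the patterns contain no characters special to sed:

        #!/bin/sh
        VAR1="pattern1"
        VAR2="pattern2"
        VAR3="full_path_to_file/file"

        sed -e "s/${VAR1}/${VAR2}/g" "${VAR3}"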


  • Listing time every second as a Bash script

    - by Caleb
    Hello all, first time here as I've finally started to learn programming. Anyway, I'm just trying to print the time in nanoseconds every second here, and I have this: #!/usr/bin/env bash while true; do date=(date +%N) ; echo $date ; sleep 1 ; done Now, that simply prints 'date' over and over, which isn't what I want. My learning has been rather messy, so I hope you'll excuse me for this if it's really simple. Also, I did manage to find this, which worked at the prompt: while true ; do date +%N ; sleep 1 ; done But that obviously doesn't work as a script.
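
    A sketch of the intended script: the assignment needs command substitution, $(date +%N), rather than the array syntax (date +%N):

        #!/usr/bin/env bash
        while true; do
            now=$(date +%N)   # command substitution, not an array literal
            echo "$now"
            sleep 1
        done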


  • Use a grepped file as an included source in bash

    - by Andrew
    I'm on a shared webhost where I don't have permission to edit the global bash configuration file at /etc/bashrc. Unfortunately there is one line in the global file, mesg y, which puts the terminal in tty mode and makes scp and similar commands unavailable. My local ~/.bashrc includes the global file as a source, like so: # Source global definitions if [ -f /etc/bashrc ]; then . /etc/bashrc fi My current workaround uses grep to output the global file, sans offending line, into a local file and uses that as a source: # Source global definitions if [ -f /etc/bashrc ]; then grep -v mesg /etc/bashrc > ~/.bash_global . ~/.bash_global fi Is there a way to include a grepped file like this without the intermediate step of creating an actual file? Something like this? . grep -v mesg /etc/bashrc > ~/.bash_global
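
    Process substitution does this in bash: the filtered stream is presented as a file name, so it can be sourced without writing a real file. A sketch:

        # Source global definitions, filtering out the offending line on the fly
        if [ -f /etc/bashrc ]; then
            . <(grep -v mesg /etc/bashrc)
        fi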


  • cURL: from PHP to BASH

    - by flienteen
    Hi, I've never done any curl before, so I am in need of some help. php: <?php $ch = curl_init(); $data = array( 'uptype'=>'file', 'file'=>'@'.$argv[1], ); curl_setopt($ch, CURLOPT_URL, 'http://my_site_ex/up.php'); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $data); curl_exec($ch); curl_close($ch); ?> How do I write the same script in Bash?
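
    A sketch of the equivalent multipart POST on the curl command line (field names and URL taken from the PHP above; $1 is the file to upload):

        #!/bin/bash
        curl -F "uptype=file" -F "file=@$1" http://my_site_ex/up.php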


  • Parsing getopts in bash

    - by ABach
    I've got a bash function that I'm trying to use getopts with and am having some trouble. The function is designed to be called by itself (getch), with an optional -s flag (getch -s), or with an optional string argument afterward (so getch master and getch -s master are both valid). The snippet below is where my problem lies - it isn't the entire function, but it's what I'm focusing on: getch() { if [ "$#" -gt 2 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then echo "Usage: $0 [-s] [branch-name]" >&2 return 1 fi while getopts "s" opt; do echo $opt # This line is here to test how many times we go through the loop case $opt in s) squash=true shift ;; *) ;; esac done } The getch -s master case is where the strangeness happens. The above should spit out s once, but instead, I get this: [user@host:git-repositories/temp]$ getch -s master s s [user@host:git-repositories/temp]$ Why is it parsing the -s opt twice?
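
    A likely explanation, offered as an assumption: shifting inside the getopts loop changes the positional parameters out from under getopts, and OPTIND is not reset between function calls. The usual pattern is to make OPTIND local and shift once after the loop; a sketch:

        getch() {
            local squash=false OPTIND=1
            while getopts "s" opt; do
                case $opt in
                    s) squash=true ;;
                    *) echo "Usage: getch [-s] [branch-name]" >&2; return 1 ;;
                esac
            done
            shift $((OPTIND - 1))   # drop the parsed options in one go
            local branch=${1:-master}
            echo "squash=$squash branch=$branch"
        }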


  • Custom script in .screenrc

    - by benoror
    Hi. I made a script that spawns a remote shell or runs a local shell, depending on whether it's on the current machine or not: #!/bin/bash # By: benoror <[email protected]> # # spawns a remote shell or runs a local shell whether it's on the current machine or not # $1 = hostname if [ "$(hostname)" == "$1" ]; then bash else ssh "$1.local" fi For example, if I'm on server1: ./spawnshell.sh server1 -> runs bash ./spawnshell.sh server2 -> ssh to server2.local I want that script to run automatically in separate tabs in GNU Screen, but I can't make it run. My .screenrc: ... screen -t "@server1" 1 exec /home/benoror/scripts/spawnshell.sh server1 screen -t "@server2" 2 exec /home/benoror/scripts/spawnshell.sh server2 ... But it doesn't work; I've tried without 'exec', with the -X option and a lot more. Any ideas?
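
    A sketch of the plain .screenrc form (paths taken from the post); the screen command takes the program and its arguments directly, and zombie keeps a window open after its command exits so any error stays visible. Both are offered as assumptions about this setup:

        zombie kr
        screen -t "@server1" 1 /home/benoror/scripts/spawnshell.sh server1
        screen -t "@server2" 2 /home/benoror/scripts/spawnshell.sh server2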


  • Bash or python for changing spacing in files

    - by Werner
    Hi, I have a set of 10000 files. In all of them, the second line looks like: AAA 3.429 3.84 so there is just one space (requirement) between AAA and the two other columns. The rest of the lines in each file are completely different and correspond to 10 columns of numbers. Randomly, in around 20% of the files, and due to some errors, one gets BBB 3.429 3.84 so now there are two spaces between the first and second column. This is a big error so I need to fix it, changing from 2 to 1 space in the files where the error takes place. The first approach I thought of was to write a bash script that for each file reads the 3 values of the second line and then prints them with just one space, doing it for all the files. I wonder what you think about this approach and whether you could suggest something better: bash, python or some other approach. Thanks
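
    A bash/sed sketch (assumes GNU sed for -i; the *.dat glob is a placeholder for the 10000 files): squeeze runs of spaces on line 2 only, leaving the other lines alone.

        #!/bin/bash
        for f in *.dat; do
            # on line 2 only, collapse any run of two or more spaces to a single space
            sed -i '2s/  */ /g' "$f"
        done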


  • Compare output of program to correct program using bash script, without using text files

    - by Doug
    I've been trying to compare the output of a program to known correct output by using a bash script without piping the output of the program to a file and then using diff on the output file and a correct output file. I've tried setting the variables to the output and correct output and I believe it's been successful but I can't get the string comparison to work correctly. I may be wrong about the variable setting so it could be that. What I've been writing: TEST=`./convert testdata.txt < somesampledata.txt` CORRECT="some correct output" if [ "$TEST"!="$CORRECT" ]; then echo "failed" fi
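
    The test needs whitespace around != inside [ ]; without it the whole expression is a single non-empty string, which always tests true. A sketch (program names taken from the post):

        TEST=$(./convert testdata.txt < somesampledata.txt)
        CORRECT="some correct output"

        if [ "$TEST" != "$CORRECT" ]; then
            echo "failed"
        fi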


  • Bash Script using Grep to search for a pattern in a file

    - by atif089
    I am writing a bash script to search for a pattern in a file using GREP. I am clueless as to why it isn't working. This is the program: echo "Enter file name..."; read fname; echo "Enter the search pattern"; read pattern if [ -f $fname ]; then result=`grep -i '$pattern' $fname` echo $result; fi Or is there a different approach to do this? Thanks
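
    The single quotes around '$pattern' keep the shell from expanding the variable, so grep searches for the literal text $pattern; a sketch with double quotes:

        if [ -f "$fname" ]; then
            result=$(grep -i "$pattern" "$fname")
            echo "$result"
        fi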


  • Bash variable kills script execution

    - by Kyle Terry
    Sorry if this is better suited to Server Fault, but I think it leans more towards the programming side of things. I have some code that's going into /etc/rc.local to detect what type of touch screen monitor is plugged in and changes out the xorg.conf before launching X. Here is a small snippet: CURRENT_MONITOR=`ls /dev/usb | grep 'egalax_touch\|quanta_touch'` case $CURRENT_MONITOR in '') CURRENT_MONITOR='none' ;; esac If one of those two touch screens is plugged in, it works just fine. If any other monitor is plugged in, it stops at the "CURRENT_MONITOR=`ls /dev/usb | grep 'egalax_touch\|quanta_touch'`" line. For testing I touched two files, one before creating CURRENT_MONITOR and one after, and only the file touched before is created. I'm not a bash programmer so this might be something very obvious.
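
    One plausible cause, stated as an assumption: when no touch screen is present, grep exits non-zero, and a shell running with errexit (set -e), as some init scripts do, aborts on that assignment. Appending || true keeps the script alive; a sketch:

        CURRENT_MONITOR=$(ls /dev/usb 2>/dev/null | grep 'egalax_touch\|quanta_touch' || true)

        if [ -z "$CURRENT_MONITOR" ]; then
            CURRENT_MONITOR='none'
        fi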


  • Setting environment variables in PowerShell by calling a python script that prints $env:myVar=myvalue

    - by leeg
    I have some legacy python scripts that manage my shell environment for all the programs and plugins I am running on Linux (bash) and Windows (cmd.exe). I want to port this to PowerShell. How do I set environment variables in PowerShell by calling a python script that prints $env:myVar=myvalue, so that the variable persists in the PowerShell session? In Bash I can use a bash function to call my python script, which prints export var=value to stdout, and the function will set the environment variables in my shell. This will also work in the Windows cmd shell by calling a .bat file. I cannot figure out how to do this in PowerShell. I think it should be something like this: setvar.ps1: function SETVAR {c:\python26\python.exe varconfig.py } varconfig.py: import sys print >> sys.stdout, '$env:myVar=foo'


  • Intersection of two lists in Bash

    - by User1
    I'm trying to write a simple script that will list the contents found in two lists. To simplify, let's use ls as an example. Imagine "one" and "two" are directories. one=`ls one` two=`ls two` intersection $one $two I'm still quite green in bash, so feel free to correct how I am doing this. I just need some command that will print out all files in "one" and "two". They must exist in both. You might call this the "intersection" between "one" and "two".
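
    comm computes the intersection of two sorted streams; a sketch using process substitution (assumes filenames without embedded newlines):

        # lines present in both listings
        comm -12 <(ls one | sort) <(ls two | sort)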


  • Parallelize Bash Script

    - by thelsdj
    Let's say I have a loop in bash: for foo in `some-command` do do-something $foo done do-something is CPU bound and I have a nice shiny 4 core processor. I'd like to be able to run up to 4 do-something's at once. The naive approach seems to be: for foo in `some-command` do do-something $foo & done This will run all do-somethings at once, but there are a couple of downsides, mainly that do-something may also have some significant I/O which, performed all at once, might slow things down a bit. The other problem is that this code block returns immediately, so there's no way to do other work when all the do-somethings are finished. How would you write this loop so there are always X do-somethings running at once?
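
    Two common patterns, sketched here under the assumption that an xargs with -P (GNU or BSD) is available; the second is a pure-bash fallback.

        # let xargs keep exactly four workers busy and wait for all of them
        some-command | xargs -n 1 -P 4 do-something

        # pure bash: cap the number of background jobs at four
        for foo in $(some-command); do
            while [ "$(jobs -rp | wc -l)" -ge 4 ]; do
                sleep 1
            done
            do-something "$foo" &
        done
        wait   # block until the last batch finishes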


  • bash testing a group of directories for existence

    - by Jim Jones
    I have documents stored in a file system which includes "daily" directories, e.g. 20050610. In a bash script I want to list the files in a month's worth of these directories. So I'm running a find command: find <path>/200506* -type f >> jun2005.lst. I would like to check that this set of directories is not a null set before executing the find command. However, if I use if [ -d 200506* ] I get a "too many arguments" error. How can I get around this?
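
    A sketch that expands the glob into an array and tests whether anything matched (bash-specific; $path stands in for the question's <path>):

        shopt -s nullglob               # unmatched globs expand to nothing
        dirs=( "$path"/200506*/ )
        shopt -u nullglob

        if [ "${#dirs[@]}" -gt 0 ]; then
            find "${dirs[@]}" -type f >> jun2005.lst
        else
            echo "no directories for 200506" >&2
        fi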

