Search Results

Search found 6936 results on 278 pages for 'shell scripting'.

Page 67/278 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • Bash: Read lines in a file scenario with sed or awk

    - by user105566
    I have this scenario. File content: 10.1.1.1 10.1.1.2 10.1.1.3 10.1.1.4. I want sed or awk so that every time I cat the file, the next line is returned. For example, the first iteration of cat ip | some magic returns 10.1.1.1, the second iteration returns 10.1.1.2, the third returns 10.1.1.3, the fourth returns 10.1.1.4, and after the last line it wraps around, so the fifth iteration returns 10.1.1.1 again. Can we do this using sed or awk?
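
    A minimal sketch of one way to do this, assuming a small state file is acceptable for remembering the position between runs (the file name .ip_index is made up):

        #!/bin/bash
        # print the next line of "ip" on each run, wrapping back to the first line
        index_file=.ip_index                      # assumed state file remembering position
        total=$(wc -l < ip)                       # number of lines in the file
        last=$(cat "$index_file" 2>/dev/null)     # index printed last time (empty on first run)
        next=$(( (${last:-0} % total) + 1 ))      # advance by one, wrapping at the end
        sed -n "${next}p" ip                      # print only that line
        echo "$next" > "$index_file"              # remember where we are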

    Read the article

  • WMI Remote connection objsWbemLocator.ConnectServer

    - by Sam
    I have an issue when connecting to remote machines using the following:

        Set objWMIService = objSWbemLocator.ConnectServer _
            (sIP, "root\CIMV2", strUser, strPassword, "MS_409", "ntlmdomain:" + sDomain, 128)

    The problem is that the connection to some machines never times out and the process hangs. Is there a way to cancel the connect and continue with the next IP? I'm using VBScript. Thanks, Sam

    Read the article

  • How to automate changing my ip?

    - by callisto
    I am very new to OS X. I will be using my MBP at work and at home, and I would like to be able to easily switch my IP when changing location. Thus far I have dabbled with Automator, hoping to do something like this pseudocode:

        if IP = 192.168.0.10
            root# changeip 192.168.0.10 10.0.0.15
        else
            root# changeip 10.0.0.15 192.168.0.10

    The reason for this is that my IP from home will not allow me access at work and vice versa. I have friends and family who drop in now and then, and multiple wireless devices set up for the home IP range. Changing all of that to accommodate one new device (the MacBook) would make me reconsider my foray into OS X. I'd rather have the MBP adapt to me than I to it.
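
    A hedged sketch of one direction, assuming the two networks are saved as separate network locations in System Preferences (the names Home and Work are made up), so networksetup can switch between them based on what en0 currently has:

        #!/bin/bash
        # pick a network location from the address currently on en0 (a sketch)
        current_ip=$(ipconfig getifaddr en0)
        case "$current_ip" in
            192.168.0.*) networksetup -switchtolocation "Home" ;;
            10.0.0.*)    networksetup -switchtolocation "Work" ;;
            *)           echo "unrecognised network: ${current_ip:-none}" >&2 ;;
        esac

    Detection could just as well be keyed off the Wi-Fi network name instead of the address; the point is only that a location switch can be scripted.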

    Read the article

  • Windows 2008 R2 Scheduled Task Not Running With Admin Privileges even if granted?

    - by j.rightly
    I have a scheduled task that is running as USER. I have checked the box "Run with highest privileges" in the scheduled task properties. The task is a PowerShell script that, among other things, reboots the system. The script executes and runs normally, but as a scheduled task, it fails to reboot the system. Here is the kicker: When I manually run the script as USER using the exact same command line as what's in the scheduled task, the script still runs but this time it actually reboots the system. I have UAC disabled and USER is a member of the local Admins group. The local Admins group has the right to shut down the system. Nothing in the event logs offers any clues. Why would the same script running under the same credentials work interactively but not as a scheduled task? UPDATE: This is too weird. When the task ran on schedule, everything worked normally.

    Read the article

  • How can I prevent tmux exiting with Ctrl-d?

    - by Cas
    I use tmux on my server, and recently I found to my cost that Ctrl-d will exit tmux and lose all the session information. My intention was simply to end the SSH session, but I failed to notice I was still in tmux until it was too late. I am aware that I should be careful with Ctrl-d in future, but is there a way to prevent tmux from exiting when I hit Ctrl-d by accident? A solution such as a prompt, a confirmation or detaching would be fine.
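
    One possible sketch, assuming the shell inside tmux is bash: the IGNOREEOF variable makes bash demand several consecutive Ctrl-d presses before it exits, which buys time to notice you are still inside tmux. In ~/.bashrc, guarded so it only applies inside tmux:

        # ~/.bashrc (sketch) - require extra Ctrl-d presses when running inside tmux
        if [ -n "$TMUX" ]; then
            # the shell ignores this many EOFs (Ctrl-d) before actually exiting
            export IGNOREEOF=3
        fi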

    Read the article

  • How to add an iptables rule with source IP address

    - by ???
    I have a bash script that starts with this:

        if [[ $EUID -ne 0 ]]; then
            echo "Permission denied (are you root?)."
            exit 1
        elif [ $# -ne 1 ]
        then
            echo "Usage: install-nfs-server <client network/CIDR>"
            echo "$ bash install-nfs-server 192.168.1.1/24"
            exit 2
        fi;

    I then try to add the iptables rules for NFS as follows:

        iptables -A INPUT -i eth0 -p tcp -s $1 --dport 111 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p udp -s $1 --dport 111 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        service iptables save
        service iptables restart

    I get the error:

        Try `iptables -h' or 'iptables --help' for more information.
        Bad argument `111'
        Try `iptables -h' or 'iptables --help' for more information.
        Bad argument `111'
        Saving firewall rules to /etc/sysconfig/iptables:             [  OK  ]
        Flushing firewall rules:                                      [  OK  ]
        Setting chains to policy ACCEPT: filter                       [  OK  ]
        Unloading iptables modules:                                   [  OK  ]
        Applying iptables firewall rules:                             [  OK  ]
        Loading additional iptables modules: ip_conntrack_netbios_ns  [  OK  ]

    When I open /etc/sysconfig/iptables these are the rules:

        # Generated by iptables-save v1.3.5 on Mon Mar 26 08:00:42 2012
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [466:54208]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A OUTPUT -o eth0 -p tcp -m tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p udp -m udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p tcp -m tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p udp -m udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p esp -j ACCEPT
        -A RH-Firewall-1-INPUT -p ah -j ACCEPT
        -A RH-Firewall-1-INPUT -d 224.0.0.251 -p udp -m udp --dport 5353 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Mon Mar 26 08:00:42 2012

    I've also tried:

        iptables -I RH-Firewall-1-INPUT 1 -m state --state NEW -m tcp -p tcp --source $1 --dport 111 -j ACCEPT
        iptables -I RH-Firewall-1-INPUT 2 -m udp -p udp --source $1 --dport 111 -j ACCEPT
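
    A hedged guess plus a defensive sketch: one way to get exactly "Bad argument '111'" is for $1 to be empty or mangled at the point the rule is built (an unquoted empty $1 makes -s swallow --dport, leaving 111 dangling). Quoting and validating the argument first at least rules that out:

        #!/bin/bash
        # defensive sketch - assumes the error comes from a bad $1, which is a guess
        client_net=${1%$'\r'}          # drop a trailing CR from DOS line endings, if any
        if ! [[ $client_net =~ ^[0-9./]+$ ]]; then
            echo "expected a network/CIDR, got: '$client_net'" >&2
            exit 2
        fi
        iptables -A INPUT -i eth0 -p tcp -s "$client_net" --dport 111 \
            -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p udp -s "$client_net" --dport 111 \
            -m state --state NEW,ESTABLISHED -j ACCEPT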

    Read the article

  • Ubuntu Upstart script hangs on start and stop

    - by sbwoodside
    I have an upstart script that will start a custom jetty server. When I do sudo start [myservice], nothing happens. Subsequently, sudo status [myservice] shows it as: [myservice] start/killed, process 3586. Here's the script in /etc/init/[myservice].conf:

        description "[description]"
        author "[my name and email]"
        start on runlevel [2345]
        stop on runlevel [016]
        respawn
        expect fork
        script
            sudo -u www-data /path/to/grafserv-start.sh >> /tmp/upstart.log 2>&1
        end-script

    And here is grafserv-start.sh:

        #!/bin/bash
        /usr/bin/java -Djetty.port=3070 -jar /path/to/grafserv/trunk/start.jar
        echo "Done starting GrafServ"

    I've tried redirecting the output of the script command to a tmp logfile, but that file is never created. When I start it, I just get a hang until I ^C. Also, I tried running it with strace, but that gave me a lot of stuff about sockets.

    Read the article

  • Is there a remote file transfer command that preserves nanosecond timestamps?

    - by Denver Gingerich
    I've tried transferring files using scp and rsync on Ubuntu 10.04, but neither of them preserves more than second precision. Here's an example:

        $ touch test1
        $ scp -p test1 localhost:test2
        $ ls -l --full-time test*
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test1
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.000000000 -0500 test2
        $ cp -p test1 test2
        $ ls -l --full-time test*
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test1
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test2
        $

    A straight copy works fine, but scp truncates the timestamp. Are there any tools (preferably similar to scp or rsync in their usage) that do remote file transfers while preserving nanosecond timestamps? I could write a hacky script to do it, but I'd rather not.
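
    One avenue worth testing, not something the question confirms: GNU tar's pax ("posix") format records sub-second modification times, so streaming a pax archive over ssh may keep the nanoseconds that scp and older rsync drop. A sketch, with made-up paths and host, assuming GNU tar on both ends and that ~/incoming already exists remotely:

        #!/bin/bash
        # stream a pax-format tar archive over ssh; pax stores high-resolution mtimes
        src_dir=./data                     # local directory to send (assumed)
        remote=user@host                   # placeholder
        tar --format=posix -C "$src_dir" -cf - . \
            | ssh "$remote" 'tar -xpf - -C ~/incoming'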

    Read the article

  • Running scripts from another directory

    - by Desmond Hume
    Quite often, the script I want to execute is not located in my current working directory and I don't really want to leave it. Is it a good practice to run scripts (BASH, Perl etc.) from another directory? Will they usually find all the stuff they need to run properly? If so, what is the best way to run a "distant" script? Is it . /path/to/script or sh /path/to/script and how to use sudo in such cases? This, for example, doesn't work: sudo . /path/to/script
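
    For context, a short sketch of the usual options (a summary, not something taken from the question): an executable script can be run by its absolute path from any directory, sh/bash run it in a child shell, and . (source) runs it inside the current shell, which is why sudo . fails - the dot is a shell builtin, not a program sudo can execute.

        # make the script executable once, then run it by absolute path from anywhere
        chmod +x /path/to/script
        /path/to/script              # uses the interpreter named on its #! line

        # or run it explicitly through an interpreter (no execute bit needed)
        bash /path/to/script

        # with sudo, give it a real program rather than the "." builtin
        sudo /path/to/script
        sudo bash /path/to/script

        # "." (source) is only for running the script inside the *current* shell,
        # e.g. to set environment variables; it cannot be used with sudo
        . /path/to/script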

    Read the article

  • Secure method of changing a user's password via Python script/non-interactively

    - by Matthew Rankin
    I've created a Python script using Fabric to configure a freshly built Slicehost Ubuntu slice. In case you're not familiar with Fabric, it uses Paramiko, a Python SSH2 client, to provide remote access "for application deployment or systems administration tasks." One of the first things I have the Fabric script do is to create a new admin user and set their password. Unlike Pexpect, Fabric cannot handle interactive commands on the remote system, so I need to set the user's password non-interactively. At present, I'm using the chpasswd command to change the password. This transmits the password as clear text over SSH to the remote system.

    Questions: Is my current method of setting the password a security concern? Currently, the drawback I see is that Fabric shows the password as clear text on my local system as follows: [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd. Since I only run the Fabric script from my laptop, I don't think this is a security issue, but I'm interested in others' input. Is there a better method for setting the user's password non-interactively? Another option would be to use Pexpect from within the Fabric script to set the password.

    Current Code:

        # Fabric imports and host configuration excluded for brevity

        root_password = getpass.getpass("Root's password given by SliceManager: ")
        admin_username = prompt("Enter a username for the admin user to create: ")
        admin_password = getpass.getpass("Enter a password for the admin user: ")

        env.user = 'root'
        env.password = root_password

        # Create the admin group and add it to the sudoers file
        admin_group = 'admin'
        run('addgroup {group}'.format(group=admin_group))
        run('echo "%{group} ALL=(ALL) ALL" >> /etc/sudoers'.format(
            group=admin_group)
        )

        # Create the new admin user (default group=username); add to admin group
        run('adduser {username} --disabled-password --gecos ""'.format(
            username=admin_username)
        )
        run('adduser {username} {group}'.format(
            username=admin_username, group=admin_group)
        )

        # Set the password for the new admin user
        run('echo "{username}:{password}" | chpasswd'.format(
            username=admin_username, password=admin_password)
        )

    Local System Terminal I/O:

        $ fab config_rebuilt_slice
        Root's password given by SliceManager:
        Enter a username for the admin user to create: johnsmith
        Enter a password for the admin user:
        [xxx.xx.xx.xxx] run: addgroup admin
        [xxx.xx.xx.xxx] out: Adding group `admin' (GID 1000) ...
        [xxx.xx.xx.xxx] out: Done.
        [xxx.xx.xx.xxx] run: echo "%admin ALL=(ALL) ALL" >> /etc/sudoers
        [xxx.xx.xx.xxx] run: adduser johnsmith --disabled-password --gecos ""
        [xxx.xx.xx.xxx] out: Adding user `johnsmith' ...
        [xxx.xx.xx.xxx] out: Adding new group `johnsmith' (1001) ...
        [xxx.xx.xx.xxx] out: Adding new user `johnsmith' (1000) with group `johnsmith' ...
        [xxx.xx.xx.xxx] out: Creating home directory `/home/johnsmith' ...
        [xxx.xx.xx.xxx] out: Copying files from `/etc/skel' ...
        [xxx.xx.xx.xxx] run: adduser johnsmith admin
        [xxx.xx.xx.xxx] out: Adding user `johnsmith' to group `admin' ...
        [xxx.xx.xx.xxx] out: Adding user johnsmith to group admin
        [xxx.xx.xx.xxx] out: Done.
        [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd
        [xxx.xx.xx.xxx] run: passwd --lock root
        [xxx.xx.xx.xxx] out: passwd: password expiry information changed.
        Done.
        Disconnecting from [email protected]... done.

    Read the article

  • Linux script to kill process listening on a particular port

    - by Evgeny
    I have a process that listens on a TCP port (?0003). From time to time it crashes - badly. It stops working, but continues hogging the port for some time, so I can't even restart it. I'm looking to automate this. What I do right now is: netstat -ntlp |grep -P "\*\:\d0003" To see what the PID is and then: kill -9 <pid> Does anyone have a script (or EXE for that matter) that would link the two steps together, ie. parse the PID from the first command and pass it to the second?
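
    A sketch of one way to glue the two steps together, using fuser or lsof instead of parsing netstat by hand (the port is taken as an argument here):

        #!/bin/bash
        # kill whatever is listening on the given TCP port (sketch)
        port=${1:?usage: kill-port <port>}

        # simplest route: fuser can kill the owner of a TCP port directly
        fuser -k "${port}/tcp"

        # equivalent with lsof, closer to the netstat/grep approach above:
        # -t prints bare PIDs, -i :PORT selects processes using that port
        # lsof -ti ":${port}" | xargs -r kill -9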

    Read the article

  • control a bash script with variables from an external file

    - by perler
    I would like to control a bash script like this:

        #!/bin/sh
        USER1=_parsefromfile_
        HOST1=_parsefromfile_
        PW1=_parsefromfile_
        USER2=_parsefromfile_
        HOST2=_parsefromfile_
        PW2=_parsefromfile_
        imapsync \
            --buffersize 8192000 --nosyncacls --subscribe --syncinternaldates --IgnoreSizeErrors \
            --host1 $HOST1 --user1 $USER1 --password1 $PW1 --ssl1 --port1 993 --noauthmd5 \
            --host2 $HOST2 --user2 $USER2 --password2 $PW2 --ssl2 --port2 993 --noauthmd5 --allowsizemismatch

    with parameters from a control file like this:

        host1 user1 password1 host2 user2 password2
        anotherhost1 anotheruser1 anotherpassword1 anotherhost2 anotheruser2 anotherpassword2

    where each line represents one run of the script with the parameters extracted and made into variables. What would be the most elegant way of doing this? PAT
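
    A minimal sketch of one way to drive this, assuming the control file is whitespace-delimited with exactly six fields per line (the file name accounts.txt is made up):

        #!/bin/bash
        # read one sync job per line from the control file and run imapsync for each
        while read -r HOST1 USER1 PW1 HOST2 USER2 PW2; do
            [ -z "$HOST1" ] && continue    # skip blank lines

            imapsync \
                --buffersize 8192000 --nosyncacls --subscribe --syncinternaldates --IgnoreSizeErrors \
                --host1 "$HOST1" --user1 "$USER1" --password1 "$PW1" --ssl1 --port1 993 --noauthmd5 \
                --host2 "$HOST2" --user2 "$USER2" --password2 "$PW2" --ssl2 --port2 993 --noauthmd5 \
                --allowsizemismatch
        done < accounts.txt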

    Read the article

  • split command on Ubuntu command-line

    - by pedro
    I want to split a file into multiple files with at most 25 lines each. I'm using this: split -l 25 /etc/adduser.conf > /home/ubuntu/PL/trab3/rc_ But I do not get the files I expect. How can I get files with filenames like rc_01, rc_02, etc.?
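
    For what it's worth, a sketch of the usual invocation: split takes the output prefix as its last argument rather than via redirection (the > above just creates a single file literally named rc_), and GNU split's -d option switches to numeric suffixes. Note the suffixes start at 00; newer coreutils also accept --numeric-suffixes=1 to start at 01.

        # -l 25 : at most 25 lines per output file
        # -d    : numeric suffixes (00, 01, 02, ...) instead of aa, ab, ...
        # -a 2  : suffixes two digits wide
        # the last argument is the output prefix, not a redirection target
        split -l 25 -d -a 2 /etc/adduser.conf /home/ubuntu/PL/trab3/rc_
        # -> /home/ubuntu/PL/trab3/rc_00, rc_01, rc_02, ...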

    Read the article

  • script calling script as other user

    - by viktor tron
    Using CentOS, I want to run a script as user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        setuidgid training training_command

    This last line is not good enough, since training_command needs the training user's environment to be set. Trying

        su - training -c 'training_command'

    gives 'standard in must be a tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers, a la "Bash & 'su' script giving an error 'standard in must be a tty'", but I am reluctant and unsure of the consequences. Trying

        runuser - training -c 'training_command'

    gives runuser: cannot set groups: Connection refused. I found no sense in or resolution to this message. I am stuck. Is this really so hard to achieve? I appreciate all insight and guidance to best practice.
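
    A hedged sketch of one thing to try (an assumption, not a tested fix): keep setuidgid for dropping privileges, but point HOME and USER at the training account first and hand the command to a login-style bash, so the user's own profile gets sourced:

        #!/bin/bash
        # launcher run as root by daemontools (sketch)
        exec >> /var/log/training_service.log 2>&1

        # setuidgid only changes uid/gid and leaves root's environment behind,
        # so set the basics of training's environment explicitly
        export HOME=/home/training
        export USER=training
        export LOGNAME=training

        # bash -l sources the user's profile before running the command
        exec setuidgid training bash -lc 'training_command'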

    Read the article

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync, which works pretty well provided directory names and/or files remain the same. However, when files are renamed or moved about within subdir1, rsync will copy them over to the server again, creating duplicates. I then have to manually find and delete the extraneous files/folders that were left on the server during previous syncs. Note that I cannot use rsync's --delete flag, because any sync from a workstation would then mirror that particular folder tree instead of merging them all onto the server. Visual diagram:

        Server:            Workstation1       Workstation2       Workstation(n)
        Folder*            Folder*            Folder*            Folder*
         -subdir1           -subdir1           -subdir1           -subdir(n)
          -file1             -file1             -file2             -file(n)
          -file2
          -file(n)

    Is there a simple script (preferably in bash, nothing fancy) that can accomplish the deletion of the extraneous files/folders in the event a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at unison, but I did not like the fact that it keeps a local database for the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help. EDIT: I have tried unison just recently and I can safely say it is out of the question now. unison is a bi-directional synchronization tool and, from my testing, it mirrors the files existing on the server to all workstations - this is unwanted. Preferably, I would want files/folders to stay within their respective workstations and just merge onto the server. AKA uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by kyle, but I am still unsure if they are fit for the job.
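
    One hedged compromise, sketched below with made-up host names and paths: pull each workstation into its own subdirectory on the server and use --delete only there, so renames and moves are cleaned up per workstation while the server-side parent still accumulates everything:

        #!/bin/bash
        # pull each workstation's tree into its own server-side folder (sketch)
        # --delete is safe here because each destination belongs to exactly one source
        dest_root=/srv/backups                 # hypothetical server-side root
        for ws in workstation1 workstation2; do
            mkdir -p "${dest_root}/${ws}"
            rsync -a --delete "user@${ws}:/path/to/Folder/" "${dest_root}/${ws}/Folder/"
        done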

    Read the article

  • Bash Completion Script Help

    - by inxilpro
    So I'm just starting to learn about bash completion scripts, and I started to work on one for a tool I use all the time. First I built the script using a set list of options:

        _zf_comp() {
            local cur prev actions
            COMPREPLY=()
            cur="${COMP_WORDS[COMP_CWORD]}"
            prev="${COMP_WORDS[COMP_CWORD-1]}"
            actions="change configure create disable enable show"
            COMPREPLY=($(compgen -W "${actions}" -- ${cur}))
            return 0
        }
        complete -F _zf_comp zf

    This works fine. Next, I decided to dynamically create the list of available actions. I put together the following command:

        zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'

    which basically does the following:

        1. Grabs all the text in the "zf" command after "Providers and their actions:"
        2. Grabs all the lines that start with "zf" (I had to do some fancy work here 'cause the ZF command prints in color)
        3. Grab the second piece of the command and remove any spaces from it (the spaces part is probably not needed any more)
        4. Sort the list
        5. Get rid of any duplicates
        6. Escape dashes (I added this when trying to debug the problem - probably not needed)
        7. Trim all new lines
        8. Trim all leading and ending spaces

    The above command produces:

        $ zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'
        change configure create disable enable show
        $

    So it looks to me like it's producing the exact same string as I had in my original script. But when I do:

        _zf_comp() {
            local cur prev actions
            COMPREPLY=()
            cur="${COMP_WORDS[COMP_CWORD]}"
            prev="${COMP_WORDS[COMP_CWORD-1]}"
            actions=`zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'`
            COMPREPLY=($(compgen -W "${actions}" -- ${cur}))
            return 0
        }
        complete -F _zf_comp zf

    my autocompletion starts acting up. First, it won't autocomplete anything with an "n" in it, and second, when it does autocomplete ("zf create" for example) it won't let me backspace over my completed command. The first issue I'm completely stumped on. The second I'm thinking might have to do with escape characters from the colored text. Any ideas? It's driving me crazy!
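
    Two hedged guesses with a sketch, rather than a confirmed diagnosis: backslash handling inside backticks differs from $( ), so escapes like \\n may not survive the substitution, and any ANSI colour codes left in the word list will confuse compgen. A rework that uses $( ) and strips every escape sequence up front (the awk field number is an assumption about zf's output):

        _zf_comp() {
            local cur actions
            COMPREPLY=()
            cur="${COMP_WORDS[COMP_CWORD]}"
            # build the list, then strip *every* ANSI colour escape before compgen sees it
            actions=$(zf | grep "Providers and their actions:" -A 100 \
                         | sed $'s/\x1b\\[[0-9;]*m//g' \
                         | awk '/^[[:space:]]*zf/ {print $2}' \
                         | sort -u | tr '\n' ' ')
            # note: $2 assumes lines look like "zf <action> ..." once the colour
            # codes are gone; adjust the field number if the real output differs
            COMPREPLY=($(compgen -W "${actions}" -- "${cur}"))
            return 0
        }
        complete -F _zf_comp zf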

    Read the article

  • Scripts on UNC paths take very long to run

    - by Álvaro G. Vicario
    I have several scripts in UNC paths (from Windows batch files to PHP scripts). No matter how I run them (double click on explorer, my editor's run command menu or Windows command prompt) they take really long to start running (like 14 seconds). Once they get started they run normally. This doesn't happen if I run them from mapped drives. I'm using Windows XP Professional SP3 inside an Active Directory domain and files are hosted in a Windows Server box (not sure about the version, it's an HP dedicated file server with bundled OS). Why does it happen? Is there a way to speed up things while using UNC paths?

    Read the article

  • Windows PowerShell ISE doesn't import PSCX 2.0 module

    - by Alexander
    Hi, I am using PowerShell 2.0 with the PSCX 2.0 module. When writing PS scripts inside Windows PowerShell ISE, no cmdlets from the PSCX module are available. For example, running "Get-DriveInfo" from Windows PowerShell ISE causes an error, while running "Get-DriveInfo" from PowerShell works fine. I guess Windows PowerShell ISE doesn't load my PS profile (this would be mad). Does anyone know why, and what to do to get it to work?

    Read the article

  • redirect temporarily STDOUT to another file descriptor, but still to screen

    - by Carlos Campderrós
    I'm making a script that executes some commands inside, and these commands show some output on STDOUT (and STDERR as well, but that's no problem). I need my script to generate a .tar.gz file to STDOUT, so the output of some commands executed in the script also goes to STDOUT, and this ends with an invalid .tar.gz file in the output. So, in short, is it possible to output the first commands to the screen (as I still want to see the output) but not via STDOUT? Also I would like to keep STDERR untouched so only error messages appear there. A simple example of what I mean. This would be my script:

        #!/bin/bash
        # the output of these commands shouldn't go to STDOUT, but still appear on screen
        some_cmd foo bar
        other_cmd baz
        # the following command creates a tar.gz of the "whatever" folder,
        # and outputs the result to STDOUT
        tar zc whatever/

    I've tried messing with exec and the file descriptors, but I still can't get it to work:

        #!/bin/bash
        # save STDOUT to #3
        exec 3>&1
        # the output of these commands should go to #3 and screen, but not STDOUT
        some_cmd foo bar
        other_cmd baz
        # restore STDOUT
        exec 1>&3
        # the output of this command should be the only one that goes to STDOUT
        tar zc whatever/

    I guess I'm lacking closing STDOUT after the first exec and reopening it again or something, but I can't find the right way to do it (right now the result is the same as if I didn't add the execs).
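
    A small sketch of the usual approach (general shell behaviour, not something stated in the question): send the noisy commands to stderr, or swap stdout over to stderr for that stretch with exec, so the screen still shows their output while stdout carries only the archive:

        #!/bin/bash
        # option 1: redirect the chatter to stderr - it still reaches the screen,
        # and stdout carries nothing but the archive
        some_cmd foo bar 1>&2
        other_cmd baz   1>&2
        tar zc whatever/

        # option 2: with exec, park the real stdout on fd 3 and point stdout at
        # stderr for the noisy part, then restore it
        # exec 3>&1 1>&2      # fd 3 = original stdout, stdout now goes to stderr
        # some_cmd foo bar
        # other_cmd baz
        # exec 1>&3 3>&-      # restore stdout, close fd 3
        # tar zc whatever/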

    Read the article

  • Implementing dry-run in bash scripts

    - by Apikot
    How would one implement a dry-run option in a bash script? I can think of either wrapping every single command in an if and echoing out the command instead of running it if the script is running with dry-run. Another way would be to define a function and then pass each command call through that function. Something like:

        function _run () {
            if [[ "$DRY_RUN" ]]; then
                echo $@
            else
                $@
            fi
        }

        _run mv /tmp/file /tmp/file2
        DRY_RUN=true _run mv /tmp/file /tmp/file2

    Is this just wrong, and is there a much better way of doing it?
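
    For what it's worth, a sketch of the same wrapper with the usual quoting fix: "$@" preserves arguments containing spaces where a bare $@ would split them, and printf %q shows exactly what would have been executed:

        #!/bin/bash
        _run() {
            if [[ -n "$DRY_RUN" ]]; then
                # print the command, safely quoted, instead of running it
                printf 'dry-run:'; printf ' %q' "$@"; printf '\n'
            else
                "$@"
            fi
        }

        _run mv "/tmp/file with spaces" /tmp/file2
        DRY_RUN=true _run mv "/tmp/file with spaces" /tmp/file2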

    Read the article

  • Windows file compare (FC) spurious differences

    - by user165568
    I'm getting differences like this:

        a.txt
        Betty Davis
        Cathy Edwards

        b.txt
        Betty Davis
        Cathy Edwards

    There are only two lines listed in the diff (which doesn't make sense). No CR/LF/newline funnies. The difference just moves down if I delete lines. Same problem on Win7 and Win2K. The difference seems to go away if I remove all empty lines from the files. The empty lines are correctly terminated too. I'm using /C /W (ignore case, ignore whitespace). Has anyone seen this before? What am I doing wrong? How can I fix it? There are real diffs in the files - missing, extra, or re-spelled names - but the files are byte-for-byte identical at the listed diff.

    Read the article

  • How do I parse file paths separated by a space in a string?

    - by user1130637
    Background: I am working in Automator on a wrapper to a command line utility. I need a way to separate an arbitrary number of file paths delimited by a single space from a single string, so that I may remove all but the first file path to pass to the program.

    Example input string:

        /Users/bobby/diddy dum/ding.mp4 /Users/jimmy/gone mia/come back jimmy.mp3 ...

    Desired output:

        /Users/bobby/diddy dum/ding.mp4

    Part of the problem is the inflexibility on the Automator end of things. I'm using an Automator action which returns unescaped POSIX filepaths delimited by a space (or comma). This is unfortunate because: 1. I cannot ensure file/folder names will not contain either a space or comma, and 2. the only inadmissible character in Mac OS X filenames (as far as I can tell) is ":". There are options which allow me to enclose the file paths in double or single quotes, or angle brackets. The program itself accepts the argument of the aforementioned input string, so there must be a way of separating the paths. I just do not have a keen enough eye to see how to do it with sed or awk. At first I thought I'll just use sed to replace every [space]/ with [newline]/ and then trim all but the first line, but that leaves the loophole open for folders whose names end with a space. If I use the comma delimiter, the same happens, just for a comma instead. If I encapsulate in double or single quotation marks, I am opening another can of worms for filenames with those characters. The image/link is the relevant part of my Automator workflow.

    -- UPDATE --

    I was able to achieve what I wanted in a rather roundabout way. It's hardly elegant but here is working generalized code:

        path="/Users/bobby/diddy dum/ding.mp4 /Users/jimmy/gone mia/come back jimmy.mp3"

        # using colon because it's an inadmissible Mac OS X
        # filename character, perfect for separating
        # also, unlike [space], multiple colons do not collapse
        IFS=:

        # replace all spaces with colons
        colpath=$(echo "$path" | sed 's/ /:/g')

        # place words from colon-ized file path into array
        # e.g. three spaces -> three colons -> two empty words
        j=1
        for word in $colpath
        do
            filearray[$j]="$word"
            j=$j+1
        done

        # reconstruct file path word by word
        # after each addition, check file existence
        # if non-existent, re-add lost [space] and continue until found
        name=""
        for seg in "${filearray[@]}"
        do
            name="$name$seg"
            if [[ -f "$name" ]]
            then
                echo "$name"
                break
            fi
            name="$name "
        done

    All this trouble because the default IFS doesn't count "emptiness" between the spaces as words, but rather collapses them all.

    Read the article

  • Run script when a specific disk/memory card is mounted under OSX

    - by Max Rydahl Andersen
    How do I run a script when a drive is mounted under OS X? My use case is that I would like to automatically copy images from my USB memory stick or hard drive when it is inserted in my USB card reader, and when a DVD or CD is inserted I would like to copy it for storage on my media center. I've tried using MarcoPolo, but from what I can see it can only detect the presence of a certain USB device, not the presence of a specific hard drive. (http://superuser.com/questions/65127/is-it-possible-to-run-an-automator-workflow-when-a-usb-device-is-connected)
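
    A hedged sketch of one direction, not something from the question: launchd can start a job on every mount via the StartOnMount key, and the script it runs can then check whether the volume it cares about is present. The volume name PHOTOS and the destination below are made up:

        #!/bin/bash
        # intended to be triggered by a launchd job with StartOnMount (or run in a loop);
        # copies images when a specific volume shows up
        vol="/Volumes/PHOTOS"              # hypothetical volume name
        dest="$HOME/Pictures/import"       # hypothetical destination

        if [ -d "$vol" ]; then
            mkdir -p "$dest"
            # copy anything that looks like a photo; -n avoids overwriting existing files
            find "$vol" -type f \( -iname '*.jpg' -o -iname '*.cr2' \) \
                -exec cp -n {} "$dest"/ \;
        fi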

    Read the article

  • Powershell script to delete sub folders and files if creation date is >7 days but maintain parent folders of sub folders and files <7 days old

    - by Mark
    I'm currently using the PowerShell script below to delete all files, directories and subdirectories of "$dump_path" that are seven days or older, based upon the creation date and not the modified date. The problem with this script is this: if folder "A" is seven (or more) days old it will be deleted even if its subfolders and files are less than seven days old. What I would like this script to do is this: delete all files from the root and in all subfolders of "$dump_path" that are seven or more days old, but maintain the parent folder(s) of files and folders that are less than seven days old, even if that means the parent folders are more than seven days old. If all subfolders and files are seven days or older than the parent folder, then the parent can be deleted. Slightly obscure problem I know, but the intention is to have a 7 day retention period of all data in a 'sandbox' location of our shared areas. Also, an added bonus if it could generate a log of what it deletes and e-mail it out post deletion. Thank you for reading and I hope that all makes sense! Mark

        # set folder path
        $dump_path = "c:\temp"

        # set minimum age of files and folders
        $max_days = "-7"

        # get the current date
        $curr_date = Get-Date

        # determine how far back we go based on current date
        $del_date = $curr_date.AddDays($max_days)

        # delete the files and folders
        Get-ChildItem $dump_path | Where-Object { $_.CreationTime -lt $del_date } | Remove-Item -Recurse

    Read the article
