Search Results

Search found 5140 results on 206 pages for 'crazy bash'.

Page 52/206 | < Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >

  • odd system name showing up in terminal

    - by sam
    I've been working on some command-line stuff with an external developer through TeamViewer for work. To interact with the command line I use Terminal on OS X. When working with the developer I was always watching what they were doing, and I also have all the bash history. Usually upon opening Terminal I get something like this:

        Last login: Tue Sep 17 21:33:02 on ttys001
        You have mail.
        unknown-5c:00:00:00:00:00:~ sam$

    (note: I've replaced some characters in the last line with 00). But today when I opened Terminal I got this:

        Last login: Mon Oct 21 16:49:35 on ttys000
        You have mail.
        richies-ipad:~ sam$

    Note it now says richies-ipad. Any idea why this is? I don't know anyone called Richie, let alone anyone I'd let have access to my machine. Is this something to be worried about, i.e. that someone has enough access to change that? Also, what does the ttys001 part on the first line mean?
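
    A minimal sketch of how to inspect where that name comes from (not part of the original post): the prompt shows the system hostname, which OS X, when no name has been set explicitly, typically derives from DHCP or reverse DNS on the current network rather than from anyone having access to the machine.

        hostname                      # the name the prompt is currently showing
        scutil --get ComputerName
        scutil --get LocalHostName
        scutil --get HostName         # often "not set"; the name then comes from the network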

    Read the article

  • Excel; exporting/importing different columns to different csv files

    - by Sisyphus
    Is there a way to batch-export different columns to different CSV files in Excel on OS X? I'm thinking something along the lines of Automator, AppleScript or bash. I've had a play around with Automator and so far no luck. The best I have accomplished is exporting the whole sheet and then using sed to strip out what I don't need, but this is terribly inefficient. Also, is there a method to batch-import multiple CSV files into columns? Thanks in advance, and sorry I didn't tag Excel correctly; it wouldn't allow me to create the excel:mac tag.
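
    A minimal bash/awk sketch for the export half (assumptions: the sheet has already been saved once as sheet.csv, fields contain no embedded commas, and the sheet.csv and column_N.csv names are hypothetical):

        awk -F, '{ for (i = 1; i <= NF; i++) print $i > ("column_" i ".csv") }' sheet.csv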

    Read the article

  • mv directory to overwrite another

    - by gbjbaanb
    I may be going bonkers here, but I'm trying to move a directory to a new location, overwriting the contents (on Linux, using bash). Every time I try it, it responds with "mv: cannot move `./src' to a subdirectory of itself". For example, I have:

        /src
        /new/dir/src

        /$ mv src/ new/dir/

    If I delete the destination dir first, then it works. I know I can move the contents of the source dir to overwrite the destination, but I'd like to use the same command to overwrite the destination if it already exists, or move the source if it doesn't.
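
    A minimal sketch of a common workaround (an assumption, not from the original post): copy the tree over the destination and then remove the source, which behaves the same whether or not the destination already exists.

        rsync -a src/ new/dir/src/ && rm -rf src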

    Read the article

  • Trying to script rsync using pam_exec

    - by Ricky-Rose
    I'm trying to write a bash script that will execute rsync when called by pam_exec. I've tried a couple of different ways, and I'm not sure what I'm doing wrong. When I try to run the script at login by adding

        session optional pam_exec.so /usr/bin/local/sync.sh

    to my sshd file, it gives me an exit code of 12. If I log in and then manually run my script, it allows me to connect to the remote server, and it lists my files, but it doesn't actually sync anything. I have tried the code below using both $USER and $PAM_USER; $PAM_USER doesn't work at all.

        #!/bin/sh
        rsync -azv -e ssh $USER@remote_server:/home/html/$USER/ /home/html/$USER
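
    A minimal sketch of how the script might pick up the PAM-supplied user (assumptions about the setup, including the log path): pam_exec puts PAM_USER into the environment of the command it runs, so it is only set when PAM invokes the script, not in an interactive shell; falling back to USER keeps manual testing possible, and logging stderr helps explain non-zero exit codes.

        #!/bin/sh
        # PAM_USER is set by pam_exec; fall back to USER when testing by hand
        user="${PAM_USER:-$USER}"
        exec rsync -az -e ssh "$user@remote_server:/home/html/$user/" "/home/html/$user" \
            >> /tmp/pam-rsync.log 2>&1   # hypothetical log path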

    Read the article

  • username and password for rsync in script

    - by sims
    I'm creating a cron job to keep two dirs in sync. I'm using rsync. I'm running an rsync daemon. I read the manual and it says:

        RSYNC_PASSWORD
            Setting RSYNC_PASSWORD to the required password allows you to run authenticated rsync connections to an rsync daemon without user intervention. Note that this does not supply a password to a shell transport such as ssh.

        USER or LOGNAME
            The USER or LOGNAME environment variables are used to determine the default username sent to an rsync daemon. If neither is set, the username defaults to 'nobody'.

    I have something like:

        #!/bin/bash
        USER=name
        RSYNC_PASSWORD=pass
        DEST="server::module"
        /usr/bin/rsync -rltvvv . $DEST

    I also tried exporting (dangerous, I know) USER and RSYNC_PASSWORD. I also tried with LOGNAME. Nothing works. Am I doing this correctly?
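
    A minimal sketch of the two usual variants (assumptions, not from the post): the variables only reach rsync if they are exported, and the username can instead be embedded in the daemon destination, optionally with --password-file so the password never sits in the environment.

        #!/bin/bash
        # variant 1: export so the rsync child process inherits them
        export USER=name
        export RSYNC_PASSWORD=pass
        /usr/bin/rsync -rltvvv . "server::module"

        # variant 2: username in the destination, password from a file (hypothetical path)
        /usr/bin/rsync -rltvvv --password-file=/etc/rsyncd.pass . "name@server::module"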

    Read the article

  • Using a script that uses Duplicity + S3 excluding large files

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command generated by the script, everything works. Here is the script:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude ()
        {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'   #Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?
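
    A minimal sketch of the usual fix (an assumption, not confirmed by the poster): the single quotes emitted by the function are literal characters once the command substitution is word-split, so duplicity sees mangled extra arguments; building the exclude list in a bash array keeps each --exclude and its pattern intact.

        #!/bin/bash
        excludes=()
        while IFS= read -r file; do
            excludes+=( --exclude "**${file##/*/}" )
        done < <(find /home/jason -size +100M)

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" "${excludes[@]}" "$SOURCE" "$DEST"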

    Read the article

  • crontab environment

    - by Adamski
    I have written various scripts to launch Java server applications, which are typically run for 24 hours before being shut down (by invoking the same script with a different parameter). The script relies on environment variables defined in a file, ~/<user>.env, which I source from .bashrc. This works fine when invoking the script from the command line, but if I want to add the script as a crontab entry I run into the problem that .bashrc isn't read. My question: what is the best-practice approach to solving this problem? I realise I could define a crontab entry such as:

        * * * * 1-5 /usr/bin/bash -c '. /home/myuser/myuser.env && /home/myuser/scripts/myscript.sh'

    but this seems plain ugly. Alternatively I could source myuser.env at the beginning of every script, but this would become a nightmare to maintain. Any help appreciated.
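
    A minimal sketch of one common approach (an assumption, not from the original post): cron lets you set environment variables at the top of the crontab, and bash reads the file named by BASH_ENV when it starts non-interactively, so the env file is sourced for every job without editing the scripts (provided they have a bash shebang).

        SHELL=/bin/bash
        BASH_ENV=/home/myuser/myuser.env

        * * * * 1-5 /home/myuser/scripts/myscript.sh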

    Read the article

  • How do I reference the value of a constructed environment variable in a loop?

    - by Rob Spieldenner
    What I'm trying to do is loop over environment variables. I have a number of installs that change, and each install has 3 IPs to push files to and run scripts on; I want to automate this as much as possible (so that I only have to modify a file that I'll source with the environment variables). The following is a simplified version; once I figure it out, I can solve my problem. So, given this in my.props:

        COUNT=2
        A_0=foo
        B_0=bar
        A_1=fizz
        B_1=buzz

    I want to fill in the for loop in the following script:

        #!/bin/bash
        . <path>/my.props

        for ((i=0; i < COUNT; i++))
        do
            <script here>
        done

    so that I can get the values from the environment variables, like the following (but something that actually works):

        echo $A_$i $B_$i

    or

        A=A_$i
        B=B_$i
        echo $A $B

    returning foo bar, then fizz buzz.
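
    A minimal sketch using bash indirect expansion, ${!name} (the usual answer to this kind of question; it assumes a reasonably modern bash):

        #!/bin/bash
        . ./my.props   # substitute the real path to my.props

        for ((i = 0; i < COUNT; i++)); do
            a_var="A_$i"
            b_var="B_$i"
            echo "${!a_var} ${!b_var}"   # prints "foo bar", then "fizz buzz"
        done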

    Read the article

  • Problem with script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command generated by the script, everything works. Here is the script:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude ()
        {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'   #Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?

    Read the article

  • Debian wheezy keyboard shortcut for both opening and closing a terminal

    - by Peter
    I recently installed tilda and I would like to open and close it with the same keyboard shortcut. I wrote a little something in bash that closes tilda if it is open and opens tilda when there is no such process in ps -ef. It looks like this:

        a=`ps -ef | fgrep -i tilda | cut -d' ' -f4 | head -1`; if [ $a ]; then kill $a; else tilda; fi

    It seems to be working (at least partially) when I run it in a terminal, but when I assign this command to a specific keyboard shortcut (for example Alt+1) it does nothing. Any suggestions? By the way, is it possible to assign this shortcut to the '`' key, like in Quake?
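
    A minimal sketch of a common fix (an assumption about the desktop's shortcut handler, not from the post): keyboard shortcut bindings usually exec the command directly rather than through a shell, so the pipes and if/then never run; wrapping the toggle in bash -c, and using pgrep/pkill, avoids both problems.

        bash -c 'if pgrep -x tilda > /dev/null; then pkill -x tilda; else tilda; fi'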

    Read the article

  • How to get ~/foo from /home/user1/foo?

    - by Claudius
    The Bash prompt supports the \w escape sequence, documented as:

        \w     the current working directory, with $HOME abbreviated with a tilde
               (uses the value of the PROMPT_DIRTRIM variable)

    Is there any way to get a similar abbreviation for an arbitrary string? That is, is there a general command that does something like the following, provided that HOME=/home/user1?

        /home/user1      ->  ~
        /home/user1/a/1  ->  ~/a/1
        /home/user2/b/2  ->  ~user2/b/2
        /root            ->  ~root

    Sure, I could try something ugly with sed, but that is unlikely to give me the result I want in any case. :-) The motivation behind this is that I would like to keep the titles in the tabs of my terminals as short as possible, hence abbreviating working directories where possible.
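
    A minimal sketch covering the $HOME case only (the ~user2 and ~root forms would need a pass over the password database, so this is a partial answer under that assumption, not from the post):

        abbrev() {
            local p=$1
            case $p in
                "$HOME")   printf '~\n' ;;
                "$HOME"/*) printf '~/%s\n' "${p#"$HOME"/}" ;;
                *)         printf '%s\n' "$p" ;;
            esac
        }

        abbrev /home/user1/a/1   # -> ~/a/1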

    Read the article

  • Cygwin Syntax Trouble

    - by mkrouse
    I'm on Windows 7 using Cygwin. My script and text file are located in the same directory.

        #!/bin/bash
        while read name; do
            echo "Name read from file - $name"
        done < /home/Matt/servers.txt

    I get this error, and I don't know why, because this is correct while-loop syntax:

        u0146121@U0146121-TPD-A ~/Matt
        $ ./script.sh
        ./script.sh: line 4: syntax error near unexpected token `done'
        ./script.sh: line 4: `done < /home/Matt/servers.txt'

    Can anybody tell me what I'm doing wrong? I think it's because I'm on Windows and using Cygwin.
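
    A minimal sketch of the usual culprit and fix (an assumption; the symptom matches Windows CRLF line endings confusing bash rather than a syntax problem):

        file script.sh            # typically reports "... with CRLF line terminators"
        dos2unix script.sh        # strip the carriage returns in place
        # or, without dos2unix:
        sed -i 's/\r$//' script.sh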

    Read the article

  • How to parse pipe with multiple commands independently?

    - by yarun can
    How can I parse the output of a single command with multiple commands, without truncating it at each step? For example, ls -al | grep -i something will pass every line that has "something" in it to the next pipe, which is fine, but it also means every other line is discarded because it doesn't match the condition. What I want is to be able to operate on a single pipe with many commands independently. In this case it's a pipe from Mutt which passes the whole message body. I want to run grep, sed, delete, and maybe assign each result to a bash variable. Initially, I want to assign the "message id" to one variable, the "subject" to another variable, and so on, then pass those into the proper commands as arguments. Here is roughly how it would look:

        MessageBodyFromMutt | grep something -Ax -Bx | grep another thing from the original message | sed some stuff from the original message | cut from here to there

    Obviously the above line does not do what I want; I want all of these commands to operate on the original message body. I hope it makes sense.
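
    A minimal sketch of two common patterns (assumptions, including the header names and temp-file paths, not from the post): capture the body once and query it repeatedly, or fan the stream out with tee and process substitution so every command sees the full original input.

        # pattern 1: read the whole body once, then query it as many times as needed
        body=$(cat)
        msgid=$(printf '%s\n' "$body" | grep -i '^Message-ID:')
        subject=$(printf '%s\n' "$body" | grep -i '^Subject:')

        # pattern 2: fan one stream out to several independent commands
        tee >(grep -i '^Message-ID:' > /tmp/msgid) >(grep -i '^Subject:' > /tmp/subject) > /dev/null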

    Read the article

  • Why doesn't this script work?

    - by Devin
    I've been using this bash script:

        for i in $(ls); do mv $i a$i; done

    to prepend all filenames in a folder with the letter a. I'm afraid that at some point I'll accidentally use this script in the wrong directory and prepend a ton of filenames that I don't want prepended. So I decided to explicitly cite the path to the files. Now my script looks like this:

        for i in $(ls /cygdrive/c/Users/path/to/Images); do mv /cygdrive/c/Users/path/to/Images/$i /cygdrive/c/Users/path/to/Images/a$i; done

    It does prepend the filename with the letter a, but it also appends the filename with this ? symbol. Any ideas why it would do that? If it helps, I'm using Cygwin on a Windows 7 box. Thanks for the help!
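
    A minimal sketch of a safer version (an assumption about the cause, not from the post: the trailing ? is usually an invisible carriage return picked up by parsing ls output under Cygwin; iterating over a glob avoids parsing ls entirely and also handles spaces in names):

        cd /cygdrive/c/Users/path/to/Images || exit 1
        for f in *; do
            mv -- "$f" "a$f"
        done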

    Read the article

  • List/remove files, with filenames containing string that's "more than a month ago"?

    - by Martin Tóth
    I store some data in files which follow this naming convention:

        /interesting/data/filename-YYYY-MM-DD-HH-MM

    How do I look for the ones whose date in the file name is < now - 1 month, and delete them? Files may have changed since they were created, so searching by last modification date is no good. What I'm doing now is filtering them in Python:

        prefix = '/interesting/data/filename-'

        import commands
        names = commands.getoutput('ls {0}*'.format(prefix)).splitlines()

        from datetime import datetime, timedelta
        all_files = map(lambda name: {
            'name': name,
            'date': datetime.strptime(name, '{0}%Y-%m-%d-%H-%M'.format(prefix))
        }, names)

        month = datetime.now() - timedelta(days = 30)
        to_delete = filter(lambda item: item['date'] < month, all_files)

        import os
        map(os.remove, to_delete)

    Is there a (one-liner) bash solution for this?
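
    A minimal bash sketch (assuming GNU date and, as in the Python version, treating "a month" as 30 days): because the YYYY-MM-DD-HH-MM suffix sorts lexicographically, a plain string comparison against a cutoff stamp is enough.

        cutoff=$(date -d '30 days ago' '+%Y-%m-%d-%H-%M')
        for f in /interesting/data/filename-*; do
            [[ "${f##*filename-}" < "$cutoff" ]] && rm -- "$f"
        done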

    Read the article

  • Long string insertion with sed

    - by Luis Varca
    I am trying to use this expression to insert the contents of one text file into another after a given string. This is a simple bash script:

        TEXT=`cat file1.txt`
        sed -i "/teststring/a \ $TEXT" file2.txt

    This returns an error: "sed: -e expression #1, char 37: unknown command: `M'". The issue is that the contents of file1.txt are actually a private certificate, so it's a large amount of text with unusual characters, which seems to be causing the problem. If I replace $TEXT with a simple ASCII value it works, but when it reads the large contents of file1.txt it fails with that error. Is there some way to carry out this action? Is my syntax off with sed, or is my quote placement wrong?
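
    A minimal sketch of the usual way around this (an assumption, not from the post): sed's r command reads a file and inserts its contents after each matching line, so the certificate's newlines and special characters never become part of the sed expression.

        sed -i '/teststring/r file1.txt' file2.txt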

    Read the article

  • Grub: Legacy 'ask' parameter no longer supported

    - by leeand00
    I'm trying to change the resolution of the base shell (the Ctrl+Alt+1 shell) in Debian so that it suits my ViewSonic monitor. The shell appears really fuzzy when displayed on my LCD monitor, but GRUB looks fine when it is displayed. I tried changing part of GRUB_CMDLINE_LINUX_DEFAULT to 'vga=ask', and now I get this error on booting up: Legacy 'ask' parameter no longer supported. Has this 'vga=ask' value been changed to something else? Note: I tried setting it to 'vga=782' after finding a list of screen modes here, and the shell font got really huge for a few seconds during boot, then switched back to its awful fuzzy self when I went to use the Debian bash shell. UPDATE: I tried the suggestion in this question; it works without fuzziness until the last resolution change, which displays the user login to the shell.
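
    A minimal sketch of the GRUB 2 way of setting the console mode (an assumption about this setup; the mode value is a placeholder for whatever the monitor supports):

        # /etc/default/grub
        GRUB_GFXMODE=1024x768
        GRUB_GFXPAYLOAD_LINUX=keep    # keep the GRUB mode for the virtual consoles

        # then regenerate the config as root:
        update-grub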

    Read the article

  • Ubuntu/Gnome: Open an application in a specific workspace

    - by bguiz
    How do I tell an application to open in a specific workspace? More info: I like to have my C++ IDE in workspace 2, my Java IDE in workspace 3, and my email, browser and miscellaneous in workspace 4. I also use a shell script that executes upon login:

        #!/bin/bash
        gnome-terminal &    # WS 1
        netbeans-6-9-1 &    # WS 2
        qtcreator-2-0-1 &   # WS 3
        firefox &           # WS 4
        thunderbird &       # WS 4

    Of course, currently everything opens in the current workspace. Is there a way for me to specify which workspace each command should start in? Thanks in advance!
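
    A minimal sketch using wmctrl (assumptions: wmctrl is installed, the window titles are stable enough to match, and the sleep is a crude stand-in for waiting until the windows exist; none of this is from the original post):

        #!/bin/bash
        gnome-terminal &
        firefox &
        sleep 5
        wmctrl -r "Terminal" -t 0           # move the matching window to workspace 1 (0-indexed)
        wmctrl -r "Mozilla Firefox" -t 3    # and Firefox to workspace 4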

    Read the article

  • Can't use command line – "command not found" after editing PATH

    - by MEM
    I'm running OS X Mavericks and was trying to install MAMP PRO 2.2. I was trying to configure the PATH variable to include the PHP binaries of MAMP PRO. I added the following line to my ~/.bash_profile file:

        export PATH=/Applications/MAMP PRO/bin/php/php5.5.3/bin:$PATH

    As you may notice, since I have MAMP PRO and not just MAMP, the path contains a space. As a consequence, I now get the following error each time I open the terminal:

        -bash: export: `PRO/bin/php/php5.5.3/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin': not a valid identifier

    Worse: I can't get any command to run, like ls, clear, etc. I always get "command not found". I don't even know the absolute path for ls. How can I make the commands work again, so that I can properly fix the path I was trying to set up in the .bash_profile file?
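
    A minimal sketch of the recovery (an assumption of the standard macOS directory layout, not from the post): restore a sane PATH in the current session, then quote the whole assignment so the space in "MAMP PRO" is preserved.

        # get a working PATH back in the current terminal
        export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin

        # then edit ~/.bash_profile so the value is quoted
        export PATH="/Applications/MAMP PRO/bin/php/php5.5.3/bin:$PATH"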

    Read the article

  • How to delete files on the command line with regular expressions?

    - by Jack
    Let's say I have 20 files named FOOXX, where XX is the number of the file, e.g. 01, 02, etc. At the moment, if I want to delete all files lower than the number 10, this is easy and I just use a wildcard, e.g. rm FOO0*. However, if I want to delete specific files in a range, e.g. 13-15, this becomes more difficult. rm FOO[13-15] does not work, and asks me if I wish to delete all files. Likewise, rm FOO1[3-5] wishes to delete all files that begin with FOO1. So, what is the best way to delete ranges of files like this? I have tried with both bash and zsh, and I don't think they differ much for such a basic task.
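
    A minimal sketch of the usual answer (a standard shell feature, though not confirmed as what the poster wants): brace expansion generates the exact names, and an echo first previews what would be removed.

        echo rm FOO{13..15}   # preview: rm FOO13 FOO14 FOO15
        rm FOO{13..15}        # works in both bash and zsh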

    Read the article

  • CDPATH in windows command prompt?

    - by barlop
    The accepted answer to the question "Fast Ways of Cd'ing on *nix?" mentions bash having CDPATH. Is there an equivalent in Windows? So that from any directory, e.g. c:\windows, I could do cd compbar* and it would take me to m:\a\b\c\d\e\compbar. What if there are many compbar directories? Well, the CDPATH solution is one answer: I suppose you order them, and it would search through the CDPATH environment variable and choose the first. I'd like that for Windows.

    Read the article

  • How to start a service at boot time in ubuntu 12.04, run as a different user?

    - by Alex
    I have a server, ClueReleaseManager, which I have installed on an Ubuntu 12.04 system under a separate user (named pypi), and I want to be able to start this server at startup. I have already tried to create a simple bash script with some commands (log in as user pypi, use a virtual Python environment, start the server), but this does not work properly: either the terminal crashes, or when I ask for the status of the service it is started but I am logged in as user pypi...? So, here is the question: what are the steps to take to make sure the ClueReleaseManager service properly starts up at boot time, and can be controlled (start/stop/...) at runtime, while the service runs as the user pypi? Additional information and constraints: I want to do this as simply as possible, without any other packages/programs being installed; I am not familiar with the Ubuntu 12.04 init structure; and the information I found on the web is very sparse, confusing, incorrect, or does not apply to my case of running a service as a user other than root.
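
    A minimal sketch of an Upstart job (Ubuntu 12.04 ships Upstart, so nothing extra needs installing; the job name and the exec path are hypothetical and would need to match the real virtualenv):

        # /etc/init/cluereleasemanager.conf
        description "ClueReleaseManager"
        start on runlevel [2345]
        stop on runlevel [!2345]
        respawn
        setuid pypi
        exec /home/pypi/env/bin/cluereleasemanager

        # control it with:
        #   sudo start cluereleasemanager
        #   sudo status cluereleasemanager
        #   sudo stop cluereleasemanager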

    Read the article

  • All terminal commands (like ls, cd, edit, open) are returning errors on my Mac

    - by park
    From what I can tell from reading other questions/answers, my .bash_profile file may be corrupt. If I type echo $PATH in Terminal the result is:

        /usr/local/git/bin

    From what I've read, that's not what the result is supposed to be. But I also can't get any of the commands (like edit, or subl for Sublime Text 2) to open the .bash_profile file to edit it. I was able to open the file in TextEdit using cmd-shift-., and here's what's in the file:

        [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"
        PATH=$PATH:~/bin
        export PATH
        export PATH=/usr/local/git/bin

    But the file is LOCKED, so I can't edit it there either. I'm very new to programming and in the middle of trying to install everything on my Mac to go through a Ruby on Rails tutorial. I can't even check my version of Ruby, since even ruby -v returns:

        -bash: ruby: command not found

    Any help would be greatly appreciated. Thanks.

    Read the article

  • Add a script to run at boot on a Linux machine

    - by user1546679
    I have a script to start a service on my Ubuntu machine. I added it to the boot sequence using "# update-rc.d projeto defaults", but it still doesn't start when the machine boots. I think it is because I am using another user to start the script ("su - www-data -c ..."), but I am not sure, because I ran the update-rc.d command as root. When I execute the script from a terminal, it asks for the password of the user www-data. Does anyone know what is happening? Thanks a lot! Felipe

        #!/bin/bash
        # /var/www/boinc/projeto/bin/start

        function action {
            su - www-data -c "/var/www/boinc/projeto/bin/$1"
        }

        case $1 in
            start|stop|status)
                action $1
                ;;
            *)
                echo "ERRO: usar $0 (start|stop|status)"
                exit 1
                ;;
        esac
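
    A minimal sketch of one likely adjustment (assumptions, not from the post: su only prompts for a password when the caller is not root, and on Ubuntu the www-data account usually has a nologin shell, so forcing a shell with -s lets the command run):

        action() {
            su -s /bin/sh -c "/var/www/boinc/projeto/bin/$1" www-data
        }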

    Read the article

  • what to do when ctrl-c can't kill a process?

    - by Dustin Boswell
    Ctrl-C doesn't always work to kill the current process (for instance, if that process is busy with certain network operations). In that case, you just see "^C" by your cursor and can't do much else. What's the easiest way to force that process to die now without losing my terminal? Summary of answers below: usually you can Ctrl-Z to suspend the process and then do "kill -9 process-pid", where you find the process's PID with ps and other tools. In bash (and possibly other shells) you can do "kill -9 %1" (or "%N" in general), which is easier. If Ctrl-Z doesn't work, you'll have to open another terminal and kill from there.
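
    A minimal sketch of that sequence as typed at the prompt (nothing beyond what the summary above already describes):

        ^Z              # suspend the stuck foreground process
        jobs -l         # list job numbers and their PIDs
        kill -9 %1      # force-kill job 1 (or: kill -9 <pid>)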

    Read the article
