Search Results

Search found 7607 results on 305 pages for 'bash profile'.


  • Can 'screen' grab an existing process and tie itself to it?

    - by warren
    Scenario: I started a process that's going to take "a while" to complete, outside of screen. I then need to leave the terminal, or the network hiccups, and the process is lost. It would be nice if I could start a process outside of screen, realize my error, run screen <magic-goes-here>, and have it grab the running process and tie it to itself. From the man pages and --help info, I don't see a way this can be done. Is this possible directly with screen? If not, is it possible to change the owning shell of a process, so that the bash (or other shell of your choosing) instance inside screen can run a command that reparents the original process from its originator to itself?

    Read the article

  • How to subscribe to a youtube feed from linux command line?

    - by Tim
    I want to subscribe to a YouTube channel and automatically download new videos to my Linux machine. I know I could do this e.g. with Miro, but I will not watch the videos using Miro, I want to choose the quality, and I would like to run it as a cron job. It should be able to:
    - know which feed entries are new, and not download old entries
    - resume (or at least redownload) failed/incomplete downloads from older sessions
    Are there any complete solutions for this? If not, it would be enough for me (maybe even preferable) to just have a command-line RSS reader that remembers which entries have already been there and writes the new video URLs (e.g. http://www.youtube.com/watch?v=FodYFMaI4vQ&feature=youtube_gdata from http://gdata.youtube.com/feeds/api/users/tedxtalks/uploads) into a file. I could then accomplish the rest using a bash script and youtube-dl. What programs would be usable for this purpose?
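
    A rough sketch of that last idea (everything here is an assumption, not from the question: the feed URL is the one quoted above, youtube-dl is installed, and a hypothetical "seen" file remembers entries handled in older sessions):

        #!/bin/bash
        FEED='http://gdata.youtube.com/feeds/api/users/tedxtalks/uploads'
        SEEN="$HOME/.youtube-seen"                        # hypothetical state file
        touch "$SEEN"
        curl -s "$FEED" \
          | grep -o 'http://www.youtube.com/watch?v=[A-Za-z0-9_-]*' \
          | sort -u \
          | while read -r url; do
                grep -qxF "$url" "$SEEN" && continue      # already handled in an earlier run
                youtube-dl "$url" && echo "$url" >> "$SEEN"
            done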

    Read the article

  • Why do I see the weird backspace behaviour on my shell sometimes?

    - by Lazer
    I use the bash shell, and sometimes, all of a sudden, my Backspace key stops working (when this happens, Ctrl + Backspace still works fine). I am not sure why this happens, but it also carries over to any vim sessions that I start from the shell. To my surprise, getting a fresh shell does not help, and the problem goes away as abruptly as it started. This is what the typed characters look like; each Backspace keypress shows up as ^? on the shell:
        $ cat filem^?namr^?e
    Does anybody have a clue what might be happening? How can I restore the normal behaviour?
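
    A minimal sketch of the usual remedy (my assumption, not from the question): tell the terminal driver which character the Backspace key sends, or reset the line settings entirely.

        stty erase '^?'   # map the erase character to DEL, which ^? represents
        stty sane         # blunter: restore sane terminal settings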

    Read the article

  • How can I allow a linux subversion user to only execute svnserve?

    - by sbleon
    I've got a user that I'd like to only be able to use Subversion. We like to use svn+ssh:// URLs sometimes (for public keys and whatnot), so I need them to be able to connect over ssh and run only the svnserve command. When using an svn+ssh URL, svn ssh'es in and passes the arguments "-c svnserve -t". I wrote a custom shell as follows to filter the commands that can be run. This works, but it's not passing the input through to svnserve, so when I try to "svn up" I get "svn: Connection closed unexpectedly".
        #!/bin/bash
        if [ "$1" == "-c" ] && [ "$2" == "svnserve" ] && [ "$3" == "-t" ] && [ "$4" == "" ]; then
            exec svnserve -t
        else
            echo "Access denied. User may only run svnserve."
        fi

    Read the article

  • OS X Lion - Installing Oracle 10g Standard Edition

    - by Cellze
    I'm trying to install Oracle 10g on OS X Lion. I previously achieved this on Snow Leopard by following http://blog.rayapps.com/2009/09/14/how-to-install-oracle-database-10g-on-mac-os-x-snow-leopard/ The issue I'm having is that the ulimit settings in the oracle user's .bash_profile cannot be modified. I have the following in the bash_profile:
        export DISPLAY=:0.0
        export ORACLE_BASE=$HOME
        umask 022
        # must match `sysctl kern.maxprocperuid`
        ulimit -Hu 512
        ulimit -Su 512
        # must match `sysctl kern.maxfilesperproc`
        ulimit -Hn 10240
        ulimit -Sn 10240
    Upon applying the bash_profile settings with . ~/.bash_profile I get the following error:
        -bash: ulimit: max user processes: cannot modify limit: Invalid argument
    This then results in $ sqlplus / as sysdba not functioning correctly, with a Segmentation fault: 11. The output of $ ulimit -a is:
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 10240
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 512
        virtual memory          (kbytes, -v) unlimited
    If anyone knows how I can apply these ulimit settings to the oracle user I have created, to allow me to run sqlplus and therefore create a db, that would be great.

    Read the article

  • Editing .bash_profile file not taking effect

    - by Sandeepan Nath
    I need to add export PATH=$PATH:/opt/lampp/bin to my ~/.bash_profile file so that mysql works from the command line on my system. Please check "mysql command line not working" for further details on that. I am working on a Fedora system and logged in as the root user. If I run locate .bash_profile I get these:
        /etc/skel/.bash_profile
        /home/sam/.bash_profile
        /home/sohil/.bash_profile
        /home/windows/.bash_profile
        /root/.bash_profile
    So, I modified the /root/.bash_profile file from:
        PATH=$PATH:$HOME/bin
        export PATH
    to:
        PATH=$PATH:/opt/lampp/bin
        export PATH
    But the change is still not taking effect: opening a new console and running mysql again says bash: mysql: command not found. However, running export PATH=$PATH:/opt/lampp/bin in the console makes it work for that session. So I am doing something wrong with the .bash_profile file. Maybe I'm editing the incorrect one, or doing the edit incorrectly.
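
    One hedged aside (not from the question): ~/.bash_profile is only read by login shells, so it can help to check what kind of shell the new console actually gives you.

        shopt -q login_shell && echo "login shell (reads ~/.bash_profile)" \
                             || echo "non-login shell (reads ~/.bashrc instead)"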

    Read the article

  • odd system name showing up in terminal

    - by sam
    I've been working on some command-line stuff with an external developer through TeamViewer for work. To interact with the command line I use Terminal on OS X. When working with the developer I was always watching what they were doing, and I also have all the bash history. Usually upon opening Terminal I get something like this:
        Last login: Tue Sep 17 21:33:02 on ttys001
        You have mail.
        unknown-5c:00:00:00:00:00:~ sam$
    (note I've replaced some characters in the last line with 00). But today when I opened up Terminal I get this:
        Last login: Mon Oct 21 16:49:35 on ttys000
        You have mail.
        richies-ipad:~ sam$
    Note it now says richies-ipad. Any idea why this is? I don't know anyone called Richie, let alone let them have access to my machine. Is this something to be worried about - the fact that someone has enough access to change that? Also, what does the ttys001 part on the first line mean?

    Read the article

  • In *nix, how to determine which filesystem a particular file is on?

    - by smokris
    In a generic, modern unix environment (say, GNU/Linux, GNU/Solaris, or Mac OS X), is there a good way to determine which mountpoint and filesystem-type a particular absolute file path is on? I suppose I could execute the mount command and manually parse the output of that and string-compare it with my file path, but before I do that I'm wondering if there's a more elegant way. I'm developing a BASH script that makes use of extended attributes, and want to make it Do The Right Thing (to the small extent that it is possible) for a variety of filesystems and host environments.
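
    A minimal sketch of the df-based approach (my assumption, not from the question; the stat line is GNU/Linux-only):

        df -P /path/to/file | awk 'NR==2 {print $NF}'   # mount point the file lives on
        stat -f -c %T /path/to/file                     # filesystem type, GNU coreutils stat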

    Read the article

  • mv directory to overwrite another

    - by gbjbaanb
    I may be going bonkers here, but I'm trying to move a directory to a new location, overwriting the contents (on Linux, using bash). Every time I try it, it responds with "mv: cannot move `./src' to a subdirectory of itself". E.g. I have:
        /src
        /new/dir/src
        /$ mv src/ new/dir/
    If I delete the destination dir first, then it works. I know I can move the contents of the source dir to overwrite the destination, but I'd like to use the same command to overwrite the destination if it already exists, or move the source if it doesn't.
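
    A hedged alternative sketch (not from the question) that merges into an existing destination with rsync instead of mv:

        rsync -a --remove-source-files src/ new/dir/src/
        find src -depth -type d -empty -delete   # clean up the now-empty source tree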

    Read the article

  • Excel; exporting/importing different columns to different csv files

    - by Sisyphus
    Is there a way to batch export different columns to different CSV files in Excel on OS X? I'm thinking something along the lines of Automator, AppleScript or bash. I've had a play around with Automator and so far no luck. The best I have accomplished is exporting the whole sheet and then using sed to strip out what I don't need, but this is terribly inefficient. Also, is there a method to batch import multiple CSV files into columns? Thanks in advance, and sorry I didn't tag excel correctly - it wouldn't allow me to create the excel:mac tag.
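
    For the export half, a minimal command-line sketch (my assumption, not from the question), assuming the sheet is first saved as a single CSV with no quoted commas; the column numbers and filenames are placeholders:

        awk -F, '{print $2 > "col2.csv"; print $5 > "col5.csv"}' exported_sheet.csv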

    Read the article

  • Trying to script rsync using pam_exec

    - by Ricky-Rose
    I'm trying to write a bash script that will execute rsync when called by pam_exec. I've tried a couple of different ways, and I'm not sure what I'm doing wrong. When I try to run the script at login by adding session optional pam_exec.so /usr/bin/local/sync.sh to my sshd file, it gives me an exit code of 12. If I log in and then manually run my script, it allows me to connect to the remote server and it lists my files, but it doesn't actually sync anything. I have tried the code below using both $USER and $PAM_USER; $PAM_USER doesn't work at all.
        #!/bin/sh
        rsync -azv -e ssh $USER@remote_server:/home/html/$USER/ /home/html/$USER

    Read the article

  • username and password for rsync in script

    - by sims
    I'm creating a cron job to keep two dirs in sync. I'm using rsync. I'm running an rsync daemon. I read the manual and it says:
        RSYNC_PASSWORD
            Setting RSYNC_PASSWORD to the required password allows you to run authenticated rsync connections to an rsync daemon without user intervention. Note that this does not supply a password to a shell transport such as ssh.
        USER or LOGNAME
            The USER or LOGNAME environment variables are used to determine the default username sent to an rsync daemon. If neither is set, the username defaults to 'nobody'
    I have something like:
        #!/bin/bash
        USER=name
        RSYNC_PASSWORD=pass
        DEST="server::module"
        /usr/bin/rsync -rltvvv . $DEST
    I also tried exporting (dangerous, I know) USER and RSYNC_PASSWORD. I also tried with LOGNAME. Nothing works. Am I doing this correctly?
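
    A hedged alternative sketch (not from the question): give the username in the daemon URL and point rsync at a password file, instead of relying on environment variables.

        echo 'pass' > ~/.rsync.secret && chmod 600 ~/.rsync.secret
        /usr/bin/rsync -rltvvv --password-file="$HOME/.rsync.secret" . name@server::module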

    Read the article

  • Using a script that uses Duplicity + S3 excluding large files

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error. However, if I copy and paste the same command generated by the script, everything works... Here is the script:
        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'   # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=
    When the script is run I get the error:
        Command line error: Expected 2 args, got 6
    Where am I going wrong?

    Read the article

  • crontab environment

    - by Adamski
    I have written various scripts to launch Java server applications, which are typically run for 24 hours before being shut down (by invoking the same script with a different parameter). The script relies on environment variables defined in a file, ~/<user>.env, which I source from .bashrc. This works fine when invoking the script from the command line, but if I want to add the script as a crontab entry I run into the problem that .bashrc isn't read. My question: what is the best-practice approach for solving this problem? I realise I could define a crontab entry such as:
        * * * * 1-5 /usr/bin/bash -c '. /home/myuser/myuser.env && /home/myuser/scripts/myscript.sh'
    ... but this seems plain ugly. Alternatively I could source myuser.env at the beginning of every script, but this would become a nightmare to maintain. Any help appreciated.
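
    A hedged sketch of a third option (an assumption, not from the question): Vixie cron accepts variable assignments at the top of the crontab, and bash reads the file named by BASH_ENV when it starts non-interactively, so the entry itself stays clean.

        SHELL=/bin/bash
        BASH_ENV=/home/myuser/myuser.env
        * * * * 1-5 /home/myuser/scripts/myscript.sh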

    Read the article

  • How do I reference the value of a constructed environment variable in a loop?

    - by Rob Spieldenner
    What I'm trying to do is loop over environment variables. I have a number of installs that change, and each install has 3 IPs to push files to and run scripts on. I want to automate this as much as possible (so that I only have to modify a file that I'll source with the environment variables). The following is a simplified version; once I figure this out I can solve my problem. So, given my.props:
        COUNT=2
        A_0=foo
        B_0=bar
        A_1=fizz
        B_1=buzz
    I want to fill in the for loop in the following script:
        #!/bin/bash
        . <path>/my.props
        for ((i=0; i < COUNT; i++))
        do
            <script here>
        done
    so that I can get the values from the environment variables. Something like the following (but that actually works):
        echo $A_$i $B_$i
    or
        A=A_$i
        B=B_$i
        echo $A $B
    which would print foo bar, then fizz buzz.
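
    A minimal sketch using bash indirect expansion, ${!name} (an assumption on my part, not part of the question):

        #!/bin/bash
        . <path>/my.props
        for ((i = 0; i < COUNT; i++)); do
            a="A_$i"                # the *name* of the variable, e.g. A_0
            b="B_$i"
            echo "${!a} ${!b}"      # expands to its value: "foo bar", then "fizz buzz"
        done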

    Read the article

  • Debian wheezy keyboard shortcut for both opening and closing a terminal

    - by Peter
    I recently installed tilda, and I would like to open and close it with the same keyboard shortcut. I wrote a little something in bash that closes tilda if it is open, and opens tilda when there is no such process in ps -ef. It looks like this:
        a=`ps -ef | fgrep -i tilda | cut -d' ' -f4 | head -1`; if [ $a ] ; then kill $a; else tilda; fi
    It seems to work (at least partially) when I run it in a terminal, but when I assign this command to a specific keyboard shortcut (for example Alt+1) it does nothing. Any suggestions? BTW, is it possible to assign this shortcut to the '`' key, like in Quake?

    Read the article

  • How to get ~/foo from /home/user1/foo?

    - by Claudius
    The Bash prompt supports the \w escape sequence, documented as:
        \w  the current working directory, with $HOME abbreviated with a tilde (uses the value of the PROMPT_DIRTRIM variable)
    Is there any way to get a similar abbreviation for an arbitrary string? That is, is there a general command that does something like the following, provided that HOME=/home/user1?
        /home/user1      →  ~
        /home/user1/a/1  →  ~/a/1
        /home/user2/b/2  →  ~user2/b/2
        /root            →  ~root
    Sure, I could try something ugly with sed, but that is unlikely to give me the result I want in any case. :-) The motivation behind this is that I would like to keep the titles in the tabs of my terminals as short as possible, hence abbreviating working directories where possible.
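
    For the plain $HOME case, a minimal sketch (my assumption, not from the question; it does not handle other users' home directories):

        path=/home/user1/a/1
        case "$path" in
          "$HOME"*) short="~${path#"$HOME"}" ;;
          *)        short="$path" ;;
        esac
        echo "$short"   # prints ~/a/1 when HOME=/home/user1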

    Read the article

  • Cygwin Syntax Trouble

    - by mkrouse
    I'm on Windows 7 using Cygwin. My script and text file are located in the same directory.
        #!/bin/bash
        while read name; do
            echo "Name read from file - $name"
        done < /home/Matt/servers.txt
    I get this error, and I don't know why, because this is correct while-loop syntax:
        u0146121@U0146121-TPD-A ~/Matt
        $ ./script.sh
        ./script.sh: line 4: syntax error near unexpected token `done'
        ./script.sh: line 4: `done < /home/Matt/servers.txt'
    Can anybody tell me what I'm doing wrong? I think it's because I'm on Windows and using Cygwin.

    Read the article

  • How to parse pipe with multiple commands independently?

    - by yarun can
    How can I parse the output of a single command with multiple commands independently? For example,
        ls -al | grep -i something
    will pass every line that has "something" in it to the next pipe, which is fine, but it also means every other line is discarded because it doesn't match the condition. What I want is to be able to operate on a single pipe with many commands independently. In this case it's a pipe from Mutt which passes the whole message body. I want to grep, sed, and so on, and maybe assign each result to a bash variable. Initially, what I want is to be able to assign the "message id" to one variable, the "subject" to another variable, etc., and then pass those as arguments to the proper commands. Here is roughly how it would look:
        MessageBodyFromMutt | grep something -Ax -Bx | grep another thing from the original message | sed some stuff from the original message | cut from here to there
    Obviously the above line does not do what I want. I hope this makes sense.
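
    A minimal sketch of one way to do it (an assumption, not from the question): capture the body once, then run each command against that same copy.

        body=$(cat)                                   # the whole message from Mutt, read once
        msgid=$(grep -i '^Message-ID:' <<< "$body")
        subject=$(grep -i '^Subject:'  <<< "$body")
        snippet=$(sed -n '1,5p'        <<< "$body")

    (bash's tee >(command) process substitution is another route when the consumers need a real pipe rather than a variable.)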

    Read the article

  • Why doesn't this script work?

    - by Devin
    I've been using this bash script:
        for i in $(ls); do mv $i a$i; done
    to prepend all filenames in a folder with the letter a. I'm afraid that at some point I'll accidentally use this script in the wrong directory and prepend a ton of filenames that I don't want prepended. So I decided to explicitly cite the path to the files. Now my script looks like this:
        for i in $(ls /cygdrive/c/Users/path/to/Images); do mv /cygdrive/c/Users/path/to/Images/$i /cygdrive/c/Users/path/to/Images/a$i; done
    It does prepend the filename with the letter a, but it also appends the filename with this ? symbol. Any ideas why it would do that? If it helps any, I'm using Cygwin on a Windows 7 box. Thanks for the help!

    Read the article

  • Long string insertion with sed

    - by Luis Varca
    I am trying to use this expression to insert the contents of one text file into another after a given string. This is a simple bash script:
        TEXT=`cat file1.txt`
        sed -i "/teststring/a \ $TEXT" file2.txt
    This returns an error: "sed: -e expression #1, char 37: unknown command: `M'". The issue is that the contents of file1.txt are actually a private certificate, so it's a large amount of text with unusual characters, which seems to be causing the problem. If I replace $TEXT with a simple ASCII value it works, but when it reads the large content of file1.txt it fails with that error. Is there some way to carry out this action? Is my sed syntax off, or is my quote placement wrong?
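
    A hedged alternative sketch (not from the question): sed's r command reads the whole file in after each matching line, which sidesteps quoting the certificate text entirely.

        sed -i '/teststring/r file1.txt' file2.txt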

    Read the article

  • List/remove files, with filenames containing string that's "more than a month ago"?

    - by Martin Tóth
    I store some data in files which follow this naming convention:
        /interesting/data/filename-YYYY-MM-DD-HH-MM
    How do I find the ones whose date in the file name is older than now minus one month, and delete them? Files may have changed since they were created, so searching by last modification date is no good. What I'm doing now is filtering them in Python:
        prefix = '/interesting/data/filename-'

        import commands
        names = commands.getoutput('ls {0}*'.format(prefix)).splitlines()

        from datetime import datetime, timedelta
        all_files = map(lambda name: {
            'name': name,
            'date': datetime.strptime(name, '{0}%Y-%m-%d-%H-%M'.format(prefix))
        }, names)

        month = datetime.now() - timedelta(days = 30)
        to_delete = filter(lambda item: item['date'] < month, all_files)

        import os
        map(os.remove, to_delete)
    Is there a (one-liner) bash solution for this?
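
    A bash sketch of the idea (my assumption, not from the question; it relies on GNU date and on the zero-padded timestamp format, where lexicographic order equals chronological order):

        #!/bin/bash
        prefix=/interesting/data/filename-
        cutoff=$(date -d '30 days ago' +%Y-%m-%d-%H-%M)
        for f in "$prefix"*; do
            [[ "${f#"$prefix"}" < "$cutoff" ]] && rm -- "$f"
        done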

    Read the article

  • Grub: Legacy 'ask' parameter no longer supported

    - by leeand00
    I'm trying to change the resolution of the base shell (the Ctrl+Alt+1 console) in Debian so that it supports my ViewSonic monitor. The shell appears really fuzzy when it is displayed on my LCD monitor, but GRUB looks fine when it's displayed. I tried changing part of GRUB_CMDLINE_LINUX_DEFAULT to 'vga=ask', and now I get the error on booting up: 'Legacy 'ask' parameter no longer supported'. Has this 'vga=ask' value been changed to something else? Note: I tried setting it to 'vga=782' after finding a list of screen modes here, and the shell font got really huge for a few seconds during boot up, then switched back to its awful fuzzy self again when I went to use the Debian bash shell. UPDATE: I tried the suggestion in this question; it works without fuzziness until the last resolution change, which displays the user login to the shell.

    Read the article

  • Ubuntu/Gnome: Open an application in a specific workspace

    - by bguiz
    How do I tell an application to open in a specific workspace? More info: I like to have my C++ IDE in workspace 2, my Java IDE in workspace 3, and my email, browser and miscellaneous in workspace 4. I also use a shell script that executes upon login:
        #!/bin/bash
        gnome-terminal &    # WS 1
        netbeans-6-9-1 &    # WS 2
        qtcreator-2-0-1 &   # WS 3
        firefox &           # WS 4
        thunderbird &       # WS 4
    Of course, currently it all opens in the current workspace... Is there a way for me to specify which workspace each command should start in? Thanks in advance!
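
    A hedged sketch of one route (not from the question): wmctrl can move windows to a given workspace once they have appeared; desktops count from 0, and the WM_CLASS names below are guesses.

        #!/bin/bash
        netbeans-6-9-1 &
        qtcreator-2-0-1 &
        firefox &
        sleep 20                          # crude: wait for the windows to map
        wmctrl -x -r netbeans  -t 2       # Java IDE  -> workspace 3
        wmctrl -x -r qtcreator -t 1       # C++ IDE   -> workspace 2
        wmctrl -x -r firefox   -t 3       # browser   -> workspace 4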

    Read the article
