Search Results

Search found 24505 results on 981 pages for 'bash script'.

  • Do background processes get a SIGHUP when logging off?

    - by Massimo
    This is a followup to this question. I've run some more tests; it looks like it really doesn't matter whether this is done at the physical console or via SSH, nor does this happen only with scp; I also tested it with cat /dev/zero > /dev/null. The behaviour is exactly the same:

    - Start a process in the background using & (or put it in the background after it's started, using CTRL-Z and bg); this is done without using nohup.
    - Log off.
    - Log on again.

    The process is still there, running happily, and is now a direct child of init. I can confirm that both scp and cat quit immediately if sent a SIGHUP; I tested this using kill -HUP. So it really looks like SIGHUP is not sent upon logoff, at least to background processes (I can't test with a foreground one for obvious reasons). This happened to me initially with the service console of VMware ESX 3.5 (which is based on Red Hat), but I was able to replicate it exactly on CentOS 5.4. The question is, again: shouldn't a SIGHUP be sent to processes, even if they're running in the background, upon logging off? Why is this not happening?

    Edit: I checked with strace, as per Kyle's answer. As I was expecting, the process doesn't get any signal when logging off from the shell where it was launched. This happens both when using the server's console and via SSH.
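
    If you want to watch signal delivery yourself, a small trap script is enough. A minimal sketch (trap-test.sh is a hypothetical name; the log path is arbitrary):

        #!/bin/bash
        # trap-test.sh: record any SIGHUP this process receives
        trap 'echo "got SIGHUP at $(date)" >> /tmp/hup.log' HUP
        while :; do sleep 1; done

    Run it in the background with ./trap-test.sh &, log off and back on, then check whether /tmp/hup.log gained a line.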

  • Ubuntu + Unable to Edit .bashrc file because of ReadOnly

    - by Napster
    I'm trying to get rid of this: WARNING: Unable to verify SSL certificate for api.heroku.com. To disable SSL verification, run with HEROKU_SSL_VERIFY=disable. By Googling I found a few solutions; one of them is to add HEROKU_SSL_VERIFY=disable to .bashrc. Unfortunately, I am not able to edit that file: vim gives the error 'readonly' option is set (add ! to override), and using :wq! in place of :wq gets no response either. Please suggest how to resolve this issue. Thanks
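
    A sketch of the usual checks, assuming the 'readonly' flag comes from file permissions rather than vim's -R mode:

        # see who owns the file and whether your user can write it
        ls -l ~/.bashrc
        # if it's a permissions problem, edit with elevated rights
        sudo vim ~/.bashrc
        # or force the write from inside vim via sudo:
        #   :w !sudo tee ~/.bashrc > /dev/null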

  • How to get the pid of a running process using a single command that parses the output of ps?

    - by Sorin Sbarnea
    I am looking for a one-liner that returns the PID of a running process. Currently I have:

        ps -A -o pid,cmd | grep xxx | head -n 1

    This returns the first PID and command. I need only the first number from the output, ignoring the rest. I suppose sed or awk would help here, but my experience with them is limited. This also has another problem: it will return the PID of grep itself if xxx is not running. It's really important to have a single line, as I want to reuse the output for doing something else, like killing that process.
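
    A couple of sketches that avoid the grep self-match problem (xxx is the process-name placeholder from the question):

        # pgrep prints matching PIDs only and never matches itself
        pgrep -f xxx | head -n 1

        # or, staying with ps: the [x] bracket trick keeps grep out of its own results
        ps -A -o pid,cmd | grep '[x]xx' | head -n 1 | awk '{print $1}'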

  • SSH to remote host (edgemarc 4200 or 4500 series routers) and pull arp data

    - by MaQleod
    I've been trying to think of a method to do this for days, but have not come up with anything yet. Ideally, this is what I'm looking to do: from a Windows XP machine, I need to open an SSH connection to a remote host, send the arp command, and pull the text results of the command back for use on the client. I will need to parse this data and preferably produce a 2D array of IPs and MAC addresses. There will be no shared keys; this is all done with a username and password that will always be different. They will need to be fed into the command via variables pulled from a database by an AutoIt script, based on the WAN IP of the remote host. Now, the actual parsing of the data and creation of the array will be easy if I can just get the text of the ARP table. Is there any way to SSH to a remote host, run a command, and return the data from that command to the client in a batch script or Perl script? (It is OK if it writes the text to a file; I can read it out of the file later, I just need it to get to the client.)
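
    One possible sketch from the Windows side uses plink, the command-line SSH client that ships with PuTTY; the host, user, and password below are placeholders your script would substitute:

        plink.exe -ssh -batch -pw PASSWORD USER@192.0.2.1 "arp -an" > arp.txt

    The redirected arp.txt can then be parsed on the client. Note that -batch makes plink fail rather than prompt, which is what you want in an unattended script.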

  • How to remove a tagged block of text in a file?

    - by EmpireJones
    How can I remove all instances of tagged blocks of text in a file with sed, grep, or another program? If I have a file which contains:

        random text
        // START TEXT
        internal text
        // END TEXT
        more random
        // START TEXT
        asdf
        // END TEXT
        text

    how can I remove all blocks of text within the start/end lines to produce the following?

        random text
        more random
        text
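
    sed's line-range addressing handles this directly; a sketch, where input.txt is a placeholder:

        # delete every block from a START marker line through the next END marker line
        sed '/\/\/ START TEXT/,/\/\/ END TEXT/d' input.txt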

  • Add entire 300 GB filesystem to Git Annex repository?

    - by Ryan Lester
    By default, I get a "too many open files" error from the process. If I lift the limit manually, I get an error that I'm out of memory. For whatever reason, it seems that Git Annex in its current state is not optimised for this sort of task (adding thousands of files to a repository at once). As a possible solution, my next thought was to do something like:

        cd /
        find . -type d | git annex add --$NONRECURSIVELY
        find . -type f | git annex add  # Need to add parent directories of each file first or adding files fails

    The problem with this solution is that the documentation doesn't seem to offer a way to non-recursively add a directory in Git Annex. Is there something I'm missing, or a workaround for this? If my proposed solution is a dead end, are there other ways that people have solved this problem?
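
    One workaround sketch is to hand git annex the files in bounded batches via xargs instead of one enormous invocation; whether this actually stays under the open-file and memory limits is an assumption to test, and the batch size is arbitrary:

        find . -type f -print0 | xargs -0 -n 1000 git annex add --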

  • Use msysgit/"Git for Windows" to navigate Windows shortcuts?

    - by Darthfett
    I use msysgit on Windows to use git, but I often want to navigate through a Windows-style *.lnk shortcut. I typically manage my file structure through Windows Explorer, so using a different type of shortcut (such as creating a hard or soft link in git) isn't feasible. How would I navigate through this type of shortcut? For example:

        PCUser@PCName ~
        $ cd Desktop

        PCUser@PCName ~/Desktop
        $ ls
        Scripts.lnk

        PCUser@PCName ~/Desktop
        $ cd Scripts.lnk
        sh.exe": cd: Scripts.lnk: Not a directory

    Is it possible to change this behavior, so that instead of getting an error, it just goes to the location of the directory? Alternatively, is there a command to get the path in a *.lnk file?

  • Linux file copy with ETA?

    - by bobby
    I'm copying a large amount of files between disks. There's approximately 16 GB of data. I'd like to see progress information, and even an estimated time of completion from the command line. Any advice?
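
    Two common sketches: rsync's per-file progress, or pv for one overall bar with an ETA (pv must be installed; the 16g size estimate matches the data described above, and the paths are placeholders):

        rsync -ah --progress /source/ /destination/

        tar -C /source -cf - . | pv -s 16g | tar -C /destination -xf -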

  • Enabling shell colours through PuTTY SSH

    - by Jon
    I have set a number of configurations in my .bashrc file to control the appearance of the shell on my Red Hat machine. However, when I log in as root using PuTTY, the colours are not shown. I can enable them again by typing 'su', which simply puts me back to root like I was when I logged into PuTTY, but that isn't exactly ideal. Is there some configuration file or something I can use to enable shell colours when I log in with PuTTY? Thanks
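
    A PuTTY login starts a login shell, which reads ~/.bash_profile rather than ~/.bashrc, while a plain su starts an interactive non-login shell, which does read ~/.bashrc; that would explain the symptom. The usual sketch is to have the former source the latter:

        # in root's ~/.bash_profile
        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi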

  • Overwriting output to a text file

    - by Naveen Gamage
    I'm trying to write the wget command's output to a text file, but it always appends to the text file.

        #!/bin/sh

        download() {
            local url=$1
            echo -n "    "
            wget --progress=dot $url 2>&1 | grep --line-buffered "%" | \
                sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
            echo " DONE"
        }

        file="$1"
        echo -n "Downloading $file:"
        download "$file" > file.log

    I tried using >, but it doesn't work. Where am I going wrong?
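
    Without seeing how the script is re-invoked it's hard to be sure, but one sketch is to truncate the log explicitly once per run and then append, reusing the download function and file variable from the script above:

        # truncate the log once at the start of the run...
        : > file.log
        # ...then append everything from this point on
        {
            echo -n "Downloading $file:"
            download "$file"
        } >> file.log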

  • looping .mpeg dump

    - by Matt Cook
    I need to dump an MPEG2 file in a loop, either to stdout or a named pipe. This works:

        $ { while : ; do cat myLoop.mpg; done; } | vlc -

    This works on a text file containing "1234\n":

        $ mkfifo myPipe
        $ cat test.txt > myPipe &
        $ < myPipe tee -a myPipe | cat -

    (it correctly loops, outputting "1234" on every line). Why does the following NOT work?

        $ cat myLoop.mpg > myPipe &
        $ < myPipe tee -a myPipe | vlc myPipe

    I'm primarily interested in rewriting the first statement to remove the improper "cat myLoop.mpg" statement. It will be input into VLC, or into FFMPEG and then piped into VLC.
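
    If a reasonably recent FFMPEG is available, it can loop the input itself, which removes the cat loop entirely; a sketch (the -stream_loop option does not exist in older builds, so treat this as an assumption to verify):

        # -1 means loop forever; remux to a pipeable stream without re-encoding
        ffmpeg -stream_loop -1 -i myLoop.mpg -c copy -f mpegts - | vlc -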

  • How can I remove old log entries from a log file and archive them somewhere else in Linux?

    - by Mike B
    CentOS 4.x. I apologize in advance if this is not the appropriate place to ask this question; it pertains to a Linux server / IT admin task. I've got a log file on an old CentOS 4.x server and I want to remove log entries older than a certain date and place them in a new file for archive. Here's an example of the log format:

        2012-06-07 22:32:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:03,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:04,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:10,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:12,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:15,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:40,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:58,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:02,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|

    Essentially, I'm looking for a one-liner that will do the following:

    1. Find any events older than a provided YYYY-MM-DD and remove them from the primary log file.
    2. Take the deleted events from step 1 and put them in a new log file.
    3. (Optional) Compress the new archive log file holding the deleted events.

    I'm aware that there are log rotation tools that do this, but this should just be a one-time task, so I'd prefer not to set that up. Additional notes: if the date part is tricky or too resource intensive, an alternative would be to just keep the last X number of lines and move the rest. I was originally thinking of something like tail -n 10000 > newfile.txt, but that would mean moving the "good" logs to a new file and then doing a name swap... and then I'd still need to remove the "good" entries from the archive. This particular log file is pretty large (1 GB), so I'd prefer the task to be as resource and time efficient as possible. The extra pipes in the log concern me, and I'm not sure whether I'd need extra protection in the commands to keep them from causing problems.
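
    A sketch of one approach: YYYY-MM-DD dates compare correctly as plain strings, so awk can split the file on the first field alone, and the pipes later in each line never come into play (the file names and cutoff date are placeholders):

        # entries older than the cutoff go to the archive...
        awk -v d="2012-06-01" '$1 <  d' app.log > archive.log
        # ...the rest are kept and swapped back into place
        awk -v d="2012-06-01" '$1 >= d' app.log > app.log.keep && mv app.log.keep app.log
        # optional: compress the archive
        gzip archive.log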

  • cut text from each line in a txt file

    - by bboyreason
    I have a text file where each line looks like this:

        <img border=0 width=555 height=555 src=http://websitelinkimagelinkhere>

    Each line is like that, for about 1500 lines. I want to sort of 'grep' each line for 'http://websiteimagelinkhere' (I don't think plain grep will work, because it returns the whole line). The output file should have newlines or tabs after each image link, like the original file. Alternatively, if someone knows a way to do this with each element being in a cell of the same column, that would be okay too.
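
    grep can do this by itself with -o, which prints only the matching part of each line, one match per line; a sketch with hypothetical file names:

        grep -o 'http://[^ >]*' input.txt > links.txt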

  • How do I change the .bash_history file location?

    - by Brian Graham
    I'm running CentOS 6.x and want to move the .bash_history to a different location. The home directories of my users are (because I run a VPS) in /var/www/vhost/<domain>.<tld> which is FTP accessible (and it should be). Because of this, I have changed the AuthorizedKeysFile for SSH connections out of the normal ~/.ssh/authorized_keys since FTP connections would easily be able to locate them. At the same time I want to move the .bash_history file to /home/%u/.bash_history where %u is the current user.
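
    bash decides where to write history from the HISTFILE variable, so a minimal sketch is to export it from a startup file every login shell reads; the file name below is hypothetical, and each /home/$USER directory must already exist:

        # e.g. in /etc/profile.d/histfile.sh
        export HISTFILE="/home/$USER/.bash_history"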

  • Passing multiple sets of arguments to a command

    - by Alec
    instances contains several whitespace separated strings, as does snapshots. I want to run the command below with each instance-snapshot pair:

        ec2-attach-volume --instance $instances --device /dev/sdf $snapshots

    For example, if instances contains A B C, and snapshots contains 1 2 3, I want the command to be called like so:

        ec2-attach-volume -C cert.pem -K pk.pem --instance A --device /dev/sdf 1
        ec2-attach-volume -C cert.pem -K pk.pem --instance B --device /dev/sdf 2
        ec2-attach-volume -C cert.pem -K pk.pem --instance C --device /dev/sdf 3

    I can do either one or the other with xargs -n 1, but how do I do both?
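
    One sketch: turn both lists into parallel lines with paste and read them back in pairs (this assumes bash, for the process substitution):

        # $instances and $snapshots are left unquoted on purpose, so word
        # splitting puts one item per line
        paste <(printf '%s\n' $instances) <(printf '%s\n' $snapshots) |
        while read -r instance snapshot; do
            ec2-attach-volume -C cert.pem -K pk.pem \
                --instance "$instance" --device /dev/sdf "$snapshot"
        done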

  • How do I read multiple lines from STDIN into a variable?

    - by The Wicked Flea
    I've been Googling this question to no avail. I'm automating a build process here at work, and all I'm trying to do is get version numbers and a tiny description of the build, which may be multi-line. The system this runs on is OS X 10.6.8. I've seen everything from using cat to processing each line as necessary. I can't figure out what I should use and why. Attempts:

        read -d '' versionNotes

    Results in garbled input if the user has to use the backspace key. Also, there's no good way to terminate the input, as ^D doesn't terminate and ^C just exits the process.

        read -d 'END' versionNotes

    Works... but still garbles the input if the backspace key is needed.

        while read versionNotes
        do
            echo "    $versionNotes" >> "source/application.yml"
        done

    Doesn't properly end the input (because I'm too lazy to look up matching against an empty string).
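
    One sketch that sidesteps both problems is to let cat collect stdin: the terminal's normal line editing (including backspace) applies, and Ctrl-D on an empty line ends the input cleanly:

        echo "Enter version notes, end with Ctrl-D:"
        versionNotes=$(cat)
        echo "$versionNotes" >> "source/application.yml"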

  • Using watch with pipes

    - by Tom
    Hi! I'd like to run this command:

        watch -n 1 tail -n 200 log/site_dev.log | grep Doctrine

    But it does not work as expected, because (I think) the grep runs on the output of watch instead of the tail. Is there a way to do something like:

        watch -n 1 (tail -n 200 log/site_dev.log | grep Doctrine)

    Thanks a lot!
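
    watch hands its argument to a shell, so quoting the whole pipeline makes watch re-run the pipeline rather than just the first command:

        watch -n 1 'tail -n 200 log/site_dev.log | grep Doctrine'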

  • Is there a unix command to output time elapsed during a command?

    - by Olivier Lacan
    I love using time to find out how long a command took to execute, but when dealing with commands that execute sub-commands internally (and produce output that lets you tell when each of those sub-commands starts running), it would be really great to be able to tell after how many seconds (or milliseconds) a specific sub-command started running. When I say sub-command, the only way to distinguish these from the outside is really by what gets printed to standard out. This seems like it should be an option to time.
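
    time itself has no such option, but prefixing each line of output with the elapsed time gets close. A sketch assuming GNU awk (systime() is a gawk extension; some_command is a placeholder):

        # prefix every output line with whole seconds elapsed since the first line
        some_command 2>&1 | gawk '{ if (!t0) t0 = systime(); printf "%4ds %s\n", systime() - t0, $0 }'

    If moreutils is available, ts -s does the same job with sub-second resolution.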

  • Can't access Terminal anymore, only shows a cursor

    - by user138304
    I run OS X. Following these directions (Installing MySQL on Mac OS X), I added a file to /usr whose contents were:

        PATH=/usr/local/mysql/bin:$PATH

    (I was actually trying to get the mysql command to work.) Now I cannot access Terminal: all I get is a cursor, but no command line. I also cannot find the file I created in the Finder; I used Command-Shift-G to go to the folder /usr, and the file is not there.

    Edit: I solved the problem by restarting my computer. I am really not sure what the problem was. I got the idea because "Could not open a new pseudo-tty." appeared in my terminal after following slhck's directions to remove my .profile file. I then searched Google and found this: http://blogs.oreilly.com/digitalmedia/2008/03/fixing-terminal-tty-errors.html. Thanks

  • UDISKS instead of HAL

    - by MeJ
    Does anybody have some experience with udisks? HAL will no longer be supported on most Linux distributions, so I am thinking of using udisks instead, for:

        for UDI in $(hal-find-by-property --key storage.bus --string usb)
        do
            HAL_TMP=`hal-get-property --udi $UDI --key storage.removable.media_available`
            if [ "$HAL_TMP" = "true" ]; then
                HAL_DEV=$(hal-get-property --udi $UDI --key block.device)
                HAL_SIZE=$(hal-get-property --udi $UDI --key storage.removable.media_size)
                HAL_TYPE=$(hal-get-property --udi $UDI --key storage.drive_type)
            fi
        done

    How do I have to adapt the above commands to use udisks instead of HAL? Thanks!
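
    A rough sketch of the same loop against the udisks (v1) command-line tool; the property names in its --show-info output vary between releases, so treat the grep/awk patterns below as assumptions to verify on your system:

        for DEV in $(udisks --enumerate-device-files); do
            INFO=$(udisks --show-info "$DEV")
            # keep only removable devices (udisks reports "removable: 1" for them)
            if echo "$INFO" | grep -q 'removable:.*1'; then
                SIZE=$(echo "$INFO" | awk '/ size:/ {print $2; exit}')
                echo "$DEV removable, size: $SIZE"
            fi
        done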
