Search Results

Search found 24505 results on 981 pages for 'bash script'.

Page 95 of 981

  • How to get the pid of a running process using a single command that parses the output of ps?

    - by Sorin Sbarnea
    I am looking for a single line that returns the PID of a running process. Currently I have: ps -A -o pid,cmd | grep xxx | head -n 1 This returns the first PID and command, but I need only the first number from the output and want to ignore the rest. I suppose sed or awk would help here, but my experience with them is limited. This also has another problem: it will return the PID of grep itself if xxx is not running. It's really important to have a single line, as I want to reuse the output for something else, like killing that process.
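
    One answer in the spirit of the question, offered as a sketch (xxx stands in for the target process name): pgrep does exactly this where procps is available, and a bracket-expression trick keeps a ps-based pipeline from matching itself:

        # pgrep, where available, is the one-liner:
        pid=$(pgrep -f xxx | head -n 1)

        # portable ps fallback; [x]xx cannot match this pipeline's own
        # command line, and 'exit' keeps only the first match:
        pid=$(ps -A -o pid,cmd | awk '/[x]xx/ { print $1; exit }')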

    Read the article

  • Determine hostname of connected ethernet switch

    - by Beastcraft
    I have bonding across two interfaces. I'd like to monitor whether they are connected to different switches (the switches have hostnames): ethX should be connected to switchX, and ethY to switchY. Currently I'm checking this with the following command: tcpdump -vv -s0 -i ethX ether host 01:00:0c:cc:cc:cc After a minute it prints out the hostname (and much more information) from the switch. Are there any other solutions for monitoring this?
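
    A possible refinement, sketched under the assumption that the switches speak CDP (the multicast MAC above is Cisco's) and that GNU coreutils' timeout is installed: capture a single announcement per interface and keep only the line carrying the advertised name:

        # CDP is announced every 60s by default, so allow 90s per interface:
        timeout 90 tcpdump -nn -v -s 1500 -c 1 -i ethX \
            'ether host 01:00:0c:cc:cc:cc' 2>/dev/null | grep 'Device-ID'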

    Read the article

  • Do background processes get a SIGHUP when logging off?

    - by Massimo
    This is a follow-up to this question. I've run some more tests; it really doesn't matter whether this is done at the physical console or via SSH, and it doesn't happen only with scp; I also tested it with cat /dev/zero > /dev/null. The behaviour is exactly the same: start a process in the background using & (or move it to the background after starting it, using CTRL-Z and bg), without using nohup; log off; log on again. The process is still there, running happily, and is now a direct child of init. I can confirm that both scp and cat quit immediately if sent a SIGHUP; I tested this using kill -HUP. So it really looks like SIGHUP is not sent upon logoff, at least to background processes (I can't test with a foreground one, for obvious reasons). This happened to me initially with the service console of VMware ESX 3.5 (which is based on Red Hat), but I was able to replicate it exactly on CentOS 5.4. The question is, again: shouldn't a SIGHUP be sent to processes upon logoff, even if they're running in the background? Why is this not happening? Edit: I checked with strace, as per Kyle's answer. As I expected, the process doesn't get any signal when logging off from the shell where it was launched, whether using the server's console or SSH.
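
    For what it's worth, bash's behaviour here is governed by a shell option: an interactive login shell only sends SIGHUP to its jobs on exit when huponexit is set, and most distributions leave it off, which matches what was observed. A quick check, assuming bash:

        shopt huponexit      # show the current setting
        shopt -s huponexit   # make the login shell HUP its jobs on exit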

    Read the article

  • Where to put unix sockets

    - by James Willson
    I am new to this, so sorry if it's obvious. I am running a Debian server and installing the likes of uWSGI, nginx, etc. on it. The configurations keep talking about pointing to "sockets", and in the build options I seem to be able to specify where the sockets for each program go. By default it looks like most of them go in /tmp/ (though not all of them). Is this a good place for them? I'm trying to keep things as organised as possible, and just bunging them in my tmp directory doesn't seem like the best option.
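
    A common convention, offered as a sketch rather than a rule: keep sockets out of /tmp and give each service its own directory under /var/run (or /run on newer systems), with the same path in both configs. The file names below are illustrative:

        ; /etc/uwsgi/apps-available/myapp.ini
        [uwsgi]
        socket = /var/run/uwsgi/myapp.sock
        chmod-socket = 660

        # and in the matching nginx server block:
        uwsgi_pass unix:/var/run/uwsgi/myapp.sock;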

    Read the article

  • Enabling shell colours through PuTTY SSH

    - by Jon
    I have set a number of configurations in my .bashrc file to control the appearance of the shell on my Red Hat machine. However, when I log in as root using PuTTY, the colours are not shown. I can enable them again by typing 'su', which simply puts me back to root just as I was when I logged in through PuTTY, but that isn't exactly ideal. Is there some configuration file or something I can use to enable shell colours when I log in with PuTTY? Thanks
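
    The usual explanation: PuTTY gives you a login shell, which reads ~/.bash_profile (or ~/.profile) rather than ~/.bashrc, while a plain su starts a non-login interactive shell, which does read ~/.bashrc. A minimal fix, assuming bash:

        # ~/.bash_profile — have login shells pick up .bashrc as well
        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi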

    Read the article

  • Add entire 300 GB filesystem to Git Annex repository?

    - by Ryan Lester
    By default, I get an error that the process has too many open files. If I lift the limit manually, I get an error that I'm out of memory. For whatever reason, Git Annex in its current state seems not to be optimised for this sort of task (adding thousands of files to a repository at once). As a possible solution, my next thought was to do something like:

        cd /
        find . -type d | git annex add --$NONRECURSIVELY
        find . -type f | git annex add  # need to add parent directories of each file first, or adding files fails

    The problem with this solution is that the documentation doesn't seem to offer a way to non-recursively add a directory in Git Annex. Is there something I'm missing, or a workaround for this? If my proposed solution is a dead end, are there other ways that people have solved this problem?
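
    One workaround to consider, a sketch only and untested at this scale: raise the descriptor limit for the shell and feed the files to git annex add in fixed-size batches, so that no single invocation handles the whole tree at once:

        ulimit -n 4096
        cd /
        # xargs -n bounds how many paths each git-annex process sees:
        find . -type f -print0 | xargs -0 -n 500 git annex add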

    Read the article

  • How to remove a tagged block of text in a file?

    - by EmpireJones
    How can I remove all instances of tagged blocks of text in a file with sed, grep, or another program? Suppose I have a file which contains:

        random text
        // START TEXT
        internal text
        // END TEXT
        more random
        // START TEXT
        asdf
        // END TEXT
        text

    How can I remove all blocks of text within the start/end lines and produce the following?

        random text
        more random
        text
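
    sed's two-address form handles this directly, deleting from each start marker through the next end marker, markers included. A sketch, assuming the markers appear exactly as shown:

        sed '/\/\/ START TEXT/,/\/\/ END TEXT/d' input.txt > output.txt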

    Read the article

  • Use msysgit/"Git for Windows" to navigate Windows shortcuts?

    - by Darthfett
    I use msysgit on Windows, but I often want to navigate through a Windows-style *.lnk shortcut. I typically manage my file structure through Windows Explorer, so using a different type of shortcut (such as creating a hard or soft link in git) isn't feasible. How would I navigate through this type of shortcut? For example:

        PCUser@PCName ~ $ cd Desktop
        PCUser@PCName ~/Desktop $ ls
        Scripts.lnk
        PCUser@PCName ~/Desktop $ cd Scripts.lnk
        sh.exe": cd: Scripts.lnk: Not a directory

    Is it possible to change this behavior so that, instead of producing an error, it goes to the location of the directory? Alternatively, is there a command to get the path stored in a *.lnk file?
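
    The shell itself cannot cd through a .lnk, but the Windows Script Host can resolve one. A hypothetical helper, assuming a Git for Windows recent enough to ship cygpath and with powershell on the PATH:

        # resolve a .lnk via the WScript.Shell COM object, cd to its target
        lcd() {
            local target
            target=$(powershell -NoProfile -Command \
                "(New-Object -ComObject WScript.Shell).CreateShortcut('$(cygpath -wa "$1")').TargetPath")
            cd "$(cygpath -u "$target")"
        }

        # usage: lcd Scripts.lnk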

    Read the article

  • Linux file copy with ETA?

    - by bobby
    I'm copying a large amount of files between disks. There's approximately 16 GB of data. I'd like to see progress information, and even an estimated time of completion from the command line. Any advice?
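
    Two common approaches, sketched with illustrative paths: rsync reports per-file progress (and, from version 3.1 on, a whole-transfer figure), while pv wrapped around a tar stream gives a single bar with an ETA once it is told the total size:

        rsync -ah --progress /src/ /dst/          # per-file progress
        rsync -ah --info=progress2 /src/ /dst/    # overall progress, rsync >= 3.1

        # one ETA bar for everything; -s sets the expected total:
        tar -C /src -cf - . | pv -s 16g | tar -C /dst -xf -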

    Read the article

  • SSH to remote host (edgemarc 4200 or 4500 series routers) and pull arp data

    - by MaQleod
    I've been trying to think of a method to do this for days, but have not come up with anything yet. Ideally, this is what I'm looking to do: from a Windows XP machine, I need to open an SSH connection to a remote host, send the arp command, and pull the text results of the command back for use on the client. I will need to parse this data, preferably producing a 2D array of IPs and MAC addresses. There will be no shared keys; this is all done with a username and password, which will always be different. They will need to be fed into the command via variables pulled from a database by an AutoIt script, based on the WAN IP of the remote host. The actual parsing of the data and creation of the array will be easy if I can just get the text of the ARP table. Is there any way to SSH to a remote host, run a command, and return that command's output to the client in a batch script or Perl script? (It's OK if it writes the text to a file; I can read it out of the file later. I just need it to get to the client.)
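
    One possibility, sketched and untested: plink, the command-line client from the PuTTY suite, runs on Windows XP, takes the username and password as arguments (so AutoIt can splice them in), runs a remote command, and prints its output locally, where it can be redirected to a file for parsing. The placeholders in capitals are the values pulled from the database, and the exact arp invocation the EdgeMarc accepts is an assumption:

        plink -ssh -batch -l USERNAME -pw PASSWORD WAN_IP "arp -n" > arp.txt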

    Read the article

  • Passing multiple sets of arguments to a command

    - by Alec
    instances contains several whitespace-separated strings, as does snapshots. I want to run the command below once for each instance-snapshot pair:

        ec2-attach-volume --instance $instances --device /dev/sdf $snapshots

    For example, if instances contains A B C, and snapshots contains 1 2 3, I want the command to be called like so:

        ec2-attach-volume -C cert.pem -K pk.pem --instance A --device /dev/sdf 1
        ec2-attach-volume -C cert.pem -K pk.pem --instance B --device /dev/sdf 2
        ec2-attach-volume -C cert.pem -K pk.pem --instance C --device /dev/sdf 3

    I can do either one or the other with xargs -n 1, but how do I do both?
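
    A minimal sketch in bash: split both variables into arrays and walk them by index, assuming they hold the same number of items:

        inst=($instances)
        snap=($snapshots)
        for i in "${!inst[@]}"; do
            ec2-attach-volume -C cert.pem -K pk.pem \
                --instance "${inst[$i]}" --device /dev/sdf "${snap[$i]}"
        done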

    Read the article

  • How can I remove old log entries from a log file and archive them somewhere else in Linux?

    - by Mike B
    CentOS 4.x. I apologize in advance if this is not the appropriate place to ask this question; it pertains to a Linux server / IT admin task. I've got a log file on an old CentOS 4.x server, and I want to remove log entries older than a certain date and place them in a new file for archiving. Here's an example of the log format:

        2012-06-07 22:32:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:03,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:04,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:10,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:12,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:15,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:40,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:58,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:02,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|

    Essentially, I'm looking for a one-liner that will do the following: find any events older than a provided YYYY-MM-DD and remove them from the primary log file; take the deleted events from step 1 and put them in a new log file; and (optionally) compress the new archive log file holding the deleted events. I'm aware that there are log-rotation tools that do this, but this should be a one-time task, so I'd prefer not to set one up. Additional notes: if the date part is tricky or too resource-intensive, an alternative would be to just keep the last X lines and move the rest. I was originally thinking of something like tail -n 10000 > newfile.txt, but that would mean moving the "good" logs to a new file and then doing a name swap, and then I'd still need to remove the "good" entries from the archive. This particular log file is pretty large (1 GB), so I'd prefer the task to be as resource- and time-efficient as possible. The extra pipes in the log concern me; I'm not sure if I need extra protection in the commands to keep them from causing problems.
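
    A one-pass sketch, not battle-tested on a 1 GB file: because YYYY-MM-DD dates compare correctly as plain strings, awk can split the file on the first field in a single read, and the extra pipes are harmless since they never touch field one. File names and the cutoff date are illustrative:

        awk -v cutoff='2012-06-01' \
            '$1 < cutoff { print > "archive.log"; next } { print > "kept.log" }' app.log \
          && gzip archive.log && mv kept.log app.log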

    Read the article

  • cut text from each line in a txt file

    - by bboyreason
    I have a text file where each line looks like this:

        <img border=0 width=555 height=555 src=http://websitelinkimagelinkhere>

    It's like that for about 1500 lines. I want to sort of 'grep' each line for 'http://websiteimagelinkhere' (I don't think plain grep will work, because it returns the whole line). The output file should have newlines or tabs after each image link, like the original file. Alternatively, if someone knows a way to do this with each element in a cell of the same column, that would be OK too.
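
    grep can in fact do this: with -o (a GNU grep option) it prints only the matching part of each line, one match per line. A sketch, with a trailing sed stripping the attribute name:

        grep -o 'src=http://[^> ]*' file.txt | sed 's/^src=//' > links.txt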

    Read the article

  • Is there a unix command to output time elapsed during a command?

    - by Olivier Lacan
    I love using time to find out how long a command took to execute. But when dealing with commands that run sub-commands internally (and print output that lets you tell when each of those sub-commands starts), it would be really useful to know after how many seconds (or milliseconds) a specific sub-command started running. When I say sub-command, the only way to distinguish these from the outside is really by what is printed to standard out. This seems like it should be an option to time.
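
    Nothing in time itself does this, but stamping each output line with its offset gets close. Two sketches, where ./build.sh stands in for the real command: ts from the moreutils package, where installed, and a plain bash fallback counting whole seconds:

        ./build.sh | ts -s '%H:%M:%S'    # elapsed time since the command started

        start=$(date +%s)
        ./build.sh | while IFS= read -r line; do
            printf '%5d  %s\n' "$(( $(date +%s) - start ))" "$line"
        done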

    Read the article

  • Looping .mpeg dump

    - by Matt Cook
    I need to dump an MPEG-2 file in a loop, either to stdout or to a named pipe. This works:

        $ { while : ; do cat myLoop.mpg; done; } | vlc -

    This works on a text file containing "1234\n":

        $ mkfifo myPipe
        $ cat test.txt > myPipe & < myPipe tee -a myPipe | cat -

    (it correctly loops, outputting "1234" on every line). Why does the following NOT work?

        $ cat myLoop.mpg > myPipe & < myPipe tee -a myPipe | vlc myPipe

    I'm primarily interested in rewriting the first statement to remove the repeated "cat myLoop.mpg" invocation. I will be feeding the stream into VLC, or into FFmpeg and then piping it into VLC.
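
    A guess at the failure: the tee trick re-feeds everything it reads back into the same pipe, so it only works while the whole payload fits in the fifo's kernel buffer (typically 64 KiB on Linux); "1234\n" fits, an MPEG does not, and the writer deadlocks once the buffer fills. Looping the writer into the fifo instead sidesteps that, sketched here:

        mkfifo myPipe
        while :; do cat myLoop.mpg; done > myPipe &
        vlc myPipe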

    Read the article

  • Passing the output of the last command to sed as an argument

    - by neurolysis
    Hi, basically I'm wanting to automate adding something to xorg.conf in the right place. I've used some commands to get the number of the line I want to manipulate, but I'm not really sure how to pass this line number to sed (as an argument, NOT as something to be manipulated). I have been told about xargs and have looked at the docs on it, but after some reading and experimentation I can't seem to get it to work. In case anyone can think of a better method entirely: the process I want to automate is just finding the line containing both "Identifier" and "Monitor0" (there will only be one) and adding a line below it. The problem with just finding Monitor0 and manipulating that line is that there are multiple lines containing Monitor0. I've got this far:

        fgrep -n "Monitor0" </etc/X11/xorg.conf | fgrep "Identifier" | cut -f1 -d:

    This prints the line number I want to pass to sed, but I'm not really sure how to do it. ...or is there a simpler way which I'm not seeing? Thanks. :)
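
    Command substitution is enough to hand the number to sed; no xargs needed. A sketch using GNU sed's one-line append (the appended Option line is illustrative):

        n=$(fgrep -n "Monitor0" </etc/X11/xorg.conf | fgrep "Identifier" | cut -f1 -d:)
        sed -i "${n}a Option \"DPMS\" \"true\"" /etc/X11/xorg.conf

        # or let sed find the line itself, skipping the grep step:
        sed -i '/Identifier.*"Monitor0"/a Option "DPMS" "true"' /etc/X11/xorg.conf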

    Read the article

  • How do I change the .bash_history file location?

    - by Brian Graham
    I'm running CentOS 6.x and want to move .bash_history to a different location. Because I run a VPS, the home directories of my users are in /var/www/vhost/<domain>.<tld>, which is FTP-accessible (and should be). Because of this, I have moved the AuthorizedKeysFile for SSH connections out of the normal ~/.ssh/authorized_keys, since FTP connections would easily be able to locate the keys. For the same reason, I want to move the .bash_history file to /home/%u/.bash_history, where %u is the current user.
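
    bash decides where to load and save history from the HISTFILE variable, so a profile snippet can redirect it per user. A sketch, with $USER playing the role of %u:

        # /etc/profile.d/histfile.sh (illustrative path)
        export HISTFILE="/home/$USER/.bash_history"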

    Read the article

  • Can't access Terminal anymore, only shows a cursor

    - by user138304
    I run OS X. Following these directions (Installing MySQL on Mac OS X), I added a file under /usr whose contents were:

        PATH=/usr/local/mysql/bin:$PATH

    I was trying to get the mysql command to work, but now I cannot access Terminal: all I get is a cursor, with no command line. I also cannot find the file I created in the Finder; I used Command-Shift-G to go to the folder /usr, and the file is not there. Edit: I solved the problem by restarting my computer; I am really not sure what the problem was. I got the idea because "Could not open a new pseudo-tty." appeared in my terminal after following slhck's directions to remove my .profile file. I then searched Google and found this: http://blogs.oreilly.com/digitalmedia/2008/03/fixing-terminal-tty-errors.html. Thanks

    Read the article

  • How do I read multiple lines from STDIN into a variable?

    - by The Wicked Flea
    I've been Googling this question to no avail. I'm automating a build process here at work, and all I'm trying to do is get version numbers and a tiny description of the build, which may be multi-line. The system this runs on is OS X 10.6.8. I've seen everything from using cat to processing each line as necessary, and I can't figure out what I should use and why. Attempts so far:

        read -d '' versionNotes

    results in garbled input if the user has to use the backspace key. Also, there's no good way to terminate the input, as ^D doesn't terminate it and ^C just exits the process.

        read -d 'END' versionNotes

    works, but still garbles the input if the backspace key is needed.

        while read versionNotes
        do
            echo " $versionNotes" >> "source/application.yml"
        done

    doesn't properly end the input (because I'm too late to look up matching against an empty string).
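
    One sketch that keeps line editing intact: read line by line with -e (so readline handles backspace) and stop at a sentinel line, accumulating into the variable. Bash 3.2, as shipped on 10.6.8, supports both -e and +=:

        versionNotes=""
        echo "Enter version notes; finish with a line containing only END:"
        while IFS= read -e -r line; do
            [ "$line" = "END" ] && break
            versionNotes+="$line"$'\n'
        done
        printf '%s' "$versionNotes" >> "source/application.yml"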

    Read the article

  • Setting up ssh config file with id_rsa through tunnel

    - by Rubens
    I've been struggling to set up a working configuration for opening a connection to a second machine by passing through another one, using an id_rsa file (which asks me for a password) to connect to the third machine. I've asked this question in another forum, but I've received no answer that could be considered very helpful. The problem, better described, goes as follows:

        Local machine: user1@localhost
        Intermediary machine: user1@inter
        Remote target: user2@final

    I'm able to make the entire connection using a pseudo-tty:

        ssh -t inter ssh user2@final

    (this asks me for the password for the id_rsa file I have on machine "inter"). However, to speed things up, I'd like to set up my .ssh/config file so that I can connect to machine "final" simply with:

        ssh final

    What I've got so far, which does not work, is this in my .ssh/config file:

        Host inter
            User user1
            HostName inter.com
            IdentityFile ~/.ssh/id_rsa

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh inter nc %h %p

    The id_rsa file is used to connect to the middle machine (this requires no password typing), and the id_rsa_2 file is used to connect to machine "final" (this one requests a password). I've tried mixing in some LocalForward and/or RemoteForward fields, and putting the id_rsa files on both the first and second machines, but no configuration seemed to succeed. I hope somebody can help me here! Regards! P.S.: the thread where I tried to get help: http://www.linuxquestions.org/questions/linux-general-1/proxycommand-on-ssh-config-file-4175433750/
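
    For comparison, a sketch of the same jump using OpenSSH 5.4's built-in forwarding instead of nc; note that id_rsa_2 must live on the local machine, since each IdentityFile is read by the local ssh process that uses that Host block:

        Host inter
            User user1
            HostName inter.com
            IdentityFile ~/.ssh/id_rsa

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh -W %h:%p inter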

    Read the article
