Search Results

Search found 3730 results on 150 pages for 'bash'.

Page 66/150

  • How to rename multiple files by replacing a word in the file name, taken from shell script variables?

    - by fy6877
    This question is similar to this thread: How to rename multiple files by replacing word in file name? My example is more complex than that topic. The two variables, $name and $newname, come from elsewhere in the shell script. $name and $newname may contain Unicode words or special symbols like []<?...etc, so could anyone suggest a piece of script I can add to the shell script to solve the file-name replacing question? By the way, I tried these two kinds of commands to change part of the file name, but they don't work:

      rename.ul '$name' '$newname' /home/fy6877/test/final/*
      ls /home/fy6877/test/final/ | xargs -I$ rename.ul '$name' '$newname' $

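    A minimal sketch of one way to do the substitution without rename.ul, assuming $name and $newname are already set earlier in the script; bash parameter expansion does the replacement, and quoting every expansion keeps Unicode text and characters such as []<? intact (the directory is the asker's example path):

      #!/bin/bash
      # Rename every file whose name contains $name, replacing its first
      # occurrence with $newname; quoting the pattern disables globbing.
      for f in /home/fy6877/test/final/*"$name"*; do
          [ -e "$f" ] || continue                 # nothing matched the glob
          dir=$(dirname "$f")
          base=$(basename "$f")
          mv -- "$f" "$dir/${base/"$name"/$newname}"
      done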

  • Compare Error logs across service fleet, find unique errors

    - by neuroelectronic
    I'm looking for a tool where I can list the servers to check, the location of the file and it would return a list of the most common errors across those servers (say like 2 or 3 servers for report brevity) and get a report something like this:

      Server.A        Server.B        Server.C
      --------        --------        --------
      42 error.X      39 error.X      61 error.X
      21 error.Y       7 error.Y       5 error.A
      17 error.B       6 error.A       4 error.Y
       4 error.A       2 error.R       3 error.S
       3 error.R       1 error.S       1 error.R

    Of course, excluding timestamps and other error details and just grepping out the common sub-strings and listing them like so. I'd be able to look at the table and see that error.B is unique to Server.A and conclude that there is something up with Server.A. Does something like this already exist? Is this something I'll have to code myself? I'm not necessarily looking for this specific report, just the functionality to find unique errors across a set of error logs.

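    A rough sketch of how this could be scripted with standard tools rather than a dedicated product; the host list, log path, and the timestamp-stripping sed expression are assumptions to adapt to the real fleet:

      #!/bin/bash
      # Tally normalized error strings per server so unique ones stand out.
      hosts="server-a server-b server-c"     # assumed host names
      logfile=/var/log/app/error.log         # assumed log location

      for h in $hosts; do
          echo "== $h =="
          ssh "$h" "grep -i error $logfile" |
              sed -E 's/^[0-9:. -]+//' |     # strip leading timestamps (adjust to the log format)
              sort | uniq -c | sort -rn | head -5
      done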

  • How to configure a crontab entry with a condition statement for checks

    - by chz
    We would like to monitor the NAS storage mounted on a Linux box, and only be notified via mail when the usage exceeds a certain number, say 80. The Linux books we have seen mostly just call shell scripts at certain times. How do we write the crontab entry so it only mails us if usage exceeds 80? The usual example is:

      2 2 * * * /home/someUser/script.sh 2>&1 | mail [email protected]

    We are looking for a solution like this:

      2 2 * * * if [ someNumber > "80" ] ; then /home/someUser/script.sh | mail [email protected]

    Sincerely

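    One common pattern is to keep the condition inside the script and let cron run it unconditionally; a sketch assuming the NAS is mounted at /mnt/nas (the mount point, threshold and address are placeholders to adjust):

      #!/bin/bash
      # check_nas.sh - mail only when usage crosses the threshold
      THRESHOLD=80
      MOUNT=/mnt/nas                          # assumed mount point
      usage=$(df -P "$MOUNT" | awk 'NR==2 {gsub("%",""); print $5}')

      if [ "$usage" -gt "$THRESHOLD" ]; then
          echo "NAS usage on $(hostname) is at ${usage}%" |
              mail -s "NAS usage warning" [email protected]
      fi

    The crontab entry then stays unconditional: 2 2 * * * /home/someUser/check_nas.sh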

  • How to get the PID of a process started by /bin/su -c

    - by crash3k
    I'm writing an init.d script for a Java app, but the Java app should be run by another user. (The OS I'm using is Debian Squeeze.) I already got this:

      /bin/su - $USER -c "cd $PATH;echo $PASSWORD | $JAVA -Xmx256m -jar $PATH/app.jar -d > /dev/null" &
      PID=$!
      /bin/su - $USER -c "echo $PID > $PIDFILE"

    But this will of course only save the PID of the /bin/su process instead of the PID of the created Java process.

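    A sketch (untested against the asker's setup) of one way around this: record the PID inside the command that su runs, so $! refers to the backgrounded java process rather than to su itself. Variable names are the asker's; the escaped \$! is expanded by the inner shell, not by the init script:

      # The inner shell backgrounds java and writes its own $! to the PID file.
      /bin/su - "$USER" -c "cd '$PATH'; echo '$PASSWORD' | $JAVA -Xmx256m -jar '$PATH/app.jar' -d > /dev/null 2>&1 & echo \$! > '$PIDFILE'"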

  • Using www-data through SSH

    - by Fluidbyte
    For development purposes I'm using www-data (on an ubuntu 11.10 server) to ssh in and fire git commands and basic stuff against the webroot. I don't have things like command history, coloring, etc like I do when I ssh in as any other user, so I'm curious how to get this working. I'm assuming I need a `.bashrc' file, but I'm not sure what to include or (more importantly since I could just copy the one from another user) where it goes.

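    A sketch of one way to set this up, assuming www-data's home is /var/www (the Ubuntu default) and that you want bash as its shell; it is worth confirming the home directory with getent before copying anything:

      # Check www-data's home directory and current shell first.
      getent passwd www-data

      # Copy the default per-user bash files into that home and fix ownership.
      sudo cp /etc/skel/.bashrc /etc/skel/.profile /var/www/
      sudo chown www-data:www-data /var/www/.bashrc /var/www/.profile

      # If the account still uses /bin/sh, switch it to bash so .bashrc gets read.
      sudo chsh -s /bin/bash www-data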

  • How can I tell what command is running on the remote end of an ssh connection?

    - by user268385
    Tl;dr - how do I find the name of the command (eg $BASH_COMMAND) running on the remote end of an ssh connection? ... My example setup is two tmux vertical panes, LH pane runs a local vim session with vertical split, RH pane runs an ssh session running vim, again with a vertical split. Using tmux-navigator I can navigate from left to right over the first 3 vim buffers, but the 4th (far right hand one) is inaccessible. The reason for this is that tmux-navigator tests the value of 'pane_current_command' and compares it to 'vim' before deciding which keystrokes to dispatch. On the right hand tmux pane, the current command is 'ssh' and not 'vim'. What I want to do is test for (pane_current_command =~ 'ssh') and, if so, examine the command that is running on the far side of the connection. I cannot find a way to get hold of this, so any suggestions would be welcome. For information, the problem is almost the same as this one, but without the nested tmux sessions: https://github.com/christoomey/vim-tmux-navigator/issues/12

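    A sketch of one way to peek at the far side, assuming a second, key-based ssh connection to the same host is possible from a script: ask the remote ps which command is in the foreground of a pseudo-terminal ('+' in the STAT column marks the foreground process group). The host name and the exact match are illustrative only:

      # Print the foreground command on each remote pty, then check for vim.
      if ssh remotehost ps -eo stat=,tty=,comm= \
            | awk '$1 ~ /\+/ && $2 ~ /^pts/ {print $3}' \
            | grep -qx vim; then
          echo "remote foreground command looks like vim"
      fi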

  • Sending mail from command line if body not empty

    - by cdecker
    I'd like to write a simple script that alerts me if a log changes. For this I'm using grep to find the lines I'm interested in. Right now it works like this:

      grep line /var/log/file | mail -s Log [email protected]

    Problem is that this sends a mail even if no matching lines are found. The mail utility from mailutils seems to have no switch telling it to drop mails that have an empty body. Is there a quick and easy way to do so?

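    A sketch of the usual workaround: capture the grep output first and only call mail when it is non-empty (the log path and address are the asker's). If moreutils is installed, grep line /var/log/file | ifne mail -s Log [email protected] does the same in one pipeline.

      #!/bin/bash
      # Only send mail when grep actually matched something.
      matches=$(grep line /var/log/file)

      if [ -n "$matches" ]; then
          printf '%s\n' "$matches" | mail -s Log [email protected]
      fi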

  • Running 'dd' command at startup?

    - by Usman Ajmal
    Hi, I have set a script to run at Linux startup. The script contains the following line of code:

      dd if=/dev/sda2 of=/dev/sda5 2> result.txt

    Now, when my Linux desktop appears, result.txt contains:

      dd: opening '/dev/sda2': Permission denied

    If I prefix the dd command with sudo, as in:

      sudo dd if=/dev/sda2 of=/dev/sda5 2> result.txt

    result.txt contains:

      sudo: no tty present and no askpass program specified

    Is there a way I can get around this problem? What I want is to copy the 2nd partition to the 5th when a user logs in, no matter whether he is root, admin, Desktop or an unprivileged user. Thanks a lot as always.

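    Two common ways around this, sketched with placeholder names: give the calling user a passwordless sudo rule for exactly this command so no tty or password is needed, or run the copy from root's own crontab at boot so it does not depend on who logs in. The device names are the asker's; someUser is a placeholder.

      # Option 1: /etc/sudoers.d/dd-clone (edit with: visudo -f /etc/sudoers.d/dd-clone)
      someUser ALL=(root) NOPASSWD: /bin/dd if=/dev/sda2 of=/dev/sda5

      # The startup script then runs:
      sudo /bin/dd if=/dev/sda2 of=/dev/sda5 2> /home/someUser/result.txt

      # Option 2: root's crontab (crontab -e as root) runs it at every boot instead:
      @reboot /bin/dd if=/dev/sda2 of=/dev/sda5 2> /root/result.txt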

  • Is there a "pattern" or a group that defines *rc files in *nix environments?

    - by Somebody still uses you MS-DOS
    I'm starting to use the command line a little more, and I see there are a lot of ways to configure the config files in my $HOME. This is good, since you can customize things the way you really like. Unfortunately, for beginners, having too many options is a little confusing. For example, I created .bash_alias for some aliases I'm using. I didn't even know this option existed; I'm used to simply editing .bashrc. Does a pattern or "good practice" exist, in the interest of flexibility and modularity, for how rc files should be structured? Is there a standardization group for this, or does everybody just create their own configuration setup?

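    As one concrete illustration of the usual convention (Ubuntu's stock ~/.bashrc does something similar, though the exact snippet varies by distribution): .bashrc stays the entry point, and optional per-topic files are sourced from it only if they exist.

      # In ~/.bashrc: pull in optional fragments so they stay modular.
      if [ -f ~/.bash_aliases ]; then
          . ~/.bash_aliases
      fi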

  • Backup all plesk MySQL Databases to individual files

    - by Michael
    Hi, because I'm new to shell scripting I need a hand. I currently back up all my databases to a single file, which makes restoring pretty hard. The second problem is that my MySQL password doesn't work because of a Plesk bug, so I get the password from "/etc/psa/.psa.shadow". Here is the code that I use to back up all my databases to a single file:

      mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` --all-databases | bzip2 -c > /root/21.10.2013.sql.bz2

    I found some scripts on the web that back up each database to an individual file, but I don't know how to make them work for my situation. Here is an example script:

      for db in $(mysql -e 'show databases' -s --skip-column-names); do
          mysqldump $db | gzip > "/backups/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz";
      done

    Can someone help me make the script above work for my situation? Requirements: back up each database to an individual file, using the Plesk password location.

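    A sketch that combines the example loop with the Plesk admin credentials from the question; the backup directory is a placeholder, and information_schema is skipped since it does not need backing up:

      #!/bin/bash
      # Dump every database to its own compressed file, using the Plesk admin password.
      BACKUP_DIR=/root/db-backups                  # placeholder path
      PSA_PASS=$(cat /etc/psa/.psa.shadow)
      mkdir -p "$BACKUP_DIR"

      for db in $(mysql -uadmin -p"$PSA_PASS" -e 'show databases' -s --skip-column-names); do
          [ "$db" = "information_schema" ] && continue
          mysqldump -uadmin -p"$PSA_PASS" "$db" |
              gzip > "$BACKUP_DIR/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz"
      done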

  • Moving a file using PuTTY

    - by Paul Trotter
    I am a newbie struggling to move a file on a Linux VPS using PuTTY. I can log in as a user in PuTTY, and at this point I can navigate to see the file I wish to move (~/servers/apache-solr-3.6.2/example/webapps/solr.war). By using cd .. a couple of times from the directory I start in when I first log in, I can then navigate to the location I wish to move the file to: usr/local/jakarta/apache-tomcat-5.5.36/webapps/ I know that I need to use cp to copy the file and have tried variations on:

      cp ~/servers/apache-solr-3.6.2/example/webapps/solr.war usr/local/jakarta/apache-tomcat-5.5.36/webapps

    However each time I get 'No such file or directory'. I have tried excluding the ~/ at the start and I have tried specifying solr.war at the end of the command. Please excuse the newbie question, but I would really appreciate some advice on what I am doing wrong here.

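    The likely culprit is the destination: without a leading slash, usr/local/... is resolved relative to the current directory, so it only works when run from /. A sketch assuming the Tomcat path quoted in the question is correct:

      # Use an absolute destination path (note the leading /).
      cp ~/servers/apache-solr-3.6.2/example/webapps/solr.war /usr/local/jakarta/apache-tomcat-5.5.36/webapps/

      # If that now fails with 'Permission denied', the webapps directory is
      # probably owned by another user; try the copy via sudo instead.
      sudo cp ~/servers/apache-solr-3.6.2/example/webapps/solr.war /usr/local/jakarta/apache-tomcat-5.5.36/webapps/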

  • Find directories that do not contain a directory?

    - by erikcw
    I'm trying to figure out how to use the Linux "find" command (or another command that will get the job done) to return a list of file paths/directories that do not contain a directory of a certain name.

      ~/web/domain1.com/public_html/bar
      ~/web/domain2.com/public_html/
      ~/web/domain3.com/public_html/bar
      ~/web/domain4.com/public_html/

    I want all of the paths that don't contain the directory named "bar" (domain2.com and domain4.com). Any idea how I can get find to output such a list? Thanks!

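    A sketch of two ways to get that list, assuming the layout shown in the question (one public_html per domain). GNU find substitutes {} even inside a longer argument, which the second variant relies on:

      # Plain shell test: print each public_html that has no "bar" subdirectory.
      for d in ~/web/*/public_html; do
          [ -d "$d/bar" ] || echo "$d"
      done

      # Or with find: match the public_html directories, keep only those where
      # the test for a bar subdirectory fails, and print them.
      find ~/web -mindepth 2 -maxdepth 2 -type d -name public_html \
          ! -exec test -d '{}/bar' \; -print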

  • Enter response once prompt returns?

    - by mjb
    It's neither a secure idea nor one I'd recommend elsewhere, but I have a situation where it occasionally takes a while for my Ansible ad-hoc command to respond. I'd love to pipe, or pass args, or do whatever is needed to push the required text into the prompt so I can walk away and know it will finish. Ex:

      $ ansible all -m shell -a "reboot" --ask-pass
      Password:
      blah blah blah it worked

    I'd love to send an argument or << or something to get the password in. Is that possible?

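    The --ask-pass prompt reads from the terminal, so piping text at it will not work; two sketches that avoid the prompt instead (both keep the password in plain text, which matches the asker's stated tolerance). MySecret is a placeholder.

      # Option 1: pass the SSH password as a variable on the command line
      # (the default ssh connection needs sshpass installed for password auth).
      ansible all -m shell -a "reboot" -e "ansible_ssh_pass=MySecret"

      # Option 2: set it once in the inventory so no prompt is ever shown.
      # In /etc/ansible/hosts:
      #   [all:vars]
      #   ansible_ssh_pass=MySecret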

  • Parallel shell loops

    - by brubelsabs
    Hi, I want to process many files, and since I have a bunch of cores here I want to do it in parallel: for i in *.myfiles; do do_something $i `derived_params $i` other_params; done I know of a Makefile solution, but my commands need the arguments from the shell globbing list. What I found is:

      function pwait() {
          while [ $(jobs -p | wc -l) -ge $1 ]; do
              sleep 1
          done
      }

    To use it, all one has to do is put & after the jobs and a pwait call; the parameter gives the number of parallel processes:

      for i in *; do
          do_something $i &
          pwait 10
      done

    But this doesn't work very well; e.g. I tried it with a for loop converting many files, but it gave me errors and left jobs undone. I can't believe this isn't a solved problem already, since the discussion on the zsh mailing list is so old by now. So do you know anything better?

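    Two sketches of alternatives that still take their arguments from the shell glob: xargs -P, which is available everywhere, and GNU parallel if it is installed. The job count of 4 and the quoting of derived_params are assumptions to check against the real file names:

      # xargs: run up to 4 jobs at a time, one file per job.
      printf '%s\0' *.myfiles |
          xargs -0 -n1 -P4 sh -c 'do_something "$1" $(derived_params "$1") other_params' _

      # GNU parallel: same idea, with nicer job control and output handling.
      parallel -j4 'do_something {} $(derived_params {}) other_params' ::: *.myfiles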

  • cp -u is illegal on mac. What are the alternatives?

    - by Barnabas Szabolcs
    I have a MacBook Pro running Lion, and I am trying to archive my files, that is, copy and overwrite only if the source is newer than the destination. I tried the following command:

      cp -u source destination

    but it says -u is illegal. I also did not find --update or -u in the man page for cp. Can you please help, what can I do in this situation? [I have had the question moved over here from SO, so feel free to answer it once more. I hope this is the right way of dealing with this]

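    A sketch of the usual substitutes on OS X, where the BSD cp has no -u: rsync's --update flag, or the GNU coreutils version of cp installed via Homebrew or MacPorts (which typically appears with a g prefix). source and destination are the asker's placeholders:

      # rsync skips files that are already newer at the destination.
      rsync -a --update source/ destination/

      # Or, after installing GNU coreutils (e.g. brew install coreutils), the
      # GNU cp is available as gcp and understands -u:
      gcp -u source destination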

  • set a global environment variable in linux that sticks when going root

    - by Scott
    When I SSH into a Linux box, I want the /etc/profile file to save the result of the whoami command to a global environment variable. If I then go root with the command sudo su -, I do not want that command to run again when going root; I want it to stick with the result of whoami as my regular username from before I went root, and I need to access that variable as the root user even though /etc/profile runs again when I go root. What can I do to only run that command once in /etc/profile?

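    One sketch that sidesteps the environment reset entirely: instead of carrying a variable across su -, have /etc/profile ask the terminal who originally logged in. who -m (the same as who am i) reads the login record for the current tty, so it still reports the original user after sudo su -; the caveat is that it needs a controlling terminal, so it returns nothing in cron or other non-interactive contexts.

      # In /etc/profile: re-derived from the tty's login record by every login
      # shell, so the value is the same before and after "sudo su -".
      ORIG_USER=$(who -m | awk '{print $1}')
      export ORIG_USER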

  • How to run a command in a process that is not a child of the current process?

    - by amicitas
    I am having a library conflict issue when calling an external program from within an interpreted programming environment (IDL). The issue seems to be that since the program I am calling ends up as a child of IDL, libraries are not being reloaded. From within IDL I can launch sub-processes either directly or using a shell. Is there a good way to run my program without it ending up as a child process? The only solution I have found so far is to use ssh localhost my_program. This works perfectly, but I would like a more direct solution.

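    Two sketches that stay on the local machine; my_program, its arguments and the PATH value are placeholders. Handing the job to atd means it never runs under IDL at all; the second variant is still spawned by IDL but scrubs the inherited environment, which is often the actual source of this kind of library conflict:

      # Option 1: queue the command with at, so the at daemon (not IDL) runs it.
      echo "my_program arg1 arg2" | at now

      # Option 2: detach into a new session with a clean environment.
      env -i HOME="$HOME" PATH=/usr/bin:/bin setsid my_program arg1 arg2 >/dev/null 2>&1 &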

  • Automate hashing for each file in a folder?

    - by Kennie R.
    I have quite a few FTP folders, and I add a few more each month. I prefer to leave some means of verifying their integrity, for example the files MD5SUMS, SHA256SUMS, ... which I could create using a script. Take for example:

      find ./ -type f -exec md5sum $1 {} \;

    This works fine, but when I then run it again for each SHA-xxx sum, it also creates a sum of the MD5SUMS file, which is really not wanted. Is there a simpler way, or a script, or a common way of hashing all the files into their sums file without causing problems like that? I could really use a better option.

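    A sketch that excludes the checksum files themselves from every pass, so MD5SUMS never ends up listed inside SHA256SUMS and vice versa; the file names follow the question, and the exclusion list should match whatever sum files are kept:

      #!/bin/bash
      # Hash everything in the folder except the sum files themselves.
      find . -type f ! -name MD5SUMS ! -name SHA256SUMS -exec md5sum {} + > MD5SUMS
      find . -type f ! -name MD5SUMS ! -name SHA256SUMS -exec sha256sum {} + > SHA256SUMS

      # Later, verify a folder against its recorded sums:
      md5sum -c MD5SUMS && sha256sum -c SHA256SUMS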

  • Run shell script on Linux box from a shortcut/app in Android?

    - by melat0nin
    I have an Ubuntu box which runs XBMC, which crashes occasionally. Since I have no keyboard connected, I have to SSH in, kill xinit, then restart it. I was wondering if there's an elegant way of doing this from my Android tablet, so I don't have to go to my desktop PC. I've used ConnectBot and can log in, but typing is laborious, even using the edit keys to scroll back up through the buffer. It seems as though it should be possible to script this so that I can execute a shortcut, or at least select a predefined script to be executed. This would seem to have plenty of applications, and there could be a whole suite of scripts - restart webserver, reboot, email logs etc.

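    A sketch of the server side plus the one-liner an Android SSH client could run for you (most clients, ConnectBot included, can be given a command or post-login automation string, and key-based auth removes the typing). The script path, user, host name and the exact kill/restart commands are assumptions to adapt:

      #!/bin/bash
      # /usr/local/bin/restart-xbmc.sh on the Ubuntu box: tear down X, relaunch XBMC.
      killall xinit 2>/dev/null
      sleep 2
      nohup startx /usr/bin/xbmc -- :0 >/dev/null 2>&1 &

      # From the tablet, any SSH client that accepts a remote command can run:
      ssh user@htpc-box /usr/local/bin/restart-xbmc.sh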

  • Use test to check for condition with find and execdir option

    - by slosd
    I think I can keep my question short. Why does the following command produce no output?

      find /usr/share/themes -mindepth 1 -maxdepth 1 -type d -execdir test -d {}/gnome-shell \;

    I expected it to print all folders in /usr/share/themes that contain a folder gnome-shell. Several websites suggest that this usage of test as a command in exec/execdir is possible. From man find:

      -exec command ;
          Execute command; true if 0 status is returned. [...]

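    The command produces nothing because test itself never prints; it only returns an exit status, which find uses as a filter. A sketch of the same command with an explicit -print action added (with -execdir the {} expands to ./themename and is evaluated from /usr/share/themes, which is what the test needs; quoting {}/gnome-shell guards against unusual directory names):

      find /usr/share/themes -mindepth 1 -maxdepth 1 -type d \
          -execdir test -d '{}/gnome-shell' \; -print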

  • ffmpeg - join / merge on top of each other

    - by AisIceEyes
    I'm trying to join together two videos, one on top of the other. I already ran these two ffmpeg commands:

      ffmpeg -i 2_Out_of_Control.VOB -aspect 16:9 \
          -vf "yadif=0:-1:0,crop=w=714:h=476:x=6:y=0,scale=1280:720,boxblur=lp=13" \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2(blurred)Out_of_Control.mp4'

      ffmpeg -i 2_Out_of_Control.VOB \
          -vf "yadif=0:-1:0,crop=w=714:h=476:x=6:y=0,scale=1080:720" \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2(clear)Out_of_Control.mp4'

    I'm currently stuck on putting the "clear" version on top of the "blurred" version. I'm not sure how to do that. Can anybody help please? I've been googling for around 2 days already. I only achieved it by using OpenShot, but I would prefer an ffmpeg command to merge the two videos on top of each other.

    Edit: I want the "clear" video to be at the center, at the top of the "blurred" video.

    Edit 2: the console output would be the same as above:

      ffmpeg -i 2(blurred)Out_of_Control.mp4 \
          -i 2(clear)Out_of_Control.mp4 \
          -aspect 16:9 \
          -vf <just something that will join the two together: the blurred at the bottom, clear at top that is centered> \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2_Out_of_Control_VOB.mp4'

    Edit 3: here is the output when I used ffmpeg -i 2_Out_of_Control.VOB:

      $ ffmpeg -i 2_Out_of_Control.VOB
      ffmpeg version git-2013-10-03-c7fe2a3 Copyright (c) 2000-2013 the FFmpeg developers
        built on Oct 4 2013 05:22:06 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
        configuration: --prefix=/home/username/ffmpeg_build --extra-cflags=-I/home/username/ffmpeg_build/include --extra-ldflags=-L/home/username/ffmpeg_build/lib --bindir=/home/username/bin --extra-libs=-ldl --enable-gpl --enable-libass --enable-libfdk-aac --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-x11grab
        libavutil      52. 46.100 / 52. 46.100
        libavcodec     55. 34.100 / 55. 34.100
        libavformat    55. 19.100 / 55. 19.100
        libavdevice    55.  3.100 / 55.  3.100
        libavfilter     3. 88.101 /  3. 88.101
        libswscale      2.  5.100 /  2.  5.100
        libswresample   0. 17.103 /  0. 17.103
        libpostproc    52.  3.100 / 52.  3.100
      Input #0, mpeg, from '2_Out_of_Control.VOB':
        Duration: 00:05:00.01, start: 0.500000, bitrate: 4574 kb/s
          Stream #0:0[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, smpte170m), 720x480 [SAR 8:9 DAR 4:3], max. 9334 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
          Stream #0:1[0x80]: Audio: ac3, 48000 Hz, stereo, fltp, 384 kb/s
      At least one output file must be specified
      $

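    A sketch of the overlay step with -filter_complex, which takes both finished files as inputs and centres the 1080-wide clear video along the top edge of the 1280-wide blurred one (W and w are the main and overlay widths inside the filter). The encoder settings simply mirror the earlier commands:

      ffmpeg -i '2(blurred)Out_of_Control.mp4' -i '2(clear)Out_of_Control.mp4' \
          -filter_complex "[0:v][1:v]overlay=x=(W-w)/2:y=0" \
          -c:v libx264 -preset medium \
          -c:a copy \
          '2_Out_of_Control_VOB.mp4'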

  • Most effective way to change Linux command prompt for all users?

    - by incredimike
    I have several machines and the hostnames are really long, e.g. companyname-ux-staging-web1.companyname.com. So my prompt looks something like [root@mycompany-ux-staging-web1 ~]# I'd like to shorten that up for all users on all machines with the least amount of work. From what I read I have a couple of options, but they all have their drawbacks. I could change the hostname, but that would likely affect applications. Not a great choice. I could also alter $PS1 at login for all users by editing .bashrc for all existing users, and editing /etc/skel/.bashrc for potential new users. That's a lot of work across 10 machines. What's my best option, or what have I overlooked?

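    A sketch of the /etc/profile.d route, which avoids touching hostnames or per-user .bashrc files: drop one small file on each machine that trims the prompt for every login shell (users whose own .bashrc sets PS1 will still override it). The short-name rule here keeps only the last dash-separated part of the hostname and is an assumption to adjust:

      # /etc/profile.d/short-prompt.sh
      SHORTHOST=${HOSTNAME%%.*}        # companyname-ux-staging-web1
      SHORTHOST=${SHORTHOST##*-}       # web1
      PS1="[\u@${SHORTHOST} \W]\\$ "

      # Push the file to every machine in one loop (hostlist.txt is a placeholder):
      for h in $(cat hostlist.txt); do scp short-prompt.sh root@"$h":/etc/profile.d/; done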

  • Kill all currently running cron jobs

    - by Adelphia
    For some reason my cron job scripts aren't exiting cleanly and they're backing up my server. There are currently a couple hundred processes running for one of my users. I can use the following command to kill all processes by that user, but how can I simplify this to kill only crons?

      pgrep -U username | while read id ; do kill -6 $id ; done

    It would be dangerous to run the above command as is, correct? Wouldn't that kill mysql and other important things?

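    A couple of sketches that narrow the kill to cron-launched work instead of everything the user owns; the script path is a placeholder, and it is worth dry-running with plain pgrep first to see what would be matched:

      # Kill only processes whose command line matches the cron job's script.
      pgrep -u username -f /path/to/cronjob.sh        # dry run: list the PIDs first
      pkill -u username -f /path/to/cronjob.sh

      # Or target that user's processes whose parent is the cron daemon
      # (the daemon is named cron on Debian/Ubuntu, crond on Red Hat):
      for pid in $(pgrep -x cron; pgrep -x crond); do
          pkill -u username -P "$pid"
      done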
