Search Results

Search found 4783 results on 192 pages for 'bash completion'.

Page 76/192 | < Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >

  • How to extract all IDs accessed from a mysql general log using the linux commandline?

    - by shlomoid
    This should be a trivial question for anyone who's good with bash/sed/awk. Unfortunately, I'm not, yet :) I've got a general log from MySQL which contains some queries that have a common parameter: they query on a specific id field. The queries look like

        update tbl set col='binary_values' where id=X;

    I need to process the log and extract all the IDs that these queries touched, each on its own line. The purpose of this is to figure out how many times each ID is accessed; eventually I'd group and count the values. The binary values are indeed binary junk, so they kinda messed up some things I've been trying to do. Eventually we solved the problem temporarily using a Python script, but I'm sure the Linux command-line tool set can do it too. How would you do it?
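
    A minimal grep-based sketch (the log name general.log and the numeric-ID assumption are mine, not from the post):

        # print each id on its own line, then count accesses per id
        grep -o 'where id=[0-9]*' general.log | cut -d= -f2 \
            | sort | uniq -c | sort -rn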


  • Ending tail -f started in a shell script

    - by rangalo
    I have the following: a Java process writing logs to stdout, a shell script starting the Java process, and another shell script which executes the previous one and redirects the log. I check the log file with the tail -f command for the success message. Even though I have exit 0 in the code, I cannot end the tail -f process, which doesn't let my script finish. Is there any other way of doing this in Bash? The code looks like the following:

        function startServer() {
            touch logfile
            startJavaprocess > logfile &
            tail -f logfile | while read line
            do
                if echo $line | grep -q 'Started'; then
                    echo 'Server Started'
                    exit 0
                fi
            done
        }
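
    One possible sketch (an assumption, not the asker's solution): exit 0 inside the while loop only exits the pipeline's subshell, so let grep end the pipeline instead:

        touch logfile
        startJavaprocess > logfile &
        # grep -q exits on the first match; tail then gets SIGPIPE on its
        # next write and dies (GNU tail also supports --pid for this)
        tail -f logfile | grep -q 'Started'
        echo 'Server Started'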


  • More efficient way to find & tar millions of files

    - by Stu Thompson
    I've got a job that has been running at the command-line prompt on my server for two days now:

        find data/ -name filepattern-*2009* -exec tar uf 2008.tar {} \;

    It is taking forever, and then some. Yes, there are millions of files in the target directory. But just running...

        find data/ -name filepattern-*2009* -print > filesOfInterest.txt

    ...takes only two hours or so. At the rate my job is running, it won't be finished for a couple of weeks. That seems unreasonable. Is there a more efficient way to do this? Maybe with a more complicated bash script? A secondary question is "why is my current approach so slow?"
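
    A sketch of one common fix (assuming GNU tar): the -exec form spawns one tar per file, and tar's u mode rescans the existing archive on every invocation, so the cost grows with the archive. Building the list once and letting tar read it avoids both problems:

        find data/ -name 'filepattern-*2009*' -print > filesOfInterest.txt
        tar -cf 2008.tar --files-from filesOfInterest.txt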


  • Combining DROP USER and DROP DATABASE with SELECT .. WHERE query?

    - by zsero
    I'd like to do a very simple thing: replicate the functionality of MySQL's interactive mysql_secure_installation script. My question is: is there a simple, built-in way in MySQL to combine the output of a SELECT query with the input of a DROP USER or DROP DATABASE statement? For example, suppose I'd like to drop all users with empty passwords. How could I do that with the DROP USER statement? I know an obvious solution would be to run everything from, for example, a Python script: run a query with mysql -Bse "select...", parse the output with some program, construct the DROP query, and run it. Is there an easy way to do it in a simple SQL query? I've seen an example here, but I wouldn't call it simple: http://stackoverflow.com/a/12097567/518169 Would you recommend making a combined query, or just parsing the output using, for example, Python or bash scripts/sed?
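
    A sketch of the generate-and-pipe idiom (an assumption, not mysql_secure_installation itself; note the password column is named authentication_string on MySQL 5.7+):

        # build DROP USER statements from a SELECT, then feed them back in
        mysql -Bse "SELECT CONCAT('DROP USER ''', user, '''@''', host, ''';')
                    FROM mysql.user WHERE password = ''" | mysql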


  • Making archive from files with same names in different directories

    - by Tim
    Hi, I have some files with the same names but under different directories, for example path1/filea, path1/fileb, path2/filea, path2/fileb, .... What is the best way to make these files into an archive? Under these directories there are other files that I don't want in the archive. Off the top of my head, I think of using Bash, probably ar, tar and other commands, but am not sure how exactly to do it. Renaming the files seems to make the file names a little complicated; I tend to keep the directory structure inside the archive. Or I might be wrong. Other ideas are welcome! Thanks and regards!
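
    A tar sketch (the paths are the question's examples; the 'file?' pattern is an assumption):

        # keep the directory structure; archive only the named files
        tar -cf picked.tar path1/filea path1/fileb path2/filea path2/fileb
        # or select by pattern, NUL-safe (assumes GNU find and tar)
        find path1 path2 -maxdepth 1 -name 'file?' -print0 \
            | tar -cf picked.tar --null --files-from -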


  • Shell script for testing

    - by Helltone
    I want a simple testing shell script that launches a program N times in parallel, and saves each different output to a different file. I have made a start that launches the program in parallel and saves the output, but how can I keep only the outputs that are different? Also, how can I make the echo DONE! actually indicate the end?

        #!/bin/bash
        N=10
        for ((i=1; i<=N; ++i)); do
            ./test > output-$i &
        done
        echo DONE!
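
    A sketch of the two missing pieces (the md5sum-based dedup is an assumption, not part of the original script):

        # wait makes DONE! fire only after every background run finishes
        wait
        echo DONE!
        # keep one file per distinct output: list later duplicates by hash
        # (the first occurrence of each hash is kept) and remove them
        md5sum output-* | sort | awk 'seen[$1]++ { print $2 }' | xargs -r rm --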


  • How can I convert a file full of unix time strings to human readable dates?

    - by skymook
    I am processing a file full of unix time strings and I want to convert them all to human readable form. The file looks like so:

        1153335401
        1153448586
        1153476729
        1153494310
        1153603662
        1153640211

    Here is the script:

        #! /bin/bash
        FILE="test.txt"
        cat $FILE | while read line; do
            perl -e 'print scalar(gmtime($line)), "\n"'
        done

    This is not working. The output I get is Thu Jan 1 00:00:00 1970 for every line. I think the line breaks are being picked up and that is why it is not working. Any ideas? I'm using Mac OS X, if that makes any difference.
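
    A sketch of the likely culprit (my reading, not from the original thread): the single quotes keep the shell from expanding $line, so Perl sees its own undefined $line and prints the epoch. Passing the value as an argument fixes it:

        while read line; do
            perl -e 'print scalar(gmtime($ARGV[0])), "\n"' "$line"
        done < test.txt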


  • Why does this script work in the current directory but fail when placed in the path?

    - by kiloseven
    I wish to replace my failing memory with a very small shell script:

        #!/bin/sh
        if ! [ -a $1.sav ]; then
            mv $1 $1.sav
            cp $1.sav $1
        fi
        nano $1

    It is intended to save the original version of a script. If the original has been preserved before, it skips the move-and-copy-back (and I use move-and-copy-back to preserve the original timestamp). This works as intended if, after I make it executable with chmod, I launch it from within the directory where I am editing, e.g. with ./safe.sh filename. However, when I move it into /usr/bin and then try to run it in a different directory (without the leading ./), it fails with:

        -bash: /usr/bin/safe.sh: /bin/sh: bad interpreter: Text file busy

    My question is: when I move this script into the path (verified by echo $PATH), why does it then fail? D'oh? Inquiring minds want to know how to make this work.
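
    A diagnostic sketch (assumptions: lsof is installed, and the usual cause applies; the kernel refuses to execute a file that some process still holds open for writing):

        lsof /usr/bin/safe.sh              # show who still has it open
        # rewriting the file as a fresh copy usually clears the state
        cp /usr/bin/safe.sh /tmp/safe.sh && mv /tmp/safe.sh /usr/bin/safe.sh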



  • Stacking standard output of `su`

    - by Kristopher Ives
    I've got some code that I wrote that uses a combination of bash and PHP command-line scripting. The script is run as root and then uses su to become various users. I start a session like this:

        $result = `su SomeUser ./dothis.php`;

    Here ./dothis.php is a script that may generate some output to be stored in $result, but the problem is that there is usually output that doesn't get caught, which makes it hard for me to read my script's output. How can I make sure that the output is being captured within this su stacking?
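
    A sketch of one likely cause (an assumption, not confirmed by the post): backticks capture only stdout, so anything dothis.php writes to stderr leaks straight to the terminal. Merging stderr into the captured stream keeps it all in $result:

        $result = `su SomeUser ./dothis.php 2>&1`;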


  • Alter Git prompt on Windows

    - by kko
    I'm using Git on Windows, installed through GitExtensions with MSysGit (latest), having selected "do not modify my Windows prompt" during installation. Now I would like to modify the default prompt (which by default shows just the branch name) to also show me how much time has passed, and how many local commits I've made, since I last pushed to origin (or specifically origin/master, whichever is easier). So instead of:

        me@myPC /c/myRepo (master)

    I would see something along the lines of:

        me@myPC /c/myRepo (master) 5 | 10:20

    meaning I last pushed 10h 20min ago and have made 5 local commits since. Before you mention it, I am aware there are ways of doing this with PowerShell, but I don't want to use it; I want the standard git bash we all know and love. I found a few solutions involving the PS1 variable in the .bashrc file, but (excuse my poor Unix knowledge) they seem not to be working (for example, the accepted answer to this question). So there you have it. Is this possible?
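
    A partial sketch for .bashrc (assumptions: origin/master is the push target, and __git_ps1 from git's bundled git-prompt.sh is available; the time-since-push half would need reflog parsing and is left out):

        # count local commits not yet on origin/master
        git_ahead() { git rev-list --count origin/master..HEAD 2>/dev/null; }
        PS1='\u@\h \w$(__git_ps1 " (%s)") $(git_ahead)\$ '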


  • How to kill all asynchronous processes

    - by Arko
    Suppose we have a BASH script running some commands in the background. At some time we want to kill all of them, whether they have finished their job or not. Here's an example:

        function command_doing_nothing () {
            sleep 10
            echo "I'm done"
        }

        for (( i = 0; i < 3; i++ )); do
            command_doing_nothing &
        done

        echo "Jobs:"
        jobs

        sleep 1
        # Now we want to kill them

    How to kill those 3 jobs running in the background?
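
    A minimal sketch: jobs -p prints the PIDs of the current shell's background jobs, and kill accepts them all at once:

        kill $(jobs -p)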


  • Check if a symlink has changed

    - by BCS
    I have a daemon that, when started, loads its data from a directory that happens to be a symlink. Periodically, new data is generated and the symlink updated. I want a bash script that will check if the current symlink is the same as the old one (that the daemon started with) and, if not, restart the daemon. My current thought is:

        if [[ ! -e $old_dir || $(readlink "$data_dir") == $(readlink "$old_dir") ]]; then
            echo restart
            ...
            ln "$(readlink "$data_dir")" "$old_dir" -sf
        else
            echo no restart
        fi

    The abstract requirement is: each time the script runs, it needs to check whether a symlink on a given path now points to something other than it did the last time, and if so, do something. (The alternative would be to check if the data at the path has changed, but I don't see that being any cleaner.) My questions: Is this a good approach? Does anyone have a better idea? Where should I put $old_dir?
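
    A sketch of an alternative (my assumption, not from the post): remember the last-seen target in a plain state file rather than a second symlink ($state is a hypothetical path such as /var/run/mydaemon.target):

        current=$(readlink "$data_dir")
        if [[ ! -f $state || $current != $(cat "$state") ]]; then
            echo restart
            printf '%s\n' "$current" > "$state"
        fi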


  • Calculating statistics directly from a CSV file

    - by User1
    I have a transaction log file in CSV format that I want to use to run statistics. The log has the following fields:

        date:        time/date stamp
        salesperson: the username of the person who closed the sale
        promo:       sum total of items in the sale that were promotions
        amount:      grand total of the sale

    I'd like to get the following statistics:

        salesperson: the username of the salesperson being analyzed
        minAmount:   the smallest grand total of this salesperson's transactions
        avgAmount:   the mean grand total..
        maxAmount:   the largest grand total..
        minPromo:    the smallest promo amount by the salesperson
        avgPromo:    the mean promo amount...

    I'm tempted to build a database structure, import this file, write SQL, and pull out the stats. I don't need anything more from this data than these stats. Is there an easier way? I'm hoping some bash script could make this easy.
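
    An awk sketch (assumptions: fields arrive in the order date,salesperson,promo,amount, no embedded commas, and amounts are positive):

        awk -F, '
            { n[$2]++; sum[$2] += $4; psum[$2] += $3
              if (!($2 in min) || $4 < min[$2]) min[$2] = $4
              if ($4 > max[$2]) max[$2] = $4
              if (!($2 in pmin) || $3 < pmin[$2]) pmin[$2] = $3 }
            END { for (s in n)
                    printf "%s min=%s avg=%.2f max=%s minPromo=%s avgPromo=%.2f\n",
                           s, min[s], sum[s]/n[s], max[s], pmin[s], psum[s]/n[s] }
        ' transactions.csv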


  • How do I arbitrarily reorder lines in a text file using a Unix shell?

    - by Tim Bellis
    I've got a text file with an arbitrary number of lines, e.g.:

        one line
        some other line
        an additional line
        one more here

    I'd like to write a script to reorder those lines based on a given order. E.g. an input of 2 1 3 4 would swap the first and second lines. An input of 3 1 2 4 would put the 3rd line first, the 1st line second, the 2nd line third and keep the 4th line fourth. I could hack something together, but I'm wondering if there's an elegant solution? I can use either bash or ksh.
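
    An awk sketch (the order string and file name are placeholders):

        order="3 1 2 4"
        awk -v order="$order" '
            { line[NR] = $0 }
            END { n = split(order, idx, " ")
                  for (i = 1; i <= n; i++) print line[idx[i]] }
        ' file.txt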


  • Question about regex in linux commands.

    - by smwikipedia
    I ran the following command at a Linux bash prompt:

        apt-cache search hex.*(view|edit)

    My intention was to find any software packages whose name/description contains the pattern hex.*(view|edit). But among the results I got this:

        kipi-plugins - image manipulation/handling plugins for KIPI aware programs

    How could this be in the results list? I didn't see any matching string in this result. Is this a bug in the apt-cache search command? Or do I misunderstand how the regex is used by this command? Many thanks.
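
    Two hedged checks (assumptions, not a confirmed diagnosis): unquoted, the shell may mangle the pattern before apt-cache sees it, and apt-cache search matches the full package description, not just the one-line summary it prints:

        apt-cache search 'hex.*(view|edit)'          # quote the regex
        apt-cache show kipi-plugins | grep -iE 'hex|view|edit'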


  • Controlling rsync with Python?

    - by Cheesemold
    I've been wanting to write a python script that would run several instances of rsync in sequence for backing up data to a different computer. At the moment I just have a text file with the commands I use, and I've just been copy-pasting them into the terminal, which seems kinda silly. I want to be able to use python to do this for me. I know very vaguely how to use subprocess.Popen, but I have no clue how to get python to interact with rsync directly, like entering my password for it. Can python do that? Something like:

        if theProccess.proccessResponse == "Password:":
            theProccess.respond(string)

    Or is the best I can do just to have it, or even a bash script, run the rsyncs in sequence and type my password in over and over again? Thanks in advance.
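
    A sketch that sidesteps the prompt entirely (an assumption: the rsyncs run over ssh, so public-key auth removes the password question; host and paths are placeholders):

        ssh-keygen -t rsa               # once; accept the defaults
        ssh-copy-id user@backuphost     # install the key on the target
        rsync -avz ~/data user@backuphost:backups/   # now runs unprompted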


  • Removing old directories with logs

    - by Mcgiwer
    My IM client stores its logs according to the contact name, and I have created a file with the list of active contacts. My problem is the following: I would like to create a bash script that reads the active contacts' names from the file and compares them with the directories. If a directory name isn't found on the list, it should be moved to another directory (let's call it "archive"). Let me visualise it for you. Content of the list:

        contact1
        contact2

    Content of the dir:

        contact1
        contact2
        contact3
        contact4

    After running the script, the content of the dir:

        contact1
        contact2
        contact3 == ../archive
        contact4 == ../archive
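
    A sketch (assumptions: the list is named active.txt with one name per line, and the script runs inside the log directory):

        for d in */; do
            d=${d%/}
            # -x matches the whole line, -F takes the name literally
            grep -qxF "$d" active.txt || mv -- "$d" ../archive/
        done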


  • How can I do a 'where' clause in Linux shell?

    - by Hoa
    I have a CSV file and I would like to filter all the lines where the 19th column has two or more characters. I know the individual pieces but can't figure out how to glue them together. First I have to cat the file. The following prints the 19th column:

        awk -F "," '{print $19}' file.txt

    awk also has length and if. And I know it all has to be glued together using pipes. I'm just getting stuck on the exact syntax, since I have not done much bash programming before.
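
    A one-line awk sketch: the condition is the whole program, so no pipes are needed (assuming no quoted fields containing commas):

        awk -F, 'length($19) >= 2' file.txt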


  • Shell Prompt Line Wrapping Issue

    - by Rob
    I've done something to break my Bash shell prompt in OS X (10.5.7) Terminal. This is the PS1 that I had configured:

        PS1='\[\e[1;32m\]\h\[\e[0m\]:\[\e[1;34m\]\w\[\e[0m\]\$ '

    As far as I can tell I have the color commands escaping correctly. However, when I scroll up and down in my command history I often get line-wrapping issues if the historic commands wrap onto multiple lines. I simplified my prompts to the following:

        PS1='\[\e[1m\]\h:\w\$ \[\e[0m\]'
        PS2='> '

    And I still see something like:

        localhost:~/Library/Application Support/Firefox/Profiles/knpmxpup.Defau
        lt/extensions/{1A2D0EC4-75F5-4c91-89C4-3656F6E44B68}$ expocd \{1A2D0EC4-7
        5F5-4c91-89C4-3656F6E export PS1="\[ \e[1;32m\]\h\[\e[0m\]: cd Library/Appl
        ication\ Support/

    I've also tried \033 instead of \e. I just included PS2 up there for information; I haven't changed that from the install default. If I completely remove the color codes then everything works fine. Any ideas?
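
    A bisecting sketch (an assumption, not a confirmed fix): start from a minimal, correctly bracketed prompt and re-add pieces one at a time; wrapping breaks as soon as any escape sequence lands outside a \[ ... \] pair, because readline then miscounts the prompt's printable width:

        # every non-printing sequence sits inside \[ ... \]
        PS1='\[\033[1;32m\]\h\[\033[0m\]:\w\$ '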


  • "tail -f" alternate which doesn't scroll the terminal window

    - by Jagtesh Chadha
    I want to check a file at continuous intervals for contents which keep changing. tail -f doesn't suffice as the file doesn't grow in size. I could use a simple while loop in bash to the same effect:

        while [ 1 ]; do cat /proc/acpi/battery/BAT1/state; sleep 10; done

    It works, although it has the unwanted effect of scrolling my terminal window. So now I'm wondering: is there a linux/shell command that would display the output of this file without scrolling the terminal?
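
    watch does exactly this, repainting the screen in place instead of scrolling:

        watch -n 10 cat /proc/acpi/battery/BAT1/state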


  • How to move many files in multiple different directories (on Linux)

    - by user1335982
    My problem is that I have too many files in a single directory. I cannot ls the directory, because it is too large. I need to move all the files into a better directory structure. I'm using the last 3 digits of the ID as folders, in reverse order. For example, ID 2018972 will go in /2/7/9/img_2018972.jpg. I've created the directories, but now I need help with the bash script. I know the IDs; they are in the range 1,300,000 - 2,000,000. But I can't handle regular expressions. I want to move all files like this:

        /images/folder/img_2018972.jpg -> /images/2/7/9/img_2018972.jpg

    I will appreciate any help on this subject. Thanks!
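
    A sketch (assumptions: the files sit in /images/folder and are all named img_<id>.jpg):

        cd /images/folder
        for f in img_*.jpg; do
            id=${f#img_}; id=${id%.jpg}
            # last three digits of the id, in reverse order
            a=${id: -1}; b=${id: -2:1}; c=${id: -3:1}
            mkdir -p "/images/$a/$b/$c"
            mv -- "$f" "/images/$a/$b/$c/$f"
        done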


  • Replace url() relative path with full domain in css files

    - by deepwell
    I'd like to run a script on release that replaces all url() declarations in a css file with the full domain path, because images are hosted on a static web server. Examples:

        Current: background-image: url(/images/menu.gif);
        Desired: background-image: url(http://example.com/images/menu.gif);

        Current: background-image: url('/images/menu.gif');
        Desired: background-image: url('http://example.com/images/menu.gif');

        Current: background-image: url("/images/menu.gif");
        Desired: background-image: url("http://example.com/images/menu.gif");

    I have concocted a bash script using sed to do just that, but it does not handle urls with quotes (url('')) or urls that already have a full path:

        STATIC_HOST="http://example.com"
        sed -i '' "s|url(\([^)]*\)|url($STATIC_HOST\1|g" main.css
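
    A sketch of one way to cover both gaps (BSD sed syntax, matching the -i '' above): capture an optional quote and require the path to start with /, so full URLs are left alone:

        STATIC_HOST="http://example.com"
        sed -i '' -E "s|url\((['\"]?)/|url(\1$STATIC_HOST/|g" main.css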


  • Why does using set -e cause my script to fail when called in crontab

    - by SDGuero
    I have a bash script that performs several file operations. When any user runs this script, it executes successfully and outputs a few lines of text, but when I try to cron it there are problems. It seems to run (I see an entry in the cron log showing it was kicked off) but nothing happens: it doesn't output anything and doesn't do any of its file operations. It also doesn't appear anywhere in the running processes, so it appears to be exiting immediately. After some troubleshooting I found that removing "set -e" resolved the issue; it now runs from the system cron without a problem. So it works, but I'd rather have set -e enabled so the script exits if there is an error. Does anyone know why "set -e" is causing my script to exit? Thanks for the help, Ryan
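
    A debugging sketch (assumptions: the usual suspect is cron's minimal environment, where an early command fails and set -e silently aborts; the trace path is hypothetical):

        #!/bin/bash
        set -e
        exec 2> /tmp/myscript.trace   # capture the trace when run from cron
        set -x                        # the last line logged is the command
                                      # whose non-zero exit killed the script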


  • Bash script for file search and replace

    - by D3orn
    Hey, I'm trying to write a little bash script. It should copy a dir and all the files in it. Then it should search each file and dir in this copied dir for a string (e.g. @ForTestingOnly) and save the line number. Then it should go on and count each { and }; as soon as the numbers are equal it should save the line number again, and then delete all the lines between these 2 numbers. I'm trying to make a script which searches for all these annotations and then deletes the method which comes directly after each one. Thx for help...
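
    An awk sketch of the brace-counting part (assumptions: Java-like sources, braces balanced per method, and no { or } inside string literals; file names are placeholders):

        awk '
            /@ForTestingOnly/ { skip = 1; depth = 0; seen = 0; next }
            skip {
                op = gsub(/\{/, "{"); cl = gsub(/\}/, "}")
                depth += op - cl
                if (op > 0) seen = 1
                if (seen && depth <= 0) skip = 0   # closing brace reached
                next                               # drop the skipped line
            }
            { print }
        ' Foo.java > Foo.stripped.java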

