Search Results

Search found 720 results on 29 pages for 'sed'.

Page 10/29 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Using perl to parse a file and insert specific values into a database

    - by Sean
    Disclaimer: I'm a newbie at scripting in Perl; this is partially a learning exercise (but still a project for work). I have a much stronger grasp of shell scripting, so my examples will likely be formatted with that mindset (but I would like to write them in Perl). Sorry in advance for my verbosity; I want to make sure I am at least marginally clear in getting my point across.

    I have a text file (a reference guide) that is a Word document converted to text and then switched from Windows to UNIX format in Notepad++. The file is uniform in that each section has the same fields/formatting/tables. My plan, in basic terms, is to grab each section, keyed by a unique batch job name, and place all of the values into a database (or maybe just an Excel file) so the fields can be searched/edited for each job much more easily than in the Word file, and possibly create a web interface later on.

    So what I want to do is grab each section by doing something like:

      sed -n '/job_name_1_regex/,/job_name_2_regex/p' file.txt

    How would this be formatted within a Perl script? (Grab the section in total, then break it down further from there.) To read the file in the script I have:

      open FORMAT_FILE, 'test_format.txt';

    and then I use foreach $line (<FORMAT_FILE>) to parse the file line by line. Is there a better way?

    My next problem is that I converted from a Word doc with tables, which look like:

      Table Heading 1      Table Heading 2
      Heading 1/Value 1    Heading 2/Value 1
      Heading 1/Value 2    Heading 2/Value 2

    but in the text file they come out as:

      Table Heading 1
      Table Heading 2
      Heading 1/Value 1
      Heading 1/Value 2
      Heading 2/Value 1
      Heading 2/Value 2

    So I want to have "Heading 1" and "Heading 2" as column names and then put the respective values under them. I'm just not sure how to get the values in relation to their heading from the text file. The values for Heading 1 will always start at the line number of Heading 1 plus 2 (Heading 1, Heading 2, values for Heading 1). I know this can be done in awk/sed pretty easily; I'm just not sure how to address it inside a Perl script. After I have all the right values and such, linking it up to a database may be an issue as well; I haven't started looking at the way Perl interacts with DBs yet. Sorry if this is a bit scatterbrained... it's still not fully formed in my head.
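
    A minimal sketch of the section-grabbing step in Perl, assuming the section really is delimited by two job-name patterns; Perl's range (flip-flop) operator plays the same role as sed's /start/,/end/ addresses:

      # print everything from the first job-name line through the next one,
      # much like sed -n '/start/,/end/p'
      perl -ne 'print if /job_name_1_regex/ .. /job_name_2_regex/' test_format.txt > section.txt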

    Read the article

  • Looking for regex to extract email addresses from /etc/passwd

    - by Brent
    Most of my users have email addresses associated with their profile in /etc/passwd. They are always in the 5th field, which I can grab, but they appear at different positions within the comma-separated list in that field. Can somebody give me a regex to grab just the email address (delimited by commas) from a line in this file? (I will be using grep and sed from a bash script.) Sample lines from the file:

      user1:x:1147:5005:User One,Department,,,[email protected]:/home/directory:/bin/bash
      user2:x:1148:5002:User Two,Department2,[email protected],:/home/directory:/bin/bash
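
    A rough sketch of one approach, assuming an address can be recognised simply by containing an @ and never contains a comma or colon itself:

      # print the comma/colon-free token containing '@' from each passwd line
      grep -o '[^,:]*@[^,:]*' /etc/passwd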

    Read the article

  • awk - Remove line if field is duplicate

    - by Kyle
    Looking for an awk (or sed) one-liner to remove lines from the output if the first field matches that of an earlier line. An example I've seen for removing fully duplicate lines is: awk 'a !~ $0; {a=$0}' I tried using it as a basis, with no luck (I thought changing the $0's to $1's would do the trick, but it didn't seem to work).
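
    A short sketch of the usual awk idiom for this, assuming "duplicate" means keep only the first line seen for each value of field 1:

      # print a line only the first time its first field appears
      awk '!seen[$1]++' input.txt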

    Read the article

  • Can this be done by sed?

    - by SpawnCxy
    Hi all, I need to deal with a file that looks like this:

      1234
      4343
      5345345
      53453
      4343

    What I want to do is run the following command for the number on each line:

      grep $num1 ./somepath    # get num1_res

    and then write $num1 and $num1_res to another file, which would look like:

      1234 32
      4343 234
      5345345 349
      53453 78
      #...etc

    Is there a good solution with sed, or some other simple way? Thanks.
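
    This is easier in plain bash than in sed; a sketch, assuming num1_res is meant to be a match count (as the sample output suggests) and that the numbers live in a file called nums.txt:

      while read -r num; do
          echo "$num $(grep -c "$num" ./somepath)"    # count lines matching this number
      done < nums.txt > results.txt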

    Read the article

  • Remove a special character and insert it into another line

    - by Kraj
    How do I remove a special character ($) from a big file and insert that character into a particular line? For example:

      input.tsv
      $22 23 24 25 26
      33 33 34 35 36
      44 45 46 47 48
      ID ID1 ID2 ID3 ID4

      Output.tsv
      22 23 24 25 26
      33 33 34 35 36
      44 45 46 47 48
      $ID ID1 ID2 ID3 ID4

    I've used sed -e 's/\$//g' input.tsv to remove the '$'; how can I then add '$' to the line starting with ID?
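
    A sketch doing both steps in one sed invocation, assuming the '$' only ever appears at the start of a line and the target line is the one beginning with ID:

      # strip a leading '$', then prefix the ID line with '$'
      sed -e 's/^\$//' -e 's/^ID/$ID/' input.tsv > Output.tsv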

    Read the article

  • How to search a file for a pattern and get a new file from the match point to end of file?

    - by WilliamKF
    I need to take a file and find the first occurrence of a literal string pattern as a complete line of the file:

      Acknowledgments:

    I then wish to create a new file containing everything from the matching line all the way to the end of the file. I expect Perl is a good way to do this, but I'm not much of a Perl person; alternatively, maybe sed is a good way? Please suggest a simple way to reliably accomplish this in Unix.
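
    A small sketch with sed, assuming the marker line is exactly "Acknowledgments:" with nothing else on it (file names are placeholders):

      # print from the first line that is exactly "Acknowledgments:" through end of file
      sed -n '/^Acknowledgments:$/,$p' input.txt > tail_section.txt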

    Read the article

  • Find and replace date within a file

    - by user1629011
    My apologies if my title is not descriptive enough; I believe the following will be. I have 3 files, just plain text, and within each file is a date, for example:

      Date: 2012-08-31

    I would like a command/script to find this line and update it to the current date, but the existing date will be ever-changing and may not be known going in (without viewing the contents of the file). Knowing what the date is, it's simple enough with sed, but how can I do this knowing the syntax of the line I want to modify, but not the specific values? ("Date: " at least is unchanging.)
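
    A sketch that anchors on the fixed "Date: " prefix rather than on the old value; -i edits in place with GNU sed, and the file names here are placeholders:

      today=$(date +%Y-%m-%d)
      # rewrite whatever follows "Date: " with today's date
      sed -i "s/^Date: .*/Date: $today/" file1.txt file2.txt file3.txt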

    Read the article

  • Extract IDs from CSS

    - by nosuchip
    I have a CSS file with many entries like:

      #id1, #id2, #id3, #id4 { ... }
      #id3, #id2 { ... }
      #id2, #id4 { ... }

    I want to extract the list of unique IDs using command-line tools (msys). Unique means each ID appears in the list only once. How? PS: I know how to do it using Python, but what about awk/sed/cat?
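
    A sketch of one pipeline, assuming IDs only need to be recognised by a leading '#' followed by a CSS name (style.css is a placeholder):

      # pull every #id token, drop the '#', and keep each ID once
      grep -o '#[A-Za-z_-][A-Za-z0-9_-]*' style.css | sed 's/^#//' | sort -u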

    Read the article

  • Killing HTML nodes from shell

    - by hendry
    I need a solution to kill nodes like <footer>foobar</footer> and <div class="nav"></div> from many HTML files. I want to dump a site to disk without the menus, footers and whatnot. Ideally I would accomplish this task using basic Unix tools like sed. Since it's not XML, I can't use xmlstarlet. Could anyone please suggest recipes, so I can ideally have a script running as

      kill-node.sh 'div class="toplinks"' *.html

    to prune the bits I don't want? Thank you.
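
    Regex over HTML is fragile, but as a rough sketch (assuming the unwanted element never nests another tag of the same name inside itself), a perl one-liner copes with multi-line elements better than sed does:

      # slurp each file whole (-0777) and delete the matching element, editing in place
      perl -0777 -pi -e 's{<div class="toplinks".*?</div>}{}gs' *.html
      perl -0777 -pi -e 's{<footer\b.*?</footer>}{}gs' *.html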

    Read the article

  • Replace string in one file with contents of a second file

    - by jag7720
    I have two files:

      fileA:
      date >> /root/kvno.out
      kvno serverXXX\$ >> /root/kvno.out

      fileB:
      foobar

    I need to create a new file, fileC, with the same contents as fileA, except with the string XXX being replaced with the contents of fileB:

      date >> /root/kvno.out
      kvno serverfoobar\$ >> /root/kvno.out

    I'd like to do this using sed. I tried some of the examples I found but I only get the contents of fileB in fileC.
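
    A small sketch, assuming fileB is a single line containing nothing that sed treats specially (no '/', '&' or backslashes):

      # splice fileB's single line into the substitution
      sed "s/XXX/$(cat fileB)/g" fileA > fileC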

    Read the article

  • Parsing the first column of a csv file to a new file.

    - by S1syphus
    Operating system: OS X. Method: from the command line, using sed, cut or gawk, although preferably without installing modules. Essentially I am trying to take the first column of a CSV file and write it to a new file.

    Example input file:

      EXAMPLEfoo,60,6
      EXAMPLEbar,30,6
      EXAMPLE1,60,3
      EXAMPLE2,120,6
      EXAMPLE3,60,6
      EXAMPLE4,30,6

    Desired output:

      EXAMPLEfoo
      EXAMPLEbar
      EXAMPLE1
      EXAMPLE2
      EXAMPLE3
      EXAMPLE4

    So I want the first column. Here is what I have tried so far:

      awk -F"," '{print $1}' in.csv > out.txt
      awk -F"," '{for (i=2;i<=NF;i++)}' in.csv > out.txt
      awk -F"," 'BEGIN { OFS="," }' '{print $1}' in.csv > out.txt
      cat in.csv | cut -d \, -f 1 > out.txt

    None seem to work; they either print only the first line or nothing at all, so I would assume it's failing to read the file line by line.
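
    When an otherwise-correct awk -F, '{print $1}' yields a single line, the usual suspect is carriage-return line endings rather than the awk itself; a sketch, assuming that is the case here:

      # convert CR (old-Mac) or CRLF line endings to plain LF, then take field 1,
      # skipping any blank records the conversion leaves behind
      tr '\r' '\n' < in.csv | awk -F, 'NF { print $1 }' > out.txt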

    Read the article

  • Adding line with text between pattern and next occurrence of the same pattern in bash

    - by kasper
    I am writing a bash script that modifies a file that looks like this:

      --- usr1 ---
      data data data data
      data data data data
      data data data data
      --- usr2 ---
      data data data data
      data data data data
      --- usr3 ---
      data data data data
      --- endline ---

    One question is: how do I add the next user line, --- usrn ---, after the last user's data lines? The second is: how do I delete a specific user's data lines (the data lines and the --- userx --- header)? For example, I would like to delete usr2 with his entire data set. It must work on bash 2.05 :) and I think it will use awk or sed, but I'm not sure.
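
    A sketch with awk, using the usrn/usr2 names from the question and users.txt as a placeholder file name; the temp-file shuffle is just one way to write the result back:

      # insert a new user header just before the end marker, and drop the whole usr2 block
      awk '
          /^--- endline ---$/          { print "--- usrn ---" }   # new user goes before the end marker
          /^--- usr2 ---$/             { skip = 1 }               # start skipping at the usr2 header
          /^--- / && !/^--- usr2 ---$/ { skip = 0 }               # any other header ends the skip
          !skip                        { print }
      ' users.txt > users.tmp && mv users.tmp users.txt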

    Read the article

  • How do I get Git's latest stable release version number?

    - by MattDiPasquale
    I'm writing a git-install.sh script: http://gist.github.com/419201 To get Git's latest stable release version number, I do:

      LSR_NUM=$(curl --silent http://git-scm.com/ | sed -n '/id="ver"/ s/.*v\([0-9].*\)<.*/\1/p')

    Two questions:

    1. Refactor my code: is there a better way to do this programmatically? This works now, but it's brittle: if the web page at http://git-scm.com/ changes, the line above may stop working.
    2. PHP has a reliable URL for getting the latest release version: http://stackoverflow.com/questions/288206/is-there-a-site-which-simply-outputs-the-latest-stable-version-numbers-of-php-and Is there something like this for Git? This comes close: http://www.kernel.org/pub/software/scm/git/
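
    A sketch of a less scrape-dependent approach: ask a Git mirror for its tags with git ls-remote. This assumes git is already installed, so it only suits the upgrade case of an install script:

      # list release tags, keep plain vX.Y.Z ones (no -rc, no ^{} entries), take the highest
      LSR_NUM=$(git ls-remote --tags https://github.com/git/git.git \
          | sed -n 's|.*refs/tags/v\([0-9][0-9.]*\)$|\1|p' \
          | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | tail -n 1)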

    Read the article

  • Using AWK to process files

    - by Mat
    Hi all, I have something that must be finished before 4.00 PM. I want to create a batch file, with awk, grep or sed, that keeps all lines beginning with 'INSERT' and deletes the other lines. After this, I want to replace the string "change)" with "servicechange)" whenever the 3rd word of the treated line is "donextsit". I don't know how to do this before my deadline (4.00 PM). Please HELP ME!! Thanks for your answers, and sorry for my English ;)
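
    A sketch that does both steps in one awk pass; the file names are placeholders and "word" is taken to mean a whitespace-separated field:

      # keep only INSERT lines; on those whose 3rd field is "donextsit",
      # rewrite "change)" to "servicechange)"
      awk '/^INSERT/ { if ($3 == "donextsit") gsub(/change\)/, "servicechange)"); print }' input.txt > output.txt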

    Read the article

  • Replace CR/LF in a text file only after a certain column

    - by Olav
    I have a large text file I would like to put on my ebook reader, but the formatting comes out all wrong because all lines are hard-wrapped at or before column 80 with CR/LF, and paragraphs/headers are not marked differently; there is only a single CR/LF there too. What I would like is to replace every CR/LF that falls after column 75 with a space. That would make most paragraphs continuous. (Not a perfect solution, but a lot better to read.) Is it possible to do this with a regex? Preferably a (Linux) perl or sed one-liner, alternatively a Notepad++ regex.
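
    A perl one-liner sketch: join a line to the next whenever its text reaches column 75, otherwise keep its line break (book.txt and the exact threshold are assumptions):

      # strip the CR/LF, then either continue the line with a space (long lines)
      # or put the newline back (short lines, i.e. paragraph/header ends)
      perl -pe 'chomp; s/\r$//; $_ = (length() >= 75) ? "$_ " : "$_\n";' book.txt > joined.txt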

    Read the article

  • How to extract all IDs accessed from a mysql general log using the linux commandline?

    - by shlomoid
    This should be a trivial question for anyone who's good with bash/sed/awk. Unfortunately, I'm not, yet :) I've got a general log from MySQL which contains some queries that have a common parameter: they query on a specific id field. The queries look like:

      update tbl set col='binary_values' where id=X;

    I need to process the log and extract all the IDs that these queries touched, each on its own line. The purpose of this is to figure out how many times each ID is accessed; eventually I'd group and count the values. The binary values are indeed binary junk, so they kinda messed up some of the things I've been trying to do. Eventually we solved the problem temporarily using a Python script, but I'm sure the Linux command-line tool set can do it too. How would you do it?
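
    A pipeline sketch, assuming the IDs are numeric and general.log is the log file; grep -a forces binary-looking lines to be treated as text:

      # pull out "where id=<digits>", keep just the digits, then count per ID
      grep -ao 'where id=[0-9]*' general.log | sed 's/where id=//' | sort | uniq -c | sort -rn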

    Read the article

  • Regular Expression - Capture and Replace Select Sequences

    - by Chad
    Take the following file...

      ABCD,1234,http://example.com/mpe.exthttp://example/xyz.ext
      EFGH,5678,http://example.com/wer.exthttp://example/ljn.ext

    Note that "ext" is a constant file extension throughout the file. I am looking for an expression to turn that file into something like this...

      ABCD,1234,http://example.com/mpe.ext
      ABCD,1234,http://example/xyz.ext
      EFGH,5678,http://example.com/wer.ext
      EFGH,5678,http://example/ljn.ext

    In a nutshell I need to capture everything up to the urls. Then I need to capture each URL and put them on their own line with the leading capture. I am working with sed to do this and I cannot figure out how to make it work correctly. Any ideas?
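
    A GNU sed sketch for the case shown, assuming exactly two URLs run together per line (more URLs per line would need a loop):

      # split "prefix,url1url2" into two lines, each carrying the prefix
      sed -E 's|^([^,]*,[^,]*,)(.*\.ext)(http.*\.ext)$|\1\2\n\1\3|' input.txt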

    Read the article

  • substitute string in php from another file

    - by Gjergj Sheldija
    I have some old PHP files which I'd like to convert to use gettext. Those files have content like this:

      $LD = 'Some String';
      $Another = 'some other ~n~ string';

    I have to substitute all the $LD, $Another, etc. in the files where they are declared with something like:

      _('Some string');

    Hacking a bit, I created some sort of regexp to find the declarations; my aim was to use sed and awk to do the replaces, but I don't have any clue how to do those substitutions... any help?
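
    One reading of the question is to wrap each string literal in _() at its declaration; a sketch under that assumption, for single-line, single-quoted literals with no escaped quotes (old.php/new.php are placeholders):

      # turn  $Name = 'text';  into  $Name = _('text');
      sed -E "s/^(\\\$[A-Za-z_][A-Za-z0-9_]*[[:space:]]*=[[:space:]]*)('[^']*');/\\1_(\\2);/" old.php > new.php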

    Read the article

  • Replace delimited block of text in file with the contents of another file

    - by rmarimon
    I need to write a simple script to replace a block of text in a configuration file with the contents of another file. Let's assume we have the following simplified files:

      server.xml

      <?xml version='1.0' encoding='UTF-8'?>
      <Server port="8005" shutdown="SHUTDOWN">
        <Service name="Catalina">
          <Connector port="80" protocol="HTTP/1.1"/>
          <Engine name="Catalina" defaultHost="localhost">
            <!-- BEGIN realm -->
            <sometags/>
            <sometags/>
            <!-- END realm -->
            <Host name="localhost" appBase="webapps"/>
          </Engine>
        </Service>
      </Server>

      realm.xml

      <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>

    I want to run a script and have realm.xml replace the contents between the <!-- BEGIN realm --> and <!-- END realm --> lines. If realm.xml changes, then whenever the script is run again it will replace those lines again with the new contents of realm.xml. This is intended to be run from /etc/init.d/tomcat on startup of the service, on multiple installations where the realm is going to be different. I'm not so sure how I can do this simply with awk or sed.
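
    A GNU sed sketch: first empty out whatever currently sits between the markers (keeping the marker lines), then read realm.xml in right after the BEGIN marker, so the script can be re-run safely:

      # delete the old block contents, then splice in the current realm.xml
      sed -i -e '/<!-- BEGIN realm -->/,/<!-- END realm -->/{//!d;}' \
             -e '/<!-- BEGIN realm -->/r realm.xml' server.xml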

    Read the article
