Search Results

Search found 464 results on 19 pages for 'grammar nazi'.

Page 4/19 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Need Help Customizing a Grammar Checking Replace Rule in Java

    - by user567785
    Hello, I am currently adding the Khmer (Cambodian) language to LanguageTool, an opensource grammar checker for OpenOffice (http://www.languagetool.org). I don't know enough Java to customize one of the scripts and wanted to make a request here asking if anyone would be willing to customize it for me (I can put link to your website at http://www.sbbic.org/lang/en-us/volunteer/ if you help). Here is the script that needs customization KhmerWordCoherencyRule.java: /* LanguageTool, a natural language style checker * Copyright (C) 2005 Daniel Naber (http://www.danielnaber.de) * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 * USA */ package de.danielnaber.languagetool.rules.km; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Locale; import java.util.Map; import java.util.ResourceBundle; import de.danielnaber.languagetool.AnalyzedSentence; import de.danielnaber.languagetool.AnalyzedToken; import de.danielnaber.languagetool.AnalyzedTokenReadings; import de.danielnaber.languagetool.JLanguageTool; import de.danielnaber.languagetool.tools.StringTools; import de.danielnaber.languagetool.rules.Category; import de.danielnaber.languagetool.rules.RuleMatch; /** * A Khmer rule that matches words or phrases which should not be used and suggests * correct ones instead. Loads the relevant words from * <code>rules/km/coherency.txt</code>, where km is a code of the language. * * @author Andriy Rysin */ public abstract class KhmerWordCoherencyRule extends KhmerRule { private static final String FILE_ENCODING = "utf-8"; private Map<String, String> wrongWords; // e.g. "????? -> "?????" private static final String FILE_NAME = "/km/coherency.txt"; public abstract String getFileName(); public String getEncoding() { return FILE_ENCODING; } /** * Indicates if the rule is case-sensitive. Default value is <code>true</code>. * @return true if the rule is case-sensitive, false otherwise. */ //in Khmer there is no case public boolean isCaseSensitive() { return false; } /** * @return the locale used for case conversion when {@link #isCaseSensitive()} is set to <code>false</code>. 
*/ public Locale getLocale() { return Locale.getDefault(); } public KhmerWordCoherencyRule(final ResourceBundle messages) throws IOException { if (messages != null) { super.setCategory(new Category(messages.getString("category_misc"))); } wrongWords = loadWords(JLanguageTool.getDataBroker().getFromRulesDirAsStream(getFileName())); } public String getId() { return "KM_WORD_COHERENCY"; } public String getDescription() { return "Checks for wrong words/phrases"; } public String getSuggestion() { return " does not match your previous spelling of the word, use "; } public String getShort() { return "Use a consistant spelling throughout"; } public final RuleMatch[] match(final AnalyzedSentence text) { final List<RuleMatch> ruleMatches = new ArrayList<RuleMatch>(); final AnalyzedTokenReadings[] tokens = text.getTokensWithoutWhitespace(); for (int i = 1; i < tokens.length; i++) { final String token = tokens[i].getToken(); final String origToken = token; final String replacement = isCaseSensitive()?wrongWords.get(token):wrongWords.get(token.toLowerCase(getLocale())); if (replacement != null) { final String msg = token + getSuggestion() + replacement; final int pos = tokens[i].getStartPos(); final RuleMatch potentialRuleMatch = new RuleMatch(this, pos, pos + origToken.length(), msg, getShort()); if (!isCaseSensitive() && StringTools.startsWithUppercase(token)) { potentialRuleMatch.setSuggestedReplacement(StringTools.uppercaseFirstChar(replacement)); } else { potentialRuleMatch.setSuggestedReplacement(replacement); } ruleMatches.add(potentialRuleMatch); } } return toRuleMatchArray(ruleMatches); } private Map<String, String> loadWords(final InputStream file) throws IOException { final Map<String, String> map = new HashMap<String, String>(); InputStreamReader isr = null; BufferedReader br = null; try { isr = new InputStreamReader(file, getEncoding()); br = new BufferedReader(isr); String line; while ((line = br.readLine()) != null) { line = line.trim(); if (line.length() < 1) { continue; } if (line.charAt(0) == '#') { // ignore comments continue; } final String[] parts = line.split(";"); if (parts.length != 2) { throw new IOException("Format error in file " + JLanguageTool.getDataBroker().getFromRulesDirAsUrl(getFileName()) + ", line: " + line); } map.put(parts[0], parts[1]); } } finally { if (br != null) { br.close(); } if (isr != null) { isr.close(); } } return map; } public void reset() { } } Here is what I need the SimpleReplaceRule.java to do: 1 - Be able to have more than two spelling variations in the coherency.txt file (right now it can only be Word1;Word2). 2 - Find the first use of ANY of the spelling variations in a document that are found in coherency.txt and then make sure only that spelling is used throughout the document (ex. in the coherency.txt I have Word1;Word2;Word3 then in my document on the first line I write Word2. then on next line I write Word1 and Word 3 - then the grammar checker will flag Word1 and Word3 saying that I should use the spelling "Word2" instead...etc.). If anyone can help I would be grateful! Thanks for your time, Nathan
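
    A rough sketch of the two requested changes, assuming coherency.txt switches to a Word1;Word2;Word3 format and that the rule keeps a little state between sentences (the names variantGroups and firstSeenSpelling are illustrative, not part of LanguageTool's API; the existing reset() hook looks like the natural place to clear that state between documents):

        // Sketch only: every spelling variant maps to its whole group, and the first
        // variant seen in the document becomes the spelling all later ones must match.
        private final Map<String, Set<String>> variantGroups = new HashMap<String, Set<String>>();
        private String firstSeenSpelling;

        private void addVariantLine(final String line) {
          final String[] parts = line.split(";");                       // Word1;Word2;Word3;...
          final Set<String> group = new HashSet<String>(Arrays.asList(parts));
          for (final String variant : parts) {
            variantGroups.put(variant, group);                          // each variant maps to the whole group
          }
        }

        // inside match(), for each token:
        //   if the token is a key of variantGroups:
        //     - if firstSeenSpelling is null, remember the token as firstSeenSpelling;
        //     - else if the token differs from firstSeenSpelling and both belong to the
        //       same group, emit a RuleMatch suggesting firstSeenSpelling instead.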

    Read the article

  • ANTLRv3: How to set custom Lexer and Parser class names

    - by Filip
    Is there a way to specify custom class names (independent of the grammar name) for the ANTLRv3-generated Parser and Lexer classes? So in the case of grammar MDD; //other stuff it would automatically create MDDParser and MDDLexer, but I would like to have them, for example, as MDDBaseParser and MDDLexer.
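
    As far as I know, ANTLR v3 always derives the generated class names from the grammar name, so there is no option to rename them directly; the usual workarounds are to pick the grammar name so the generated names come out right, or to wrap/subclass the generated classes. A sketch of the latter (MDDBase here is an assumed grammar name, not something ANTLR invents for you):

        import org.antlr.runtime.TokenStream;

        // If the grammar is named MDDBase, ANTLR emits MDDBaseParser and MDDBaseLexer;
        // hand-written subclasses can then carry whatever names you actually want.
        public class MDDParser extends MDDBaseParser {
            public MDDParser(TokenStream input) {
                super(input);
            }
        }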

    Read the article

  • Convert regular expression to CFG

    - by user242581
    How can I convert a regular language to its equivalent context-free grammar (CFG)? Does the DFA corresponding to the regular expression have to be constructed, or is there some rule for the conversion? For example, considering the regular expression 01+10(11)*, how can I describe the grammar corresponding to this RE?
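
    A worked example, assuming the + here denotes union (as it usually does in formal-language notation) rather than "one or more": a regular expression can be converted to a CFG by structural recursion, without building the DFA -- union becomes alternative productions, concatenation becomes a sequence of symbols, and a starred subexpression becomes a right-recursive nonterminal with an ε alternative. For 01+10(11)* that gives:

        S -> 0 1             (the 01 branch of the union)
        S -> 1 0 A           (the 10(11)* branch)
        A -> 1 1 A | ε       (A generates (11)*)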

    Read the article

  • Treetop: parsing single node returns nil

    - by Matchu
    I'm trying to get the basic of Treetop parsing. Here's a very simple bit of grammar so that I can say ArithmeticParser.parse('2+2').value == 4. grammar Arithmetic rule additive first:number '+' second:number { def value first.value + second.value end } end rule number [1-9] [0-9]* { def value text_value.to_i end } end end Parsing 2+2 works correctly. However, parsing 2 or 22 returns nil. What did I miss?
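
    One likely cause: the root rule additive always demands a '+', so a bare number has nothing to match at the top level. A minimal, untested sketch of a fix is a front rule that falls back to number (Treetop treats the first rule in the grammar as the root, so it would need to go first):

        rule expression
          additive / number    # try the a+b form first, otherwise accept a bare number
        end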

    Read the article

  • How to get all captures of subgroup matches with preg_match_all()?

    - by hakre
    Update/Note: I think what I'm probably looking for is to get the captures of a group in PHP. Referenced: PCRE regular expressions using named pattern subroutines. (Read carefully:) I have a string that contains a variable number of segments (simplified): $subject = 'AA BB DD '; // could be 'AA BB DD CC EE ' as well I would like now to match the segments and return them via the matches array: $pattern = '/^(([a-z]+) )+$/i'; $result = preg_match_all($pattern, $subject, $matches); This will only return the last match for the capture group 2: DD. Is there a way that I can retrieve all subpattern captures (AA, BB, DD) with one regex execution? Isn't preg_match_all suitable for this? This question is a generalization. Both the $subject and $pattern are simplified. Naturally with such the general list of AA, BB, .. is much more easy to extract with other functions (e.g. explode) or with a variation of the $pattern. But I'm specifically asking how to return all of the subgroup matches with the preg_...-family of functions. For a real life case imagine you have multiple (nested) level of a variant amount of subpattern matches. Example This is an example in pseudo code to describe a bit of the background. Imagine the following: Regular definitions of tokens: CHARS := [a-z]+ PUNCT := [.,!?] WS := [ ] $subject get's tokenized based on these. The tokenization is stored inside an array of tokens (type, offset, ...). That array is then transformed into a string, containing one character per token: CHARS -> "c" PUNCT -> "p" WS -> "s" So that it's now possible to run regular expressions based on tokens (and not character classes etc.) on the token stream string index. E.g. regex: (cs)?cp to express one or more group of chars followed by a punctuation. As I now can express self-defined tokens as regex, the next step was to build the grammar. This is only an example, this is sort of ABNF style: words = word | (word space)+ word word = CHARS+ space = WS punctuation = PUNCT If I now compile the grammar for words into a (token) regex I would like to have naturally all subgroup matches of each word. words = (CHARS+) | ( (CHARS+) WS )+ (CHARS+) # words resolved to tokens words = (c+)|((c+)s)+c+ # words resolved to regex I could code until this point. Then I ran into the problem that the sub-group matches did only contain their last match. So I have the option to either create an automata for the grammar on my own (which I would like to prevent to keep the grammar expressions generic) or to somewhat make preg_match working for me somehow so I can spare that. That's basically all. Probably now it's understandable why I simplified the question. Related: pcrepattern man page Get repeated matches with preg_match_all()
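
    For the simplified case the limitation is PCRE itself: a repeated capturing group only reports its final repetition, so no single pattern of the ^(([a-z]+) )+$ shape can hand back AA, BB and DD from group 2. The usual workaround is to let preg_match_all supply the repetition instead of the group, roughly (a sketch of the simplified case only):

        <?php
        $subject = 'AA BB DD ';
        // match one segment per overall match; the repetition lives in preg_match_all
        preg_match_all('/([a-z]+) /i', $subject, $matches);
        // $matches[1] is array('AA', 'BB', 'DD')

    For the nested token-grammar case that motivates the question, the same limitation is why the compiled regex stalls; stepping through the stream with \G-anchored matches, or dropping to a small hand-rolled recursive matcher, are the usual escapes.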

    Read the article

  • How to generate a random HTML document

    - by karramba
    I'd like to generate a completely random piece of HTML source, possibly from a grammar. I want to do this in Python but I'm not sure how to proceed -- is there a library that takes a grammar and just randomly follows its rules, printing the path? Ideas?
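
    A library isn't strictly needed for a naive version: a few lines of Python can expand a hand-written grammar by picking random productions recursively. The grammar below is purely illustrative:

        import random

        # toy grammar: each nonterminal maps to a list of possible productions
        GRAMMAR = {
            "html":    [["<html><body>", "content", "</body></html>"]],
            "content": [["<p>", "text", "</p>"],
                        ["<p>", "text", "</p>", "content"],
                        ["<ul><li>", "text", "</li></ul>"]],
            "text":    [["hello"], ["world"], ["lorem ipsum"]],
        }

        def expand(symbol):
            """Expand a symbol by picking a random production; terminals pass through."""
            if symbol not in GRAMMAR:
                return symbol
            production = random.choice(GRAMMAR[symbol])
            return "".join(expand(part) for part in production)

        print(expand("html"))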

    Read the article

  • Why does ANTLR not parse the entire input?

    - by Martin Wiboe
    Hello, I am quite new to ANTLR, so this is likely a simple question. I have defined a simple grammar which is supposed to include arithmetic expressions with numbers and identifiers (strings that start with a letter and continue with one or more letters or numbers.) The grammar looks as follows: grammar while; @lexer::header { package ConFreeG; } @header { package ConFreeG; import ConFreeG.IR.*; } @parser::members { } arith: term | '(' arith ( '-' | '+' | '*' ) arith ')' ; term returns [AExpr a]: NUM { int n = Integer.parseInt($NUM.text); a = new Num(n); } | IDENT { a = new Var($IDENT.text); } ; fragment LOWER : ('a'..'z'); fragment UPPER : ('A'..'Z'); fragment NONNULL : ('1'..'9'); fragment NUMBER : ('0' | NONNULL); IDENT : ( LOWER | UPPER ) ( LOWER | UPPER | NUMBER )*; NUM : '0' | NONNULL NUMBER*; fragment NEWLINE:'\r'? '\n'; WHITESPACE : ( ' ' | '\t' | NEWLINE )+ { $channel=HIDDEN; }; I am using ANTLR v3 with the ANTLR IDE Eclipse plugin. When I parse the expression (8 + a45) using the interpreter, only part of the parse tree is generated: http://imgur.com/iBaEC.png Why does the second term (a45) not get parsed? The same happens if both terms are numbers. Thank you, Martin Wiboe
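
    Hard to be certain from the preview alone, but a very common cause of ANTLR v3 silently parsing only part of the input is a start rule that is not anchored to the end of input: the parser stops as soon as it has matched a valid prefix. Adding an EOF-anchored entry rule (a sketch) usually either fixes the behaviour or turns the truncation into a visible error:

        // parse through this rule instead of calling arith directly
        prog : arith EOF ;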

    Read the article

  • Learning Treetop

    - by cmartin
    I'm trying to teach myself Ruby's Treetop grammar generator. I am finding that not only is the documentation woefully sparse for the "best" one out there, but that it doesn't seem to work as intuitively as I'd hoped. On a high level, I'd really love a better tutorial than the on-site docs or the video, if there is one. On a lower level, here's a grammar I cannot get to work at all: grammar SimpleTest rule num (float / integer) end rule float ( (( '+' / '-')? plain_digits '.' plain_digits) / (( '+' / '-')? plain_digits ('E' / 'e') plain_digits ) / (( '+' / '-')? plain_digits '.') / (( '+' / '-')? '.' plain_digits) ) { def eval text_value.to_f end } end rule integer (( '+' / '-' )? plain_digits) { def eval text_value.to_i end } end rule plain_digits [0-9] [0-9]* end end When I load it and run some assertions in a very simple test object, I find: assert_equal @parser.parse('3.14').eval,3.14 Works fine, while assert_equal @parser.parse('3').eval,3 raises the error: NoMethodError: private method `eval' called for # If I reverse integer and float on the description, both integers and floats give me this error. I think this may be related to limited lookahead, but I cannot find any information in any of the docs to even cover the idea of evaluating in the "or" context A bit more info that may help. Here's pp information for both those parse() blocks. The float: SyntaxNode+Float4+Float0 offset=0, "3.14" (eval,plain_digits): SyntaxNode offset=0, "" SyntaxNode+PlainDigits0 offset=0, "3": SyntaxNode offset=0, "3" SyntaxNode offset=1, "" SyntaxNode offset=1, "." SyntaxNode+PlainDigits0 offset=2, "14": SyntaxNode offset=2, "1" SyntaxNode offset=3, "4": SyntaxNode offset=3, "4" The Integer... note that it seems to have been defined to follow the integer rule, but not caught the eval() method: SyntaxNode+Integer0 offset=0, "3" (plain_digits): SyntaxNode offset=0, "" SyntaxNode+PlainDigits0 offset=0, "3": SyntaxNode offset=0, "3" SyntaxNode offset=1, "" Update: I got my particular problem working, but I have no clue why: rule integer ( '+' / '-' )? plain_digits { def eval text_value.to_i end } end This makes no sense with the docs that are present, but just removing the extra parentheses made the match include the Integer1 class as well as Integer0. Integer1 is apparently the class holding the eval() method. I have no idea why this is the case. I'm still looking for more info about treetop.

    Read the article

  • OSLO, ANTLR or other parser grammar, for parsing QUERY EXPRESSION

    - by Jay Allard
    Greetings. I'm working on a project that requires me to write queries in text form, then convert them to some easily processed nodes to be handled by some ambiguous repository. Of everything there, the part I'm least interested in is the part that converts the text to nodes. I'm hoping it's already done somewhere. Because I'm making stuff up as I go, I chose to use a LINQ-ish expression syntax: from m in Movie select m.A, m.B I started parsing it manually and got the basics, but it's pretty cheesy. I'm looking for a better solution. I made some progress using MGrammar, but it would be nice if such a thing already existed. Does anyone know of anything that already does this? I looked for existing ANTLR templates, but no luck. Thanks for the help.

    Read the article

  • grammar in views

    - by jspooner
    I'm displaying a string like "I'm an Actor" or "I'm a skateboarder" and need to use "a" or "an" correctly. Is there a nifty Rails view helper to check whether a word starts with a vowel? <p>I'm an <%= @user.skills %></p> <p>I'm a <%= @user.skills %></p>
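
    There is no built-in Rails helper for this as far as I know, but a tiny custom helper covers the common case. The name and the naive vowel-letter test below are illustrative; words like "hour" or "user" would still need special-casing:

        # app/helpers/application_helper.rb (sketch)
        def with_indefinite_article(word)
          article = (word.to_s =~ /\A[aeiou]/i) ? "an" : "a"
          "#{article} #{word}"
        end

    Used in the view as <p>I'm <%= with_indefinite_article(@user.skills) %></p>.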

    Read the article

  • Transform data in FMPXMLRESULT grammar into a "Content Standard for Digital Geospatial Metadata (CS

    - by Andrew Igbo
    I have a problem in FileMaker; I wish to link the METADATA element/FIELD element “NAME” attribute to its corresponding data in the RESULTSET element/COL element. However, I also wish to map the METADATA element/FIELD element “NAME” to "Content Standard for Digital Geospatial Metadata (CSDGM)" metadata elements Sample XML Metadata Record with CSDGM Essential Elements Louisiana State University Coastal Studies Institute 20010907 Geomorphology and Processes of Land Loss in Coastal Louisiana, 1932 – 1990 A raster GIS file that identifies the land loss process and geomorphology associated with each 12.5 meter pixel of land loss between 1932 and 1990. Land loss processes are organized into a hierarchical classification system that includes subclasses for erosion, submergence, direct removal, and undetermined. Land loss geomorphology is organized into a hierarchical classification system that includes subclasses for both shoreline and interior loss. The objective of the study was to determine the land loss geomorphologies associated with specific processes of land loss in coastal Louisiana.

    Read the article

  • Python: How best to parse a simple grammar?

    - by Rosarch
    Ok, so I've asked a bunch of smaller questions about this project, but I still don't have much confidence in the designs I'm coming up with, so I'm going to ask a question on a broader scale. I am parsing pre-requisite descriptions for a course catalog. The descriptions almost always follow a certain form, which makes me think I can parse most of them. From the text, I would like to generate a graph of course pre-requisite relationships. (That part will be easy, after I have parsed the data.) Some sample inputs and outputs: "CS 2110" => ("CS", 2110) # 0 "CS 2110 and INFO 3300" => [("CS", 2110), ("INFO", 3300)] # 1 "CS 2110, INFO 3300" => [("CS", 2110), ("INFO", 3300)] # 1 "CS 2110, 3300, 3140" => [("CS", 2110), ("CS", 3300), ("CS", 3140)] # 1 "CS 2110 or INFO 3300" => [[("CS", 2110)], [("INFO", 3300)]] # 2 "MATH 2210, 2230, 2310, or 2940" => [[("MATH", 2210), ("MATH", 2230), ("MATH", 2310)], [("MATH", 2940)]] # 3 If the entire description is just a course, it is output directly. If the courses are conjoined ("and"), they are all output in the same list If the course are disjoined ("or"), they are in separate lists Here, we have both "and" and "or". One caveat that makes it easier: it appears that the nesting of "and"/"or" phrases is never greater than as shown in example 3. What is the best way to do this? I started with PLY, but I couldn't figure out how to resolve the reduce/reduce conflicts. The advantage of PLY is that it's easy to manipulate what each parse rule generates: def p_course(p): 'course : DEPT_CODE COURSE_NUMBER' p[0] = (p[1], int(p[2])) With PyParse, it's less clear how to modify the output of parseString(). I was considering building upon @Alex Martelli's idea of keeping state in an object and building up the output from that, but I'm not sure exactly how that is best done. def addCourse(self, str, location, tokens): self.result.append((tokens[0][0], tokens[0][1])) def makeCourseList(self, str, location, tokens): dept = tokens[0][0] new_tokens = [(dept, tokens[0][1])] new_tokens.extend((dept, tok) for tok in tokens[1:]) self.result.append(new_tokens) For instance, to handle "or" cases: def __init__(self): self.result = [] # ... self.statement = (course_data + Optional(OR_CONJ + course_data)).setParseAction(self.disjunctionCourses) def disjunctionCourses(self, str, location, tokens): if len(tokens) == 1: return tokens print "disjunction tokens: %s" % tokens How does disjunctionCourses() know which smaller phrases to disjoin? All it gets is tokens, but what's been parsed so far is stored in result, so how can the function tell which data in result corresponds to which elements of token? I guess I could search through the tokens, then find an element of result with the same data, but that feel convoluted... What's a better way to approach this problem?
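
    On the pyparsing side, the usual answer to "how does the action know which pieces it matched" is Group(): a grouped sub-expression hands its parse action exactly its own tokens, so nothing has to be fished back out of shared state. An untested sketch of just the course-level piece:

        from pyparsing import Word, alphas, nums, Group

        # Group() keeps each course's tokens together, so the parse action
        # receives only the department/number pair it matched
        dept = Word(alphas)
        number = Word(nums)
        course = Group(dept + number)
        course.setParseAction(lambda t: (t[0][0], int(t[0][1])))

        print(course.parseString("CS 2110"))   # -> [('CS', 2110)]

    Conjunctions and disjunctions can then be built the same way, each with its own Group and action, so the "or" handler only ever sees already-reduced course tuples rather than having to correlate raw tokens with earlier results.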

    Read the article

  • What is '=>'? (C# Grammar Question)

    - by Daniel
    I was watching a Silverlight tutorial video and came across an unfamiliar expression in the example code. What is =>? What is its name? Could you please provide me a link? I couldn't search for it because they are special characters. code: var ctx = new EventManagerDomainContext(); ctx.Events.Add(newEvent); ctx.SubmitChanges((op) => { if (!op.HasError) { NavigateToEditEvent(newEvent.EventID); } }, null);
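
    That is the lambda operator, introduced in C# 3.0: parameters on the left, body on the right, the whole thing being an anonymous function (here, the callback SubmitChanges invokes when the operation completes). A minimal standalone example:

        // x => x * x is an anonymous function that takes x and returns x * x
        Func<int, int> square = x => x * x;
        Console.WriteLine(square(5));   // prints 25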

    Read the article

  • Is this grammar SLR?

    - by Mike
    E -> A | B A -> a | c B -> b | c My answer is no, because it has a reduce/reduce conflict; can anyone else verify this? Also, I arrived at my answer by constructing the transition diagram; is there a simpler way of finding this out? Thanks for the help! P.S. Would a recursive descent parser be able to parse this?
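
    A quick confirmation without drawing the whole automaton: the state reached after shifting c contains two completed items, and SLR resolves completed items with FOLLOW sets, which here coincide:

        FOLLOW(E) = { $ }                 (E is the start symbol)
        FOLLOW(A) = FOLLOW(E) = { $ }     (A only appears in E -> A)
        FOLLOW(B) = FOLLOW(E) = { $ }     (B only appears in E -> B)

        state after shifting c:   A -> c .   and   B -> c .
        on lookahead $: reduce by A -> c and reduce by B -> c   => reduce/reduce conflict

    So the grammar is indeed not SLR(1). And since c is in FIRST of both A and B, a predictive recursive-descent parser with one token of lookahead cannot decide either; it would need backtracking, even though the language itself is just {a, b, c}.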

    Read the article

  • CSS grammar does not work under IE

    - by Eric
    The following CSS works well under Firefox but doesn't work under IE. Why, and how can I make the CSS apply only to the elements directly under the parent element? CSS: div{font:18px} .box>div>div{font:12px;} HTML: level1 level2 level3 level3 level2 level3 level3
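
    If the selector is meant to be the child combinator (.box>div>div), the explanation is browser support: IE6 ignores the child combinator entirely, while IE7 and later honour it in standards mode, so a proper doctype matters. A sketch of the intent:

        div { font-size: 18px; }
        .box > div > div { font-size: 12px; }  /* only direct children of a direct child of .box */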

    Read the article

  • How to coach a developer with dyslexia to improve his spelling and grammar capabilities?

    - by Uwe Keim
    Just having read this question regarding developers with dyslexia, I still have some open questions on how to deal with it: I have been working for approximately six months on a project with a new developer who just finished university. I see that the code quality is high; what he's missing is the ability to write texts (even short ones) in an error-free manner (both syntax and grammar errors). He is working on some UI stuff (VS.NET 2010, ASP.NET 4) and, besides coding, has to write short texts for labels, buttons, grid view headers, page titles, etc. Since even those texts contain errors, no matter how much I try to discuss the need for a professional, text-error-free UI, he does not seem to manage to get it right, although he really tries. So my questions are: Are there any hints on how he (or I) should proceed to improve the text quality? Do you know of any tools (like inline spell checkers) for VS.NET that highlight syntax and grammar errors? (We are working on a German-only UI, if that is important to know.)

    Read the article

  • Shift-reduce: when to stop reducing?

    - by Joey Adams
    I'm trying to learn about shift-reduce parsing. Suppose we have the following grammar, using recursive rules that enforce order of operations, inspired by the ANSI C Yacc grammar: S: A; P : NUMBER | '(' S ')' ; M : P | M '*' P | M '/' P ; A : M | A '+' M | A '-' M ; And we want to parse 1+2 using shift-reduce parsing. First, the 1 is shifted as a NUMBER. My question is, is it then reduced to P, then M, then A, then finally S? How does it know where to stop? Suppose it does reduce all the way to S, then shifts '+'. We'd now have a stack containing: S '+' If we shift '2', the reductions might be: S '+' NUMBER S '+' P S '+' M S '+' A S '+' S Now, on either side of the last line, S could be P, M, A, or NUMBER, and it would still be valid in the sense that any combination would be a correct representation of the text. How does the parser "know" to make it A '+' M So that it can reduce the whole expression to A, then S? In other words, how does it know to stop reducing before shifting the next token? Is this a key difficulty in LR parser generation?
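
    The short answer: an LR parser never reduces "as far as it can"; every reduction is dictated by the current state's items plus the lookahead token, all precomputed into the parse tables. After 1 is reduced to P, M and then A with '+' as the lookahead, the state still contains an item expecting '+', so the parser shifts rather than reducing A to S (which its table allows only on end-of-input). Roughly:

        state after NUMBER -> P -> M -> A, lookahead '+':
          S : A .            reduce to S only on end-of-input (FOLLOW(S) = { $ })
          A : A . '+' M      lookahead is '+'  =>  shift
          A : A . '-' M

    The same mechanism stops the right-hand side at M, leaving A '+' M on the stack, which then reduces to A (and finally to S at end-of-input). So it is not so much a difficulty of LR parser generation as the very thing the generated tables encode.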

    Read the article

  • How to create an AST with ANTLR from a hierarchical key-value syntax

    - by Brabster
    I've been looking at parsing a key-value data format with ANTLR. Pretty straightforward, but the keys represent a hierarchy. A simplified example of my input syntax: /a/b/c=2 /a/b/d/e=3 /a/b/d/f=4 In my mind, this represents a tree structured as follows: (a (b (= c 2) (d (= e 3) (= f 4)))) The nearest I can get is to use the following grammar: /* Parser Rules */ start: (component NEWLINE?)* EOF -> (component)*; component: FORWARD_SLASH ALPHA_STRING component -> ^(ALPHA_STRING component) | FORWARD_SLASH ALPHA_STRING EQUALS value -> ^(EQUALS ALPHA_STRING value); value: ALPHA_STRING; /* Lexer Rules */ NEWLINE : '\r'? '\n'; ALPHA_STRING : ('a'..'z'|'A'..'Z'|'0'..'9')+; EQUALS : '='; FORWARD_SLASH : '/'; Which produces: (a (b (= c 2))) (a (b (d (= e 3)))) (a (b (d (= f 4)))) I'm not sure whether I'm asking too much from a generic tool such as ANTLR here, and this is as close I can get with this approach. That is, from here I consume the parts of the tree and create the data structure I want by hand. So - can I produce the tree structure I want directly from a grammar? If so, how? If not, why not - is it a technical limitation in ANTLR or is it something more CS-y to do with the type of language involved?
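
    As far as I can tell this is a genuine limitation of tree rewrite rules: each top-level component yields its own subtree, and merging subtrees that share a path prefix across different input lines is stateful work the rewrite operators do not express. A common compromise is to keep the grammar as it is and fold the per-line paths into a nested structure afterwards; an illustrative Java sketch (names made up):

        // fold paths like a/b/c=2 into nested maps after parsing
        Map<String, Object> root = new HashMap<String, Object>();

        void insert(String[] path, String value) {
            Map<String, Object> node = root;
            for (int i = 0; i < path.length - 1; i++) {
                Map<String, Object> child = (Map<String, Object>) node.get(path[i]);
                if (child == null) {
                    child = new HashMap<String, Object>();
                    node.put(path[i], child);
                }
                node = child;
            }
            node.put(path[path.length - 1], value);   // leaf holds the key=value pair
        }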

    Read the article

  • How can I execute an ANTLR parser action for each item in a rule that can match more than one item?

    - by Chris Farmer
    I am trying to write an ANTLR parser rule that matches a list of things, and I want to write a parser action that can deal with each item in the list independently. Some example input for these rules is: $(A1 A2 A3) I'd like this to result in an evaluator that contains a list of three MyIdentEvaluator objects -- one for each of A1, A2, and A3. Here's a snippet of my grammar: my_list returns [IEvaluator e] : { $e = new MyListEvaluator(); } '$' LPAREN op=my_ident+ { /* want to do something here for each 'my_ident'. */ /* the following seems to see only the 'A3' my_ident */ $e.Add($op.e); } RPAREN ; my_ident returns [IEvaluator e] : IDENT { $e = new MyIdentEvaluator($IDENT.text); } ; I think my_ident is defined correctly, because I can see the three MyIdentEvaluators getting created as expected for my input string, but only the last my_ident ever gets added to the list (A3 in my example input). How can I best treat each of these elements independently, either through a grammar change or a parser action change? It also occurred to me that my vocabulary for these concepts is not what it should be, so if it looks like I'm misusing a term, I probably am. EDIT in response to Wayne's comment: I tried to use op+=my_ident+. In that case, the $op in my action becomes an IList (in C#) that contains Antlr.Runtime.Tree.CommonTree instances. It does give me one entry per matched token in $op, so I see my three matches, but I don't have the MyIdentEvaluator instances that I really want. I was hoping I could then find a rule attribute in the ANTLR docs that might help with this, but nothing seemed to help me get rid of this IList. Result... Based on chollida's answer, I ended up with this which works well: my_list returns [IEvaluator e] : { $e = new MyListEvaluator(); } '$' LPAREN (op=my_ident { $e.Add($op.e); } )+ RPAREN ; The Add method gets called for each match of my_ident.

    Read the article

  • Fixed number format in ANTLR

    - by sarav
    How do I specify a number with a fixed number of digits in an ANTLR grammar? I want to parse a line which contains fields with a fixed number of characters, where each field is a number. 0034|9056|4567|0987|-2340| +345|1000 The above is a sample line; | indicates field boundaries. The fields can include blank characters and +/- signs.
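
    ANTLR grammars have no bounded-repetition operator like a regex {4}, so the usual options are to spell the width out in the lexer rule, or to accept a general number and check its length in an action. A rough, untested sketch for four-character numeric fields with an optional leading blank or sign (the real widths would need adjusting to the actual format):

        fragment DIGIT : '0'..'9' ;
        FIELD : (' ' | '+' | '-')? DIGIT DIGIT DIGIT DIGIT ;   // optional sign/blank plus four digits
        line  : FIELD ('|' FIELD)* ;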

    Read the article
