Search Results

Search found 91 results on 4 pages for 'lexical'.


  • Need an end of lexical scope action which can die normally

    - by Schwern
    I need the ability to add actions to the end of a lexical block where the action might die. And I need the exception to be thrown normally and be able to be caught normally. Unfortunately, Perl special-cases exceptions during DESTROY, both by adding "(in cleanup)" to the message and by making them untrappable. For example:

        {
            package Guard;
            use strict;
            use warnings;

            sub new {
                my $class = shift;
                my $code  = shift;
                return bless $code, $class;
            }

            sub DESTROY {
                my $self = shift;
                $self->();
            }
        }

        use Test::More tests => 2;

        my $guard_triggered = 0;
        ok !eval {
            my $guard = Guard->new(
        #line 24
                sub { $guard_triggered++; die "En guarde!" }
            );
            1;
        }, "the guard died";
        is $@, "En guarde! at $0 line 24\n", "with the right error message";
        is $guard_triggered, 1, "the guard worked";

    I want that to pass. Currently the exception is totally swallowed by the eval. This is for Test::Builder2, so I cannot use anything but pure Perl. The underlying issue is that I have code like this:

        {
            $self->setup;
            $user_code->();
            $self->cleanup;
        }

    That cleanup must happen even if the $user_code dies, else $self gets into a weird state. So I did this:

        {
            $self->setup;
            my $guard = Guard->new(sub { $self->cleanup });
            $user_code->();
        }

    The complexity comes because the cleanup runs arbitrary user code, and it is a use case where that code will die. I expect that exception to be trappable and unaltered by the guard. I'm avoiding wrapping everything in eval blocks because of the way that alters the stack.
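
    As an aside for readers: the semantics being asked for (cleanup always runs, and an exception raised by the cleanup itself stays trappable) are exactly what a try/finally construct gives. A Python analogue of the pattern, purely to pin down the desired behaviour; all names here are invented:

        # try/finally analogue of the guard: cleanup always runs, and if the
        # cleanup itself raises, the exception propagates normally to the
        # caller (unlike Perl's DESTROY, which swallows/mangles it).
        def run_guarded(setup, user_code, cleanup):
            setup()
            try:
                user_code()
            finally:
                cleanup()   # may raise; the exception stays trappable

        run_guarded(lambda: print("setup"),
                    lambda: print("user code"),
                    lambda: print("cleanup"))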

    Read the article

  • Big problem with regular expression in Lex (lexical analyzer)

    - by Nazgulled
    Hi, I have some content like this:

        author = "Marjan Mernik and Viljem Zumer",
        title = "Implementation of multiple attribute grammar inheritance in the tool LISA",
        year = 1999

        author = "Manfred Broy and Martin Wirsing",
        title = "Generalized Heterogeneous Algebras and
                 Partial Interpretations",
        year = 1983

        author = "Ikuo Nakata and Masataka Sassa",
        title = "L-Attributed LL(1)-Grammars are
                 LR-Attributed",
        journal = "Information Processing Letters"

    And I need to catch everything between the double quotes for title. My first try was this:

        ^(" "|\t)+"title"" "*=" "*"\"".+"\","

    which catches the first example, but not the other two. Those span multiple lines, and that's the problem. I thought about adding \n somewhere to allow multiple lines, like this:

        ^(" "|\t)+"title"" "*=" "*"\""(.|\n)+"\","

    But this doesn't help; instead, it catches everything. Then I thought: what I want is between double quotes, so what if I catch everything until I find another " followed by ,? That way I could know I was at the end of the title, no matter the number of lines, like this:

        ^(" "|\t)+"title"" "*=" "*"\""[^"\""]+","

    But this has another problem... The examples above don't show it, but the double quote symbol (") can appear inside the title declaration itself. For instance:

        title = "aaaaaaa \"X bbbbbb",

    And yes, it will always be preceded by a backslash (\). Any suggestions to fix this regexp?
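
    A common idiom for exactly this case is to let the quoted part be "any escaped character, or anything that is not a quote or a backslash"; since a negated character class also matches newlines, it handles multi-line titles and \" escapes at once. A sketch using Python's re module (the pattern body carries over to a lex rule):

        import re

        # Sketch: match  title = "..."  where the quoted part may span
        # lines and may contain backslash-escaped quotes.
        title_re = re.compile(r'^[ \t]+title[ ]*=[ ]*"((?:[^"\\]|\\.)*)"\s*,', re.M)

        text = '  title = "aaaaaaa \\"X bbbbbb",\n  year = 1999'
        print(title_re.search(text).group(1))   # aaaaaaa \"X bbbbbb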

    Read the article

  • Error compiling flex (the lexical analyzer)

    - by Maulrus
    I'm trying to install flex on my Windows computer. I have MSYS installed. I untar flex and ./configure it, but when I try to make it, I get this error:

        In file included from ccl.c:34:
        flexdef.h:94:19: error: regex.h: No such file or directory
        In file included from ccl.c:34:
        flexdef.h:1195: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'regex_linedir'
        flexdef.h:1197: error: expected ')' before '*' token
        flexdef.h:1198: error: expected ')' before '*' token
        flexdef.h:1199: error: expected ')' before '*' token
        flexdef.h:1200: error: expected ')' before '*' token
        flexdef.h:1201: error: expected ')' before '*' token
        flexdef.h:1202: error: expected ')' before '*' token
        make[2]: *** [ccl.o] Error 1
        make[1]: *** [all-recursive] Error 1
        make: *** [all] Error 2

    Until recently, I've only ever installed things using an .exe, so I'm pretty confused by this. Installing bison and m4 both went smoothly, and I'm wondering why this isn't. Any ideas?

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for lack of a better one. A Word is arguably the fundamental unit of communication, as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language is a Glyph to another. Unicode 5.1 currently assigns more than 100,000 values to these Glyphs. Out of the approximately 180,000 Words used in Modern English, it is said that with a vocabulary of about 2,000 Words you should be able to converse in general terms. A "Lexical Encoding" would encode each Word, not each Letter, and encapsulate them within a Sentence.

        // A simplified example of a "Lexical Encoding"
        String sentence = "How are you today?";
        int[] encoded = { 93, 22, 14, 330, QUERY };

    In this example each Token in the String was encoded as an Integer. The Encoding Scheme here simply assigned an int value based on a generalised statistical ranking of word usage, and assigned a constant to the question mark. Ultimately, though, a Word has both a Spelling & Meaning. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure. What are other examples of "Lexical Encoding" techniques? If you are interested in where the word-usage statistics come from: http://www.wordcount.org
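
    To make the idea concrete, here is a toy sketch of such an encoder in Python. The rank numbers are taken from the example above; the vocabulary and the QUERY constant are, of course, invented:

        # Toy "Lexical Encoding": words -> frequency-rank IDs (ranks invented).
        RANKS = {"how": 93, "are": 22, "you": 14, "today": 330}
        QUERY = -1   # hypothetical constant standing in for '?'

        def encode(sentence):
            words = sentence.rstrip("?").lower().split()
            codes = [RANKS[w] for w in words]
            if sentence.endswith("?"):
                codes.append(QUERY)
            return codes

        print(encode("How are you today?"))   # [93, 22, 14, 330, -1]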

    Read the article

  • What follows after lexical analysis?

    - by madflame991
    I'm working on a toy compiler (for some simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the string of tokens? Here's what I've gathered so far:

    - One can already do syntax highlighting having only the list of tokens: numbers and operators get coloured accordingly, and keywords also.
    - Autoformatting (indenting) should also be possible (a quick sketch of this appears below). How? Specify for each token type how many white spaces or newline characters should follow it. Also, while printing tokens, maintain an alignment variable: when the code printer reads "{" it increments the alignment variable by 1, and decrements it by 1 for "}"; whenever it starts printing on a new line, it aligns according to this variable.
    - In languages without nested subroutines, one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).
    - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (OK, you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").
    - Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls whom: if one can identify the body of a function, then one can also search it for mentions of other functions' names.
    - Gathering statistics about the code, like number of lines, instructions, or subroutines.

    EDIT: I clarified why I think some processes are possible. As I read comments and responses, I realise that the answer depends very much on the language that I'm parsing.
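
    As a rough illustration of the autoformatting point, a minimal token-stream indenter might look like this in Python (the token set and spacing rules are invented for the sketch):

        # Minimal sketch: re-indent code straight from a token stream.
        # '{' bumps the indent level, '}' drops it; ';' ends a line.
        def format_tokens(tokens):
            out, line, indent = [], [], 0
            for tok in tokens:
                if tok == "}":
                    indent -= 1
                line.append(tok)
                if tok in ("{", "}", ";"):
                    out.append("    " * indent + " ".join(line))
                    line = []
                    if tok == "{":
                        indent += 1
            if line:
                out.append("    " * indent + " ".join(line))
            return "\n".join(out)

        print(format_tokens(["if", "(", "c", ")", "{", "a", "=", "b", ";", "}"]))
        # if ( c ) {
        #     a = b ;
        # }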

    Read the article

  • Python serialize lexical closures?

    - by dsimcha
    Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc.; it just has to work. For example:

        def foo(bar, baz):
            def closure(waldo):
                return baz * waldo
            return closure

    I'd like to be able to just dump instances of closure to a file and read them back. Edit: One relatively obvious way this could be solved is with some reflection hacks to convert lexical closures into class objects and vice versa. One could then convert to classes, serialize, unserialize, and convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure, and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.
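
    For what it's worth, the class-based workaround mentioned in the edit can be sketched without any reflection at all, if you are willing to write the class by hand. A hypothetical rewrite of foo from the question:

        import pickle

        # Hand-written stand-in for the closure: the closed-over value is
        # stored as an attribute, so ordinary pickle can serialize it.
        class Closure:
            def __init__(self, baz):
                self.baz = baz
            def __call__(self, waldo):
                return self.baz * waldo

        def foo(bar, baz):
            return Closure(baz)   # duck-typed: callers still just call it

        f = pickle.loads(pickle.dumps(foo(1, 3)))
        print(f(4))   # 12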

    Read the article

  • Difference between lexical, morphological and semantic mistakes?

    - by AnH
    Hi there, I just want to make sure that I have understood the differences between lexical, morphological, and semantic mistakes correctly. If I'm not mistaken, semantic mistakes deal with problems of meaning: for example, writing a sentence that is grammatically correct but doesn't make any sense is a semantic mistake. Morphological mistakes deal more or less with how a word should look: for example, "childrens" is a morphological mistake. So that leaves lexical mistakes: what are those, exactly? Can someone sum up the differences between these three kinds of mistakes, so that I may know for sure that I've got them down correctly? Thank you in advance.

    Read the article

  • 'Lexical' scoping of type parameters in C#

    - by leppie
    I have two scenarios. This fails:

        class F<X>
        {
            public X X { get; set; }
        }

        error CS0102: The type 'F<X>' already contains a definition for 'X'

    This works:

        class F<X>
        {
            class G
            {
                public X X { get; set; }
            }
        }

    The only logical explanation is that in the second snippet the type parameter X is out of scope, which is not true... Why should a type parameter affect my definitions in a type? IMO, for consistency, either both should work or neither should work. Any other ideas? PS: I call it 'lexical', but that is probably not the correct term.

    Read the article

  • Lexical Analyzer (Scanner) for Language G using C/C++

    - by udsha
    Given the following program fragment in language G:

        int a = 20;
        int b = 30;
        float c;
        c = 20 + a;
        if (c) {
            a = c*b + a;
        } else {
            c = a - b + c;
        }

    Use C/C++ to implement a lexer:

    1. Create an unambiguous grammar for language G.
    2. Create a lexical analyzer for language G (a rough tokenizer sketch follows below).
    3. It should identify the tokens and lexemes of that language.
    4. Create a parse tree.
    5. To use an attribute grammar on the parse tree, the values of the intrinsic attributes should be available in the symbol table.
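
    Not a full answer to the assignment, but a regular-expression-driven tokenizer for the fragment above can be sketched in a few lines of Python. The token names are my own, and keywords like 'int' and 'if' simply come out as identifiers here:

        import re

        # Sketch of a tokenizer for the fragment above (token names invented).
        TOKEN_SPEC = [
            ("NUMBER", r"\d+"),
            ("ID",     r"[A-Za-z_]\w*"),
            ("OP",     r"[-+*/=]"),
            ("PUNCT",  r"[;(){}]"),
            ("SKIP",   r"\s+"),
        ]
        MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

        def tokenize(src):
            for m in MASTER.finditer(src):
                if m.lastgroup != "SKIP":
                    yield (m.lastgroup, m.group())

        print(list(tokenize("c = 20 + a;")))
        # [('ID', 'c'), ('OP', '='), ('NUMBER', '20'), ('OP', '+'), ('ID', 'a'), ('PUNCT', ';')]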

    Read the article

  • Possible typos in ECMAScript 5 specification?

    - by Andy West
    Does anybody know why, at the end of section 7.6 of the ECMA-262, 5th Edition specification, the nonterminals UnicodeLetter, UnicodeCombiningMark, UnicodeDigit, UnicodeConnectorPunctuation, and UnicodeEscapeSequence are not followed by two colons? From section 5.1.6: "Nonterminal symbols are shown in italic type. The definition of a nonterminal is introduced by the name of the nonterminal being defined followed by one or more colons. (The number of colons indicates to which grammar the production belongs.)" Since lexical productions are distinguished by having two colons, and this is under "Lexical Conventions", I'm assuming they meant to put the colons in. Does that sound right? Just making sure that these really are nonterminals and that they really are part of the lexical grammar. EDIT: I noticed there have been votes to close this. Just to make my case for why this is programming-related: it is relevant to anyone wanting to implement an ECMAScript interpreter.

    Read the article

  • Fuzzy Regex, Text Processing, Lexical Analysis?

    - by justinzane
    I'm not quite sure what terminology to search for, so my title is funky... Here is the workflow I've got:

    1. Semi-structured documents are scanned to file.
    2. The files are OCR'd to text.
    3. The text is parsed into Python objects.
    4. The objects are serialized (to SQL, JSON, whatever) for use.

    The documents are structured like this:

        HEADER blah blah, Page ### blah
        Garbage text...
        1. Question Text...
        continued until now.
        A. Choice text... adsadsf.
        B. Another Choice...
        2. Another Question...

    I need to extract the questions and choices. The problem is that, because the text is OCR output, there are occasional strange substitutions like '2' -> 'Z', which makes ordinary regular expressions useless. I've tried the Levenshtein module and it helps, but it requires prior knowledge of what edit distance is to be expected. I don't know whether I'm looking to create a parser? A lexer? Something else? This has led me down all kinds of interesting but non-relevant paths. Guidance would be greatly appreciated. Oh, also, the text is generally from specific technical domains, so general spelling tools are not so helpful. Regarding the structure of the documents, there is no clear visual pattern -- like line breaks or indentation -- with the exception of the fact that "questions" usually begin a line. Crap on the document can cause characters to appear before the actual beginning of the line, which means that something along the lines of r'^[0-9]+' does not work reliably. Though the "questions" always begin with an int, a period and a space, the OCR can substitute other characters or skip characters. This is not so much a problem with Tesseract or Cuneiform, rather with the poor quality of the paper documents. # Note: for the project in question, it was decided that having a human prep the OCR'd text was better than spending the time coding a solution. I'd still love good pointers, however.
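
    One small, concrete version of the "loosened anchor" idea, sketched in Python: tolerate a little leading junk and the usual OCR digit look-alikes instead of demanding r'^[0-9]+' exactly. The look-alike set here is an assumption and would need tuning against real scans:

        import re

        # Sketch: detect "question" lines despite OCR noise. Allows up to
        # three junk characters before the number and accepts common digit
        # look-alikes ('Z' for '2', 'O' for '0', ...).
        QUESTION = re.compile(r"^.{0,3}?[0-9OIZSB]+[.,]\s")

        for line in ["2. Another Question...", "xZ. Another Question...", "A. Choice"]:
            print(repr(line), "->", bool(QUESTION.match(line)))
        # '2. Another Question...' -> True
        # 'xZ. Another Question...' -> True
        # 'A. Choice' -> False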

    Read the article

  • Lexical and dynamic scoping in Mathematica: Local variables with Module, With, and Block

    - by dreeves
    The following code returns 14, as you'd expect:

        Block[{expr},
          expr = 2 z;
          f[z_] = expr;
          f[7]]

    But if you change that Block to a Module, then it returns 2*z. It seems not to matter what other variables besides expr you localize. I thought I understood Module, Block, and With in Mathematica, but I can't explain the difference in behavior between Module and Block in this example. Related resources:

    - Tutorial on Modularity and the Naming of Things from the Mathematica documentation
    - Excerpt from a book by Paul R. Wellin, Richard J. Gaylord, and Samuel N. Kamin
    - Explanation from Dave Withoff on the Mathematica newsgroup

    Read the article

  • lexical analysis gives only one output?

    - by Caffè
    I tested this example (lexe.java), but it gave me only one output. I gave this text to the reader:

        public class LexeTest{
            private int a = 14;
        }

    And the nextToken() function is:

        public Category nextToken () {
            if (inp.findWithinHorizon (tokenPat, 0) == null)
                return Category.EOF;
            else {
                lastLexeme = inp.match ().group (0);
                if (inp.match ().start (1) != -1)
                    return nextToken ();
                else if (inp.match ().start (2) != -1)
                    return Category.IDENT;
                else if (inp.match ().start (3) != -1)
                    return Category.NUMERAL;
                Category result = tokenMap.get (lastLexeme);
                if (result == null)
                    return Category.ERROR;
                else
                    return result;
            }
        }

    Inside the main method:

        System.out.println(lexeObject.nextToken());

    The output is:

        IDENT

    Why? The text file contains multiple keywords. Anyone know what the problem is?
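
    A note for readers: a single call to nextToken() returns a single token, so printing it once prints only the first one; the scanner has to be pumped in a loop until EOF. A Python-flavoured sketch of that loop (the Lexer stand-in is invented; names echo the question):

        # Sketch: drain a scanner one token at a time until EOF.
        class Lexer:
            def __init__(self, tokens):
                self.tokens = iter(tokens)
            def nextToken(self):
                return next(self.tokens, "EOF")

        lexer = Lexer(["PUBLIC", "CLASS", "IDENT", "LBRACE"])
        while (tok := lexer.nextToken()) != "EOF":
            print(tok)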

    Read the article

  • How do I implement an if statement in Flex/Bison?

    - by Imran
    Hello, I need help with flex/bison. I'm a beginner; I have already looked at example programs and understood some things, but I'm still learning. My problem is that I want to implement an if-statement via flex/bison and I don't know how to start. Does anyone have an idea? I'm very thankful for all your help. Here is an example of what I want to implement:

        :L1 IF FLAG AND X"0001"
            EVT 23;
        ELSE
            WAIT 500 ms;
            JMP L1;
        END IF;

    How do I implement JMP (jump)? When a JMP comes, it has to jump to the label L1.
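
    For readers landing here: the standard technique for this in a one-pass compiler is backpatching, i.e. emitting jumps with placeholder targets and filling them in once the end of the construct is known. A toy Python sketch of the idea; the instruction set is invented and does not match the question's dialect exactly:

        # Backpatching sketch: emit jumps with unknown targets first, then
        # patch them once the end of the if/else is reached.
        def gen_if(cond_code, then_code, else_code):
            out = list(cond_code)
            jfalse = len(out); out.append(("JMPF", None))   # to else-part
            out += then_code
            jend = len(out); out.append(("JMP", None))      # over else-part
            out[jfalse] = ("JMPF", len(out))                # else starts here
            out += else_code
            out[jend] = ("JMP", len(out))                   # end of the if
            return out

        program = gen_if([("TEST", "FLAG")], [("EVT", 23)], [("WAIT", 500)])
        for addr, ins in enumerate(program):
            print(addr, *ins)
        # 0 TEST FLAG
        # 1 JMPF 4
        # 2 EVT 23
        # 3 JMP 5
        # 4 WAIT 500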

    Read the article

  • Bison input analyzer - basic question on optional grammar and input interpretation

    - by kumar_m_kiran
    Hi all, I am very new to Flex/Bison, so this may be a naive question; pardon me if so. It may look like a homework question, but I need to implement a project based on the concept below. My question has two parts.

    Question 1: In a Bison parser, how do I provide rules for optional input (a sketch of the idea follows below)? For example, I need to parse the statement:

        -country='USA' -state='INDIANA' -population='100' -ratio='0.5' -comment='Census study for Indiana'

    Here the ratio token can be optional. Similarly, if many tokens are optional, how do I write the grammar for that? My code looks like:

        %start program
        program : TK_COUNTRY TK_IDENTIFIER TK_STATE TK_IDENTIFIER TK_POPULATION TK_IDENTIFIER ...

    where all the tokens are defined in the lexer. Since there are many optional tokens, if I use "|" there will be many different possible input combinations.

    Question 2: There is a good chance that the comment might have quotes as part of the input, so I have added a -tag token which the user can provide to interpret them:

        -country='USA' -state='INDIANA' -population='100' -ratio='0.5' -comment='Census study for Indiana$'s population' -tag=$

    Now I need to reinterpret Indiana$'s as Indiana's, since -tag=$. Please provide any input or related material for understanding these topics. Thanks for your input in advance.
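
    On the first question, the usual grammar trick is to parse a repeated list of key/value pairs rather than one fixed sequence, and to check for required keys afterwards. The same idea in a few lines of Python, as a sketch of the approach rather than Bison code:

        import re

        # Sketch: treat the input as a list of -key='value' pairs, so any
        # switch (like -ratio) can simply be absent; required keys are
        # checked after parsing instead of being baked into the grammar.
        PAIR = re.compile(r"-(\w+)='([^']*)'")

        line = ("-country='USA' -state='INDIANA' -population='100' "
                "-comment='Census study for Indiana'")
        opts = dict(PAIR.findall(line))
        print(opts.get("ratio", "<absent>"))          # optional: simply missing
        assert {"country", "state"} <= opts.keys()    # required keys, checked later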

    Read the article

  • Convert C++Builder AnsiString to std::string via boost::lexical_cast

    - by David Klein
    For a school assignment I have to implement a project in C++ using Borland C++ Builder. As the VCL uses AnsiString for all GUI components, I have to convert all of my std::strings to AnsiString for the sake of displaying. This:

        std::string inp = "Hello world!";
        AnsiString outp(inp.c_str());

    works, of course, but it is a bit tedious to write and is code duplication I want to avoid. As we use Boost in other contexts, I decided to provide some helper functions to get boost::lexical_cast to work with AnsiString. Here is my implementation so far:

        std::istream& operator>>(std::istream& istr, AnsiString& str)
        {
            istr.exceptions(std::ios::badbit | std::ios::failbit | std::ios::eofbit);
            std::string s;
            std::getline(istr, s);
            str = AnsiString(s.c_str());
            return istr;
        }

    In the beginning I got Access Violation after Access Violation, but since I added the .exceptions() stuff the picture gets clearer. When the conversion is performed, I get the following exception:

        ios_base::eofbit set [Runtime Error/std::ios_base::failure]

    Does anyone have an idea how to fix it, and can you explain why the error occurs? My C++ experience is very limited. The conversion routine the other way round would be:

        std::ostream& operator<<(std::ostream& ostr, const AnsiString& str)
        {
            ostr << str.c_str();
            return ostr;
        }

    Maybe someone will spot an error here too :)

    Edit: At the moment I'm using the edited version of Jem's answer. It works in the beginning, but after a while of using the program, Borland CodeGuard reports pointer arithmetic in already-freed regions. Any ideas how this could be related? The CodeGuard log (from the German version; translated here):

        ------------------------------------------
        Error 00080. 0x104230 (r) (Thread 0x07A4):
        Pointer arithmetic in freed memory: 0x0241A238-0x0241A258.
        | d:\program files\borland\bds\4.0\include\dinkumware\sstream line 126:
        |     { // not first growth, adjust pointers
        |     _Seekhigh = _Seekhigh - _Mysb::eback() + _Ptr;
        |>    _Mysb::setp(_Mysb::pbase() - _Mysb::eback() + _Ptr,
        |         _Mysb::pptr() - _Mysb::eback() + _Ptr, _Ptr + _Newsize);
        |     if (_Mystate & _Noread)
        Call stack:
        0x00411731(=FOSChampion.exe:0x01:010731) d:\program files\borland\bds\4.0\include\dinkumware\sstream#126
        0x00411183(=FOSChampion.exe:0x01:010183) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#465
        0x0040933D(=FOSChampion.exe:0x01:00833D) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#151
        0x00405988(=FOSChampion.exe:0x01:004988) d:\program files\borland\bds\4.0\include\dinkumware\ostream#679
        0x00405759(=FOSChampion.exe:0x01:004759) D:\Projekte\Schule\foschamp\src\Server\Ansistringkonverter.h#31
        0x004080C9(=FOSChampion.exe:0x01:0070C9) D:\Projekte\Schule\foschamp\lib\boost_1_34_1\boost/lexical_cast.hpp#151

        Object (0x0241A238) [size: 32 bytes] was created with new:
        | d:\program files\borland\bds\4.0\include\dinkumware\xmemory line 28:
        |     _Ty _FARQ *_Allocate(_SIZT _Count, _Ty _FARQ *)
        |     { // allocate storage for _Count elements of type _Ty
        |>    return ((_Ty _FARQ *)::operator new(_Count * sizeof (_Ty)));
        |     }
        Call stack:
        0x0040ED90(=FOSChampion.exe:0x01:00DD90) d:\program files\borland\bds\4.0\include\dinkumware\xmemory#28
        0x0040E194(=FOSChampion.exe:0x01:00D194) d:\program files\borland\bds\4.0\include\dinkumware\xmemory#143
        0x004115CF(=FOSChampion.exe:0x01:0105CF) d:\program files\borland\bds\4.0\include\dinkumware\sstream#105
        0x00411183(=FOSChampion.exe:0x01:010183) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#465
        0x0040933D(=FOSChampion.exe:0x01:00833D) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#151
        0x00405988(=FOSChampion.exe:0x01:004988) d:\program files\borland\bds\4.0\include\dinkumware\ostream#679

        Object (0x0241A238) was deleted with delete:
        | d:\program files\borland\bds\4.0\include\dinkumware\xmemory line 138:
        |     void deallocate(pointer _Ptr, size_type)
        |     { // deallocate object at _Ptr, ignore size
        |>    ::operator delete(_Ptr);
        |     }
        Call stack:
        0x004044C6(=FOSChampion.exe:0x01:0034C6) d:\program files\borland\bds\4.0\include\dinkumware\xmemory#138
        0x00411628(=FOSChampion.exe:0x01:010628) d:\program files\borland\bds\4.0\include\dinkumware\sstream#111
        0x00411183(=FOSChampion.exe:0x01:010183) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#465
        0x0040933D(=FOSChampion.exe:0x01:00833D) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#151
        0x00405988(=FOSChampion.exe:0x01:004988) d:\program files\borland\bds\4.0\include\dinkumware\ostream#679
        0x00405759(=FOSChampion.exe:0x01:004759) D:\Projekte\Schule\foschamp\src\Server\Ansistringkonverter.h#31
        ------------------------------------------

    Ansistringkonverter.h is the file with the posted operators, and line 31 is:

        std::ostream& operator<<(std::ostream& ostr, const AnsiString& str)
        {
            ostr << str.c_str();   // line 31
            return ostr;
        }

    Thanks for your help :)

    Read the article

  • How to write syntax highlighting?

    - by ML
    I am embarking on some learning and I want to write my own syntax highlighting for files in C++. Can anyone give me ideas on how to go about doing this? To me it seems that when a file is opened, you need:

    1. To parse the file and decide what type of source file it is. Trusting the extension might not be fool-proof.
    2. A way to know what keywords/commands apply to what language.
    3. A way to decide what color each keyword/command gets (see the sketch below).

    I want to do this on OS X, in C++ or Objective-C. Can anyone provide pointers on how I might get started with this?
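
    Not an OS X-specific answer, but the keyword-to-colour step in points 2 and 3 can be illustrated in a few lines. Here is a sketch in Python using ANSI terminal colours as a stand-in for real GUI colours; the keyword list is just a sample, and a real highlighter would pick it per detected language:

        import re

        # Sketch: colour keywords and numbers in a line of source text.
        KEYWORDS = {"int", "if", "else", "while", "return"}

        def highlight(src):
            def colour(match):
                word = match.group()
                if word in KEYWORDS:
                    return "\033[34m" + word + "\033[0m"   # keywords in blue
                if word.isdigit():
                    return "\033[31m" + word + "\033[0m"   # numbers in red
                return word
            return re.sub(r"\w+", colour, src)

        print(highlight("int a = 20; if (a) return a;"))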

    Read the article

  • Flex/bison, error: undeclared

    - by Imran
    Hello, I have a problem: the following program gives back an error, "undeclared (first use in this function)". Why does this error appear? All the tokens are declared, but the error still comes. Can anyone help me? Here are the lex and yacc files. Thanks.

    Lex:

        %{
        int yylinenu = 1;
        int yycolno  = 1;
        %}
        %x STR

        DIGIT          [0-9]
        ALPHA          [a-zA-Z]
        ID             {ALPHA}(_?({ALPHA}|{DIGIT}))*_?
        GROUPED_NUMBER ({DIGIT}{1,3})(\.{DIGIT}{3})*
        SIMPLE_NUMBER  {DIGIT}+
        NUMMER         {GROUPED_NUMBER}|{SIMPLE_NUMBER}

        %%
        <INITIAL>{
        [\n]       {++yylinenu; yycolno = 1;}
        [ ]+       {yycolno = yycolno + yyleng;}
        [\t]+      {yycolno = yycolno + (yyleng * 8);}
        "*"        {return MAL;}
        "+"        {return PLUS;}
        "-"        {return MINUS;}
        "/"        {return SLASH;}
        "("        {return LINKEKLAMMER;}
        ")"        {return RECHTEKLAMMER;}
        "{"        {return LINKEGESCHWEIFTEKLAMMER;}
        "}"        {return RECHTEGESCHEIFTEKLAMMER;}
        "="        {return GLEICH;}
        "=="       {return GLEICHVERGLEICH;}
        "!="       {return UNGLEICH;}
        "<"        {return KLEINER;}
        ">"        {return GROSSER;}
        "<="       {return KLEINERGLEICH;}
        ">="       {return GROSSERGLEICH;}
        "while"    {return WHILE;}
        "if"       {return IF;}
        "else"     {return ELSE;}
        "printf"   {return PRINTF;}
        ";"        {return SEMIKOLON;}
        \/\/[^\n]* {;}
        {NUMMER}   {return NUMBER;}
        {ID}       {return IDENTIFIER;}
        \"         {BEGIN(STR);}
        .          {;}
        }
        <STR>{
        \n         {++yylinenu; yycolno = 1;}
        ([^\"\\]|"\\t"|"\\n"|"\\r"|"\\b"|"\\\"")+ {return STRING;}
        \"         {BEGIN(INITIAL);}
        }
        %%
        int yywrap() { return 1; }

    Yacc:

        %{
        #include <stdio.h>
        #include <string.h>
        #include "lex.yy.c"
        void yyerror(char *err);
        int error = 0, linecnt = 1;
        %}

        %token IDENTIFIER NUMBER STRING COMMENT PLUS MINUS MAL SLASH
        %token LINKEKLAMMER RECHTEKLAMMER LINKEGESCHWEIFTEKLAMMER RECHTEGESCHEIFTEKLAMMER
        %token GLEICH GLEICHVERGLEICH UNGLEICH GROSSER KLEINER GROSSERGLEICH KLEINERGLEICH
        %token IF ELSE WHILE PRINTF SEMIKOLON

        %start Stmts
        %%
        Stmts : Stmt        {puts("\t\tStmts : Stmt");}
              | Stmt Stmts  {puts("\t\tStmts : Stmt Stmts");}
              ;
        // new rule
        Stmt  : LINKEGESCHWEIFTEKLAMMER Stmts RECHTEGESCHEIFTEKLAMMER
                            {puts("\t\tStmt : '{' Stmts '}'");}
              | IF LINKEKLAMMER Cond RECHTEKLAMMER Stmt
                            {puts("\t\tStmt : '(' Cond ')' Stmt");}
              | IF LINKEKLAMMER Cond RECHTEKLAMMER Stmt ELSE Stmt
                            {puts("\t\tStmt : '(' Cond ')' Stmt 'ELSE' Stmt");}
              | WHILE LINKEKLAMMER Cond RECHTEKLAMMER Stmt
                            {puts("\t\tStmt : 'WHILE' '(' Cond ')' Stmt");}
              | PRINTF Expr SEMIKOLON
                            {puts("\t\tStmt : 'PRINTF' Expr ';'");}
              | IDENTIFIER GLEICH Expr SEMIKOLON
                            {puts("\t\tStmt : 'IDENTIFIER' '=' Expr ';'");}
              | SEMIKOLON   {puts("\t\tStmt : ';'");}
              ;
        // new rule
        Cond  : Expr GLEICHVERGLEICH Expr {puts("\t\tCond : '==' Expr");}
              | Expr UNGLEICH Expr        {puts("\t\tCond : '!=' Expr");}
              | Expr KLEINER Expr         {puts("\t\tCond : '<' Expr");}
              | Expr KLEINERGLEICH Expr   {puts("\t\tCond : '<=' Expr");}
              | Expr GROSSER Expr         {puts("\t\tCond : '>' Expr");}
              | Expr GROSSERGLEICH Expr   {puts("\t\tCond : '>=' Expr");}
              ;
        // new rule
        Expr  : Term            {puts("\t\tExpr : Term");}
              | Term PLUS Expr  {puts("\t\tExpr : Term '+' Expr");}
              | Term MINUS Expr {puts("\t\tExpr : Term '-' Expr");}
              ;
        // new rule
        Term  : Factor            {puts("\t\tTerm : Factor");}
              | Factor MAL Term   {puts("\t\tTerm : Factor '*' Term");}
              | Factor SLASH Term {puts("\t\tTerm : Factor '/' Term");}
              ;
        // new rule
        Factor : SimpleExpr       {puts("\t\tFactor : SimpleExpr");}
               | MINUS SimpleExpr {puts("\t\tFactor : '-' SimpleExpr");}
               ;
        // new rule
        SimpleExpr : LINKEKLAMMER Expr RECHTEKLAMMER {puts("\t\tSimpleExpr : '(' Expr ')'");}
                   | IDENTIFIER {puts("\t\tSimpleExpr : 'IDENTIFIER'");}
                   | NUMBER     {puts("\t\tSimpleExpr : 'NUMBER'");}
                   | STRING     {puts("\t\tSimpleExpr : 'String'");}
                   ;
        // end
        %%
        void yyerror(char *msg)
        {
            error = 1;
            printf("Line: %d, Column: %d: %s (%s)\n", yylinenu, yycolno, yytext, msg);
        }

        int main(int argc, char *argv[])
        {
            while (yylex()) {
                printf("%s\n", yytext);
            }
            return yyparse();
        }

    Read the article

  • How do you implement syntax highlighting?

    - by ML
    I am embarking on some learning and I want to write my own syntax highlighting for files in C++. Can anyone give me ideas on how to go about doing this? To me it seems that when a file is opened, you need:

    - To parse the file and decide what type of source file it is. Trusting the extension might not be fool-proof.
    - A way to know what keywords/commands apply to what language.
    - A way to decide what color each keyword/command gets.

    I want to do this on OS X, using C++ or Objective-C. Can anyone provide pointers on how I might get started with this?

    Read the article

  • Syntactical analysis with Flex/Bison part 2

    - by Imran
    Hello, I need help with Lex/Yacc programming. I wrote a compiler that does syntactical analysis on inputs of many statements. Now I have a particular problem. For a given input, the compiler prints the right output: which statement is used, a constant operator, or a jmp instruction to some label. Now I have to handle the case where an if-statement comes in: first the first command (the one before the else) must be emitted; when the condition of the if holds, execution must then jump to the end, because the command after the else isn't needed; after this jmp, the second command must be emitted. I'll show it with an example, so maybe you understand what I mean:

        Input       adr.  Output
        if(x==0)    10    if(x==0)
        Wait 5      20    WAIT 5
        else        30    JMP 50
        Wait 1      40    WAIT 1
        end         50    END

    I have an idea: maybe I can do it with a special if rule like

        IF exp jmp_stmt_end stmt_seq END

    When the if-statement appears in the input, the compiler has to recognize the end of the statement and, like the jmp_stmt in my compiler (you can download the files from http://bitbucket.org/matrix/changed-tiny), jump only to the end. I hope you understand my problem. Thanks.

    Read the article
