Search Results

Search found 19217 results on 769 pages for 'log parser'.

Page 6 of 769

  • hand coding a parser

    - by John Leidegren
    For all you compiler gurus, I want to write a recursive descent parser and I want to do it with just code. No generating lexers and parsers from some other grammar, and don't tell me to read the dragon book, I'll come around to that eventually. I want to get into the gritty details of implementing a lexer and parser for a reasonably simple language, say CSS. And I want to do this right. This will probably end up being a series of questions, but right now I'm starting with a lexer. Tokenization rules for CSS can be found here. I find myself writing code like this (hopefully you can infer the rest from this snippet): public CssToken ReadNext() { int val; while ((val = _reader.Read()) != -1) { var c = (char)val; switch (_stack.Top) { case ParserState.Init: if (c == ' ') { continue; // ignore } else if (c == '.') { _stack.Transition(ParserState.SubIdent, ParserState.Init); } break; case ParserState.SubIdent: if (c == '-') { _token.Append(c); } _stack.Transition(ParserState.SubNMBegin); break; What is this called? And how far off am I from something reasonably well understood? I'm trying to balance something which is fair in terms of efficiency and easy to work with; using a stack to implement some kind of state machine is working quite well, but I'm unsure how to continue like this. What I have is an input stream, from which I can read 1 character at a time. I don't do any lookahead right now, I just read the character and then, depending on the current state, try to do something with it. I'd really like to get into the mindset of writing reusable snippets of code. The Transition method is currently meant to do that: it pops the current state off the stack and then pushes the arguments in reverse order. That way, when I write Transition(ParserState.SubIdent, ParserState.Init), it will "call" a subroutine SubIdent which will, when complete, return to the Init state. The parser will be implemented in much the same way. Currently, having everything in a single big method like this allows me to easily return a token when I find one, but it also forces me to keep everything in one single big method. Is there a nice way to split these tokenization rules into separate methods? Any input/advice on the matter would be greatly appreciated! (A sketch of one way to split the states into separate methods follows this entry.)

    Read the article
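
    One common way to split the tokenization rules is to give each lexer state its own method and let ordinary method calls and returns play the role of the state stack. A minimal sketch follows (written in Java rather than the C# above; the class, states, and token handling are invented for illustration, not taken from the question's code):

        import java.io.IOException;
        import java.io.PushbackReader;
        import java.io.Reader;

        // Sketch only: each lexer state becomes a method; returning from the method
        // plays the role of popping that state off the explicit stack.
        class TinyCssLexer {
            private final PushbackReader in;

            TinyCssLexer(Reader reader) { this.in = new PushbackReader(reader); }

            // Returns the next token as a string, or null at end of input ("Init" state).
            String readNext() throws IOException {
                int val;
                while ((val = in.read()) != -1) {
                    char c = (char) val;
                    if (Character.isWhitespace(c)) continue;       // ignore, stay in Init
                    if (c == '.') return classSelector();           // '.' hands off to a sub-routine
                    if (Character.isLetterOrDigit(c) || c == '-' || c == '_') {
                        in.unread(c);
                        return ident("");                           // plain identifier
                    }
                    return String.valueOf(c);                       // other punctuation: one-char token
                }
                return null;
            }

            // Roughly the "SubIdent" state from the question: '.' followed by an identifier.
            private String classSelector() throws IOException {
                return ident(".");
            }

            // Consumes identifier characters; a real CSS lexer would also handle escapes.
            private String ident(String prefix) throws IOException {
                StringBuilder sb = new StringBuilder(prefix);
                int val;
                while ((val = in.read()) != -1) {
                    char c = (char) val;
                    if (Character.isLetterOrDigit(c) || c == '-' || c == '_') {
                        sb.append(c);
                    } else {
                        in.unread(c);   // push the lookahead character back for the next token
                        break;
                    }
                }
                return sb.toString();
            }
        }

    Each helper returns when its token is complete, which gives the same "call a sub-routine, then fall back to Init" behaviour as the explicit Transition stack, while keeping each rule in its own method.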

  • How to get lookahead symbol when constructing LR(1) NFA for parser?

    - by greenoldman
    I am reading an explanation (the awesome "Parsing Techniques" by D. Grune and C.J.H. Jacobs; p. 292 in the 2nd edition) of how to construct an LR(1) parser, and I am at the stage of building the initial NFA. What I don't understand is how to get/compute a lookahead symbol. Here is the example from the book, the grammar: S -> E E -> E - T E -> T T -> ( E ) T -> n n is a terminal. The transitions that look "weird" to me are this sequence: 1) S -> . E eof 2) E -> . E - T eof 3) E -> . E - T - 4) E -> E . - T - 5) E -> E - . T - (Note: In the above table, the state numbers are in front and the lookahead symbol is at the end.) What puzzles me is that the transition from (4) to (5) means reading the - token, right? So how is it that - is still a lookahead symbol, and even more important, why is it that eof is no longer a lookahead symbol? After all, in an input such as n - n eof there is only one - symbol. My naive thinking tells me (5) should be written as: 5) E -> E - . T - eof And another thing: n is a terminal. Why is it not used at all as a lookahead symbol? I mean, we expect to see - or (, which is fine, but does the lack of n mean we are sure it won't appear in the input? Update: after more reading I am only more confused ;-) I.e. what really is a lookahead? Because I see a state such as this (p. 292, 2nd column, 2nd row): E -> E . - T eof The lookahead says eof but the incoming input says -. Isn't that a contradiction? And it is not only in this book. (A short note on how the closure rule produces these lookaheads follows this entry.)

    Read the article
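
    For reference, the lookaheads in those items fall out of the standard LR(1) closure rule (general textbook material, not a quote from this book): for an item [A -> alpha . B beta, a], closure adds [B -> . gamma, b] for every production B -> gamma and every terminal b in FIRST(beta a). Applied to the grammar above:

        [S -> . E, eof]       closing on E with beta empty gives FIRST(eof) = {eof},
                              so we add [E -> . E - T, eof] and [E -> . T, eof]
        [E -> . E - T, eof]   closing on the inner E with beta = "- T" gives FIRST(- T eof) = {-},
                              so we add [E -> . E - T, -] and [E -> . T, -]

    The lookahead is the terminal expected after the whole right-hand side has been reduced, and it is only consulted when the item is complete and a reduce decision has to be made; it says nothing about the next input symbol while the dot is still inside the production. That is why [E -> E - . T, -] keeps - as its lookahead even though a - was just shifted, and why [E -> E - . T, eof] exists as a separate item for the outermost E.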

  • exception occurred in java compiler

    - by user2892977
    I am a beginner in Java. I have JDK 1.7.0 installed on a Windows 7 OS. I just wrote a sample Java file which does not compile and throws the error below. Sam.java:5: ';' expected Sample p = New Sample(); An exception has occurred in the compiler (1.7.0-ea). Please file a bug at the Java Developer Connection (http://java.sun.com/webapps/bugreport) after checking the Bug Parade for duplicates. Include your program and the following diagnostic in your report. Thank you. java.lang.StringIndexOutOfBoundsException: String index out of range: 26 at java.lang.String.charAt(String.java:694) at com.sun.tools.javac.util.Log.printErrLine(Log.java:251) at com.sun.tools.javac.util.Log.writeDiagnostic(Log.java:343) at com.sun.tools.javac.util.Log.report(Log.java:315) at com.sun.tools.javac.util.AbstractLog.error(AbstractLog.java:96) at com.sun.tools.javac.parser.Parser.reportSyntaxError(Parser.java:295) at com.sun.tools.javac.parser.Parser.accept(Parser.java:326) at com.sun.tools.javac.parser.Parser.blockStatements(Parser.java:1599) at com.sun.tools.javac.parser.Parser.block(Parser.java:1500) at com.sun.tools.javac.parser.Parser.block(Parser.java:1514) at com.sun.tools.javac.parser.Parser.methodDeclaratorRest(Parser.java:2569) at com.sun.tools.javac.parser.Parser.classOrInterfaceBodyDeclaration(Parser.java:2518) at com.sun.tools.javac.parser.Parser.classOrInterfaceBody(Parser.java:2445) at com.sun.tools.javac.parser.Parser.classDeclaration(Parser.java:2290) at com.sun.tools.javac.parser.Parser.classOrInterfaceOrEnumDeclaration(Parser.java:2228) at com.sun.tools.javac.parser.Parser.typeDeclaration(Parser.java:2217) at com.sun.tools.javac.parser.Parser.compilationUnit(Parser.java:2163) at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:530) at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:571) at com.sun.tools.javac.main.JavaCompiler.parseFiles(JavaCompiler.java:822) at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:748) at com.sun.tools.javac.main.Main.compile(Main.java:386) at com.sun.tools.javac.main.Main.compile(Main.java:312) at com.sun.tools.javac.main.Main.compile(Main.java:303) at com.sun.tools.javac.Main.compile(Main.java:82) at com.sun.tools.javac.Main.main(Main.java:67) Below is the code for the Sam.java file: class sam { public static void main(String args[]) { Sample p = New Sample(); p.show(); p.display(); } } I searched Google for the various compiler options but that did not help. I would like to understand the two errors below. 1 - Sam.java:5: ';' expected 2 - An exception has occurred in the compiler (1.7.0-ea) (A sketch of a version that compiles appears after this entry.)

    Read the article
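
    For reference, the "';' expected" error comes from the capitalized New; the StringIndexOutOfBoundsException after it is javac itself crashing while printing that error, which is a compiler problem in the 1.7.0-ea early-access build rather than in the source file, so retrying on a released JDK is worthwhile. A sketch of a version that compiles (the Sample class and its method bodies here are placeholders, assumed only so the example is self-contained):

        // Sam.java -- placeholder Sample class so the example is self-contained.
        class Sample {
            void show()    { System.out.println("show"); }
            void display() { System.out.println("display"); }
        }

        class Sam {
            public static void main(String[] args) {
                Sample p = new Sample();   // lower-case 'new'; 'New Sample()' is not valid Java
                p.show();
                p.display();
            }
        }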

  • awstats parse of postfix mail log drops all records

    - by accidental admin
    I'm trying to get awstats to parse the postfix mail log, but it drops almost all entries with messages like: Corrupted record (date 20091204042837 lower than 20091211065829-20000): 2009-12-04 04:28:37 root root localhost 127.0.0.1 SMTP - 1 17480 A few more are dropped with an invalid LogFormat: Corrupted record line 24 (record format does not match LogFormat parameter): 2009-11-16 04: 28:22 root root localhost 127.0.0.1 SMTP - 14755 My conf has LogFormat="%time2 %email %email_r %host %host_r %method %url %code %bytesd" which I believe matches the log format (and besides, it is the log format I've seen everywhere for awstats mail parsing). It is also the same entry format as all the other entries in the mail log. Whatever is left is dropped too: Dropped record (host localhost and 127.0.0.1 not qualified by SkipHosts): 2009-12-07 04:28:36 root root localhost 127.0.0.1 SMTP - 1 17152 I added SkipHosts="" to the .conf file but to no avail. I feel like awstats really has some personal quarrel with me today.

    Read the article

  • SQL transaction log backups conflicting with full backups?

    - by BradC
    On our SQL servers (2000, 2005, and 2008), we run full backups once a day in the evening, and transaction log backups every 2 hrs. We haven't really worried about these two processes conflicting, but lately we've run into some of the following issues: On one server, the trans log backup occasionally blocks the full backup and must be manually stopped before the full backup can complete. We sometimes end up with a massive trans log backup file (sometimes larger than the full backup!) that seems to occur at the same time the full backup is running. I found a reference that indicates that these are "not allowed" to run at the same time, whatever that means: SQL 2000 Books Online and SQL 2005 Books Online. I'm not sure whether that means that the server will simply prevent them from running simultaneously, or if we ought to be explicitly stopping the log backups while the full backups are running. So are there known conflicts/issues between these? Does the answer differ between SQL versions? Should I have the trans log backup job check to see if the full backup is running before it executes? (and how do I do that...?)

    Read the article

  • Get sessions' remote IP from Teamviewer log file

    - by etuardu
    I'd like to know who has logged in to my machine and when. I have two TeamViewer log files: Connections_incoming.txt and TeamViewer7_Logfile.log. The first one is quite plain and lists, as its name says, the incoming connections to the machine, reporting the local name of the remote host, login time, logout time, and some ids, e.g.: 173274362 MYLAPTOP 20-02-2012 17:32:16 20-02-2012 17:50:42 Master RemoteControl {C5AAE483-ED0B-54B8-9235-7AE597CAD342} This is almost all I need, but unfortunately no remote IP address is reported here, so I checked for IPs in TeamViewer7_Logfile.log, but it is really messy. It does contain some IP addresses, but I can't tell which one goes with which entry in the first log file. Is there a way to correlate the two logs to get what I need? Should I search the second file for some particular text? What do you suggest?

    Read the article

  • IIS 7.5 log to: sql server vs file

    - by stacker
    I want to know whether having IIS log directly to SQL Server is resource-costly, or whether a better solution would be to generate log files and import them into SQL Server every hour. Is it a very big cost to log each request directly to SQL Server? The pages open a connection to the database for each request anyway.

    Read the article

  • Creating a custom view for windows log based on a "Contains {text}" rule

    - by jussinen
    I have a server running Windows Server 2008. I'm using Windows Server auditing to check when and by which user a folder is modified, to determine who is modifying it, as the modifications are causing problems. I can see the audit entries in the System log when a change is made. How do I create a Custom View that will return all events from the System log where a certain text (the folder name) is present? The Create Custom View dialog doesn't seem to have that option. I'm not sure whether it's possible via a custom XML query or whether I'll need to export the System log to CSV and search in Excel. John

    Read the article

  • Event log message size 31885? Windows 2008

    - by testuser
    We recently upgraded our production boxes to Windows 2008 from Windows 2003 servers. Everything works fine except the event logging. We log at most 32000 bytes of data for each message. On the 2008 servers, event logging fails if the number of characters is greater than 31885. Is this a new limit on Windows 2008 R2 servers? Any help is appreciated. On the Win 2003 servers, I am able to log 32000 bytes of data for each log entry.

    Read the article

  • Apache log lines contain "..."

    - by mtah
    We have a custom log line format for Apache logs which are analyzed. CustomLog "|/usr/sbin/rotatelogs -l /mnt/var/log/apache2/access-%Y%m%d%H%M%S.log 900" "%a %{%s}t \"%r\"" However, some log lines are mysteriously shortened with "...". How can this be? The shortest line discovered where this occurs is 317 characters, while the longest line is well over 2000 characters. "GET /exposure?sg=&ap=0x0&fv=WIN%2010,0,22,87&si=IH95VDUAVLJ0&pt=Lage%20hjemmelaget%20sengegavl%20-%20Forum%20-%20Diskusjon.no&iv=0&sd=1024x600&ct=680&tz=-120&eu=http%3A//www.diskusjon.no/index.php%3Fshowtopic%3D1011139&l...AS3&an=NO%20-%20180x500%20Pretail%20CPC&wd=1024x483&rf=http%3A//www.google.no/search%3Fhl%3Dno%26source%3Dhp%26q%3Dsengegavl+lage%26meta%3D%26aq%3D2%26aqi%3Dg10%26aql%3D%26oq%3Dsengega%26gs_rfai%3D&ui=3INYF5QAZL10&ws=0x417&ad=180x500&sa= HTTP/1.1"

    Read the article

  • Reading log files from web application

    - by Egorinsk
    Hi! I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that log files can be really large, like hundreds of megabytes. I have some ideas: Write a shell script that would be run via sudo and tail the last 512 Kb of the log into a separate file that can be read by the application - that's inefficient, because it forks a new process and the data has to be read twice. Add www-data to the adm group (which can read logs) - that's insecure. Start a PHP process via cron every minute to read the logs - that's not very good, because it doesn't allow real-time monitoring; also, this script will be started even when I'm not reading logs, and will consume CPU time (the server is in the cloud, and I'll have to pay for it). Create a hardlink for all log files with lowered permissions - I guess that won't work, because logrotate could recreate the log files and their inode numbers would change. Start a separate nginx/Apache server under a privileged user that may read the logs. Does anyone have a better solution?

    Read the article

  • Cannot write log file 'ffmpeg2pass-0.log' for pass-1 encoding: Permission denied

    - by matt_tm
    Our PHP application is installed as 'root' on a RedHat 5/CentOS system at /var/www/html/beta/. After disabling SELinux in order to allow these scripts to execute other programs on the system (see http://serverfault.com/questions/192951/what-permissions-are-needed-to-run-a-system-command-within-a-php-script-that-wr), I now face an error; the Apache error_log shows this: Cannot write log file 'ffmpeg2pass-0.log' for pass-1 encoding: Permission denied

    Read the article

  • How to further improve error messages in Scala parser-combinator based parsers?

    - by rse
    I've coded a parser based on Scala parser combinators: class SxmlParser extends RegexParsers with ImplicitConversions with PackratParsers { [...] lazy val document: PackratParser[AstNodeDocument] = ((procinst | element | comment | cdata | whitespace | text)*) ^^ { AstNodeDocument(_) } [...] } object SxmlParser { def parse(text: String): AstNodeDocument = { var ast = AstNodeDocument() val parser = new SxmlParser() val result = parser.parseAll(parser.document, new CharArrayReader(text.toArray)) result match { case parser.Success(x, _) => ast = x case parser.NoSuccess(err, next) => { tool.die("failed to parse SXML input " + "(line " + next.pos.line + ", column " + next.pos.column + "):\n" + err + "\n" + next.pos.longString) } } ast } } Usually the resulting parsing error messages are rather nice. But sometimes it is just sxml: ERROR: failed to parse SXML input (line 32, column 1): `"' expected but `' found ^ This happens if a quote character is not closed and the parser reaches the EOT. What I would like to see here is (1) what production the parser was in when it expected the '"' (I have multiple productions that expect one) and (2) where in the input this production started parsing (which is an indicator of where the opening quote is in the input). Does anybody know how I can improve the error messages and include more information about the actual internal parsing state when the error happens (perhaps something like a production-rule stack trace, or whatever can reasonably be given here to better identify the error location)? BTW, the above "line 32, column 1" is actually the EOT position and hence of no use here, of course.

    Read the article

  • Boost.Log - Multiple processes to one log file?

    - by Kevin
    The documentation for Boost.Log explains pretty well how to "fan out" into multiple files/sinks from one application, and how to get multiple threads working together to log to one place, but is there any documentation on how to get multiple processes logging to a single log file? What I imagine is that every process would log to its own "private" log file, but in addition, any messages above a certain severity would also go to a "common" log file. Is this possible with Boost.Log? Is there some configuration of the sinks that makes this easy? I understand that I will likely have the same "timestamp out of order" problem described in the FAQ here, but that's OK; as long as the timestamps are correct I can work with that. This is all on one machine, so no remote filesystem problems either.

    Read the article

  • Nokogiri pull parser (Nokogiri::XML::Reader) issue with self closing tag

    - by Vlad Zloteanu
    I have a huge XML (400 MB) containing products. Using a DOM parser is therefore excluded, so I tried to parse and process it using a pull parser. Below is a snippet from the each_product(&block) method where I iterate over the product list. Basically, using a stack, I transform each <product> ... </product> node into a hash and process it. while (reader.read) case reader.node_type #start element when Nokogiri::XML::Node::ELEMENT_NODE elem_name = reader.name.to_s stack.push([elem_name, {}]) #text element when Nokogiri::XML::Node::TEXT_NODE, Nokogiri::XML::Node::CDATA_SECTION_NODE stack.last[1] = reader.value #end element when Nokogiri::XML::Node::ELEMENT_DECL return if stack.empty? elem = stack.pop parent = stack.last if parent.nil? yield(elem[1]) elem = nil next end key = elem[0] parent_childs = parent[1] # ... parent_childs[key] = elem[1] end The issue is with self-closing tags (e.g. <country/>), as I cannot tell the difference between a 'normal' and a 'self-closing' tag. They are both of type Nokogiri::XML::Node::ELEMENT_NODE and I am not able to find any other discriminator in the documentation. Any ideas on how to solve this issue?

    Read the article

  • C++ JSON parser

    - by pollux
    Dear reader, I'm working on a twitter client which uses the twitter streaming JSON API. Twitter advises JSON, as the XML version is deprecated. I'm looking for a good JSON parser that can parse the JSON data below, which is what I'm receiving and want to be able to read/parse. { "in_reply_to_status_id": null, "text": "Home-plate umpire Crawford gets stung http://tinyurl.com/27ujc86", "favorited": false, "coordinates": null, "in_reply_to_user_id": null, "source": "<a href=\"http://apiwiki.twitter.com/\" rel=\"nofollow\">API</a>", "geo": null, "created_at": "Fri Jun 18 15:12:06 +0000 2010", "place": null, "user": { "profile_text_color": "333333", "screen_name": "HostingViral", "time_zone": "Pacific Time (US & Canada)", "url": "http://bit.ly/1Way7P", "profile_link_color": "228235", "profile_background_image_url": "http://s.twimg.com/a/1276654401/images/themes/theme14/bg.gif", "description": "Full time Internet Marketer - Helping other reach their Goals\r\nhttp://wavemarker.com", "statuses_count": 1944, "profile_sidebar_fill_color": "c7b7c7", "profile_background_tile": true, "contributors_enabled": false, "lang": "en", "notifications": null, "created_at": "Wed Dec 30 07:50:52 +0000 2009", "profile_sidebar_border_color": "120412", "following": null, "geo_enabled": false, "followers_count": 2485, "protected": false, "friends_count": 2495, "location": "Working at Home", "name": "Johnathan Thomas", "verified": false, "profile_background_color": "131516", "profile_image_url": "http://a1.twimg.com/profile_images/600114776/nessykalvo421_normal.jpg", "id": 100439873, "utc_offset": -28800, "favourites_count": 0 }, "in_reply_to_screen_name": null, "id": 16477056501, "contributors": null, "truncated": false } This is the raw string (the beautified version is above): {"in_reply_to_status_id":null,"text":"Home-plate umpire Crawford gets stung http://tinyurl.com/27ujc86","favorited":false,"coordinates":null,"in_reply_to_user_id":null,"source":"<a href=\"http://apiwiki.twitter.com/\" rel=\"nofollow\">API</a>","geo":null,"created_at":"Fri Jun 18 15:12:06 +0000 2010","place":null,"user":{"profile_text_color":"333333","screen_name":"HostingViral","time_zone":"Pacific Time (US & Canada)","url":"http://bit.ly/1Way7P","profile_link_color":"228235","profile_background_image_url":"http://s.twimg.com/a/1276654401/images/themes/theme14/bg.gif","description":"Full time Internet Marketer - Helping other reach their Goals\r\nhttp://wavemarker.com","statuses_count":1944,"profile_sidebar_fill_color":"c7b7c7","profile_background_tile":true,"contributors_enabled":false,"lang":"en","notifications":null,"created_at":"Wed Dec 30 07:50:52 +0000 2009","profile_sidebar_border_color":"120412","following":null,"geo_enabled":false,"followers_count":2485,"protected":false,"friends_count":2495,"location":"Working at Home","name":"Johnathan Thomas","verified":false,"profile_background_color":"131516","profile_image_url":"http://a1.twimg.com/profile_images/600114776/nessykalvo421_normal.jpg","id":100439873,"utc_offset":-28800,"favourites_count":0},"in_reply_to_screen_name":null,"id":16477056501,"contributors":null,"truncated":false} I've tried four JSON parsers from json.org now and can't find one that can parse the JSON above. Kind regards, Pollux

    Read the article

  • Javascript BBCode Parser recognizes only first list element

    - by nolandark
    I have a really simple Javascript BBCode parser for client-side live preview (I don't want to use Ajax for that). The problem is that this parser only recognizes the first list element: function bbcode_parser(str) { search = new Array( /\[b\](.*?)\[\/b\]/, /\[i\](.*?)\[\/i\]/, /\[img\](.*?)\[\/img\]/, /\[url\="?(.*?)"?\](.*?)\[\/url\]/, /\[quote](.*?)\[\/quote\]/, /\[list\=(.*?)\](.*?)\[\/list\]/i, /\[list\]([\s\S]*?)\[\/list\]/i, /\[\*\]\s?(.*?)\n/); replace = new Array( "<strong>$1</strong>", "<em>$1</em>", "<img src=\"$1\" alt=\"An image\">", "<a href=\"$1\">$2</a>", "<blockquote>$1</blockquote>", "<ol>$2</ol>", "<ul>$1</ul>", "<li>$1</li>"); for (i = 0; i < search.length; i++) { str = str.replace(search[i], replace[i]); } return str;} [list] [*] adfasdfdf [*] asdfadsf [*] asdfadss [/list] Only the first element is converted to an HTML list element; the rest stays as BBCode: adfasdfdf [*] asdfadsf [*] asdfadss I tried playing around with "\s", "\S" and "\n" but I'm mostly used to PHP regex and totally new to Javascript regex. Any suggestions?

    Read the article

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow... I am using Dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestion on a faster, more efficient implementation, keeping in mind I have very tough CPU and memory constraints? [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be important. For instance I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node. [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point. (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java ) (A sketch of reusing a single parser instance follows this entry.)

    Read the article
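
    Since the documents are tiny, parser start-up tends to dominate, so one option is to pay the factory/builder cost once and reuse the same parser for every document. A minimal sketch with plain JAXP (javax.xml rather than dom4j; whether it actually beats dom4j under these constraints would have to be measured):

        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.xml.sax.InputSource;

        // Reuses one DocumentBuilder for many small documents (single-threaded use assumed).
        final class SmallDocParser {
            private final DocumentBuilder builder;

            SmallDocParser() throws Exception {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                factory.setNamespaceAware(false);        // the documents have no namespaces
                factory.setValidating(false);
                builder = factory.newDocumentBuilder();  // pay the start-up cost once
            }

            Document parse(String xml) throws Exception {
                builder.reset();  // clear any state left over from the previous parse
                return builder.parse(new InputSource(new StringReader(xml)));
            }
        }

    The same idea applies to dom4j: constructing a single SAXReader up front and calling read() on it repeatedly avoids going through XMLReaderFactory.createXMLReader for every document.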

  • Robust, Mature HTML Parser for PHP

    - by Alan Storm
    Are there any robust and mature HTML parsers available for PHP? A quick skimming of PEAR didn't turn anything up (lots of classes for generating HTML, not so much for consuming), and Google taught me that a lot of people have started and then abandoned a variety of parser projects. I'm not interested in XML parsers (unless they can consume non-well-formed HTML) or in hacking it on my own with regular expressions. Clarification of intent: I'm not interested in filtering HTML content, I'm interested in extracting information from HTML documents.

    Read the article

  • Regarding parser DOM and REGEX

    - by giri
    Hi, I am writing an application in Java and I need to fetch specific data from a website. I do not know which one to use, a regex or a parser. Can anybody please advise me on how to get this done, and which one is preferred? Thanks (A sketch of the parser approach follows this entry.)

    Read the article
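
    For pulling specific values out of real-world HTML, a parser is usually the safer choice, because the markup a regex was written against tends to change (attribute order, whitespace, nesting). A minimal sketch using the jsoup HTML parser (jsoup is a suggestion, not something from the question; the URL and CSS selector are placeholders):

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;

        public class FetchValue {
            public static void main(String[] args) throws Exception {
                // Placeholder URL and selector -- replace with the real page and element.
                Document doc = Jsoup.connect("http://example.com/page.html").get();
                Element value = doc.select("span.price").first();
                System.out.println(value != null ? value.text() : "not found");
            }
        }

    A plain java.util.regex pattern is fine when the target is a short, stable string on a page you control, but for anything structural the parser approach above is generally preferred.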
