Search Results

Search found 21350 results on 854 pages for 'url parsing'.

Page 47 of 854

  • What's a good library for parsing mathematical expressions in Java?

    - by CSharperWithJava
    I'm an Android developer, and as part of my next app I will need to evaluate a large variety of user-created mathematical expressions and equations. I am looking for a good Java library that is lightweight and can evaluate mathematical expressions using user-defined variables and constants, trig and exponential functions, and so on. I've looked around and Jep seems to be popular, but I would like to hear more suggestions, especially from people who have used these libraries before.
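
    For what it's worth, one zero-dependency sketch on desktop Java is the javax.script API bundled with the JDK, treating the expression as JavaScript so that user-defined variables become bindings and trig/exponential functions come from Math. This is only an illustration of the kind of evaluation being asked about; the scripting engine is not available on Android, where a dedicated expression library (Jep, as mentioned above, is one) would be needed instead.

        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class ExpressionDemo {
            public static void main(String[] args) throws ScriptException {
                // The "JavaScript" engine ships with desktop JDKs; Android has no
                // built-in script engine, so a math library is needed there.
                ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");

                // User-defined variables and constants become bindings.
                engine.put("x", 2.0);
                engine.put("k", 3.5);

                // Trig and exponential functions are available through Math.
                Object result = engine.eval("k * Math.sin(x) + Math.exp(x / 2)");
                System.out.println(result);
            }
        }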

    Read the article

  • Examples of attoparsec in parsing binary file formats?

    - by me2
    Attoparsec was previously suggested to me for parsing complex binary file formats. While I can find examples of attoparsec parsing HTTP, which is essentially text-based, I cannot find an example that parses actual binary data, for example a TCP packet, an image file, or an MP3. Can someone post some code, or a pointer to some code, that does this using attoparsec?

    Read the article

  • Why doesn't a URL with two hyphens (in one segment) match a Route in my Route Table?

    - by Atomiton
    I'm trying to resolve this URL route:

        Route articlesByCategory = new Route("articles/c{cid}-{category}", new Handler);

    However, it seems that the following URL won't resolve to this route:

        // doesn't work
        www.site.com/articles/c24-this-is-the-category-title

        // this works
        www.site.com/articles/c24-category

    I assume it has to do with the dashes in the title, but can anyone tell me why it works this way? Is there a way to allow dashes in the title for a URL route like this?

    Read the article

  • How do you access URL text following the # sign through Java?

    - by cmcculloh
    Using Java (a .jsp or whatever), is there a way I can send a request for this page:

        http://www.mystore.com/store/shelf.jsp?category=mens#page=2

    and have the Java code parse the URL, see the #page=2, and respond accordingly? Basically, I'm looking for the Java code that lets me access the characters following the hash. The reason I'm doing this is that I want to load subsequent pages via AJAX (on my shelf) and then allow the user to copy and paste the URL and send it to a friend. Without Java being able to read the characters following the hash, I'm uncertain how I would manipulate the URL with JavaScript in a way that the server would also be able to read, without causing the page to reload. I'm having trouble even figuring out how to access/see the entire URL (http://www.mystore.com/store/shelf.jsp?category=mens#page=2) from within my Java code.
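
    Part of the difficulty is that the fragment is stripped by the browser and never sent in the HTTP request, so server-side code can only see it if the client passes the full URL along explicitly (for example, JavaScript posting window.location.href). Once the full string is available in Java, a minimal sketch of pulling out the part after the hash might look like this:

        import java.net.URI;
        import java.net.URISyntaxException;

        public class FragmentDemo {
            public static void main(String[] args) throws URISyntaxException {
                // Assume the full URL string was handed to the server by the client,
                // e.g. posted from JavaScript using window.location.href.
                String full = "http://www.mystore.com/store/shelf.jsp?category=mens#page=2";

                URI uri = new URI(full);
                String fragment = uri.getFragment();          // "page=2"
                System.out.println(fragment);

                // Split into name/value if the fragment mimics a query string.
                if (fragment != null && fragment.contains("=")) {
                    String[] pair = fragment.split("=", 2);
                    System.out.println(pair[0] + " -> " + pair[1]);   // page -> 2
                }
            }
        }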

    Read the article

  • How can I check the contents of the arrays? Parsing an XML file with Objective-C

    - by skiria
    I have three classes:

        video    { nameVideo, id, description, user, ... }
        topic    { nameTopic, topicID, NSMutableArray videos }
        category { nameCategory, categoryID, NSMutableArray topics }

    Then in my app delegate I defined an NSMutableArray called categories. I parse an XML file with the code below. I have set up the array hierarchy, but I don't think any objects are actually being added to the arrays. How can I check this? What's wrong?

        - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
                namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName
                attributes:(NSDictionary *)attributeDict {
            if ([elementName isEqualToString:@"Videos"]) {
                // Initialize the array.
                appDelegate.categories = [[NSMutableArray alloc] init];
            } else if ([elementName isEqualToString:@"Category"]) {
                aCategory = [[Category alloc] init];
                aCategory.categoryID = [[attributeDict objectForKey:@"id"] integerValue];
                NSLog(@"Reading id category value: %i", aCategory.categoryID);
            } else if ([elementName isEqualToString:@"Topic"]) {
                aTopic = [[Topic alloc] init];
                aTopic.topicID = [[attributeDict objectForKey:@"id"] integerValue];
                NSLog(@"Reading id topic value: %i", aTopic.topicID);
            } else if ([elementName isEqualToString:@"video"]) {
                aVideo = [[Video alloc] init];
                aVideo.videoID = [[attributeDict objectForKey:@"id"] integerValue];
                aVideo.nameTopic = currentNameTopic;
                aVideo.nameCategory = currentNameCategory;
                NSLog(@"Reading id video value: %i", aVideo.videoID);
            }
            NSLog(@"Processing Element: %@", elementName);
        }

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
            if (!currentElementValue)
                currentElementValue = [[NSMutableString alloc] initWithString:string];
            else
                [currentElementValue appendString:string];
            NSLog(@"Processing Value: %@", currentElementValue);
        }

        - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
                namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
            if ([elementName isEqualToString:@"Videos"])
                return;
            if ([elementName isEqualToString:@"Category"]) {
                [appDelegate.categories addObject:aCategory];
                [aCategory release];
                aCategory = nil;
            }
            if ([elementName isEqualToString:@"Topic"]) {
                [aCategory.topics addObject:aTopic];
                //NSLog(@"contador: %i", [aCategory.topics count]);
                //NSLog(@"contador: %@", aTopic.nameTopic);
                [aTopic release];
                aTopic = nil;
            }
            if ([elementName isEqualToString:@"video"]) {
                [aTopic.videos addObject:aVideo];
                NSLog(@"count number videos: %i", [aTopic.videos count]);   // always 0
                NSLog(@"NOM CATEGORIA VIDEO: %@", aVideo.urlVideo);         // OK
                [aVideo release];
                aVideo = nil;
            }
            if ([elementName isEqualToString:@"nameCategory"]) {
                //[aCategory setValue:currentElementValue forKey:elementName];
                aCategory.nameCategory = currentElementValue;
                currentNameCategory = currentElementValue;
            }
            if ([elementName isEqualToString:@"nameTopic"]) {
                aTopic.nameTopic = currentElementValue;
                currentNameTopic = currentElementValue;
            } else {
                [aVideo setValue:currentElementValue forKey:elementName];
            }
            [currentElementValue release];
            currentElementValue = nil;
        }

    Read the article

  • Parsing: How do I make error recovery work in grammars like "a* b*"?

    - by Lavir the Whiolet
    Say we have a grammar like this:

        Program ::= a* b*

    where "*" is considered to be greedy. I usually implement the "*" operator naively: try to apply the expression under "*" to the input one more time. If it applies successfully, we are still inside the current "*" expression, so try it again. Otherwise we have reached the next grammar expression: put the characters consumed by the "*" expression back into the input and proceed with the next expression. But if there are errors in the input in either the "a*" or the "b*" part, such a parser will "think" that both "a*" and "b*" have finished at the position of the error ("let's try 'a'... Fail! OK, it looks like we have to proceed to 'b*'. Let's try 'b'... Fail! OK, it looks like the string should have ended here..."). For example, for the string "daaaabbbbbbc" it will "say": "The string must end at position 1; delete the superfluous characters: daaaabbbbbbc". In short, the greedy "*" operator becomes lazy when there are errors in the input. How can I make the "*" operator recover from errors nicely?
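
    One common approach is to decide where each repetition ends by looking ahead for something in the follow set (here, a "b" or the end of input) rather than by the first failed match, and to report and skip anything that fits neither, so an error does not silently terminate the loop. A toy sketch of that idea, with characters standing in for tokens:

        import java.util.ArrayList;
        import java.util.List;

        public class GreedyStarRecovery {

            // Parses  Program ::= a* b*  over a plain string, recovering from
            // unexpected characters instead of silently ending each repetition.
            public static List<String> parse(String input) {
                List<String> errors = new ArrayList<>();
                int i = 0;

                // a* phase: stay here until something from b*'s first set (or the
                // end of input) shows up; anything that is not 'a' is skipped.
                while (i < input.length() && input.charAt(i) != 'b') {
                    if (input.charAt(i) == 'a') {
                        i++;                                             // consumed by a*
                    } else {
                        errors.add("unexpected '" + input.charAt(i) + "' at " + i);
                        i++;                                             // skip and keep going
                    }
                }

                // b* phase: same idea, but now only 'b' is acceptable.
                while (i < input.length()) {
                    if (input.charAt(i) == 'b') {
                        i++;
                    } else {
                        errors.add("unexpected '" + input.charAt(i) + "' at " + i);
                        i++;
                    }
                }
                return errors;
            }

            public static void main(String[] args) {
                // Reports the 'd' and the trailing 'c' instead of claiming the
                // whole string should end at position 1.
                System.out.println(parse("daaaabbbbbbc"));
            }
        }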

    Read the article

  • Is there a Perl module for parsing numbers, including ranges?

    - by sid_com
    Is there a module which does this for me?

        sample_input: 2, 5-7, 9, 3, 11-14

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;

        sub aw_parse {
            my ( $in, $max ) = @_;
            chomp $in;
            my @array = split( /\s*,\s*/, $in );
            my %zahlen;
            for (@array) {
                if (/^\s*(\d+)\s*$/) {
                    $zahlen{$1}++;
                }
                elsif (/^\s*(\d+)\s*-\s*(\d+)\s*$/) {
                    die "'$1-$2' not a valid input $!" if $1 >= $2;
                    for ( $1 .. $2 ) { $zahlen{$_}++; }
                }
                else {
                    die "'$_' not a valid input $!";
                }
            }
            @array = sort { $a <=> $b } keys(%zahlen);
            if ( defined $max ) {
                for (@array) {
                    die "Input '0' not allowed $!" if $_ == 0;
                    die "Input ($_) greater than $max not allowed $!" if $_ > $max;
                }
            }
            return \@array;
        }

        my $max = 20;
        print "Input (max $max): ";
        my $in  = <>;
        my $out = aw_parse( $in, $max );
        say "@$out";

    Read the article

  • What is the right method for parsing a blog post?

    - by Zedwal
    Hi guys, I need a guideline. I am trying to write a personal blog, and I'm wondering what the standard structure is for the input of a post. I am trying a format like this:

        This is the simple text. And I am [b]bold text[/b].

        This is the code part:
        [code lang=java]
        public static void main (String args[]) {
            System.out.println("Hello World!");
        }
        [/code]

    Is this the right way to store a post in the database? And what is the right method to parse this kind of post? Should I use a regular expression to parse it, or is there another standard for this? If the format above is not the right way to store it, what would be? Thanks
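
    Storing a lightweight markup like this and converting it to HTML at display time is a common pattern (BBCode and Markdown libraries exist for exactly this), and for simple, non-nested tags a regex pass is enough. A rough Java sketch of such a renderer, purely as an illustration of the parsing step:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class PostRenderer {

            // Toy renderer for a BBCode-like post format: handles [b]...[/b] and
            // [code ...]...[/code], assuming tags are well formed and not nested.
            public static String render(String post) {
                // Escape HTML first so stored text can never inject markup.
                String html = post.replace("&", "&amp;")
                                  .replace("<", "&lt;")
                                  .replace(">", "&gt;");

                // Bold: [b]text[/b]  ->  <strong>text</strong>
                html = html.replaceAll("(?s)\\[b](.*?)\\[/b]", "<strong>$1</strong>");

                // Code: [code lang=java]...[/code]  ->  <pre><code>...</code></pre>
                Pattern code = Pattern.compile("(?s)\\[code[^\\]]*](.*?)\\[/code]");
                Matcher m = code.matcher(html);
                StringBuffer out = new StringBuffer();
                while (m.find()) {
                    m.appendReplacement(out, Matcher.quoteReplacement(
                            "<pre><code>" + m.group(1) + "</code></pre>"));
                }
                m.appendTail(out);
                return out.toString();
            }

            public static void main(String[] args) {
                System.out.println(render(
                        "I am [b]bold[/b]. [code lang=java]int x = 1;[/code]"));
            }
        }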

    Read the article

  • C# XML parsing with LINQ storing directly to a struct?

    - by Luke
    Say I have XML with the following structure:

        <root>
          <version>2.0</version>
          <type>fiction</type>
          <chapters>
            <chapter>1</chapter>
            <title>blah blah</title>
          </chapters>
          <chapters>
            <chapter>2</chapter>
            <title>blah blah</title>
          </chapters>
        </root>

    Would it be possible to parse the elements which I know will not be repeated in the XML and store them directly into the struct using LINQ? For example, could I do something like this for "version" and "type":

        // set up the structs
        Book book = new Book();
        book.chapter = new Chapter();

        // use LINQ to parse the XML
        var bookData = from b in xmlDoc.Descendants("root")
                       select new
                       {
                           book.version = b.Element("version").Value,
                           book.type = b.Element("type").Value
                       };

    Then, since I know there are multiple "chapters", I can do:

        var chapterData = from c in xmlDoc.Descendants("root")
                          select new
                          {
                              chapter = c.Element("chapters")
                          };

        foreach (var ch in chapterData)
        {
            book.chapter.Add(getChapterData(ch.chapter));
        }

    Read the article

  • How to prevent users from changing URL parameters in PHP?

    - by Sachin
    I am developing a site where I send parameters such as IDs via the URL. I have used urlencode and base64_encode to encode the parameters. My problem is: how can I prevent users or attackers from tampering with the URL parameters, or grant access only if the parameter value exists in the database? I have at least 2 and at most 5 parameters in the URL, so is it feasible to check that every parameter exists in the database on every page? Thanks
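
    Encoding alone (urlencode/base64) only obscures the values, so the usual fix is to make tampering detectable by signing the parameters with a server-side secret (in PHP, hash_hmac can produce the signature), and to still check authorization against the database for anything sensitive. A minimal sketch of the signing idea, shown here in Java rather than PHP:

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;

        public class SignedParams {

            private static final byte[] SECRET =
                    "server-side-secret-key".getBytes(StandardCharsets.UTF_8);

            // Compute a hex HMAC over the parameter string, e.g. "id=42&user=7".
            public static String sign(String params) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
                byte[] raw = mac.doFinal(params.getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : raw) hex.append(String.format("%02x", b));
                return hex.toString();
            }

            // The URL carries ...?id=42&user=7&sig=<hex>; the receiving page
            // recomputes the signature and rejects the request on a mismatch.
            public static boolean verify(String params, String sig) throws Exception {
                return sign(params).equals(sig);
            }

            public static void main(String[] args) throws Exception {
                String params = "id=42&user=7";
                String sig = sign(params);
                System.out.println(params + "&sig=" + sig);
                System.out.println(verify("id=42&user=7", sig));   // true
                System.out.println(verify("id=43&user=7", sig));   // false: tampered
            }
        }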

    Read the article

  • Parsing an XML string containing "&#x20;" (which must be preserved)

    - by Zoodor
    I have code that is passed a string containing XML. This XML may contain one or more instances of &#x20; (an entity reference for the blank space character). I have a requirement that these references should not be resolved (i.e. they should not be replaced with an actual space character). Is there any way for me to achieve this? Basically, given a string containing the XML: <pattern value="[A-Z0-9&#x20;]" /> I do not want it to be converted to: <pattern value="[A-Z0-9 ]" /> (What I am actually trying to achieve is to simply take an XML string and write it to a "pretty-printed" file. This is having the side-effect of resolving occurrences of &#x20; in the string to a single space character, which need to be preserved. The reason for this requirement is that the written XML document must conform to an externally-defined specification.) I have tried creating a sub-class of XmlTextReader to read from the XML string and overriding the ResolveEntity() method, but this isn't called. I have also tried assigning a custom XmlResolver.

    Read the article

  • Yet another URL prefix regex question (to be used in C#).

    - by Hamish Grubijan
    Hi, I have seen many regular expressions for URL validation. In my case I want the URL to be simpler, so the regex should be tighter. Valid URL prefixes look like:

        http[s]://[www.]addressOrIp[.something]/PageName.aspx[?]

    This describes a prefix; I will be appending ?x=a&y=b&z=c later. I just want to check that the web page is live before accessing it, but even before that I want to make sure it is properly configured. I want to treat the "bad URL" and "host is down" conditions differently, although when in doubt I'd rather give a "host is down" message, because that is the ultimate test anyway. Hopefully that makes sense. I guess what I am trying to say is that the regex does not need to be too aggressive; I just want it to cover, say, 95% of the cases. This is C#-centric, so Perl regex extensions are not helpful to me; let's stick to the lowest common denominator. Thanks!
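
    Since the pattern described above stays within "lowest common denominator" regex features, a rough sketch of the loose 95% check (written in Java here; the same pattern string should also be accepted by .NET's Regex class) might be:

        import java.util.regex.Pattern;

        public class UrlPrefixCheck {

            // Loose sanity check, not a full RFC 3986 validator: scheme, host or IP,
            // optional port, an optional path, and an optional trailing '?'.
            private static final Pattern URL_PREFIX = Pattern.compile(
                    "^https?://(www\\.)?[A-Za-z0-9.-]+(:\\d+)?(/[A-Za-z0-9_./%-]*)?\\??$",
                    Pattern.CASE_INSENSITIVE);

            public static boolean looksValid(String url) {
                return URL_PREFIX.matcher(url).matches();
            }

            public static void main(String[] args) {
                System.out.println(looksValid("https://www.example.com/Page.aspx?"));   // true
                System.out.println(looksValid("http://10.0.0.5/Page.aspx"));            // true
                System.out.println(looksValid("ftp://example.com/Page.aspx"));          // false
            }
        }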

    Read the article

  • Can anyone help me with this JavaScript for validating a URL in ASPX?

    - by orrep
    Here's my code:

        function Validate_URL(url) {
            var iurl = url.value;
            var v = new RegExp();
            v.compile("/^(((ht|f){1}(tp:[/][/]){1})|((www.){1}))[-a-zA-Z0-9@:%_+.~#?&//=]+$/;");
            if (!v.test(iurl.value)) {
                url.style.backgroundColor = 'yellow';
            }
            return true;
        }

    No matter what I put in the URL, say http://www.abc.com/newpage.html, it returns false. How come?

    Read the article

  • Different types of XML parsing

    - by kostas_menu
    I have read the IBM tutorial about XML parsing on Android (http://www.ibm.com/developerworks/opensource/library/x-android/). In this example there are four types of XML parsing: DOM, SAX, Android SAX, and XML pull. Could you please tell me what the difference is between these four types and when I should use each one? Also, with every way of parsing the XML in this tutorial, the feeds are shown in a ListView. What do I have to do to show each announcement in a button, for example? Thanks for your time! Merry Christmas :D
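
    Very roughly: DOM loads the whole document into memory (easy random access, but heavy for large feeds), SAX and the android.sax wrapper push events into callbacks you register, and the pull parser lets your own loop ask for events one at a time, which is the style the Android documentation tends to favour. A minimal pull-parsing sketch over a hypothetical feed structure:

        import org.xmlpull.v1.XmlPullParser;
        import org.xmlpull.v1.XmlPullParserFactory;
        import java.io.StringReader;
        import java.util.ArrayList;
        import java.util.List;

        public class PullParseDemo {

            // Collects the <title> of every <item> in a feed-like document.
            // Pull parsing reads forward event by event, so it is light on memory
            // (unlike DOM) but keeps control flow in your hands (unlike SAX).
            public static List<String> titles(String xml) throws Exception {
                XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
                parser.setInput(new StringReader(xml));

                List<String> titles = new ArrayList<>();
                int event = parser.getEventType();
                while (event != XmlPullParser.END_DOCUMENT) {
                    if (event == XmlPullParser.START_TAG && "title".equals(parser.getName())) {
                        titles.add(parser.nextText());
                    }
                    event = parser.next();
                }
                return titles;
            }

            public static void main(String[] args) throws Exception {
                String xml = "<rss><item><title>First</title></item>"
                           + "<item><title>Second</title></item></rss>";
                System.out.println(titles(xml));   // [First, Second]
            }
        }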

    Read the article

  • Is this a valid URL parameter in jQuery.ajax()?

    - by udaya
    Is this a valid url parameter in jQuery.ajax()?

        <script type="text/javascript">
            $(document).ready(function() {
                getRecordspage();
            });

            function getRecordspage() {
                $.ajax({
                    type: "POST",
                    url: "http://localhost/codeigniter_cup_myth/index.php/adminController/mainAccount",
                    data: "",
                    contentType: "application/json; charset=utf-8",
                    global: false,
                    async: false,
                    dataType: "json",
                    success: function(jsonObj) {
                        alert(jsonobj);
                    }
                });
            }
        </script>

    The URL doesn't seem to go to my controller function...

    Read the article

  • Does JAXP natively parse HTML?

    - by ikmac
    So, I whip up a quick test case in Java 7 to grab a couple of elements from random URIs, and see if the built-in parsing stuff will do what I need. Here's the basic setup (with exception handling etc. omitted):

        DocumentBuilderFactory dbfac = DocumentBuilderFactory.newInstance();
        DocumentBuilder dbuild = dbfac.newDocumentBuilder();
        Document doc = dbuild.parse("uri-goes-here");

    With no error handler installed, the parse method throws exceptions on fatal parse errors.

    - When getting the standard Apache 2.2 directory index page from a local server: a SAXParseException with the message "White spaces are required between publicId and systemId." The doctype looks OK to me, whitespace and all.
    - When getting a page off a Drupal 7 generated site, it never finishes. The parse method seems to hang: no exceptions thrown, never returns.
    - When getting http://www.oracle.com: a SAXParseException with the message "The element type "meta" must be terminated by the matching end-tag "</meta>"."

    So it would appear that the default setup I've used here doesn't handle HTML, only strictly written XML. My question is: can JAXP be used out-of-the-box from OpenJDK 7 to parse HTML from the wild (without insane gesticulations), or am I better off looking for an HTML 5 parser? PS: this is for something I may not open-source, so licensing is also an issue :(
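
    For what it's worth, the stock JAXP DocumentBuilder only accepts well-formed XML, so real-world HTML generally needs a dedicated HTML parser. One commonly used option is jsoup, which is believed to be MIT-licensed; a rough sketch of the same "grab a couple of elements" test with it:

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;

        public class HtmlGrabDemo {
            public static void main(String[] args) throws Exception {
                // jsoup tolerates the tag soup found in the wild, including
                // unclosed <meta> tags, which the XML parser rejects.
                Document doc = Jsoup.connect("http://www.oracle.com").get();

                System.out.println(doc.title());
                for (Element meta : doc.select("meta")) {
                    System.out.println(meta.attr("name") + " = " + meta.attr("content"));
                }
            }
        }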

    Read the article

  • Can the CSV format be defined by a regex?

    - by Spencer Rathbun
    A colleague and I have recently argued over whether a pure regex is capable of fully encapsulating the CSV format, such that it is capable of parsing all files with any given escape char, quote char, and separator char. The regex need not be capable of changing these chars after creation, but it must not fail on any other edge case. I have argued that this is impossible for just a tokenizer. The only regex that might be able to do this is a very complex PCRE style that moves beyond just tokenizing. I am looking for something along the lines of:

        ... the CSV format is a context-free grammar and as such, it is impossible to parse with regex alone ...

    Or am I wrong? Is it possible to parse CSV with just a POSIX regex? For example, if both the escape char and the quote char are ", then these two lines are valid CSV:

        """this is a test.""",""
        "and he said,""What will be, will be."", to which I replied, ""Surely not!""","moving on to the next field here..."
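
    As a data point for the discussion, a single record with fixed quote/escape/separator characters can at least be tokenized field by field with one regex driven from a matching loop (which says nothing about capturing every field of every record in one giant match). A rough Java sketch, assuming " is both quote and escape, the comma is the separator, and embedded line breaks are out of scope:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class CsvFieldTokenizer {

            // One field: either a quoted field (with "" as the escaped quote) or a
            // run of non-comma, non-quote characters, followed by a comma or the end.
            private static final Pattern FIELD =
                    Pattern.compile("\\G(?:\"((?:[^\"]|\"\")*)\"|([^,\"]*))(,|$)");

            // Splits a single CSV record (no embedded line breaks) into its fields.
            public static List<String> split(String line) {
                List<String> fields = new ArrayList<>();
                Matcher m = FIELD.matcher(line);
                while (m.find()) {
                    String quoted = m.group(1);
                    fields.add(quoted != null ? quoted.replace("\"\"", "\"") : m.group(2));
                    if (m.group(3).isEmpty()) break;   // reached the end of the record
                }
                return fields;
            }

            public static void main(String[] args) {
                System.out.println(split("\"\"\"this is a test.\"\"\",\"\""));
                System.out.println(split("a,\"b,with comma\",c"));
            }
        }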

    Read the article

  • Sentence Tree vs. Word List

    - by Rohit Jose
    I was recently tasked with building a Named Entity Recognizer as part of a project. The objective was to parse a given sentence and come up with all the possible combinations of the entities. One approach that was suggested was to keep a lookup table for all the known connector words, such as articles and conjunctions, and remove them from the word list after splitting the sentence on spaces. This would leave the named entities in the sentence. A lookup is then done for these identified entities in another lookup table that associates them with an entity type. For example, if the sentence was "Remember the Titans was a movie directed by Boaz Yakin", the possible outputs would be:

        {Remember the Titans,Movie} was {a movie,Movie} directed by {Boaz Yakin,director}
        {Remember the Titans,Movie} was a movie directed by Boaz Yakin
        {Remember the Titans,Movie} was {a movie,Movie} directed by Boaz Yakin
        {Remember the Titans,Movie} was a movie directed by {Boaz Yakin,director}
        Remember the Titans was {a movie,Movie} directed by Boaz Yakin
        Remember the Titans was {a movie,Movie} directed by {Boaz Yakin,director}
        Remember the Titans was a movie directed by {Boaz Yakin,director}
        Remember the {the titans,Movie,Sports Team} was {a movie,Movie} directed by {Boaz Yakin,director}
        Remember the {the titans,Movie,Sports Team} was a movie directed by Boaz Yakin
        Remember the {the titans,Movie,Sports Team} was {a movie,Movie} directed by Boaz Yakin
        Remember the {the titans,Movie,Sports Team} was a movie directed by {Boaz Yakin,director}

    The entity lookup table here would contain the following data:

        Remember the Titans=Movie
        a movie=Movie
        Boaz Yakin=director
        the Titans=Movie
        the Titans=Sports Team

    An alternative that was put forward was to build a crude sentence tree, with the connector words from the lookup table as parent nodes, and to do the entity-table lookup on the leaf nodes that might contain the entities. The question I am faced with is the relative benefit of the two approaches: should I go with the tree approach to represent the sentence parsing, since it provides a more semantic structure? Or is there a better approach I should be using to solve this?

    Read the article

  • Haskell: Best tools to validate textual input?

    - by Ana
    In Haskell, there are a few different options for "parsing text". I know of Alex & Happy, Parsec, and Attoparsec; probably some others. I'd like to put together a library where the user can input pieces of a URL (scheme, e.g. HTTP, hostname, username, port, path, query, etc.), and I'd like to validate the pieces according to the ABNF specified in RFC 3986. In other words, I'd like to put together a set of functions such as:

        validateScheme    :: String -> Bool
        validateUsername  :: String -> Bool
        validatePassword  :: String -> Bool
        validateAuthority :: String -> Bool
        validatePath      :: String -> Bool
        validateQuery     :: String -> Bool

    What is the most appropriate tool for writing these functions? Alex's regexps are very concise, but Alex is a tokenizer and doesn't straightforwardly allow you to parse using specific rules, so it's not quite what I'm looking for, though perhaps it can be wrangled into doing this easily. I've written Parsec code that does some of the above, but it looks very different from the original ABNF and is unnecessarily long. So there must be an easier and/or more appropriate way. Recommendations?

    Read the article

  • How exactly is an Abstract Syntax Tree created?

    - by Howcan
    I think I understand the goal of an AST, and I've built a couple of tree structures before, but never an AST. I'm mostly confused because the nodes are text and not numbers, so I can't think of a nice way to input a token/string as I'm parsing some code. For example, when I looked at diagrams of ASTs, the variable and its value were leaf nodes of an equals sign. This makes perfect sense to me, but how would I go about implementing it? I guess I can do it case by case, so that when I stumble upon an "=" I use it as a node and add the value parsed before the "=" as a leaf. That just seems wrong, because I'd probably have to make cases for tons and tons of things, depending on the syntax. And then I came upon another problem: how is the tree traversed? Do I go all the way down one branch, go back up a node when I hit the bottom, and do the same for its neighbor? I've seen tons of diagrams of ASTs, but I couldn't find a fairly simple example of one in code, which would probably help.
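
    A node doesn't have to hold a number: it can simply hold the token's text (or a token object), and the parser creates an interior node whenever it finishes recognizing a rule, hanging the already-parsed pieces underneath it. Evaluation or code generation is then typically a post-order walk: visit the children first, then the operator node. A toy recursive-descent sketch for assignments like "x = 1 + 2 + y", purely as an illustration:

        import java.util.*;

        public class TinyAst {

            // Every node just stores the text of its token; children hang off interior
            // nodes, so the "payload" being a string rather than a number is no problem.
            static class Node {
                final String text;
                final List<Node> children = new ArrayList<>();

                Node(String text, Node... kids) {
                    this.text = text;
                    children.addAll(Arrays.asList(kids));
                }

                public String toString() {
                    if (children.isEmpty()) return text;
                    StringBuilder sb = new StringBuilder("(" + text);
                    for (Node c : children) sb.append(" ").append(c);
                    return sb.append(")").toString();
                }
            }

            private final List<String> tokens;
            private int pos = 0;

            TinyAst(List<String> tokens) { this.tokens = tokens; }

            // assignment ::= IDENT '=' sum
            Node assignment() {
                Node target = new Node(tokens.get(pos++));   // identifier leaf
                pos++;                                       // consume '='
                return new Node("=", target, sum());         // '=' becomes the parent node
            }

            // sum ::= term ('+' term)*
            Node sum() {
                Node left = new Node(tokens.get(pos++));     // number/identifier leaf
                while (pos < tokens.size() && tokens.get(pos).equals("+")) {
                    pos++;                                   // consume '+'
                    left = new Node("+", left, new Node(tokens.get(pos++)));
                }
                return left;
            }

            public static void main(String[] args) {
                // x = 1 + 2 + y   ->   (= x (+ (+ 1 2) y))
                TinyAst parser = new TinyAst(Arrays.asList("x", "=", "1", "+", "2", "+", "y"));
                System.out.println(parser.assignment());
            }
        }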

    Read the article

  • How do I parse a header with two different versions (ID3) while avoiding code duplication?

    - by user66141
    I really hope you can give me some interesting viewpoints on my situation, because I am not satisfied with my current approach. I am writing an MP3 parser, starting with an ID3v2 parser. Right now I'm working on the extended header parsing, and my issue is that the optional header is defined differently in versions 2.3 and 2.4 of the tag. The 2.3 version of the optional header is defined as follows:

        struct ID3_3_EXTENDED_HEADER {
            DWORD dwExtHeaderSize;   // Extended header size (either 6 or 8 bytes, excluded)
            WORD  wExtFlags;         // Extended header flags
            DWORD dwSizeOfPadding;   // Size of padding (size of the tag excluding the frames and headers)
        };

    while the 2.4 version is defined as:

        struct ID3_4_EXTENDED_HEADER {
            DWORD dwExtHeaderSize;      // Extended header size (synchsafe int)
            BYTE  bNumberOfFlagBytes;   // Number of flag bytes
            BYTE  bFlags;               // Flags
        };

    How could I parse the header while minimizing code duplication? Using two different functions to parse each version doesn't sound great, and using a single function with a different flow for each case is similar. Are there any good practices for this kind of issue? Any tips for avoiding code duplication? Any help would be appreciated.
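
    One way to keep most of the flow shared is to map both layouts onto a single in-memory type and isolate the version-specific byte reading behind one small reader chosen once, up front. A sketch of that shape, written in Java here for brevity (the same idea works in C or C++ with a common struct and two small reader functions), following the two layouts quoted above:

        import java.io.DataInput;
        import java.io.IOException;

        public class ExtendedHeaderParsing {

            // One common in-memory form; only the on-disk layout differs per version.
            static final class ExtendedHeader {
                final long size;
                final int flags;
                final long padding;        // only meaningful for v2.3
                ExtendedHeader(long size, int flags, long padding) {
                    this.size = size; this.flags = flags; this.padding = padding;
                }
            }

            interface ExtendedHeaderReader {
                ExtendedHeader read(DataInput in) throws IOException;
            }

            // v2.3: 32-bit size, 16-bit flags, 32-bit padding size.
            static final ExtendedHeaderReader V2_3 = in ->
                    new ExtendedHeader(in.readInt() & 0xFFFFFFFFL,
                                       in.readUnsignedShort(),
                                       in.readInt() & 0xFFFFFFFFL);

            // v2.4: synchsafe 32-bit size, a flag-byte count, then the flag byte(s).
            static final ExtendedHeaderReader V2_4 = in -> {
                long size = unsynchsafe(in.readInt());
                int numFlagBytes = in.readUnsignedByte();
                int flags = numFlagBytes > 0 ? in.readUnsignedByte() : 0;
                in.skipBytes(Math.max(0, numFlagBytes - 1));
                return new ExtendedHeader(size, flags, 0);
            };

            // 4 x 7-bit groups -> 28-bit value.
            static long unsynchsafe(int raw) {
                return ((raw >> 24) & 0x7F) << 21 | ((raw >> 16) & 0x7F) << 14
                     | ((raw >> 8) & 0x7F) << 7   | (raw & 0x7F);
            }

            // The caller picks the reader once; everything downstream is shared.
            static ExtendedHeader parse(DataInput in, int majorVersion) throws IOException {
                ExtendedHeaderReader reader = (majorVersion >= 4) ? V2_4 : V2_3;
                return reader.read(in);
            }

            public static void main(String[] args) throws IOException {
                // A fabricated v2.3 extended header: size=10, flags=0x8000, padding=256.
                byte[] v23 = { 0, 0, 0, 10, (byte) 0x80, 0, 0, 0, 1, 0 };
                ExtendedHeader h = parse(new java.io.DataInputStream(
                        new java.io.ByteArrayInputStream(v23)), 3);
                System.out.println(h.size + " / " + h.flags + " / " + h.padding);
            }
        }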

    Read the article

  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found.

    These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across platforms, so when this matters, one would probably use binary mode instead.

    As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line-ending sequence, so applications have to filter that noise themselves. Older Mac versions used only CR as the line ending in text mode, so neither UNIX nor Windows would understand their files. If this matters, a portable application would probably implement the parsing itself instead of relying on text mode. Implementing newline interpretation in the parser might also avoid some of the overhead of text mode, since buffers need to be rewritten (and possibly resized) before being returned to the application, and that can be less efficient than doing the translation in the application itself.

    So, my question is: is there any good reason to still rely on the host OS to translate line endings and truncate files?

    Read the article
