Search Results

Search found 4222 results on 169 pages for 'dtd parsing'.


  • `strip`ing the results of a split in python

    - by Igor
    I'm trying to do something pretty simple:

        line = "name : bob"
        k, v = line.lower().split(':')
        k = k.strip()
        v = v.strip()

    Is there a way to combine this into one line somehow? I find myself writing this over and over again when making parsers, and sometimes it involves way more than just two variables. I know I could use a regexp, but this is simple enough not to really require one...
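
    One common way to fold the stripping into the unpacking line is to apply strip to each piece as it is split; a minimal sketch, using nothing beyond the standard library:

        # strip each piece as it is unpacked
        line = "name : bob"
        k, v = (part.strip() for part in line.lower().split(':'))

    For more than two fields, the same generator expression works unchanged with a longer unpacking target.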

  • How would I use HTMLAgilityPack to extract the value I want

    - by Nai
    For the given HTML I want the value of id:

        <div class="name" id="john-5745844">
        <div class="name" id="james-6940673">

    UPDATE: This is what I have at the moment:

        HtmlDocument htmlDoc = new HtmlDocument();
        htmlDoc.Load(new StringReader(pageResponse));
        HtmlNode root = htmlDoc.DocumentNode;
        List<string> anchorTags = new List<string>();

        foreach (HtmlNode div in root.SelectNodes("//div[@class='name' and @id]"))
        {
            HtmlAttribute att = div.Attributes["id"];
            Console.WriteLine(att.Value);
        }

    The error I am getting is at the foreach line, stating: Object reference not set to an instance of an object.

  • python getelementbyid from string

    - by matthewgall
    Hey, I have the following program, that is trying to upload a file (or files) to an image upload site, however I am struggling to find out how to parse the returned HTML to grab the direct link (contained in a ). I have the code below: #!/usr/bin/python # -*- coding: utf-8 -*- import pycurl import urllib import urlparse import xml.dom.minidom import StringIO import sys import gtk import os import imghdr import locale import gettext try: import pynotify except: print "Please install pynotify." APP="Uploadir Uploader" DIR="locale" locale.setlocale(locale.LC_ALL, '') gettext.bindtextdomain(APP, DIR) gettext.textdomain(APP) _ = gettext.gettext ##STRINGS uploading = _("Uploading image to Uploadir.") oneimage = _("1 image has been successfully uploaded.") multimages = _("images have been successfully uploaded.") uploadfailed = _("Unable to upload to Uploadir.") class Uploadir: def __init__(self, args): self.images = [] self.urls = [] self.broadcasts = [] self.username="" self.password="" if len(args) == 1: return else: for file in args: if file == args[0] or file == "": continue if file.startswith("-u"): self.username = file.split("-u")[1] #print self.username continue if file.startswith("-p"): self.password = file.split("-p")[1] #print self.password continue self.type = imghdr.what(file) self.images.append(file) for file in self.images: self.upload(file) self.setClipBoard() self.broadcast(self.broadcasts) def broadcast(self, l): try: str = '\n'.join(l) n = pynotify.Notification(str) n.set_urgency(pynotify.URGENCY_LOW) n.show() except: for line in l: print line def upload(self, file): #Try to login cookie_file_name = "/tmp/uploadircookie" if ( self.username!="" and self.password!=""): print "Uploadir authentication in progress" l=pycurl.Curl() loginData = [ ("username",self.username),("password", self.password), ("login", "Login") ] l.setopt(l.URL, "http://uploadir.com/user/login") l.setopt(l.HTTPPOST, loginData) l.setopt(l.USERAGENT,"User-Agent: Uploadir (Python Image Uploader)") l.setopt(l.FOLLOWLOCATION,1) l.setopt(l.COOKIEFILE,cookie_file_name) l.setopt(l.COOKIEJAR,cookie_file_name) l.setopt(l.HEADER,1) loginDataReturnedBuffer = StringIO.StringIO() l.setopt( l.WRITEFUNCTION, loginDataReturnedBuffer.write ) if l.perform(): self.broadcasts.append("Login failed. Please check connection.") l.close() return loginDataReturned = loginDataReturnedBuffer.getvalue() l.close() #print loginDataReturned if loginDataReturned.find("<li>Your supplied username or password is invalid.</li>")!=-1: self.broadcasts.append("Uploadir authentication failed. 
Username/password invalid.") return else: self.broadcasts.append("Uploadir authentication successful.") #cookie = loginDataReturned.split("Set-Cookie: ")[1] #cookie = cookie.split(";",0) #print cookie c = pycurl.Curl() values = [ ("file", (c.FORM_FILE, file)) ] buf = StringIO.StringIO() c.setopt(c.URL, "http://uploadir.com/file/upload") c.setopt(c.HTTPPOST, values) c.setopt(c.COOKIEFILE, cookie_file_name) c.setopt(c.COOKIEJAR, cookie_file_name) c.setopt(c.WRITEFUNCTION, buf.write) if c.perform(): self.broadcasts.append(uploadfailed+" "+file+".") c.close() return self.result = buf.getvalue() #print self.result c.close() doc = urlparse.urlparse(self.result) self.urls.append(doc.getElementsByTagName("download")[0].childNodes[0].nodeValue) def setClipBoard(self): c = gtk.Clipboard() c.set_text('\n'.join(self.urls)) c.store() if len(self.urls) == 1: self.broadcasts.append(oneimage) elif len(self.urls) != 0: self.broadcasts.append(str(len(self.urls))+" "+multimages) if __name__ == '__main__': uploadir = Uploadir(sys.argv) Any help would be gratefully appreciated. Warm regards,
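
    The crash at the end comes from treating the result of urlparse.urlparse() as a DOM: urlparse only splits a URL into components and has no getElementsByTagName. A minimal sketch of the lookup the script appears to want, using the xml.dom.minidom module that is already imported (the <download> element name is taken from the original code; whether the upload response really has that shape is an assumption):

        import xml.dom.minidom

        def extract_download_url(response_text):
            # Parse the upload response as XML and pull out the first <download> value.
            doc = xml.dom.minidom.parseString(response_text)
            nodes = doc.getElementsByTagName("download")
            if not nodes or not nodes[0].childNodes:
                return None
            return nodes[0].childNodes[0].nodeValue

    In the upload() method that would replace the urlparse call: self.urls.append(extract_download_url(self.result)).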

  • pyparsing ambiguity

    - by Claudiu
    I'm trying to parse some text using pyparsing. The problem is that I have names that can contain white space. So my input might look like this:

        Joe
        Bob
        Jimmy Foo

        Joe decides to eat.
        Bob decides to not eat.
        Jimmy Foo decides to eat.

    How can I create a parser for the "decides to eat" line? If I create my name parser naively, meaning with alphabetic characters plus space characters, then it will match the entire line.
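
    One way around the ambiguity is to anchor the grammar on the fixed keyword rather than trying to describe the name itself; a rough sketch under that assumption (the result names here are made up):

        from pyparsing import Literal, Optional, SkipTo, Suppress

        # Treat everything before "decides" as the name, so spaces inside it don't matter.
        name = SkipTo(Literal("decides"))("name")
        verb = Suppress(Literal("decides") + Literal("to"))
        action = Optional(Literal("not"))("negated") + Suppress(Literal("eat") + Literal("."))
        sentence = name + verb + action

        result = sentence.parseString("Jimmy Foo decides to not eat.")
        print(result["name"].strip())   # -> Jimmy Foo

    Matching names against the known roster (Joe, Bob, Jimmy Foo) with oneOf, which prefers the longest alternative, would be the stricter option.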

  • writing header in csv python with DictWriter

    - by user248237
    Assume I have a csv.DictReader object and I want to write it out as a CSV file. How can I do this? I thought of the following:

        dr = csv.DictReader(open(f), delimiter='\t')
        # process my dr object
        # ...
        # write out object
        output = csv.DictWriter(open(f2, 'w'), delimiter='\t')
        for item in dr:
            output.writerow(item)

    Is that the best way? More importantly, how can I make it so that a header is written out too, in this case from the dr object's .fieldnames property? Thanks.
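
    A minimal sketch of one way to do it: hand the reader's fieldnames to DictWriter and emit the header row first (writeheader() exists on csv.DictWriter from Python 2.7 onward; on older versions the equivalent is writing a row that maps each field name to itself):

        import csv

        with open(f) as src, open(f2, 'w') as dst:
            dr = csv.DictReader(src, delimiter='\t')
            output = csv.DictWriter(dst, fieldnames=dr.fieldnames, delimiter='\t')
            output.writeheader()          # header row taken from dr.fieldnames
            for item in dr:
                output.writerow(item)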

  • Why does double.Parse throw an error on the live server, and how do I track it down?

    - by Kovu
    Hi, I built a website that:

        reads data from a website via HttpWebRequest
        sorts all the data
        parses values from the data and outputs them anew

    On the local server it works perfectly, but when I push it to my live server, double.Parse fails with an error. So: how can I track down what double.Parse is trying to parse, and how can I debug the live server? The language is ASP.NET / C# (.NET 2.0).

  • Perl - Read XML

    - by chinna_82
    XML:

        <?xml version='1.0'?>
        <employee>
            <name>Smith</name>
            <age>43</age>
            <sex>M</sex>
            <department role='manager'>Operations</department>
        </employee>

    Perl:

        use XML::Simple;
        use Data::Dumper;

        $xml = new XML::Simple;
        foreach my $data1 ($data = $xml->XMLin("test.xml")) {
            print Dumper($data1);
        }

    The above code manages to dump all the XML values, like this:

        $VAR1 = {
            'department' => {
                'content' => 'Operations',
                'role' => 'manager'
            },
            'name' => 'John Doe',
            'sex' => 'M',
            'age' => '43'
        };

    How do I do it if I only want to get the role value? For this example I need to get role = manager. Any advice or reference link is highly appreciated.

  • How to parse a custom XML-style error code response from a website

    - by user1870127
    I'm developing a program that queries and prints out open data from the local transit authority, which is returned in the form of an XML response. Normally, when there are buses scheduled to run in the next few hours (and in other typical situations), the XML response generated by the page is handled correctly by the java.net.URLConnection.getInputStream() function, and I am able to print the individual results afterwards. The problem is when the buses are NOT running, or when some other problem with my queries develops after a query is sent to the transit authority's web server. When the authority developed their service, they came up with their own unique error response codes, which are also sent as XML. For example, one of these error messages might look like this:

        <Error xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
            <Code>3005</Code>
            <Message>Sorry, no stop estimates found for given values.</Message>
        </Error>

    (This code and similar is all that I receive from the transit authority in such situations.) However, it appears that URLConnection.getInputStream() and some of its siblings are unable to interpret this custom code as a "valid" response that I can handle and print out as an error message. Instead, they give me a more generic HTTP/1.1 404 Not Found error. This problem cascades into my program, which then prints out a java.io.FileNotFoundException error pointing to the offending input stream. My question is therefore two-fold:

        1. Is there a way to retrieve, parse, and print a custom XML-formatted error code sent by a web service using the plugins that are available in Java?
        2. If the above is not possible, what other tools should I use or develop to handle such custom codes as described?

  • How to parse a string into a nullable int in C# (.NET 3.5)

    - by Glenn Slaven
    I'm wanting to parse a string into a nullable int in C#, i.e. I want to get back either the int value of the string or null if it can't be parsed. I was kind of hoping that this would work:

        int? val = stringVal as int?;

    But that won't work, so the way I'm doing it now is with this extension method I've written:

        public static int? ParseNullableInt(this string value)
        {
            if (value == null || value.Trim() == string.Empty)
            {
                return null;
            }
            else
            {
                try
                {
                    return int.Parse(value);
                }
                catch
                {
                    return null;
                }
            }
        }

    Is there a better way of doing this? EDIT: Thanks for the TryParse suggestions; I did know about that, but it worked out about the same. I'm more interested in knowing whether there is a built-in framework method that will parse directly into a nullable int.

  • translating play in HTML to python

    - by aharon
    So, I'd like to represent one of Shakespeare's plays, Hamlet, with the following objects (maybe this isn't the best representation; if so, please tell me):

        class Play():
            acts = []
            ...
            def add_act(self, act):
                acts.append(act)

        class Act():
            scenes = []
            ...
            def add_scene(self, scene):
                scenes.append(scene)

        class Scene():
            elems = []
            def __init__(self, title, setting=""):
                ...
            def add_elem(self, elem):
                elems.append(elem)
            ...

        class StageDirection():  # elem
            def __init__(self, text):
                ...

        class Line():  # elem
            def __init__(self, id, text, character=None):
                ...
                # A None character represents a continuation from the previous line
                # id could be, for example, 1.1.1

    There are other methods, of course, for printing and such in each of the classes. The question is, how do I get a structure based on these classes (or something like them) from HTML 4 code that looks like this:

        <H3>ACT I</h3>
        <h3>SCENE I. Elsinore. A platform before the castle.</h3>
        <p><blockquote>
        <i>FRANCISCO at his post. Enter to him BERNARDO</i>
        </blockquote>
        <A NAME=speech1><b>BERNARDO</b></a>
        <blockquote>
        <A NAME=1.1.1>Who's there?</A><br>
        </blockquote>
        <A NAME=speech2><b>FRANCISCO</b></a>
        <blockquote>
        <A NAME=1.1.2>Nay, answer me: stand, and unfold yourself.</A><br>
        </blockquote>
        <A NAME=speech3><b>BERNARDO</b></a>
        <blockquote>
        <A NAME=1.1.3>Long live the king!</A><br>
        </blockquote>
        <A NAME=speech4><b>FRANCISCO</b></a>
        <blockquote>
        <A NAME=1.1.4>Bernardo?</A><br>
        </blockquote>
        <A NAME=speech5><b>BERNARDO</b></a>
        <blockquote>
        <A NAME=1.1.5>He.</A><br>
        </blockquote>
        <!-- for more, see the source of shakespeare.mit.edu/hamlet/full.html -->

    translating that into something like this:

        play = Play()
        actI = Act()
        sceneI = Scene("Scene I", "Elsinore. A platform before the castle.")
        sceneI.add_elem(StageDirection("Francisco at his post. Enter to him Bernardo."))
        sceneI.add_elem(Line("Bernardo", "Who's there?"))
        ...

    Of course, I don't expect all the code, but what libraries and, when there aren't libraries, what logic should I use? Thanks. (This is for a future open-source project and me learning Python for fun, not homework.)
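
    On the library question: the page is old, tag-soup HTML, so the main requirement is a parser that tolerates unquoted attributes. A rough sketch of the scraping side, assuming BeautifulSoup (bs4) is an acceptable dependency; it keys off the <a name="speechN"> anchors for speakers and the <a name="act.scene.line"> anchors for lines, as in the sample above:

        import re
        from bs4 import BeautifulSoup

        def parse_speeches(html):
            soup = BeautifulSoup(html, "html.parser")
            lines = []
            current_speaker = None
            for anchor in soup.find_all("a", attrs={"name": True}):
                name = anchor["name"]
                if name.startswith("speech"):
                    # speaker headings look like <a name="speech1"><b>BERNARDO</b></a>
                    current_speaker = anchor.get_text(strip=True)
                elif re.match(r"\d+\.\d+\.\d+$", name):
                    # dialogue lines look like <a name="1.1.1">Who's there?</a>;
                    # each tuple maps onto Line(id, text, character) above
                    lines.append((name, anchor.get_text(strip=True), current_speaker))
            return lines

    Acts and scenes could be split out the same way from the <h3> headings before handing each chunk to parse_speeches; the stdlib HTMLParser class would also work, it just means writing the tag-tracking by hand.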

  • How do I get 3 lines of text from a paragraph in C#

    - by Keltex
    I'm trying to create a "snippet" from a paragraph. I have a long paragraph of text with a word highlighted in the middle. I want to get the line containing the word, plus the line before that line and the line after that line. I have the following pieces of information:

        The text (in a string)
        The lines are delimited by a NEWLINE character \n
        I have the index into the string of the text I want to highlight

    A couple of other criteria:

        If my word falls on the first line of the paragraph, it should show the first 3 lines
        If my word falls on the last line of the paragraph, it should show the last 3 lines
        It should show the entire paragraph in the degenerate cases (the paragraph only has 1 or 2 lines)

    Here's an example:

        This is the 1st line of CAT text in the paragraph
        This is the 2nd line of BIRD text in the paragraph
        This is the 3rd line of MOUSE text in the paragraph
        This is the 4th line of DOG text in the paragraph
        This is the 5th line of RABBIT text in the paragraph

    For example, if my index points to BIRD, it should show lines 1, 2, & 3 as one complete string like this:

        This is the 1st line of CAT text in the paragraph
        This is the 2nd line of BIRD text in the paragraph
        This is the 3rd line of MOUSE text in the paragraph

    If my index points to DOG, it should show lines 3, 4, & 5 as one complete string like this:

        This is the 3rd line of MOUSE text in the paragraph
        This is the 4th line of DOG text in the paragraph
        This is the 5th line of RABBIT text in the paragraph

    etc. Anybody want to help tackle this?
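
    The question is about C#, but the windowing logic itself is language-agnostic; a quick sketch of the clamping described above (Python, hypothetical helper name):

        def three_line_snippet(text, index):
            """Return the newline-delimited line containing index plus its neighbours,
            clamped so the three-line window stays inside the paragraph."""
            lines = text.split("\n")
            pos = 0
            for i, line in enumerate(lines):
                if index < pos + len(line) + 1:   # +1 for the "\n" removed by split
                    break
                pos += len(line) + 1
            start = max(0, min(i - 1, len(lines) - 3))
            return "\n".join(lines[start:start + 3])

    With the five-line example, an index inside BIRD gives lines 1-3 and an index inside DOG gives lines 3-5; a one- or two-line paragraph comes back whole because the slice simply runs out.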

  • Lexing partial SQL in C#

    - by Chris T
    I need to parse partial SQL queries (it's for a SQL injection auditing tool). For example:

        '1' AND 1=1--

    should break down into tokens like:

        [0] => [SQL_STRING, '1']
        [1] => [SQL_AND]
        [2] => [SQL_INT, 1]
        [3] => [SQL_AND]
        [4] => [SQL_INT, 1]
        [5] => [SQL_COMMENT]
        [6] => [SQL_QUERY_END]

    Are there at least any existing lexers for SQL that I could base mine off of, or any good tools like bison for C#? (Though I'd rather not write my own grammar, as I need to support most if not all of the grammar of MySQL 5.)
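
    Just to make the token stream concrete, here is a rough sketch of a regex-driven lexer for the fragment shown above; it is nowhere near the full MySQL 5 grammar, the token names are made up, and the point is only the shape of the approach (Python for brevity):

        import re

        TOKEN_SPEC = [
            ("SQL_STRING",  r"'(?:[^'\\]|\\.)*'"),   # single-quoted string
            ("SQL_COMMENT", r"--[^\n]*"),             # -- comment to end of line
            ("SQL_AND",     r"\bAND\b|&&"),
            ("SQL_EQ",      r"="),
            ("SQL_INT",     r"\d+"),
            ("SQL_WS",      r"\s+"),
        ]
        MASTER = re.compile("|".join("(?P<%s>%s)" % p for p in TOKEN_SPEC), re.IGNORECASE)

        def lex(sql):
            for m in MASTER.finditer(sql):
                if m.lastgroup != "SQL_WS":
                    yield (m.lastgroup, m.group())

        print(list(lex("'1' AND 1=1--")))

    For a serious C# implementation, a generated lexer scales better than growing a regex table by hand; ANTLR does have a C# target.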

  • Using Python's ConfigParser to read a file without section name

    - by Arrieta
    Hello: I am using ConfigParser to read the runtime configuration of a script. I would like to have the flexibility of not providing a section name (there are scripts which are simple enough; they don't need a 'section'). ConfigParser will throw the NoSectionError exception and will not accept the file. How can I make ConfigParser simply retrieve the (key, value) tuples of a config file without section names? For instance:

        key1=val1
        key2:val2

    I would rather not write to the config file.
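
    A common workaround is to prepend a dummy section header to the file contents in memory and hand that to ConfigParser, which leaves the file on disk untouched; a minimal sketch (Python 2 module names, and the section name is arbitrary):

        import ConfigParser
        import StringIO

        def read_sectionless(path, fake_section='defaults'):
            # Wrap the raw file in an invented [defaults] section so ConfigParser accepts it.
            with open(path) as fh:
                body = '[%s]\n%s' % (fake_section, fh.read())
            parser = ConfigParser.ConfigParser()
            parser.readfp(StringIO.StringIO(body))
            return dict(parser.items(fake_section))

        options = read_sectionless('script.cfg')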

  • Code editor with autocomplete

    - by Andrey
    I need to create a code editor for my own simple language:

        className.MethodName(parameterName = 2, ... )

    I've created the appropriate grammar and generated a parser for it using the ANTLR tool. Now I would like to have autocomplete for class, method, variable and parameter names. This list should be context dependent; e.g. for "class." it should display methods, and for "class.Method(" it should display parameters. I was going to parse the text and display the list depending on which node the cursor is in. The problem is that for incomplete code like "aaa.bbb(" the parser produces an error instead of a syntax tree. Any idea how to solve this problem? Maybe I'm on the wrong track and I shouldn't parse code to display autocomplete?

  • Changing href atributes with nokogiri and ruby on rails

    - by fool
    Hi, I have an HTML document with links, for example:

        <html>
        <body>
            <ul>
                <li><a href="http://someurl.com/etc/etc">teste1</a></li>
                <li><a href="http://someurl.com/etc/etc">teste2</a></li>
                <li><a href="http://someurl.com/etc/etc">teste3</a></li>
            <ul>
        </body>
        </html>

    With Ruby on Rails, using Nokogiri or some other method, I want to end up with a final doc like this:

        <html>
        <body>
            <ul>
                <li><a href="http://myproxy.com/?url=http://someurl.com/etc/etc">teste1</a></li>
                <li><a href="http://myproxy.com/?url=http://someurl.com/etc/etc">teste2</a></li>
                <li><a href="http://myproxy.com/?url=http://someurl.com/etc/etc">teste3</a></li>
            <ul>
        </body>
        </html>

    What's the best strategy to achieve this?

  • Unable to Parse Date using NSDateFormatter

    - by Ansari
    Hi, I am fetching an RSS feed in which I receive the following date stamp:

        2010-05-10T06:11:14.000Z

    Now I am using NSDateFormatter to parse this datetime stamp:

        [parseFormatter setDateFormat:@"yyyy-MM-dTH:m:s.z"];

    But it's not working. If I just remove the time stamp part, it works for the date:

        [parseFormatter setDateFormat:@"yyyy-MM-d"];

    But if I add the rest of the stuff it returns nil. Any idea? Thanks in advance.

  • How can I parse a C header file with Perl?

    - by Alphaneo
    Hi, I have a header file in which there is a large struct. I need to read this structure using some program, perform some operations on each member of the structure, and write them back. For example, I have a structure like:

        const BYTE Some_Idx[] = {
            4,7,10,15,17,19,24,29,
            31,32,35,45,49,51,52,54,
            55,58,60,64,65,66,67,69,
            70,72,76,77,81,82,83,85,
            88,93,94,95,97,99,102,103,
            105,106,113,115,122,124,125,126,
            129,131,137,139,140,149,151,152,
            153,155,158,159,160,163,165,169,
            174,175,181,182,183,189,190,193,
            197,201,204,206,208,210,211,212,
            213,214,215,217,218,219,220,223,
            225,228,230,234,236,237,240,241,
            242,247,249};

    Now, I need to read this, apply some operation on each of the member values, and create a new structure with a different order, something like:

        const BYTE Some_Idx_Mod_mul_2[] = {
            8,14,20,
            ...
            ...
            484,494,498};

    Is there any Perl library already available for this? If not Perl, something else like Python is also OK. Can somebody please help!!!
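
    If the layout is as regular as the sample, a plain regex pass is usually enough and avoids a full C parser; a rough Python sketch under that assumption (the array name and the doubling operation are just the example from above):

        import re

        def transform_header(source, op=lambda n: n * 2,
                             old_name="Some_Idx", new_name="Some_Idx_Mod_mul_2"):
            # Find "const BYTE Some_Idx[] = { ... };", apply op to every number,
            # and emit a new initializer under the new name.
            pattern = re.compile(r"const\s+BYTE\s+%s\[\]\s*=\s*\{([^}]*)\}\s*;" % re.escape(old_name))
            m = pattern.search(source)
            if m is None:
                raise ValueError("array %s not found" % old_name)
            values = [op(int(n)) for n in re.findall(r"\d+", m.group(1))]
            return "const BYTE %s[] = {%s};\n" % (new_name, ",".join(str(v) for v in values))

        with open("some_header.h") as fh:
            print(transform_header(fh.read()))

    For anything fancier than flat numeric initializers (nested structs, macros, expressions), a real C-parsing library such as Convert::Binary::C on the Perl side or pycparser on the Python side is the safer route.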

  • How to use Wordpress' http.php in external projects?

    - by NJTechGuy
    I am trying to parse data from a pipe-delimited text file hosted on another server, which in turn will be inserted into a database. My host (1and1) disabled allow_url_fopen in php.ini, I guess. Error message:

        Warning: fopen() [function.fopen]: URL file-access is disabled in the server configuration in

    Code:

        <?
        // make sure curl is installed
        if (function_exists('curl_init')) {
            // initialize a new curl resource
            $ch = curl_init();
            // set the url to fetch
            curl_setopt($ch, CURLOPT_URL, 'http://abc.com/data/output.txt');
            // don't give me the headers just the content
            curl_setopt($ch, CURLOPT_HEADER, 0);
            // return the value instead of printing the response to browser
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            // use a user agent to mimic a browser
            curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0');
            $content = curl_exec($ch);
            // remember to always close the session and free all resources
            curl_close($ch);
        } else {
            // curl library is not installed so we better use something else
        }
        //$contents = fread ($fd,filesize ($filename));
        //fclose ($fd);
        $delimiter = "|";
        $splitcontents = explode($delimiter, $contents);
        $counter = "";
        ?>
        <font color="blue" face="arial" size="4">Complete File Contents</font>
        <hr>
        <? echo $contents; ?>
        <br><br>
        <font color="blue" face="arial" size="4">Split File Contents</font>
        <hr>
        <?
        foreach ( $splitcontents as $color ) {
            $counter = $counter+1;
            echo "<b>Split $counter: </b> $colorn<br>";
        }
        ?>

    Wordpress has this cool http.php file. Is there a better way of doing it? If not, how do I use http.php for this task? Thank you guys..

  • Jquery to find a name on html page and add hyperlink

    - by mikejones12
    Here is my example. I have a website that contains the following:

        <body>
        Jim Nebraska zipcode 65437
        Tony lives in California his zipcode is 98708
        </body>

    I would like to be able to search for zip codes on the page and wrap them with hyperlinks, like:

        <body>
        Jim Nebraska zipcode <a href="/65437.htm">65437</a>
        Tony lives in California his zipcode is <a href="/65437.htm">98708</a>
        </body>

    Could I use a regex selector to find the string and then wrap the string, or replace it with the new hyperlink? I am new to jQuery and looking for someone to point me in the right direction. Thank you, Mike

  • Parsec Haskell Lists

    - by Martin
    I'm using Text.ParserCombinators.Parsec and Text.XHtml to parse an input and produce HTML output. If my input is:

        * First item, First level
        ** First item, Second level
        ** Second item, Second level
        * Second item, First level

    my output should be:

        <ul><li>First item, First level
        <ul><li>First item, Second level
        </li><li>Second item, Second level
        </li></ul></li><li>Second item, First level</li></ul>

    I wrote this, but obviously it does not work recursively:

        list = do { s <- many1 item; return (olist << s) }

        item = do { (count 1 (char '*'))
                  ; s <- manyTill anyChar newline
                  ; return (li << s) }

    Any ideas? The recursion can be more than two levels. Thanks!

  • Recognize Dates In A String

    - by Tim Scott
    I want a class something like this:

        public interface IDateRecognizer
        {
            DateTime[] Recognize(string s);
        }

    The dates might exist anywhere in the string and might be in any format. For now, I could limit it to U.S. culture formats. The dates would not be delimited in any way. They might have arbitrary amounts of whitespace between parts of the date. The ideas I have are:

        ANTLR
        Regex
        Hand rolled

    I have never used ANTLR, so I would be learning from scratch. I wonder if there are libraries or code samples out there that do something similar that could jump start me. Is ANTLR too heavy for such a narrow use? I have used Regex a lot before, but I hate it for all the reasons that most people hate it. I could certainly hand roll it, but I'd rather not re-solve a solved problem. Suggestions?

    UPDATE: Here is an example. Given this input:

        This is a date 11/3/63. Here is another one: November 03, 1963; and another one Nov 03, 63 and some more (11/03/1963). The dates could be in any U.S. format. They might have dashes like 11-2-1963 or weird extra whitespace inside like this: Nov   3,   1963, and even maybe the comma is missing like [Nov 3 63] but that's an edge case.

    The output should be an array of seven DateTimes. Each date would be the same: 11/03/1963 00:00:00.
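
    The question targets C#, but the candidate-matching step of the regex route looks much the same in any engine; a rough sketch covering only the formats mentioned above (Python for brevity; converting the matched strings into DateTime values would be a second pass):

        import re

        MONTH = (r"(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|"
                 r"Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)")

        DATE_RE = re.compile(
            r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"              # 11/3/63, 11-2-1963, 11/03/1963
            r"|\b" + MONTH + r"\s+\d{1,2}\s*,?\s+\d{2,4}\b",  # November 03, 1963 / Nov 3 63
            re.IGNORECASE)

        def find_date_strings(s):
            # Collapse whitespace runs first so "Nov   3,   1963" still matches.
            return DATE_RE.findall(re.sub(r"\s+", " ", s))

    On the example paragraph this returns the seven raw substrings; per-format parsing (and the two-digit-year rule) decides what DateTime each one becomes.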
