Search Results

Search found 4962 results on 199 pages for 'parse'.

  • problem parsing with XMLReader (using ReadSubTree)

    - by no9
    Hello. I'm trying to build a simple XML-to-Controls parser in my CF application. The string I'm trying to parse looks like this: "<Panel><Label>Text1</Label><Label>Text2</Label></Panel>". The result I want from this code is a Panel containing two Labels. The problem is that when the first Label is parsed, subReader.Read() returns false in the ParsePanelElement method, so it falls out of the while loop. Since I'm new to XmlReader I must be missing something very simple. Any help would be appreciated! peace.

        static class XMLParser
        {
            public static Control Parse(string aXmlString)
            {
                XmlReader reader = XmlReader.Create(new StringReader(aXmlString));
                return ParseXML(reader);
            }

            public static Control ParseXML(XmlReader reader)
            {
                using (reader)
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType == XmlNodeType.Element)
                        {
                            if (reader.LocalName == "Panel")
                            {
                                return ParsePanelElement(reader);
                            }
                            if (reader.LocalName == "Label")
                            {
                                return ParseLabelElement(reader);
                            }
                        }
                    }
                }
                return null;
            }

            private static Control ParsePanelElement(XmlReader reader)
            {
                var myPanel = new Panel();
                XmlReader subReader = reader.ReadSubtree();
                while (subReader.Read())
                {
                    Control subControl = ParseXML(subReader);
                    if (subControl != null)
                    {
                        myPanel.Controls.Add(subControl);
                    }
                }
                return myPanel;
            }

            private static Control ParseLabelElement(XmlReader reader)
            {
                reader.Read();
                var myString = reader.Value as string;
                var myLabel = new Label();
                myLabel.Text = myString;
                return myLabel;
            }
        }

  • Django upload failing on request data read error

    - by Jake
    Hi All, I've got a Django app that accepts uploads from jQuery Uploadify, a jQuery plugin that uses Flash to upload files and show a progress bar. Files under about 150k work, but bigger files always fail, almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The exception I get is below.

        exceptions.IOError: request data read error

        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
          self._load_post_and_files()
        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
          self._post, self._files = self.parse_file_upload(self.META, self.environ['wsgi.input'])
        File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
          return parser.parse()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
          for chunk in field_stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
          output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
          for bytes in stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
          output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
          data = self.flo.read(self.chunk_size)
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
          return self._file.read(num_bytes)

    When running locally on the Django development server, big files work. I've tried setting FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",) in case the memory upload handler was the problem, but it made no difference. Does anyone know how to fix this?

  • Writing XML in different character encodings with Java

    - by Roman Myers
    I am attempting to write an XML library file that can be read back into my program. The file-writer code is as follows:

        XMLBuilder builder = new XMLBuilder();
        Document doc = builder.build(bookList);
        DOMImplementation impl = doc.getImplementation();
        DOMImplementationLS implLS = (DOMImplementationLS) impl.getFeature("LS", "3.0");
        LSSerializer ser = implLS.createLSSerializer();
        String out = ser.writeToString(doc);
        //System.out.println(out);
        try {
            FileWriter fstream = new FileWriter(location);
            BufferedWriter outwrite = new BufferedWriter(fstream);
            outwrite.write(out);
            outwrite.close();
        } catch (Exception e) {
        }

    The above code does write an XML document. However, the XML header declares that the file is encoded in UTF-16. When I read the file back in, I get the error "content not allowed in prolog". This error does not occur when the encoding attribute is manually changed to UTF-8. I am trying to get the above code either to write an XML document encoded in UTF-8, or to successfully parse a UTF-16 file. The parsing code is:

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder loader = factory.newDocumentBuilder();
        Document document = loader.parse(filename);

    The last line throws the error.

  • Python minidom and UTF-8 encoded XML with hash references

    - by Jakob Simon-Gaarde
    Hi, I am experiencing some difficulty in my home project, where I need to parse a SOAP request. The SOAP is generated with gSOAP and involves string parameters with special characters like the Danish letters "æøå". gSOAP builds SOAP requests with UTF-8 encoding by default, but instead of sending the special characters in raw form (i.e. the bytes C3 A6 for the character "æ") it sends what I think are called numeric character references (i.e. &#195;&#166;). I don't completely understand why gSOAP does it this way, as I can see that it has marked the incoming payload as UTF-8 encoded anyway (Content-Type: text/xml; charset=utf-8), but this is beside the question (I think). Anyway, I guess gSOAP is probably obeying transport rules, or what? When I parse the request from gSOAP in Python with xml.dom.minidom.parseString(), I get element values as unicode objects, which is fine, but the character references are not decoded as UTF-8 byte sequences. minidom unescapes the character references, but does not decode the resulting string afterwards. In the end I have a unicode string object holding UTF-8 byte values. So if the string "æble" is contained in the XML, it arrives like this in the request: "&#195;&#166;ble". After parsing the XML, the unicode string in the DOM Text node's data member looks like this: u'\xc3\xa6ble'. I would expect it to look like this: u'\xe6ble'. What am I doing wrong? Should I unescape the SOAP XML before parsing it, or is the solution somewhere else, maybe in gSOAP? Thanks in advance. Best regards, Jakob Simon-Gaarde
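
    A quick way to see, and undo, the mis-layering described above: the numeric character references name the individual UTF-8 bytes, so after minidom expands them, the resulting code points can be re-read as single bytes and decoded once more. A minimal sketch of that diagnosis (assuming the sender really does emit references to raw UTF-8 bytes; this is a workaround, not a claim about what gSOAP should send):

        from xml.dom.minidom import parseString

        xml = '<word>&#195;&#166;ble</word>'  # stand-in for the gSOAP payload
        text = parseString(xml).documentElement.firstChild.data
        print(repr(text))   # u'\xc3\xa6ble' -- code points that are really UTF-8 bytes

        # Re-interpret each code point as a single byte, then decode the bytes as UTF-8.
        fixed = text.encode('latin-1').decode('utf-8')
        print(repr(fixed))  # u'\xe6ble', i.e. 'æble'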

  • PDF Form Field Manipulation

    - by 108039818756939362532
    I'm making a web interface to autofill PDF forms with user data from a database. The admin needs to be able to upload a PDF (right now targeted at IRS PDF forms) and then associate the fields in the PDF with data fields in the database. I need a way to help the admin associate the field names (stuff like "topmostSubform[0].Page2[0].p2-t66[0]") with the data fields in the database, so I'm looking for a way to modify the PDF programmatically to surface this information. Basically, I'm open to suggestions on how I might make the field names appear in an obvious manner on a modified version of the original PDF. The closest I've gotten is being able to insert tooltips into the fields by editing the raw PDF line by line. However, when editing the PDF in this manner the field names are gibberish, so I can't just use them. An optimal solution would be anything that could automatically parse a PDF and set each field's tooltip to be the field's name. Anything that can be run from the command line, any Python tool, or just a basic how-to on correctly parsing a field's name from a raw PDF file would be amazing.
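
    Since the post explicitly invites a Python tool, here is a hedged sketch using the pypdf library (my assumption; any library that exposes the AcroForm annotation dictionaries would do). In PDF terms, a field's tooltip is the /TU entry of its field dictionary and its partial name is /T, so the sketch copies one onto the other. One caveat: fully qualified names like topmostSubform[0].Page2[0].p2-t66[0] are assembled from the parent chain, so copying only /T surfaces each field's partial name.

        # Sketch: set every form field's tooltip (/TU) to its partial name (/T).
        from pypdf import PdfReader, PdfWriter
        from pypdf.generic import NameObject, TextStringObject

        reader = PdfReader("f1040.pdf")   # hypothetical input form
        writer = PdfWriter()
        writer.append(reader)

        for page in writer.pages:
            for ref in page.get("/Annots") or []:
                annot = ref.get_object()
                if "/T" in annot:         # annotation is a form field widget
                    annot[NameObject("/TU")] = TextStringObject(str(annot["/T"]))

        with open("f1040-labeled.pdf", "wb") as f:
            writer.write(f)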

  • A doubt on DOM parser used with Python

    - by fixxxer
    I'm using the following Python code to search for a node in an XML file and change the value of an attribute of one of its children. The change happens correctly when the node is displayed using toxml(), but when the document is written to a file, the attributes rearrange themselves (as seen in the source and final XML below). Could anyone explain how and why this happens?

    Python code:

        #!/usr/bin/env python
        import xml
        from xml.dom.minidom import parse

        dom = parse("max.xml")
        #print "Please enter the store name:"
        for sku in dom.getElementsByTagName("node"):
            if sku.getAttribute("name") == "store":
                sku.childNodes[1].childNodes[5].setAttribute("value", "Delhi,India")
                print sku.toxml()
        xml.dom.ext.PrettyPrint(dom, open("new.xml", "w"))

    A part of the source XML:

        <node name='store' node_id='515' module='mpx.lib.node.simple_value.SimpleValue'
              config_builder='' inherant='false' description='Configurable Value'>
          <match>
            <property name='1' value='point'/>
            <property name='2' value='0'/>
            <property name='val' value='Store# 09204 Staten Island, NY'/>
            <property name='3' value='str'/>
          </match>
        </node>

    Final XML:

        <node config_builder="" description="Configurable Value" inherant="false"
              module="mpx.lib.node.simple_value.SimpleValue" name="store" node_id="515">
          <match>
            <property name="1" value="point"/>
            <property name="2" value="0"/>
            <property name="val" value="Delhi,India"/>
            <property name="3" value="str"/>
          </match>
        </node>
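
    The rearrangement almost certainly happens in the serializer, not in the edit: attribute order is not significant in XML, so a serializer is free to emit attributes in any order, and the final XML above is plainly alphabetized (config_builder, description, inherant, ...), which points at PyXML's printer. A sketch of the same write done with the standard library instead of xml.dom.ext.PrettyPrint (with the caveat that minidom's own writexml also sorted attribute names before Python 3.8, so order preservation here is an assumption about newer interpreters):

        from xml.dom.minidom import parse

        dom = parse("max.xml")
        for sku in dom.getElementsByTagName("node"):
            if sku.getAttribute("name") == "store":
                sku.childNodes[1].childNodes[5].setAttribute("value", "Delhi,India")

        # Serialize with stdlib minidom instead of PyXML's PrettyPrint.
        with open("new.xml", "w") as f:
            dom.writexml(f, addindent="  ", newl="\n")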

  • jQuery Autocomplete & jTemplates - handling response

    - by Diegos Grace
    Has anyone had any experience with using jTemplates to display autocomplete results? I have the following:

        $("#address-search").autocomplete({
            source: "/Address/SearchAddress",
            minLength: 2,
            delay: 400,
            focus: function (event, ui) {
                $('#address-search').val(ui.item.name);
                return false;
            },
            parse: function (data) {
                $("#autocomplete-results").setTemplate($("#templateHolder").html());
                $("#autocomplete-results").processTemplate(data);
            },
            select: function (event, ui) {
                $('#address-search').val(ui.item.name);
                $('#search-address-id').val(ui.item.id);
                $('#search-description').html(ui.item.address);
            }
        });

    and the simple jTemplates holder:

        <script type="text/html" id="templateHolder">
          <ul class="autocomplete">
            {#foreach $T as data}
              <li>{$T.name}</li>
            {#/for}
          </ul>
        </script>

    Above, I'm using 'parse' to format the results; I've also tried the autocomplete result method, but I'm not having any luck so far. The only success I've had is with the private method ._renderItem, formatting the data that way, but we want to render the output using the jTemplate. Any advice appreciated.

  • Appending and prepending to XML files with Clojure

    - by Isaac Copper
    I have an XML file with a format similar to:

        <root>
          <baby>
            <a>stuff</a>
            <b>stuff</b>
            <c>stuff</c>
          </baby>
          ...
          <baby>
            <a>stuff</a>
            <b>stuff</b>
            <c>stuff</c>
          </baby>
        </root>

    and a Clojure hash-map similar to:

        {:a "More stuff" :b "Some other stuff" :c "Yet more of that stuff"}

    I'd like to prepend XML created from this hash-map after the <root> tag and before the first <baby>. The XML to prepend would be like:

        <baby>
          <a>More stuff</a>
          <b>Some other stuff</b>
          <c>Yet more of that stuff</c>
        </baby>

    I'd also like to be able to delete the last one (or n...) <baby>...</baby> elements from the file. I'm struggling to come up with an idiomatic way to prepend and append this data. I can do raw string manipulation, or parse the XML using xml/parse and xml-seq and then roll through the nodes and (somehow?) replace the data there, but that seems messy. Any tips? Ideas? Hints? Pointers? They'd all be much appreciated. Thank you!

  • How can I remove certain characters from inside angle-brackets, leaving the characters outside alone

    - by Iain Fraser
    Edit: To be clear, please understand that I am not using regex to parse the HTML, that's crazy talk! I'm simply wanting to clean up a messy string of HTML so it will parse.

    Edit #2: I should also point out that the control character I'm using is a special Unicode character - it's not something that would ever be used in a proper tag under any normal circumstances.

    Suppose I have a string of HTML that contains a bunch of control characters, and I want to remove the control characters from inside tags only, leaving the characters outside the tags alone. In the example below, the control character is the numeral "1".

    Input:

        The quick 1<strong>orange</strong> lemming <sp11a1n 1class1='jumpe111r'11>jumps over</span> 1the idle 1frog

    Desired output:

        The quick 1<strong>orange</strong> lemming <span class='jumper'>jumps over</span> 1the idle 1frog

    So far I can match tags which contain the control character, but I can't remove the characters in one regex. I guess I could perform another regex on my matches, but I'd really like to know if there's a better way. My regex (bear in mind this one only matches tags which contain the control character):

        <(([^>])*?`([^>])*?)*?>

    Thanks very much for your time and consideration. Iain Fraser
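
    One way to fold the "another regex on my matches" step into a single pass is to match each tag as a whole and strip the control character inside the match with a replacement callback, so text outside tags is never touched. A sketch in Python (the post doesn't name a language; the idea ports to any engine that supports replacement callbacks, and "1" stands in for the real control character as in the example above):

        import re

        CTRL = "1"  # stand-in for the special Unicode control character
        text = ("The quick 1<strong>orange</strong> lemming "
                "<sp11a1n 1class1='jumpe111r'11>jumps over</span> 1the idle 1frog")

        # Match whole tags, then delete the control character only inside each match.
        cleaned = re.sub(r"<[^>]*>", lambda m: m.group(0).replace(CTRL, ""), text)
        print(cleaned)
        # The quick 1<strong>orange</strong> lemming <span class='jumper'>jumps over</span> 1the idle 1frog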

  • reading csv files in scipy/numpy in Python

    - by user248237
    I am having trouble reading a CSV file, delimited by tabs, in Python. I use the following function:

        from numpy import genfromtxt, array

        def csv2array(filename, skiprows=0, delimiter='\t', raw_header=False,
                      missing=None, with_header=True):
            """
            Parse a file name into an array. Return the array and additional
            header lines. By default, parse the header lines into dictionaries,
            assuming the parameters are numeric, using 'parse_header'.
            """
            f = open(filename, 'r')
            skipped_rows = []
            for n in range(skiprows):
                header_line = f.readline().strip()
                if raw_header:
                    skipped_rows.append(header_line)
                else:
                    skipped_rows.append(parse_header(header_line))
            f.close()
            if missing:
                data = genfromtxt(filename, dtype=None, names=with_header,
                                  deletechars='', skiprows=skiprows, missing=missing)
            else:
                if delimiter != '\t':
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      delimiter=delimiter, deletechars='',
                                      skiprows=skiprows)
                else:
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      deletechars='', skiprows=skiprows)
            if data.ndim == 0:
                data = array([data.item()])
            return (data, skipped_rows)

    The problem is that genfromtxt complains about my files, e.g. with the error:

        Line #27100 (got 12 columns instead of 16)

    I am not sure where these errors come from. Any ideas? Here's an example file that causes the problem:

        #Gene 120-1 120-3 120-4 30-1 30-3 30-4 C-1 C-2 C-5 genesymbol genedesc
        ENSMUSG00000000001 7.32 9.5 7.76 7.24 11.35 8.83 6.67 11.35 7.12 Gnai3 guanine nucleotide binding protein alpha
        ENSMUSG00000000003 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Pbsn probasin

    Is there a better way to write a generic csv2array function? Thanks.
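
    The "got 12 columns instead of 16" complaint is genfromtxt reporting a ragged row: the first parsed line fixes the expected column count, and any later line that splits into a different number of fields aborts the parse (stray delimiters inside a field, or missing trailing fields, produce exactly this symptom). A small diagnostic sketch, independent of the function above, to locate the offending lines before handing the file to genfromtxt:

        from collections import Counter

        def field_counts(filename, delimiter='\t'):
            """Find lines whose field count deviates from the file's norm."""
            counts = Counter()
            per_line = []
            with open(filename) as f:
                for lineno, line in enumerate(f, start=1):
                    n = len(line.rstrip('\n').split(delimiter))
                    counts[n] += 1
                    per_line.append((lineno, n))
            normal, _ = counts.most_common(1)[0]
            return [(ln, n) for ln, n in per_line if n != normal]

        # Hypothetical usage: print every (line_number, n_fields) outlier.
        print(field_counts("expression.dat"))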

  • Comet, responseText and memory usage

    - by ithcy
    Is there a way to clear out the responseText of an XHR object without destroying the XHR object? I need to keep a persistent connection open to a web server to feed live data to a browser. The problem is that there is a relatively large amount of data coming through (several hundred K per second, constantly), so memory usage is a big problem, because this connection must remain open for at least several minutes. responseText gets very big very quickly, even though the JSON I send back has been crunched as small as it can get.

    Due to the way the server-side app works, if I use AJAX-style short polling and just destroy the XHR object when I'm done with it, I miss significant amounts of important data even in the few milliseconds it takes to parse the response, create a new XHR and send it out. I do not have the option to use overlapping requests, as the web server only accepts one connection at a time. (Don't ask.) So Comet is exactly the model I need.

    What I would like to do is parse each JSON chunk as it comes back from the server, and then clear out responseText so that I can keep using the same connection. However, responseText is read-only. It cannot be directly emptied by any method I have found. Is there a part of the picture I am missing here? Does anyone know any tricks I can use to free up responseText when I'm done reading it? Or is there another place the server responses can go? I am not including code because this is really almost a code-agnostic question. The JavaScript routines that spawn the XHRs and handle the returned data are very, very simple.

  • C#: Easy access to the member of a singleton ICollection<> ?

    - by Rosarch
    I have an ICollection that I know will only ever have one member. Currently, I loop through it, knowing the loop will only ever run once, to grab the value. Is there a cleaner way to do this? I could alter the persistentState object to return single values, but that would complicate the rest of the interface. It's grabbing data from XML, and for the most part ICollections are appropriate.

        // worldMapLinks ensured to be a singleton
        ICollection<IDictionary<string, string>> worldMapLinks =
            persistentState.GetAllOfType("worldMapLink");

        string levelName = "";
        //worldMapLinks.GetEnumerator().Current['filePath'];

        // this loop will only run once
        foreach (IDictionary<string, string> dict in worldMapLinks) // hacky hack hack hack
        {
            levelName = dict["filePath"];
        }

        // proceed with levelName
        loadLevel(levelName);

    Here is another example of the same issue:

        // meta will be a singleton
        ICollection<IDictionary<string, string>> meta =
            persistentState.GetAllOfType("meta");

        foreach (IDictionary<string, string> dict in meta) // this loop should only run once. HACKS.
        {
            currentLevelName = dict["name"];
            currentLevelCaption = dict["teaserCaption"];
        }

    Yet another example:

        private Vector2 startPositionOfKV(ICollection<IDictionary<string, string>> dicts)
        {
            Vector2 result = new Vector2();
            foreach (IDictionary<string, string> dict in dicts) // this loop will only ever run once
            {
                result.X = Single.Parse(dict["x"]);
                result.Y = Single.Parse(dict["y"]);
            }
            return result;
        }

  • multithreading issue

    - by vbNewbie
    I have written a multithreaded crawler, and the process is simply creating threads and having them access a list of URLs to crawl. They then access the URLs and parse the HTML content. All this seems to work fine. Now, it's when I need to write to tables in a database that I experience issues. I have two declared ArrayLists that will contain the content each thread parses. The first ArrayList simply holds the RSS feed links and the other contains the different posts. I then use a For Each loop to iterate over one while sequentially incrementing an index into the other, and write to the database. My problem is that each time a new thread accesses one of the lists, the content is changed, and this affects the iteration. I tried using nested loops but it did not work, and this works fine using a single thread. I hope this makes sense. Here is my code:

        SyncLock dlock
            For Each l As String In links
                finallinks.Add(l)
            Next
        End SyncLock

        SyncLock dlock
            For Each p As String In posts
                finalposts.Add(p)
            Next
        End SyncLock

        ...

        Dim i As Integer = 0
        SyncLock dlock
            For Each rsslink As String In finallinks
                postlink = finalposts.Item(i)
                i = i + 1

    finallinks and finalposts are the two ArrayLists. I did not include the rest of the code, which shows the threads working, but this is the essential part where my error occurs, basically here:

        postlink = finalposts.Item(i)
        i = i + 1

    ERROR: Index was out of range. Must be non-negative and less than the size of the collection.

    Is there an alternative?

  • In which header file does the Boost library define its own primitive data types?

    - by ronghai
    Recently, I tried to use the boost::spirit::qi binary endian parser to parse some binary data that depends on the endianness of the platform. Here is a simple example. Using declarations and variables:

        using boost::spirit::qi::little_word;
        using boost::spirit::qi::little_dword;
        using boost::spirit::qi::little_qword;

        boost::uint16_t us;
        boost::uint32_t ui;
        boost::uint64_t ul;

    Basic usage of the little-endian binary parsers:

        test_parser_attr("\x01\x02", little_word, us);
        assert(us == 0x0201);
        test_parser_attr("\x01\x02\x03\x04", little_dword, ui);
        assert(ui == 0x04030201);
        test_parser_attr("\x01\x02\x03\x04\x05\x06\x07\x08", little_qword, ul);
        assert(ul == 0x0807060504030201LL);

        test_parser("\x01\x02", little_word(0x0201));
        test_parser("\x01\x02\x03\x04", little_dword(0x04030201));
        test_parser("\x01\x02\x03\x04\x05\x06\x07\x08", little_qword(0x0807060504030201LL));

    It works very well. But my questions are: why do we need to use data types like boost::uint16_t and boost::uint32_t here? Can I use unsigned long or unsigned int instead? And if I want to parse a double or float data type, which Boost type should I use? And please tell me where Boost defines these types. Thanks a lot.

  • Get node content from xml file and transform it into php array?

    - by Kirzilla
    Hello, I'm looking for a solution to extract a node from a large XML file (using xmlstarlet, http://xmlstar.sourceforge.net/) and then parse this node into a PHP array.

    elements.xml:

        <?xml version="1.0"?>
        <elements>
          <element id="1" par1="val1_1" par2="val1_2" par3="val1_3">
            <title>element 1 title</title>
            <description>element 1 description</description>
          </element>
          <element id="2" par1="val2_1" par2="val2_2" par3="val2_3">
            <title>element 2 title</title>
            <description>element 2 description</description>
          </element>
        </elements>

    To extract the element tag with id="1" using xmlstarlet, I execute this shell command:

        xmlstarlet sel -t -c "/elements/element[@id=1]" elements.xml

    This shell command outputs something like this:

        <element id="1" par1="val1_1" par2="val1_2" par3="val1_3">
          <title>element 1 title</title>
          <description>element 1 description</description>
        </element>

    How could I parse this shell output into a PHP array? Thank you.

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence-climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')

        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')

        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
          file <- readFile "expr"
          putStr $ show $ eval file
          putStr "\n"

    and here's my self-contained precedence-climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
        and loop = function
          | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
          | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
              let h, xs = loop (op, g, xs)
              let op =
                match op with
                | '+' -> (+)
                | '-' -> (-)
                | '*' -> (*)
                | '/' -> (/)
              loop (oop, op f h, xs)
          | _, f, xs -> f, xs
        and (|P|) = function
          | '('::Expr(f, ')'::xs) -> P(f, xs)
          | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?

  • Data mining - parsing a log file in Java

    - by nuvio
    Hello, I am working on a Java project for the university, where I should analyse poker hands. I found some poker hands in a txt log file. They typically look like this:

        PokerStars Zoom Hand #86981279921: Hold'em No Limit ($0.10/$0.25 USD) - 2012/09/30 23:49:51 ET
        Table 'Whirlpool Zoom 40-100 bb' 9-max Seat #1 is the button
        Seat 1: lgwong ($30.99 in chips)
        Seat 2: hastyboots ($28.61 in chips)
        Seat 3: seula i ($25.31 in chips)
        Seat 4: fr_kevin01 ($31.81 in chips)
        Seat 5: limey05 ($27.45 in chips)
        Seat 6: sanlu ($24.65 in chips)
        Seat 7: Masterfrank ($25.35 in chips)
        Seat 8: Refu$e2Lose ($33.23 in chips)
        Seat 9: 1pepepe0114 ($37.62 in chips)
        hastyboots: posts small blind $0.10
        seula i: posts big blind $0.25
        *** HOLE CARDS ***
        fr_kevin01: folds
        limey05: folds
        sanlu: folds
        Masterfrank: folds
        Refu$e2Lose: folds
        1pepepe0114: folds
        lgwong: folds
        hastyboots: folds
        Uncalled bet ($0.15) returned to seula i
        seula i collected $0.20 from pot
        seula i: doesn't show hand
        *** SUMMARY ***
        Total pot $0.20 | Rake $0
        Seat 1: lgwong (button) folded before Flop (didn't bet)
        Seat 2: hastyboots (small blind) folded before Flop
        Seat 3: seula i (big blind) collected ($0.20)
        Seat 4: fr_kevin01 folded before Flop (didn't bet)
        Seat 5: limey05 folded before Flop (didn't bet)
        Seat 6: sanlu folded before Flop (didn't bet)
        Seat 7: Masterfrank folded before Flop (didn't bet)
        Seat 8: Refu$e2Lose folded before Flop (didn't bet)
        Seat 9: 1pepepe0114 folded before Flop (didn't bet)

    My problem is that I am not sure how to proceed in parsing the log file: the only approach I know is "manually" scanning line by line for a particular character or symbol, but I am afraid it would need exhaustive error handling. So I was wondering if there are other techniques or a better way to parse these poker hands? Many thanks for your help.
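
    Line-oriented logs like this usually yield to a small set of anchored regular expressions, one per line shape, rather than ad-hoc character scanning. A sketch of the idea (shown in Python for brevity; the same patterns drop straight into Java's java.util.regex.Pattern, and the field names are illustrative, not a complete hand model):

        import re

        # One pattern per line shape we care about; everything else is skipped.
        HAND_RE = re.compile(r"^PokerStars .*Hand #(\d+):")
        SEAT_RE = re.compile(r"^Seat (\d+): (.+) \(\$([\d.]+) in chips\)$")
        ACTION_RE = re.compile(r"^(.+?): (folds|checks|calls|bets|raises|posts)(.*)$")

        def parse_hand(lines):
            """Build a dict for one hand from its log lines (a sketch, not complete)."""
            hand = {"id": None, "seats": {}, "actions": []}
            for line in lines:
                m = HAND_RE.match(line)
                if m:
                    hand["id"] = m.group(1)
                    continue
                m = SEAT_RE.match(line)
                if m:
                    hand["seats"][int(m.group(1))] = (m.group(2), float(m.group(3)))
                    continue
                m = ACTION_RE.match(line)
                if m:
                    hand["actions"].append((m.group(1), m.group(2), m.group(3).strip()))
            return hand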

  • Webcrawler, feedback?

    - by Jan Kuboschek
    Hey folks, every once in a while I have the need to automate data-collection tasks from websites. Sometimes I need a bunch of URLs from a directory, sometimes I need an XML sitemap (yes, I know there is lots of software for that, and online services). Anyway, as a follow-up to my previous question, I've written a little webcrawler that can visit websites:

    - Basic crawler class to easily and quickly interact with one website.
    - Override "doAction(String URL, String content)" to process the content further (e.g. store it, parse it).
    - Concept allows for multi-threading of crawlers. All class instances share processed and queued lists of links.
    - Instead of keeping track of processed and queued links within the object, a JDBC connection could be established to store links in a database.
    - Currently limited to one website at a time; however, it could be expanded upon by adding an externalLinks stack and adding to it as appropriate.

    JCrawler is intended to be used to quickly generate XML sitemaps or parse websites for your desired information. It's lightweight. Is this a good/decent way to write the crawler, given the limitations above?

    http://pastebin.com/VtgC4qVE - Main.java
    http://pastebin.com/gF4sLHEW - JCrawler.java
    http://pastebin.com/VJ1grArt - HTMLUtils.java

    Thanks for your feedback in advance! :)

  • How to read XML using XPath in Java

    - by kaibuki
    Hi guys, I want to read XML data using XPath in Java. From the information I have gathered so far, I am not able to parse the XML according to my requirements. Here is what I want to do: get an XML file from online via its URL, then use XPath to parse it. I want to create two methods. In one, I enter a specific node attribute id and get all the child nodes as a result; the second is for when I just want to get the value of one specific child node.

        <?xml version="1.0"?>
        <howto>
          <topic name="Java">
            <url>http://www.rgagnonjavahowto.htm</url>
            <car>taxi</car>
          </topic>
          <topic name="PowerBuilder">
            <url>http://www.rgagnon/pbhowto.htm</url>
            <url>http://www.rgagnon/pbhowtonew.htm</url>
          </topic>
          <topic name="Javascript">
            <url>http://www.rgagnon/jshowto.htm</url>
          </topic>
          <topic name="VBScript">
            <url>http://www.rgagnon/vbshowto.htm</url>
          </topic>
        </howto>

    In the above example I want to read all the elements when I search via @name, and also have one function in which I just want the url from @name 'Javascript' only, returning one node element. I hope I have made my question clear :) Thanks. Kai
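
    For the two lookups described, the XPath expressions themselves are the heart of the matter: //topic[@name='Java']/* selects all children of one topic, and //topic[@name='Javascript']/url selects just the url child of another. A sketch of both queries using Python's xml.etree.ElementTree, shown for compactness (in Java the same expression strings would go through javax.xml.xpath; the local file name is hypothetical):

        import xml.etree.ElementTree as ET

        root = ET.parse("howto.xml").getroot()  # hypothetical local copy of the feed

        # All child elements of the topic whose name attribute is "Java".
        for child in root.findall(".//topic[@name='Java']/*"):
            print(child.tag, child.text)

        # Just the url value(s) under the "Javascript" topic.
        for url in root.findall(".//topic[@name='Javascript']/url"):
            print(url.text)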

  • Idiomatic way to build a custom structure from XML zipper in Clojure

    - by Checkers
    Say I'm parsing an RSS feed and want to extract a subset of information from it:

        (def feed (-> "http://..."
                      clojure.xml/parse
                      clojure.zip/xml-zip))

    I can get links and titles separately:

        (xml-> feed :channel :item :link text)
        (xml-> feed :channel :item :title text)

    However, I can't figure out a way to extract them at the same time without traversing the zipper more than once, e.g.

        (let [feed (-> "http://..." clojure.xml/parse clojure.zip/xml-zip)]
          (zipmap (xml-> feed :channel :item :link text)
                  (xml-> feed :channel :item :title text)))

    ...or a variation thereof, involving mapping multiple sequences to a function that incrementally builds a map with, say, assoc. Not only do I have to traverse the sequence multiple times, the sequences also have separate states, so the elements must be "aligned", so to speak. That is, in a more complex case than RSS, a sub-element may be missing from a particular element, making one of the sequences shorter by one (there are no gaps), so the result may actually be incorrect. Is there a better way, or is this, in fact, the way you do it in Clojure?

  • Pass the values from ListView control to text boxes in c#

    - by Kumu
        private void deleteDisplayGamesButton_Click(object sender, EventArgs e)
        {
            //Game game = new Game(homeTeamComboBox.Text, int.Parse(homeScoreUpDown.Value.ToString()),
            //                     awayTeamComboBox.Text, int.Parse(awayScoreUpDown.Value.ToString()));
            deleteDisplayGamesListView.Items.Clear();
            deleteDisplayGamesListView.View = View.Details;
            foreach (Game currentgame in footballLeagueDatabase.games)
            {
                ListViewItem row = new ListViewItem();
                row.SubItems.Add(currentgame.HomeTeam.ToString());
                row.SubItems.Add(currentgame.HomeScore.ToString());
                row.SubItems.Add(currentgame.AwayTeam.ToString());
                row.SubItems.Add(currentgame.AwayScore.ToString());
                deleteDisplayGamesListView.Items.Add(row);
            }
        }

    I need to pass the values from the ListView control above to the following text boxes when the deleteDisplayGamesListView_SelectedIndexChanged method fires:

        private void deleteDisplayGamesListView_SelectedIndexChanged(object sender, EventArgs e)
        {
            deleteModifyHomeTeamTxt.Text = "";
            deleteModifyHomeScoreUpDown.Text = "";
            deleteModifyAwayTeamTxt.Text = "";
            deleteModifyAwayScoreUpDown.Text = "";
            foreach (Game currentgame in footballLeagueDatabase.games)
            {
                ?-----------------------------
            }
        }

    Furthermore, I need to clear this data from the ListView control. If you know how to do this, please let me know.

  • Crystal Report with error: A number range is required here.

    - by gofor.net
    Hi everyone, I am using Crystal Reports, and to show the SQL data in the report I am using code like the following:

        string req = "{View_EODPumpTest.ROId} IN " + str + " AND " +
            "({View_EODPumpTest.RecordCreatedDate}>=date(" + fromDate.Year + " , " +
            fromDate.Month + " , " + fromDate.Day + ")" + "AND" +
            "{View_EODPumpTest.RecordCreatedDate}<=date(" + toDate.Year + " , " +
            toDate.Month + " ," + toDate.Day + " ))";

        ReportDocument rep = new ReportDocument();
        rep.Load(Server.MapPath("PumpTestReport.rpt"));
        DateTime fromDate = DateTime.Parse(Request.QueryString["fDate"].ToString());
        DateTime toDate = DateTime.Parse(Request.QueryString["tDate"].ToString());
        CrystalReportViewer_PumpTest.ReportSource = rep;
        //CrystalReportViewer1.SelectionFormula = str;
        rep.RecordSelectionFormula = str;
        CrystalDecisions.CrystalReports.Engine.TextObject from =
            ((CrystalDecisions.CrystalReports.Engine.TextObject)rep.ReportDefinition.ReportObjects["txtFrom"]);
        from.Text = fromDate.ToShortDateString();
        CrystalDecisions.CrystalReports.Engine.TextObject to =
            ((CrystalDecisions.CrystalReports.Engine.TextObject)rep.ReportDefinition.ReportObjects["txtTO"]);
        to.Text = toDate.ToShortDateString();
        //Session["Repo"] = rep;
        CrystalReportViewer_PumpTest.RefreshReport();

    After running my application it executes with no exception, but I get this error:

        A number range is required here.
        Error in File C:\DOCUME~1\Delmon\LOCALS~1\Temp\PumpTestReport {14E557A7-51B3-4791-9C78-B6FBAFFBD87C}.rpt:
        Error in formula .
        '{View_EODPumpTest.ROId} IN ['15739410','13465410'] AND
        ({View_EODPumpTest.RecordCreatedDate}>=date(2010 , 12 , 1)AND{View_EODPumpTest.RecordCreatedDate}<=date(2010 , 12 ,25 ))'
        A number range is required here.

    What should I do about this? Please help. Thanks in advance.

  • Regular Expression Newbie

    - by Registered User
    I need to parse string inputs where the columns are separated by commas and any field that contains a comma in the data is wrapped in quotes (comma-separated, with quoted text identifiers). For this project I need to remove the quotes and any commas that occur between pairs of quotes. Basically, I need to remove commas and quotes that are contained in fields while preserving the commas that are used to separate the fields. Here's a little code I put together that handles the simple scenario:

        // Sample input 1: This works and covers 99% of the records that I need to parse.
        string str1 = "[email protected],2010/03/27 12:2:02,,some_first_name,some_last_name,,\"This Address Works, Suite 200\",Some City,TN,09876-5432,9795551212x123,XYZ";
        str1 = Regex.Replace(str1, "\"([^\"^,]*),([^\"^,]*)\"", "$1$2");
        Console.WriteLine(str1);
        // Outputs: [email protected],2010/03/27 12:2:02,,some_first_name,some_last_name,,This Address Works Suite 200,Some City,TN,09876-5432,9795551212x123,XYZ

    Although this code works for most of my records, it doesn't work when a field contains more than one comma. What I would like to do is modify the code so that it removes every comma contained within the field, no matter how many commas there are. I don't want to hard-code handling of only 2 commas, or 3 commas, or 25 commas; the code should just remove all the commas in the field. Below is an example of what my code doesn't handle properly:

        // Sample input 2: This doesn't work, since there is more than 1 comma between the quotes.
        string str2 = "[email protected],2010/03/27 12:2:02,,some_first_name,some_last_name,,\"i,l,k,e, c,o,m,m,a,s, i,n ,m,y, f,i,e,l,d\",Some City,TN,09876-5432,9795551212x123,XYZ";
        str2 = Regex.Replace(str2, "\"([^\"^,]*),([^\"^,]*)\"", "$1$2");
        Console.WriteLine(str2);
        // Desired output: [email protected],2010/03/27 12:2:02,,some_first_name,some_last_name,,i like commas in my field,Some City,TN,09876-5432,9795551212x123,XYZ

    Any help would be appreciated for this regular-expression newbie.
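
    As a point of comparison for the regex approach, a dedicated CSV parser handles any number of embedded commas for free, because it tokenizes fields instead of pattern-matching pairs of quotes. A sketch of the technique with Python's csv module (the post's code is C#, where the analogous move would be a CSV reader such as Microsoft.VisualBasic.FileIO.TextFieldParser, so treat this as an illustration of the idea rather than a drop-in; the sample row is hypothetical):

        import csv
        import io

        line = ('jane@example.com,2010/03/27 12:2:02,,first,last,,'
                '"i,l,k,e, c,o,m,m,a,s",Some City,TN,09876-5432')

        # csv splits on unquoted commas only; then strip commas inside each field.
        fields = next(csv.reader(io.StringIO(line)))
        cleaned = ",".join(f.replace(",", "") for f in fields)
        print(cleaned)
        # jane@example.com,2010/03/27 12:2:02,,first,last,,ilke commas,Some City,TN,09876-5432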

  • Elegant way of parsing Data files for Simulation

    - by sc_ray
    I am working on this project where I need to read in a lot of data from .dat files and use the data to perform simulations. The data in my .dat files is tab delimited and looks as follows:

        DeviceID  InteractingDeviceID  InteractionStartTime  InteractionEndTime
        1         2                    1101                  1105

    The row "1  2  1101  1105" means that Device 1 interacted with Device 2 starting at 1101 ms and ending the interaction at 1105 ms. I have trace data sets that compile thousands of such interactions, and my job is to analyze these interactions. The language of choice is C++.

    The first step is to parse the file. The approach I was thinking of taking was to read the file and, for every line that's read, create a Device object. This Device object will contain the property DeviceId and an array/vector of structs that holds a list of all the devices the given DeviceId interacted with over the course of the simulation. The struct will contain the interacting device ID, the interaction start time and the interaction end time. I have a twofold question here: Is my approach correct? And if I am on the right track, how do I rapidly parse these tab-delimited data files and create Device objects without excessive memory overhead using C++? A push in the right direction will be much appreciated. Thanks
