Search Results

Search found 4222 results on 169 pages for 'dtd parsing'.

Page 33/169 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >

  • Linux distro name parsing

    - by Ockonal
    Hello, I chose this way to get the Linux distro name: ls /etc/*release. Now I have to parse the name out of /etc/<name>-release: def checkDistro(): p = Popen('ls /etc/*release' , shell = True, stdout = PIPE) distroRelease = p.stdout.read() distroName = re.search( ur"\/etc\/(.*)\-release", distroRelease).group() print distroName But this prints the same string that is already in distroRelease.
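
    A minimal sketch of the capture-group fix, in Python 3 and with a hypothetical check_distro helper: re.search(...).group() returns the whole match, while group(1) returns only the first captured group, and glob avoids shelling out to ls.

        import glob
        import re

        def check_distro():
            # Look for /etc/<name>-release without shelling out to ls.
            for path in glob.glob('/etc/*release'):
                match = re.search(r'/etc/(.*?)-release', path)
                if match:
                    # group(1) is only the captured <name>; group() is the whole path.
                    return match.group(1)
            return None

        print(check_distro())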

    Read the article

  • Parsing an Open XML doc via styled blocks

    - by Chris B. Behrens
    I'm working with docx documents, and I need to parse a document into sections based on headings styled with the "heading 1" style. So if I had a doc like this (the markup is pseudocode): <doc> <title style>Doc Title</title style> <heading1>First Section</heading1> ... <heading2>Second Section</heading2> ... <heading3>Third Section</heading3> ... </doc> I'd want to break this into a doc with four sections, the first being the content that precedes the first heading. I figure this is probably pretty simple once you're familiar with Open XML, but I am not. TIA.
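
    Not an Open XML SDK answer, but the same idea sketched in Python with the python-docx package (an assumption; the style name "Heading 1" and the sample.docx path are placeholders): walk the paragraphs in order and start a new section whenever a heading-1 paragraph appears.

        from docx import Document  # assumes the python-docx package is installed

        def split_by_heading1(path):
            # Group paragraphs into sections; a new section starts at each "Heading 1".
            doc = Document(path)
            sections = [[]]  # the first section holds content before the first heading
            for para in doc.paragraphs:
                if para.style.name == 'Heading 1':
                    sections.append([para.text])  # start a new section with the heading text
                else:
                    sections[-1].append(para.text)
            return sections

        for i, section in enumerate(split_by_heading1('sample.docx')):
            print(i, section[:1])  # section index and its first line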

    Read the article

  • Parsing question

    - by j-t-s
    Hi all, I have tried several different parsers, as advised by somebody, but I don't believe they'd be of any use for this particular situation. I have a file that looks like this: mylanguagename(main) { OnLoad(protected) { Display(img, text, link); } Canvas(public) { Image img: "Images\my_image.png"; img.Name: "img"; img.Border: "None"; img.BackgroundColor: "Transparent"; img.Position: 10, 10; Text text: "This is a multiline str#ning. The #n creates a new line."; text.Name: text; text.Position: 10, 25; Link link: "Click here to enlarge img."; link.Name: "link"; link.Position: 10, 60; link.Event: link.Clicked; } link.Clicked(sender, link, protected) { Link link: from sender; Message.Display: "You clicked link."; } } ... and I need to be able to parse the code above and convert it to a JavaScript (or JScript) equivalent. Can somebody please help, or get me started in the right direction? Thanks
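
    A translator for a small DSL like this would normally start with a hand-rolled tokenizer rather than an off-the-shelf parser. A rough Python sketch of that first pass, with an entirely hypothetical token set, just to show the shape of the approach:

        import re

        # Hypothetical token spec for the DSL shown above; extend as needed.
        TOKEN_SPEC = [
            ('STRING', r'"[^"]*"'),          # e.g. "Images\my_image.png"
            ('IDENT',  r'[A-Za-z_][\w.]*'),  # e.g. Canvas, img.Name, link.Clicked
            ('PUNCT',  r'[{}():;,]'),        # structural punctuation
            ('SKIP',   r'\s+'),              # whitespace
        ]
        TOKEN_RE = re.compile('|'.join('(?P<%s>%s)' % pair for pair in TOKEN_SPEC))

        def tokenize(source):
            # Yield (kind, text) pairs; a later pass can build a tree and emit JavaScript.
            for match in TOKEN_RE.finditer(source):
                if match.lastgroup != 'SKIP':
                    yield match.lastgroup, match.group()

        print(list(tokenize('Image img: "Images\\my_image.png";')))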

    Read the article

  • TouchCode XML parsing error

    - by itsaboutcode
    Hi, I have an XML document which has only one element, which is <error>error string<error> But when I try to parse it, it says the document has no element at all. In other words, when I try to access the rootElement it is null. CXMLDocument *rssParser = [[[CXMLDocument alloc] initWithContentsOfURL:url options:0 error:nil] autorelease]; NSLog(@"Root: %@",[[rssParser rootElement] name]); Please tell me what is wrong with this. Thanks
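
    One thing worth checking before blaming the parser: as quoted, <error>error string<error> has no closing </error> tag, so it is not well-formed XML, and passing error:nil throws away whatever the parser would have reported. A quick well-formedness check, sketched in Python only because any strict parser will do:

        import xml.etree.ElementTree as ET

        snippet = "<error>error string<error>"   # exactly as quoted in the question

        try:
            ET.fromstring(snippet)
        except ET.ParseError as exc:
            # A strict parser rejects the document, consistent with a nil root element.
            print("not well-formed:", exc)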

    Read the article

  • Parsing NSXMLNode Attributes in Cocoa

    - by Jeffrey Kern
    Hello everyone, Given the following XML file: <?xml version="1.0" encoding="UTF-8"?> <application name="foo"> <movie name="tc" english="tce.swf" chinese="tcc.swf" a="1" b="10" c="20" /> <movie name="tl" english="tle.swf" chinese="tlc.swf" d="30" e="40" f="50" /> </application> How can I access the attributes ("english", "chinese", "name", "a", "b", etc.) of the MOVIE nodes and their associated values? I currently have in Cocoa the ability to traverse these nodes, but I'm at a loss as to how I can access the data in the MOVIE NSXMLNodes. Is there a way I can dump all of the values from each NSXMLNode into a hashtable and retrieve the values that way? I am using NSXMLDocument and NSXMLNodes.
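
    The underlying idea, that each element exposes its attributes as a name-to-value map, is easy to see in a Python sketch (the movies.xml path is a placeholder); in Cocoa the analogous step is asking the element node for its attributes.

        import xml.etree.ElementTree as ET

        tree = ET.parse('movies.xml')  # placeholder path for the XML shown above
        for movie in tree.getroot().iter('movie'):
            attrs = movie.attrib              # already a plain dict of name -> value
            print(attrs['name'], attrs.get('english'), attrs.get('a'))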

    Read the article

  • Why is parsing a JSON response to a jQuery AJAX request not working

    - by Ankur
    I know there are several similar questions on SO, I have used some of them to get this far. I am trying to list a set of urls that match my input values. I have a servlet which takes some input e.g. "aus" in the example below and returns some output using out.print() e.g. the two urls I have shown below. EXAMPLE Input: "aus" Output: [{"url":"http://dbpedia.org/resource/Stifel_Nicolaus"},{"url":"http://sempedia.org/ontology/object/Australia"}] Which is exactly what I want. I have seen that firebug doesn't seem to have anything in the response section despite having called out.print(jsonString); and it seems that out.print(jsonString); is working as expected which suggests that the variable 'jsonString' contains the expected values. However I am not exactly sure what is wrong. -------- The jQuery --------- $(document).ready(function() { $("#input").keyup(function() { var input = $("#input").val(); //$("#output").html(input); ajaxCall(input); }); }); function ajaxCall(input) { // alert(input); $.ajax({ url: "InstantSearchServlet", data: "property="+input, beforeSend: function(x) { if(x && x.overrideMimeType) { x.overrideMimeType("application/j-son;charset=UTF-8"); } }, dataType: "json", success: function(data) { for (var i = 0, len = datalength; i < len; ++i) { var urlData = data[i]; $("#output").html(urlData.url); } } }); } ------ The Servlet that calls the DAO class - and returns the results ------- public class InstantSearchServlet extends HttpServlet{ private static final long serialVersionUID = 1L; public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { System.out.println("You called?"); response.setContentType("application/json"); PrintWriter out = response.getWriter(); InstantSearch is = new InstantSearch(); String input = (String)request.getParameter("property"); System.out.println(input); try { ArrayList<String> urllist; urllist = is.getUrls(input); String jsonString = convertToJSON(urllist); out.print(jsonString); System.out.println(jsonString); } catch (SQLException e) { // TODO Auto-generated catch block e.printStackTrace(); } } private String convertToJSON(ArrayList<String> urllist) { Iterator<String> itr = urllist.iterator(); JSONArray jArray = new JSONArray(); int i = 0; while (itr.hasNext()) { i++; JSONObject json = new JSONObject(); String url = itr.next(); try { json.put("url",url); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } jArray.put(json); } String results = jArray.toString(); return results; } }

    Read the article

  • Parsing XML with jQuery

    - by Jamie
    I've used this: $(document).ready(function () { $.ajax({ type: "GET", url: "http://domain.com/languages.php", dataType: "xml", success: xmlParser }); }); function xmlParser(xml) { $('#load').fadeOut(); $(xml).find("result").each(function () { $(".main").append('' + $(this).find("language").text() + ''); $(".lang").fadeIn(1000); }); } I used a local XML file on my computer and it works fine, but when I change the URL to a website, it just keeps loading all the time... How do I get this to work?

    Read the article

  • How can multiple trailing slashes be removed from a URL in Ruby

    - by splintercell
    Hello, what I'm trying to achieve here: let's say we have two example URLs, url1 = "http://emy.dod.com/kaskaa/dkaiad/amaa//////////" & url2 = "http://www.example.com/". How can I extract the stripped-down URLs, url1 as "http://emy.dod.com/kaskaa/dkaiad/amaa" & url2 as "http://www.example.com"? URI.parse in Ruby sanitizes certain types of malformed URL but is ineffective in this case. If we use a regex, then /^(.*)\/$/ removes a single slash (/) from url1 & is ineffective for url2. Is anybody aware of how to handle this type of URL parsing? The point here is I don't want my system to treat "http://www.example.com/" & "http://www.example.com" as two different URLs. And the same goes for "http://emy.dod.com/kaskaa/dkaiad/amaa////" & "http://emy.dod.com/kaskaa/dkaiad/amaa/". Cheers, -dg
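
    For the narrow question of trailing slashes, anchoring the quantifier on one or more slashes handles both cases; the same pattern works with Ruby's String#sub. A Python sketch of the idea:

        import re

        def strip_trailing_slashes(url):
            # Remove one or more trailing slashes, leaving the rest of the URL alone.
            return re.sub(r'/+$', '', url)

        print(strip_trailing_slashes('http://emy.dod.com/kaskaa/dkaiad/amaa//////////'))
        print(strip_trailing_slashes('http://www.example.com/'))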

    Read the article

  • Detecting regular expressions in content during parsing

    - by sonofdelphi
    I am writing a parser for C. I was just running it on some other languages' files (for fun, to see the extent of their C-likeness). It breaks down if the code being parsed contains regular expressions... Case 1: For example, while parsing the JavaScript snippet var phone="(304)434-5454" phone=phone.replace(/[\(\)-]/g, "") //Returns "3044345454" (removes "(", ")", and "-") the '(', '[' etc. get matched as openers of new scopes, which may never be closed. Case 2: And for the Perl snippet # Replace backslashes with two forward slashes # Any character can be used to delimit the regex $FILE_PATH =~ s@\\@//@g; the // gets matched as a comment... How can I detect a regular expression within the text of a "C-like" program file?
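
    There is no purely lexical rule that covers every C-like language, but a common heuristic (roughly what JavaScript lexers do) is to treat / as the start of a regex literal only when the previous significant token cannot end an expression. A rough Python sketch of that check, with a deliberately small character set:

        import re

        # If the last significant character ends an expression (identifier, number,
        # closing bracket, quote), a following '/' is division; otherwise assume regex.
        EXPRESSION_ENDERS = re.compile(r'[\w\)\]"\']$')

        def slash_starts_regex(code, slash_index):
            before = code[:slash_index].rstrip()
            if not before:
                return True                       # nothing before the '/': treat as regex
            return not EXPRESSION_ENDERS.search(before)

        line = 'phone = phone.replace(/[\\(\\)-]/g, "")'
        print(slash_starts_regex(line, line.index('/')))   # True: the '/' follows '('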

    Read the article

  • C++ option/long option implementation

    - by K Hein
    I am working on a parser for a meta-programming language using C++ on the Linux platform. Right now, I need to implement option/long-option handling for the parser to provide some additional features. Basically, if the user passes in some additional option, the parser needs to store statistics while parsing the text files. I can think of two ways to implement it. One way is to use a global to store the options entered by the user. Another way is to create a singleton class to store the options. So I would like to know if there is any other way to implement it. What is the best/most recommended way of implementing it? Thanks in advance. Regards, K.Hein

    Read the article

  • JSON parsing using JSON.net

    - by vbNewbie
    I am accessing the Facebook API and using the Json.NET library (Newtonsoft.Json). I declare a JObject to parse the content, look for the specific elements and get their values. Everything works fine for the first few, but then I get this unexplained NullReferenceException: "Object reference not set to an instance of an object". Now I took a look at the declaration but cannot see how to change it. Any help appreciated: Dim jobj as JObject = JObject.Parse(responseData) Dim message as string = string.empty message = jobj("message").tostring The error occurs at the last line above.
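
    A likely cause (an assumption, since the failing JSON isn't shown) is that not every object coming back has a "message" field, so jobj("message") is Nothing and calling ToString on it throws. The defensive pattern, sketched in Python for brevity, is to test for the key (or supply a default) before dereferencing; the Json.NET equivalent is checking the token for Nothing first.

        import json

        post = json.loads('{"id": "123", "story": "shared a link"}')   # no "message" key

        # Dereferencing a missing key blindly raises; test it or supply a default instead.
        message = post.get("message", "")
        print(repr(message))   # '' rather than an exception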

    Read the article

  • Nokogiri Not Parsing File

    - by Jesse J
    I'm using Nokogiri to parse pepXML files from different peptide search engines. I have two pepXML files, both of which appear, as far as I can tell, to be correctly formatted, and puts Nokogiri::XML(IO.read(file)) will output the whole XML file for both of them. The problem is, doc.xpath("any valid xpath") will find the tag in one of the files, but not the other. No errors are given, so I have no idea why it won't parse. Does anyone know of any reasons why Nokogiri wouldn't parse something out?
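
    One frequent culprit (a guess, since the files aren't shown) is that one file declares a default xmlns and the other doesn't; a plain //tag XPath then matches nothing in the namespaced file even though the elements are there. Nokogiri's remove_namespaces! or a local-name() test makes the difference visible; the same experiment sketched with Python/lxml, using made-up element and namespace names:

        from lxml import etree   # assumes lxml is installed

        doc = etree.fromstring(
            '<analysis xmlns="http://example.org/pepXML">'
            '<spectrum_query assumed_charge="2"/></analysis>'
        )

        print(doc.xpath('//spectrum_query'))                    # [] -- blocked by the default namespace
        print(doc.xpath('//*[local-name()="spectrum_query"]'))  # [<Element ...>] -- matches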

    Read the article

  • Table transformation / field parsing in PL/SQL

    - by IMHO
    I have a de-normalized table, something like:

    CODES
    ID | VALUE
    10 | A,B,C
    11 | A,B
    12 | A,B,C,D,E,F
    13 | R,T,D,W,W,W,W,W,S,S

    The job is to convert it so that each token from VALUE generates a new record. Example:

    CODES_TRANS
    ID | VALUE_TRANS
    10 | A
    10 | B
    10 | C
    11 | A
    11 | B

    What is the best way to do this in PL/SQL without using custom PL/SQL packages, ideally with pure SQL? The obvious solution is to implement it via cursors. Any ideas?
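
    One common pure-SQL route in Oracle is REGEXP_SUBSTR combined with CONNECT BY LEVEL to generate one row per token. Outside the database, the transformation itself is just a split-and-flatten, as this Python sketch of the same reshaping shows:

        codes = {10: 'A,B,C', 11: 'A,B', 12: 'A,B,C,D,E,F'}

        # One (id, token) row per comma-separated value -- the target CODES_TRANS shape.
        codes_trans = [(code_id, token)
                       for code_id, value in sorted(codes.items())
                       for token in value.split(',')]

        for row in codes_trans:
            print(row)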

    Read the article

  • jQuery .getJSON() Not Parsing All Objects

    - by Brad
    I'm using jQuery's .getJSON function to parse a set of search results from a Google Search Appliance. The search appliance has an xslt stylesheet that returns the results as JSON data, which I validated with both JSONLint and Curious Concept's JSON Formatter. According to FireBug, the full result set is returned from the XMLHTTPRequest, but I tried dumping the data (with jquery.dump.js) and it only ever parses back the first result. It does successfully get all the Google Search Protocol stuff, but it only ever sees one "R" object (or individual result). Has anybody had a similar problem with jQuery's .getJSON? I know it likes to fail silently if the JSON is not valid, but like I said, I validated the results with several validators and it should be good to go. Edit: Clicking this link will show you the JSON results returned for a search for the word "google": http://bigbird.uww.edu/search?client=json_frontend&proxystylesheet=json_frontend&proxyrefresh=1&output=xml_no_dtd&q=google jQuery only retrieves the first "R" object, even though all "R" objects are siblings.
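
    One plausible explanation (worth checking against the raw response) is that the stylesheet emits the result set as repeated "R" keys inside a single JSON object rather than as an "R" array; many parsers tolerate duplicate keys, but only one of the values survives. Easy to demonstrate:

        import json

        # Two sibling "R" members inside one object: they collide, and one wins.
        raw = '{"GSP": {"R": {"title": "first"}, "R": {"title": "second"}}}'
        print(json.loads(raw))   # {'GSP': {'R': {'title': 'second'}}} -- one result left

        # All results survive only if the stylesheet emits them as an array.
        raw_as_array = '{"GSP": {"R": [{"title": "first"}, {"title": "second"}]}}'
        print(len(json.loads(raw_as_array)["GSP"]["R"]))   # 2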

    Read the article

  • Parsing XML in Javascript getElementsByTagName not working

    - by Probocop
    Hi, I am trying to parse the following XML with javascript: <?xml version='1.0' encoding='UTF-8'?> <ResultSet> <Result> <URL>www.asd.com</URL> <Value>10500</Value> </Result> </ResultSet> The XML is generated by a PHP script to get how many pages are indexed in Bing. My javascript function is as follows: function bingIndexedPages() { ws_url = "http://archreport.epiphanydev2.co.uk/worker.php?query=bingindexed&domain="+$('#hidden_the_domain').val(); $.ajax({ type: "GET", url: ws_url, dataType: "xml", success: function(xmlIn){ alert('success'); result = xmlIn.getElementsByTagName("Result"); $('#tb_actualvsindexedbing_indexed').val($(result.getElementsByTagName("Value")).text()); $('#img_actualvsindexedbing_worked').attr("src","/images/worked.jpg"); }, error: function() {$('#img_actualvsindexedbing_worked').attr("src","/images/failed.jpg");} }); } The problem I'm having is firebug is saying: 'result.getElementsByTagName is not a function' Can you see what is going wrong? Thanks

    Read the article

  • Parsing email text replies/forwards

    - by Theofanis Pantelides
    Hi, I am creating a web-based email client using C# and ASP.NET. What is confusing is that various email clients seem to quote the original text in a lot of different ways when replying. What I was wondering is whether there is some sort of standardized way to disambiguate this process? Thank you -Theo
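
    There is no real standard; most libraries fall back on heuristics for the common quoting conventions (leading ">", "On ... wrote:", Outlook-style "-----Original Message-----"). A small Python sketch of that heuristic cut, with a deliberately incomplete marker list:

        import re

        # Common (but by no means exhaustive) markers that introduce quoted text.
        QUOTE_MARKERS = [
            re.compile(r'^\s*-+\s*Original Message\s*-+\s*$', re.I),
            re.compile(r'^On .{0,200} wrote:\s*$'),
            re.compile(r'^\s*>'),
        ]

        def strip_quoted_reply(body):
            # Keep lines until the first one that looks like the start of quoted content.
            kept = []
            for line in body.splitlines():
                if any(marker.match(line) for marker in QUOTE_MARKERS):
                    break
                kept.append(line)
            return '\n'.join(kept).rstrip()

        print(strip_quoted_reply("Thanks!\n\nOn Mon, Jan 4, 2010, Bob wrote:\n> original text"))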

    Read the article

  • Having trouble parsing XML with jQuery

    - by Jack
    Hi guys, I'm trying to parse some XML data using jQuery. As it stands, I have extracted the 'ID' attribute of the required nodes and stored them in an array, and now I want to run a loop over each array member and eventually grab more attributes from the nodes specific to each ID. The problem currently is that once I get to the 'for' loop, it isn't looping, and I think I may have written the XML path data incorrectly. It runs once and I receive the 'alert(arrayIds.length);' only once, and it only loops the correct number of times if I remove the subsequent XML path code. Here is my function: var arrayIds = new Array(); $(document).ready(function(){ $.ajax({ type: "GET", url: "question.xml", dataType: "xml", success: function(xml) { $(xml).find("C").each(function(){ $("#attr2").append($(this).attr('ID') + "<br />"); arrayIds.push($(this).attr('ID')); }); for (i=0; i<arrayIds.length; i++) { alert(arrayIds.length); $(xml).find("C[ID='arrayIds[i]']").(function(){ // pass values alert('test'); }); } } }); }); Any ideas?

    Read the article

  • Regular expression help parsing SQLIO output

    - by jaspernygaard
    Hi I've been working on a regular expression to parse the output of a series of SQLIO runs. I've gotten pretty far, but not quite there yet. I'm seeking a 100% regex solution and no pre-manipulation of the input. Could anyone assist with a little guidance with the following regular expression: .*v(?<SQLIOVersion>\d\.\d).*\n.*\n(?<threads>\d*)\s.*for\s(?<Seconds>\d+).*\n.*using\s(?<clustersize>[0-9]*)KB.*\n.*\n.*size:\s(?<currentfilesize>\d+).*\n.*\n.*\n.*\n.*\s(?<IOs>\d*\.\d*).*\n.*\s(?<MBs>\d*\.\d*).*\n.*\n.*\s(?<MinLatency_ms>\d+).*\n.*\s(?<AvgLatency_ms>\d+).*\n.*\s(?<MaxLatency_ms>\d+).*\n.*\n.*\n\%\:..(?<ms>\d*\s+)* Here's a snippet of the output - note the headers, which change during the SQLIO batch run: File
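
    A single regex over the whole report is brittle precisely because the headers shift between runs; if the 100%-regex constraint can be relaxed even slightly, matching one metric per line with named groups is far easier to keep working. A Python sketch with guessed label text (adjust the patterns to the actual SQLIO output):

        import re

        # Guessed label formats -- tune these against a real SQLIO report.
        PATTERNS = {
            'ios_per_sec': re.compile(r'IOs/sec:\s*(?P<value>\d+(?:\.\d+)?)'),
            'mbs_per_sec': re.compile(r'MBs/sec:\s*(?P<value>\d+(?:\.\d+)?)'),
            'avg_latency_ms': re.compile(r'Avg_Latency\(ms\):\s*(?P<value>\d+)'),
        }

        def parse_sqlio(report):
            results = {}
            for line in report.splitlines():
                for name, pattern in PATTERNS.items():
                    match = pattern.search(line)
                    if match:
                        results[name] = float(match.group('value'))
            return results

        print(parse_sqlio("IOs/sec:  3200.50\nMBs/sec:   25.00\nAvg_Latency(ms): 4"))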

    Read the article

  • ActionScript 3 SVG XML parsing bug?

    - by Mahir
    Hey I get two different results when using the for each loop below. As far as I can tell there's no difference aside from attributes in the two XML literals. for each (var pathXML:XML in svg.path) { // do stuff... trace(pathXML.@stroke) } // This one works, the loop iterates once over the single path element... var svg:XML = <svg> <path stroke="#00FF00" /> </svg> // This one doesn't, the loop just exits. var svg:XML = <svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" width="612px" height="792px" viewBox="0 0 612 792" enable-background="new 0 0 612 792" xml:space="preserve"> <path fill="#FFFFFF" stroke="#000000" d="M160.333,372.444c0,0,17.778-115.555,60-63.333s27.778-106.666,78.889,40" /> </svg>
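
    The likely giveaway is the xmlns="http://www.w3.org/2000/svg" on the second literal: its path elements live in the SVG default namespace, so an unqualified svg.path matches nothing in E4X; declaring the namespace (or setting the default xml namespace) should fix the loop. The same effect, reproduced with a different parser in Python:

        import xml.etree.ElementTree as ET

        svg = ET.fromstring(
            '<svg xmlns="http://www.w3.org/2000/svg">'
            '<path stroke="#000000" d="M0,0"/></svg>'
        )

        print(svg.findall('path'))                              # [] -- the default namespace blocks the match
        print(svg.findall('{http://www.w3.org/2000/svg}path'))  # [<Element ...>] -- qualified name matches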

    Read the article

  • Parsing CSV File to MySQL DB in PHP

    - by Austin
    I have a roughly 350-line CSV file with all sorts of vendors that fall into Clothes, Tools, Entertainment, etc. categories. Using the following code I have been able to print out my CSV file. <?php $fp = fopen('promo_catalog_expanded.csv', 'r'); echo '<tr><td>'; echo implode('</td><td>', fgetcsv($fp, 4096, ',')); echo '</td></tr>'; while(!feof($fp)) { list($cat, $var, $name, $var2, $web, $var3, $phone,$var4, $kw,$var5, $desc) = fgetcsv($fp, 4096); echo '<tr><td>'; echo $cat. '</td><td>' . $name . '</td><td><a href="http://www.' . $web .'" target="_blank">' .$web.'</a></td><td>'.$phone.'</td><td>'.$kw.'</td><td>'.$desc.'</td>' ; echo '</td></tr>'; } fclose($file_handle); show_source(__FILE__); ?> The first thing you will probably notice is the extraneous vars within the list(); this is because of how the Excel spreadsheet/CSV file is laid out: Category,,Company Name,,Website,,Phone,,Keywords,,Description ,,,,,,,,,, Clothes,,4imprint,,4imprint.com,,877-466-7746,,"polos, jackets, coats, workwear, sweatshirts, hoodies, long sleeve, pullovers, t-shirts, tees, tshirts,",,An embroidery and apparel company based in Wisconsin. ,,Apollo Embroidery,,apolloemb.com,,1-800-982-2146,,"hats, caps, headwear, bags, totes, backpacks, blankets, embroidery",,An embroidery sales company based in California. One thing to note is that the last line starts with two commas, as it is also listed within the "Clothes" category. My concern is that I am going about the CSV output wrong. Should I be using a foreach loop instead of this list() approach? Should I first get rid of any unnecessary blank columns? Please advise on any flaws you may find and improvements I can make, so I can be ready to import this data into a MySQL DB.
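
    On the data-shape question: dropping the blank spacer columns and carrying the category forward when its cell is empty (as in the Apollo Embroidery row) makes the rows uniform before they go anywhere near MySQL. A Python sketch of that normalization, assuming the same column layout:

        import csv

        def normalized_rows(path):
            with open(path, newline='') as handle:
                reader = csv.reader(handle)
                next(reader)                      # skip the header row
                last_category = ''
                for row in reader:
                    fields = row[0::2]            # drop the blank spacer columns
                    if not any(fields):
                        continue                  # skip fully blank rows
                    category, name, web, phone, keywords, desc = (fields + [''] * 6)[:6]
                    last_category = category or last_category
                    yield (last_category, name, web, phone, keywords, desc)

        for row in normalized_rows('promo_catalog_expanded.csv'):
            print(row)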

    Read the article

  • Parsing returned array in javascript

    - by Dan
    I'm making a call to PayPal's credit card processor, and after a successful/unsuccessful transaction it returns me a string that looks like this: DoDirectPayment failed: Array ( [TIMESTAMP] = 2010%2d05%2d02T23%3a33%3a28Z [CORRELATIONID] = 8c503f5c6c861 [ACK] = Failure [VERSION] = 51%2e0 [BUILD] = 1268624 [L_ERRORCODE0] = 10527 [L_SHORTMESSAGE0] = Invalid%20Data [L_LONGMESSAGE0] = This%20transaction%20cannot%20be%20processed%2e%20Please%20enter%20a%20valid%20credit%20card%20number%20and%20type%2e [L_SEVERITYCODE0] = Error [AMT] = 90%2e00 [CURRENCYCODE] = USD ) I'm not a javascript pro, but how exactly can I turn that into a parseable array? Thank you!
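
    That string looks like PHP's print_r dump of the already-decoded name-value pairs, which is awkward to parse client-side; the cleaner fix is usually to json_encode the array on the server and JSON.parse it in the browser. If the dump format is all there is, the salvage job is a key/value regex plus URL-decoding, sketched here in Python:

        import re
        from urllib.parse import unquote

        dump = ("Array ( [TIMESTAMP] = 2010%2d05%2d02T23%3a33%3a28Z [ACK] = Failure "
                "[L_SHORTMESSAGE0] = Invalid%20Data [AMT] = 90%2e00 )")

        # Each entry looks like "[KEY] = url-encoded-value"; values contain no raw spaces.
        pairs = {key: unquote(value)
                 for key, value in re.findall(r'\[(\w+)\]\s*=>?\s*(\S+)', dump)}

        print(pairs['ACK'], pairs['TIMESTAMP'], pairs['L_SHORTMESSAGE0'])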

    Read the article

  • How to handle building and parsing HTTP URL's / URI's / paths in Perl

    - by Robert S. Barnes
    I have a wget-like script which downloads a page and then retrieves all the files linked in img tags on that page. Given the URL of the original page and the link extracted from the img tag in that page, I need to build the URL for the image file I want to retrieve. Currently I use a function I wrote: sub build_url { my ( $base, $path ) = @_; # if the path is absolute just prepend the domain to it if ($path =~ /^\//) { ($base) = $base =~ /^(?:http:\/\/)?(\w+(?:\.\w+)+)/; return "$base$path"; } my @base = split '/', $base; my @path = split '/', $path; # remove a trailing filename pop @base if $base =~ /[[:alnum:]]+\/[\w\d]+\.[\w]+$/; # check for relative paths my $relcount = $path =~ /(\.\.\/)/g; while ( $relcount-- ) { pop @base; shift @path; } return join '/', @base, @path; } The thing is, I'm surely not the first person solving this problem, and in fact it's such a general problem that I assume there must be some better, more standard way of dealing with it, using either a core module or something from CPAN - although a core module is preferable. I was thinking about File::Spec but wasn't sure if it has all the functionality I would need.
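
    Resolving a link against a base URL is a solved problem (RFC 3986); on CPAN the URI module's new_abs does it, so hand-rolled splitting shouldn't be needed. The equivalent call sketched in Python, just to show the behaviour expected of a resolver, including ../ handling (the base URL below is a placeholder):

        from urllib.parse import urljoin

        base = 'http://example.com/gallery/page.html'   # placeholder page URL

        print(urljoin(base, '/images/pic.jpg'))     # http://example.com/images/pic.jpg
        print(urljoin(base, 'thumbs/pic.jpg'))      # http://example.com/gallery/thumbs/pic.jpg
        print(urljoin(base, '../other/pic.jpg'))    # http://example.com/other/pic.jpg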

    Read the article

  • Simple string parsing with C++

    - by Andreas Brinck
    I've been using C++ for quite a long time now but nevertheless I tend to fall back on scanf when I have to parse simple text files. For example given a config like this (also assuming that the order of the fields could vary): foo: [3 4 5] baz: 3.0 I would write something like: char line[SOME_SIZE]; while (fgets(line, SOME_SIZE, file)) { int x, y, z; if (3 == sscanf(line, "foo: [%d %d %d]", &x, &y, &z)) { continue; } float w; if (1 == sscanf(line, "baz: %f", &w)) { continue; } } What's the most concise way to achieve this in C++? Whenever I try I end up with a lot of scaffolding code.

    Read the article

  • Parsing response from the WSDL

    - by htf
    Hello. I've generated the web service client in Eclipse for the OpenCalais WSDL using the "develop" client type. Actually, I was following this post, so I won't go into detail. Now when I get the results this way: new CalaisLocator().getcalaisSoap().enlighten(key, content, requestParams);, I get a String object containing the response XML. Sure, it's possible to parse that XML, but I think there must be some way to do it automatically, e.g. getting the response in the form of some list of objects?

    Read the article
