Search Results

Search found 15004 results on 601 pages for 'date parsing'.


  • Parsing an Open XML doc via styled blocks

    - by Chris B. Behrens
    I'm working with docx docs, and I need to parse a document into sections on the basis of headings styled with the "heading 1" style. So if I had a doc like this (markup is pseudocode):

        <doc>
          <title style>Doc Title</title style>
          <heading1>First Section</heading1>
          ...
          <heading2>Second Section</heading2>
          ...
          <heading3>Third Section</heading3>
          ...
        </doc>

    I'd want to break this into a doc with four sections, the first being the content that precedes the first section. I figure that this is probably pretty simple once you're familiar with Open XML, but I am not. TIA.
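
    A rough sketch of the walk-the-paragraphs-and-split-on-style idea, using Python's python-docx rather than the Open XML SDK (the approach is the same either way); the filename and the "Heading 1" style name are assumptions for illustration:

        # Sketch: split a .docx into sections at every "Heading 1" paragraph.
        # Assumes python-docx is installed; "report.docx" is a placeholder filename.
        from docx import Document

        doc = Document("report.docx")

        sections = [[]]                  # sections[0] holds content before the first heading
        for para in doc.paragraphs:
            if para.style.name == "Heading 1":
                sections.append([])      # start a new section at each level-1 heading
            sections[-1].append(para.text)

        for i, body in enumerate(sections):
            print(f"--- section {i} ({len(body)} paragraphs) ---")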


  • Parsing XHTML with inline tags

    - by user290796
    Hi, I'm trying to parse an XHTML document using TBXML on the iPhone (although I would be happy to use either libxml2 or NSXMLParser if it would be easier). I need to extract the content of the body as a series of paragraphs and maintain the inline tags, for example:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
          <head>
            <title>Title</title>
            <link rel="stylesheet" href="css/style.css" type="text/css"/>
            <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=utf-8"/>
          </head>
          <body>
            <div class="body">
              <div>
                <h3>Title</h3>
                <p>Paragraph with <em>inline</em> tags</p>
                <img src="image.png" />
              </div>
            </div>
          </body>
        </html>

    I need to extract the paragraph but maintain the <em>inline</em> content within the paragraph; all my testing so far has extracted it as a subelement without me knowing exactly where it fitted in the paragraph. Can anyone suggest a way to do this? Thanks.
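
    The question is iOS-specific, but the underlying technique (serialize each paragraph's children in place rather than walking them as detached sub-elements) is easy to sketch in Python with lxml; the element names come from the example above and "page.xhtml" is a placeholder filename:

        # Sketch: keep inline tags such as <em> embedded in each paragraph's text.
        # Assumes lxml is installed; "page.xhtml" is a placeholder filename.
        from lxml import etree

        XHTML = "http://www.w3.org/1999/xhtml"
        tree = etree.parse("page.xhtml")

        for p in tree.iter("{%s}p" % XHTML):
            # Serialize the children in document order (tails included), so the
            # inline markup stays exactly where it appeared in the source.
            inner = (p.text or "") + "".join(
                etree.tostring(child, encoding="unicode") for child in p
            )
            print(inner)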


  • Detecting regular expression in content during parse

    - by sonofdelphi
    I am writing a parser for C. I was just running it on some other languages' files (for fun, to see the extent of their C-likeness). It breaks down if the code being parsed contains regular expressions...

    Case 1: while parsing this JavaScript snippet,

        var phone="(304)434-5454"
        phone=phone.replace(/[\(\)-]/g, "") //Returns "3044345454" (removes "(", ")", and "-")

    the '(', '[' etc. get matched as starters of new scopes, which may never be closed.

    Case 2: for this Perl snippet,

        # Replace backslashes with two forward slashes
        # Any character can be used to delimit the regex
        $FILE_PATH =~ s@\\@//@g;

    the // gets matched as a comment...

    How can I detect a regular expression within the content text of a "C-like" program file?
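
    There is no single answer that covers every "C-like" language, but a common heuristic (the one JavaScript lexers use) is to look at the previous significant token: a '/' can only start a regex literal where the preceding token cannot end an expression. A rough Python sketch of that check, with a deliberately tiny and illustrative token set:

        # Heuristic sketch: decide whether '/' begins a regex literal or a
        # division operator, based on the previous significant token.
        # The token set below is illustrative, nowhere near a full lexer.

        # Tokens after which '/' cannot be division, so it likely opens a regex.
        REGEX_MAY_FOLLOW = {
            None,                      # start of input
            "(", "[", "{", ",", ";", ":", "=",
            "return", "&&", "||", "!", "+", "-", "*",
        }

        def slash_starts_regex(prev_token):
            """True if a '/' seen after prev_token should be lexed as a regex."""
            return prev_token in REGEX_MAY_FOLLOW

        print(slash_starts_regex("("))       # True  -> replace(/[\(\)-]/g, "")
        print(slash_starts_regex("phone"))   # False -> phone / 2 is division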


  • Why is parsing a JSON response to a jQuery ajax request not working

    - by Ankur
    I know there are several similar questions on SO; I have used some of them to get this far. I am trying to list a set of urls that match my input values. I have a servlet which takes some input, e.g. "aus" in the example below, and returns some output using out.print(), e.g. the two urls shown below.

    EXAMPLE
    Input: "aus"
    Output: [{"url":"http://dbpedia.org/resource/Stifel_Nicolaus"},{"url":"http://sempedia.org/ontology/object/Australia"}]

    Which is exactly what I want. I have seen that Firebug doesn't seem to have anything in the response section despite having called out.print(jsonString); and it seems that out.print(jsonString); is working as expected, which suggests that the variable 'jsonString' contains the expected values. However I am not exactly sure what is wrong.

    -------- The jQuery ---------

        $(document).ready(function() {
            $("#input").keyup(function() {
                var input = $("#input").val();
                //$("#output").html(input);
                ajaxCall(input);
            });
        });

        function ajaxCall(input) {
            // alert(input);
            $.ajax({
                url: "InstantSearchServlet",
                data: "property="+input,
                beforeSend: function(x) {
                    if(x && x.overrideMimeType) {
                        x.overrideMimeType("application/j-son;charset=UTF-8");
                    }
                },
                dataType: "json",
                success: function(data) {
                    for (var i = 0, len = datalength; i < len; ++i) {
                        var urlData = data[i];
                        $("#output").html(urlData.url);
                    }
                }
            });
        }

    ------ The Servlet that calls the DAO class - and returns the results -------

        public class InstantSearchServlet extends HttpServlet {

            private static final long serialVersionUID = 1L;

            public void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException {
                System.out.println("You called?");
                response.setContentType("application/json");
                PrintWriter out = response.getWriter();
                InstantSearch is = new InstantSearch();
                String input = (String) request.getParameter("property");
                System.out.println(input);
                try {
                    ArrayList<String> urllist;
                    urllist = is.getUrls(input);
                    String jsonString = convertToJSON(urllist);
                    out.print(jsonString);
                    System.out.println(jsonString);
                } catch (SQLException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }

            private String convertToJSON(ArrayList<String> urllist) {
                Iterator<String> itr = urllist.iterator();
                JSONArray jArray = new JSONArray();
                int i = 0;
                while (itr.hasNext()) {
                    i++;
                    JSONObject json = new JSONObject();
                    String url = itr.next();
                    try {
                        json.put("url", url);
                    } catch (JSONException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    jArray.put(json);
                }
                String results = jArray.toString();
                return results;
            }
        }
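
    For comparison, the convertToJSON step (turn a list of URLs into the [{"url": ...}, ...] array the client iterates over) reduces to a couple of lines with Python's json module; the two sample URLs are the ones from the question:

        # Sketch: build the same JSON array shape the servlet returns.
        import json

        urls = [
            "http://dbpedia.org/resource/Stifel_Nicolaus",
            "http://sempedia.org/ontology/object/Australia",
        ]
        payload = json.dumps([{"url": u} for u in urls])
        print(payload)   # [{"url": "http://dbpedia.org/..."}, {"url": "http://sempedia.org/..."}]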


  • How can multiple trailing slashes be removed from a URL in Ruby

    - by splintercell
    Hello. What I'm trying to achieve here: let's say we have two example urls, url1 = "http://emy.dod.com/kaskaa/dkaiad/amaa//////////" and url2 = "http://www.example.com/". How can I extract the stripped-down urls, i.e. url1 as "http://emy.dod.com/kaskaa/dkaiad/amaa" and url2 as "http://www.example.com"? URI.parse in Ruby sanitizes certain types of malformed url but is ineffective in this case. If we use a regex, then /^(.*)\/$/ removes a single slash (/) from url1 and is ineffective for url2. Is anybody aware of how to handle this type of url parsing? The point here is I don't want my system to treat "http://www.example.com/" and "http://www.example.com" as two different urls. And the same goes for "http://emy.dod.com/kaskaa/dkaiad/amaa////" and "http://emy.dod.com/kaskaa/dkaiad/amaa/". cheers, -dg
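
    The usual fix is to match the whole run of trailing slashes rather than a single one; in Ruby that would be something like url.sub(%r{/+$}, ""), sketched below in Python with the two urls from the question:

        # Sketch: strip any run of trailing slashes so the variants compare equal.
        import re

        def strip_trailing_slashes(url):
            """Remove one or more trailing '/' characters."""
            return re.sub(r"/+$", "", url)

        print(strip_trailing_slashes("http://emy.dod.com/kaskaa/dkaiad/amaa//////////"))
        print(strip_trailing_slashes("http://www.example.com/"))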


  • SQL dynamic date but fixed time query

    - by Marko Lombardi
    I am trying to write a SQL query like the example below; however, I need it to always choose the DateEntered field between the current day's date at 8:00am and the current day's date at 4:00pm. Not sure how to go about this. Can someone please help?

        SELECT OrderNumber, OrderRelease, HeatNumber, HeatSuffix, Operation,
               COUNT(Operation) AS [Pieces Out of Tolerance]
        FROM Alerts
        WHERE (Mill = 3)
          AND (DateEntered BETWEEN GetDate '08:00' AND GetDate '16:00')
        GROUP BY OrderNumber, OrderRelease, HeatNumber, HeatSuffix, Operation
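
    The crux is building "today at 08:00" and "today at 16:00" as real datetime values rather than gluing a time string onto GETDATE(); in T-SQL the analogous move is combining the current date with a fixed time (treat that as the concept, not tested syntax). The idea, sketched in Python:

        # Sketch: build today's 08:00 and 16:00 boundaries, then test a value
        # against them -- the same shape as "DateEntered BETWEEN start AND end".
        from datetime import datetime, date, time

        start = datetime.combine(date.today(), time(8, 0))     # today 08:00
        end = datetime.combine(date.today(), time(16, 0))      # today 16:00

        date_entered = datetime.now()      # stand-in for the column value
        print(start <= date_entered <= end)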


  • C++ option/long option implementation

    - by K Hein
    I am working on a parser for a meta-programming language, using C++ on the Linux platform. Right now, I need to implement options/long options for the parser to provide some additional features. Basically, if the user passes in some additional option, the parser needs to store statistics while parsing the text files. I can think of two ways to implement it. One way is to use a global to store the options entered by the user. Another way is to create a singleton class to store the options. So I would like to know if there is any other way to implement it. What is the best/most recommended way of implementing it? Thanks in advance. Regards, K.Hein


  • Nokogiri Not Parsing File

    - by Jesse J
    I'm using Nokogiri to parse pepXML files from different peptide search engines. I have two pepXML files, both of which appear, as far as I can tell, to be correctly formatted, and puts Nokogiri::XML(IO.read(file)) will output the whole XML file for both files. The problem is, doc.xpath("any valid xpath") will parse the tag from one of the files, but not the other. No errors are given, so I have no idea why it won't parse. Anyone know of any reasons why Nokogiri wouldn't parse something out?


  • JSON parsing using JSON.net

    - by vbNewbie
    I am accessing the Facebook API and using the JSON.NET library (newtonsoft.json.net). I declare a JObject to parse the content, look for the specific elements and get their values. Everything works fine for the first few, but then I get this unexplained NullReferenceException ("Object reference not set to an instance of an object"). Now I took a look at the declaration but cannot see how to change it. Any help appreciated:

        Dim jobj As JObject = JObject.Parse(responseData)
        Dim message As String = String.Empty
        message = jobj("message").ToString

    The error occurs at the last line above.
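
    That message is usually a sign that jobj("message") returned Nothing for an item with no "message" field, so calling ToString on it fails; the defensive pattern is to check for the key before dereferencing. A Python sketch of the same guard, with a made-up stand-in payload:

        # Sketch: guard against a missing "message" key instead of assuming it exists.
        import json

        response_data = '[{"id": "1", "message": "hello"}, {"id": "2"}]'   # stand-in payload

        for item in json.loads(response_data):
            message = item.get("message", "")    # "" when the key is absent, no exception
            print(message)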


  • Parsing XML in Javascript getElementsByTagName not working

    - by Probocop
    Hi, I am trying to parse the following XML with javascript:

        <?xml version='1.0' encoding='UTF-8'?>
        <ResultSet>
          <Result>
            <URL>www.asd.com</URL>
            <Value>10500</Value>
          </Result>
        </ResultSet>

    The XML is generated by a PHP script to get how many pages are indexed in Bing. My javascript function is as follows:

        function bingIndexedPages() {
            ws_url = "http://archreport.epiphanydev2.co.uk/worker.php?query=bingindexed&domain="+$('#hidden_the_domain').val();
            $.ajax({
                type: "GET",
                url: ws_url,
                dataType: "xml",
                success: function(xmlIn){
                    alert('success');
                    result = xmlIn.getElementsByTagName("Result");
                    $('#tb_actualvsindexedbing_indexed').val($(result.getElementsByTagName("Value")).text());
                    $('#img_actualvsindexedbing_worked').attr("src","/images/worked.jpg");
                },
                error: function() {$('#img_actualvsindexedbing_worked').attr("src","/images/failed.jpg");}
            });
        }

    The problem I'm having is that Firebug is saying 'result.getElementsByTagName is not a function'. Can you see what is going wrong? Thanks
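
    The error is consistent with getElementsByTagName returning a NodeList rather than a single element, so you index into the list before querying it again. The same DOM API exists in Python's minidom, which makes for a compact sketch of the pattern; the XML string is the one from the question:

        # Sketch: getElementsByTagName returns a collection, so index it
        # before calling getElementsByTagName again on the child element.
        from xml.dom.minidom import parseString

        xml_in = """<?xml version='1.0' encoding='UTF-8'?>
        <ResultSet>
          <Result>
            <URL>www.asd.com</URL>
            <Value>10500</Value>
          </Result>
        </ResultSet>"""

        doc = parseString(xml_in)
        result = doc.getElementsByTagName("Result")[0]      # first <Result> element
        value = result.getElementsByTagName("Value")[0]
        print(value.firstChild.nodeValue)                   # 10500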


  • Table transformation / field parsing in PL/SQL

    - by IMHO
    I have a de-normalized table, something like:

        CODES
        ID | VALUE
        10 | A,B,C
        11 | A,B
        12 | A,B,C,D,E,F
        13 | R,T,D,W,W,W,W,W,S,S

    The job is to convert it so that each token from VALUE generates a new record. Example:

        CODES_TRANS
        ID | VALUE_TRANS
        10 | A
        10 | B
        10 | C
        11 | A
        11 | B

    What is the best way to do it in PL/SQL without using custom PL/SQL packages, ideally with pure SQL? The obvious solution is to implement it via cursors. Any ideas?
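
    In Oracle the usual pure-SQL trick for this is CONNECT BY LEVEL combined with REGEXP_SUBSTR, but syntax aside, the transformation itself is just "split the list and emit one row per token". A few lines of Python make that concrete, using the rows of the CODES table above:

        # Sketch: unnest a comma-separated VALUE column into one row per token.
        codes = [
            (10, "A,B,C"),
            (11, "A,B"),
            (12, "A,B,C,D,E,F"),
            (13, "R,T,D,W,W,W,W,W,S,S"),
        ]

        codes_trans = [(id_, token) for id_, value in codes for token in value.split(",")]

        for id_, token in codes_trans:
            print(id_, token)    # 10 A, 10 B, 10 C, 11 A, ...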


  • Date validation in PHP

    - by Rachel
    What would be the regular expression to pass to the preg_split function to validate a date in the format of 7-Mar-10? Or, more generally, how can I validate a date of the format 7-Mar-10 in PHP? Thanks.
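
    A format-aware parser is usually safer than a hand-rolled regex here; in PHP, DateTime::createFromFormat with the 'j-M-y' format string is the analogous tool. The same validate-by-parsing idea in a Python sketch:

        # Sketch: validate "7-Mar-10" by trying to parse it with a known format.
        from datetime import datetime

        def is_valid(date_string):
            """True if date_string matches the day-MonthAbbrev-2digitYear format."""
            try:
                datetime.strptime(date_string, "%d-%b-%y")
                return True
            except ValueError:
                return False

        print(is_valid("7-Mar-10"))     # True
        print(is_valid("32-Mar-10"))    # False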


  • jQuery .getJSON() Not Parsing All Objects

    - by Brad
    I'm using jQuery's .getJSON function to parse a set of search results from a Google Search Appliance. The search appliance has an xslt stylesheet that returns the results as JSON data, which I validated with both JSONLint and Curious Concept's JSON Formatter. According to FireBug, the full result set is returned from the XMLHTTPRequest, but I tried dumping the data (with jquery.dump.js) and it only ever parses back the first result. It does successfully get all the Google Search Protocol stuff, but it only ever sees one "R" object (or individual result). Has anybody had a similar problem with jQuery's .getJSON? I know it likes to fail silently if the JSON is not valid, but like I said, I validated the results with several validators and it should be good to go. Edit: Clicking this link will show you the JSON results returned for a search for the word "google": http://bigbird.uww.edu/search?client=json_frontend&proxystylesheet=json_frontend&proxyrefresh=1&output=xml_no_dtd&q=google jQuery only retrieves the first "R" object, even though all "R" objects are siblings.
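
    One thing worth checking with appliance-generated JSON like this is whether the sibling results come back as repeated "R" keys inside a single object rather than as an array: JSON objects cannot carry duplicate keys, so most parsers silently keep only one of them, which would match the symptom described. A quick Python demonstration of that collapse (the payload below is a made-up miniature, not the appliance's actual output):

        # Demo: duplicate keys in a JSON object collapse to a single entry,
        # while an array of sibling objects survives intact.
        import json

        duplicate_keys = '{"R": {"U": "first"}, "R": {"U": "second"}}'
        print(json.loads(duplicate_keys))          # {'R': {'U': 'second'}} -- only one left

        proper_array = '{"R": [{"U": "first"}, {"U": "second"}]}'
        print(len(json.loads(proper_array)["R"]))  # 2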


  • parsing email text reply/forward

    - by Theofanis Pantelides
    Hi, I am creating a web-based email client using C# ASP.NET. What is confusing is that various email clients seem to add the original text in a lot of different ways when replying by email. What I was wondering is whether there is some sort of standardized way to disambiguate this process? Thank you -Theo
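
    There is no real standard (each client quotes the original differently), so most libraries fall back on heuristics: cut the body at the first line that looks like a known reply or forward marker. A minimal Python sketch of that idea, with an intentionally small and illustrative marker list:

        # Heuristic sketch: cut an email body at the first recognizable
        # reply/forward marker. The marker list is illustrative, not exhaustive.
        import re

        MARKERS = [
            r"^-{2,}\s*Original Message\s*-{2,}",    # Outlook-style separator
            r"^On .{0,200} wrote:$",                 # Gmail / Apple Mail style
            r"^>",                                   # quoted-line prefix
            r"^From: .+",                            # forwarded header block
        ]
        MARKER_RE = re.compile("|".join(MARKERS), re.IGNORECASE)

        def strip_quoted_reply(body):
            """Return only the newly written part of an email body."""
            kept = []
            for line in body.splitlines():
                if MARKER_RE.match(line.strip()):
                    break
                kept.append(line)
            return "\n".join(kept).strip()

        print(strip_quoted_reply("Thanks!\n\nOn Mon, Jan 4, 2010, someone wrote:\n> old text"))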


  • Having trouble parsing XML with jQuery

    - by Jack
    Hi Guys, I'm trying to parse some XML data using jQuery. As it stands I have extracted the 'ID' attribute of the required nodes and stored them in an array, and now I want to run a loop for each array member and eventually grab more attributes from the nodes specific to each ID. The problem currently is that once I get to the 'for' loop, it isn't looping, and I think I may have written the xml path data incorrectly. It runs once and I receive the 'alert(arrayIds.length);' only once, and it only loops the correct amount of times if I remove the subsequent xml path code. Here is my function:

        var arrayIds = new Array();

        $(document).ready(function(){
            $.ajax({
                type: "GET",
                url: "question.xml",
                dataType: "xml",
                success: function(xml) {
                    $(xml).find("C").each(function(){
                        $("#attr2").append($(this).attr('ID') + "<br />");
                        arrayIds.push($(this).attr('ID'));
                    });
                    for (i=0; i<arrayIds.length; i++) {
                        alert(arrayIds.length);
                        $(xml).find("C[ID='arrayIds[i]']").(function(){
                            // pass values
                            alert('test');
                        });
                    }
                }
            });
        });

    Any ideas?
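
    Part of what is going wrong is that "C[ID='arrayIds[i]']" is a literal string, so the ID value is never substituted; the selector has to be built from the variable, e.g. "C[ID='" + arrayIds[i] + "']". The same build-the-query-from-a-variable pattern, sketched with Python's ElementTree on a stand-in document (element and attribute names follow the question):

        # Sketch: interpolate the ID into the query instead of quoting it literally.
        import xml.etree.ElementTree as ET

        xml = ET.fromstring("""
        <root>
          <C ID="1" name="first"/>
          <C ID="2" name="second"/>
        </root>
        """)

        array_ids = [c.get("ID") for c in xml.findall(".//C")]

        for cid in array_ids:
            # The value is substituted into the predicate, not written out verbatim.
            node = xml.find(f".//C[@ID='{cid}']")
            print(node.get("name"))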


  • Regular expression help parsing SQLIO output

    - by jaspernygaard
    Hi, I've been working on a regular expression to parse the output of a series of SQLIO runs. I've gotten pretty far, but not quite there yet. I'm seeking a 100% regex solution and no pre-manipulation of the input. Could anyone assist with a little guidance on the following regular expression:

        .*v(?<SQLIOVersion>\d\.\d).*\n.*\n(?<threads>\d*)\s.*for\s(?<Seconds>\d+).*\n.*using\s(?<clustersize>[0-9]*)KB.*\n.*\n.*size:\s(?<currentfilesize>\d+).*\n.*\n.*\n.*\n.*\s(?<IOs>\d*\.\d*).*\n.*\s(?<MBs>\d*\.\d*).*\n.*\n.*\s(?<MinLatency_ms>\d+).*\n.*\s(?<AvgLatency_ms>\d+).*\n.*\s(?<MaxLatency_ms>\d+).*\n.*\n.*\n\%\:..(?<ms>\d*\s+)*

    Here's a snippet of the output - note the headers, which change during the SQLIO batch run: File


  • Actionscript 3 svg XML parsing bug?

    - by Mahir
    Hey, I get two different results when using the for each loop below. As far as I can tell there's no difference aside from attributes in the two XML literals.

        for each (var pathXML:XML in svg.path)
        {
            // do stuff...
            trace(pathXML.@stroke)
        }

        // This one works, the loop iterates once over the single path element...
        var svg:XML =
            <svg>
                <path stroke="#00FF00" />
            </svg>

        // This one doesn't, the loop just exits.
        var svg:XML =
            <svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg"
                 xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
                 width="612px" height="792px" viewBox="0 0 612 792"
                 enable-background="new 0 0 612 792" xml:space="preserve">
                <path fill="#FFFFFF" stroke="#000000" d="M160.333,372.444c0,0,17.778-115.555,60-63.333s27.778-106.666,78.889,40" />
            </svg>
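
    The attribute that most likely matters here is xmlns="http://www.w3.org/2000/svg": a default namespace puts every element in the document into that namespace, so an unqualified child lookup such as svg.path no longer matches. Namespace-aware XML APIs in other languages behave the same way; a Python/ElementTree sketch of the unqualified vs. qualified lookup:

        # Sketch: with a default namespace, child lookups must be qualified.
        import xml.etree.ElementTree as ET

        svg = ET.fromstring(
            '<svg xmlns="http://www.w3.org/2000/svg">'
            '<path stroke="#000000"/>'
            '</svg>'
        )

        print(svg.findall("path"))                               # [] -- unqualified, no match
        print(svg.findall("{http://www.w3.org/2000/svg}path"))   # [<Element ...>] -- matches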


  • Parsing CSV File to MySQL DB in PHP

    - by Austin
    I have a roughly 350-line CSV file with all sorts of vendors that fall into Clothes, Tools, Entertainment, etc. categories. Using the following code I have been able to print out my CSV file:

        <?php
        $fp = fopen('promo_catalog_expanded.csv', 'r');

        echo '<tr><td>';
        echo implode('</td><td>', fgetcsv($fp, 4096, ','));
        echo '</td></tr>';

        while(!feof($fp)) {
            list($cat, $var, $name, $var2, $web, $var3, $phone, $var4, $kw, $var5, $desc) = fgetcsv($fp, 4096);
            echo '<tr><td>';
            echo $cat . '</td><td>' . $name . '</td><td><a href="http://www.' . $web . '" target="_blank">' . $web . '</a></td><td>' . $phone . '</td><td>' . $kw . '</td><td>' . $desc . '</td>';
            echo '</td></tr>';
        }

        fclose($file_handle);
        show_source(__FILE__);
        ?>

    The first thing you will probably notice is the extraneous vars within the list(). This is because of how the Excel spreadsheet/CSV file is laid out:

        Category,,Company Name,,Website,,Phone,,Keywords,,Description
        ,,,,,,,,,,
        Clothes,,4imprint,,4imprint.com,,877-466-7746,,"polos, jackets, coats, workwear, sweatshirts, hoodies, long sleeve, pullovers, t-shirts, tees, tshirts,",,An embroidery and apparel company based in Wisconsin.
        ,,Apollo Embroidery,,apolloemb.com,,1-800-982-2146,,"hats, caps, headwear, bags, totes, backpacks, blankets, embroidery",,An embroidery sales company based in California.

    One thing to note is that the last line starts with two commas, as it is also listed within the "Clothes" category. My concern is that I am going about the CSV output wrong. Should I be using a foreach loop instead of this list() approach? Should I first get rid of any unnecessary blank columns? Please advise any flaws you may find and improvements I can use, so I can be ready to import this data to a MySQL DB.
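
    One way to think about the clean-up before the data reaches MySQL: read each row, drop the blank spacer columns, and carry the category forward when a row leaves it empty. A Python sketch of that pass (column positions are taken from the sample above, and the keyword lists are shortened for brevity):

        # Sketch: skip spacer columns and carry the category down when blank,
        # producing tuples ready for a parameterized INSERT.
        import csv

        sample_lines = [
            'Category,,Company Name,,Website,,Phone,,Keywords,,Description',
            ',,,,,,,,,,',
            'Clothes,,4imprint,,4imprint.com,,877-466-7746,,"polos, jackets, coats",,An embroidery and apparel company based in Wisconsin.',
            ',,Apollo Embroidery,,apolloemb.com,,1-800-982-2146,,"hats, caps, headwear",,An embroidery sales company based in California.',
        ]

        reader = csv.reader(sample_lines)
        next(reader)                        # skip the header row

        current_category = None
        rows = []
        for row in reader:
            if not any(row):                # the all-blank spacer row
                continue
            data = row[0::2]                # keep columns 0,2,4,... and drop the blanks
            if data[0]:
                current_category = data[0]
            rows.append((current_category, *data[1:]))

        for r in rows:
            print(r)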


  • Simple string parsing with C++

    - by Andreas Brinck
    I've been using C++ for quite a long time now but nevertheless I tend to fall back on scanf when I have to parse simple text files. For example, given a config like this (also assuming that the order of the fields could vary):

        foo: [3 4 5]
        baz: 3.0

    I would write something like:

        char line[SOME_SIZE];
        while (fgets(line, SOME_SIZE, file)) {
            int x, y, z;
            if (3 == sscanf(line, "foo: [%d %d %d]", &x, &y, &z)) {
                continue;
            }
            float w;
            if (1 == sscanf(line, "baz: %f", &w)) {
                continue;
            }
        }

    What's the most concise way to achieve this in C++? Whenever I try I end up with a lot of scaffolding code.


  • Parsing response from the WSDL

    - by htf
    Hello. I've generated the web service client in Eclipse for the OpenCalais WSDL using the "develop" client type. Actually I was following this post, so I'm not going into much detail. Now when I get the results this way:

        new CalaisLocator().getcalaisSoap().enlighten(key, content, requestParams);

    I get a String object containing the response XML. Sure, it's possible to parse that XML, but I think there must be some way to do it automatically, e.g. getting the response object in the form of some list or similar?


  • Parsing returned array in javascript

    - by Dan
    I'm making a call to PayPal's credit card processor, and after a successful/unsuccessful transaction it returns me a string that looks like this:

        DoDirectPayment failed: Array (
            [TIMESTAMP] = 2010%2d05%2d02T23%3a33%3a28Z
            [CORRELATIONID] = 8c503f5c6c861
            [ACK] = Failure
            [VERSION] = 51%2e0
            [BUILD] = 1268624
            [L_ERRORCODE0] = 10527
            [L_SHORTMESSAGE0] = Invalid%20Data
            [L_LONGMESSAGE0] = This%20transaction%20cannot%20be%20processed%2e%20Please%20enter%20a%20valid%20credit%20card%20number%20and%20type%2e
            [L_SEVERITYCODE0] = Error
            [AMT] = 90%2e00
            [CURRENCYCODE] = USD
        )

    I'm not a javascript pro, but how exactly can I turn that into a parseable array? Thank you!


  • Javascript handling textarea

    - by bbenton
    Hi all, I'm a bit new to Javascript and am trying to create a delimited string from a textarea. The problem is that when passing in the textarea, it adds newlines for each row of the textarea. I need the entire textarea parsed into a string with a delimiter for each line (replacing the newline char). So for example, if you passed in a textarea with the following lines (which is also how it looks when using the alert function):

        abcd
        efgh
        ijkl

    it would look like abcd-efgh-ijkl after parsing.

        function submitToForm(form) {
            var param_textarea = form.listofplugins.value;

            var test = param_textarea.replace(/\\r?\\n/, /:/)
            alert(test);
        }

    Thanks a lot!
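
    The conceptual step is "split on line breaks, then rejoin with the delimiter", which sidesteps the escaping problems in the replacement call (in JavaScript the analogous pieces are split(/\r?\n/) and join("-")). A Python sketch on the sample input:

        # Sketch: turn textarea lines into a single delimited string.
        param_textarea = "abcd\nefgh\nijkl"    # stand-in for form.listofplugins.value

        test = "-".join(param_textarea.splitlines())
        print(test)                            # abcd-efgh-ijkl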


  • How to handle building and parsing HTTP URL's / URI's / paths in Perl

    - by Robert S. Barnes
    I have a wget-like script which downloads a page and then retrieves all the files linked in img tags on that page. Given the URL of the original page and the link extracted from the img tag in that page, I need to build the URL for the image file I want to retrieve. Currently I use a function I wrote:

        sub build_url {
            my ( $base, $path ) = @_;

            # if the path is absolute just prepend the domain to it
            if ($path =~ /^\//) {
                ($base) = $base =~ /^(?:http:\/\/)?(\w+(?:\.\w+)+)/;
                return "$base$path";
            }

            my @base = split '/', $base;
            my @path = split '/', $path;

            # remove a trailing filename
            pop @base if $base =~ /[[:alnum:]]+\/[\w\d]+\.[\w]+$/;

            # check for relative paths
            my $relcount = $path =~ /(\.\.\/)/g;
            while ( $relcount-- ) {
                pop @base;
                shift @path;
            }

            return join '/', @base, @path;
        }

    The thing is, I'm surely not the first person solving this problem, and in fact it's such a general problem that I assume there must be some better, more standard way of dealing with it, using either a core module or something from CPAN - although via a core module is preferable. I was thinking about File::Spec but wasn't sure if it has all the functionality I would need.
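
    Resolving a link against the page's URL is exactly what standard URL reference resolution (RFC 3986) does, and most languages ship it; in Perl the URI module's URI->new_abs($link, $base) covers all three cases below (URI comes from CPAN rather than core). The Python equivalent, with a placeholder base URL:

        # Sketch: resolve relative, absolute-path, and ../ links against a base URL.
        from urllib.parse import urljoin

        base = "http://example.com/articles/2010/page.html"    # placeholder page URL

        print(urljoin(base, "images/photo.png"))   # http://example.com/articles/2010/images/photo.png
        print(urljoin(base, "/static/logo.png"))   # http://example.com/static/logo.png
        print(urljoin(base, "../cover.jpg"))       # http://example.com/articles/cover.jpg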

