Search Results

Search found 15004 results on 601 pages for 'date parsing'.


  • C++ Beginner - Best way to read 3 consecutive values from the command line?

    - by Francisco P.
    Hello everyone, I am writing a text-based Scrabble implementation for a college project. The specification states that the user's position input must be read from a single line, like this: Coordinates of the word's first letter and orientation (<A – P> <1 – 15> <H ou V>): G 5 H Here, "G 5 H" is the user's input for that particular example. The order, as shown, must be char int char. What is the best way to read the user's input? cin >> row >> column >> orientation will fail (and leave cin in an error state) if the user screws up the input. A getline and a subsequent string parser are a valid solution, but represent a bit of work. Is there another, better way to do this that I am missing? Thanks for your time!
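
    A minimal sketch of the getline-plus-istringstream approach (mine, not from the question): a bad line never corrupts cin, and the three values are validated before being accepted.

    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
        char row, orientation;
        int column;
        std::string line;

        std::cout << "Coordinates and orientation: ";
        while (std::getline(std::cin, line)) {
            std::istringstream iss(line);
            if (iss >> row >> column >> orientation &&
                row >= 'A' && row <= 'P' &&
                column >= 1 && column <= 15 &&
                (orientation == 'H' || orientation == 'V')) {
                break;                        // valid input, e.g. "G 5 H"
            }
            std::cout << "Invalid input, try again: ";
        }

        std::cout << row << ' ' << column << ' ' << orientation << '\n';
        return 0;
    }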

    Read the article

  • Good way to parse query string

    - by m.edmondson
    I have a string that contains the following: ?workarea=London+&+Home+Counties+Ltd&sub=fs&&&FASh*5 which resembles a URI query string. What is the best way to parse the elements of this string (workarea and sub) without messing about with string manipulation? If I use HttpUtility.ParseQueryString it gets stuck because both elements include unencoded &. However, if I encode the whole thing first I lose the separation between the elements. Ideally the output would be: workarea = London & Home Counties Ltd sub = fs&&&FASh*5
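
    A hedged sketch (names are my own): because the & characters inside the values were never URL-encoded, the string is genuinely ambiguous to a generic parser, so the workaround below simply splits on the two known parameter names instead.

    using System;

    class QuerySplit
    {
        static void Main()
        {
            string raw = "?workarea=London+&+Home+Counties+Ltd&sub=fs&&&FASh*5";

            int subIndex = raw.IndexOf("&sub=");
            string workarea = raw.Substring("?workarea=".Length,
                                            subIndex - "?workarea=".Length);
            string sub = raw.Substring(subIndex + "&sub=".Length);

            workarea = workarea.Replace('+', ' ');   // '+' encodes a space

            Console.WriteLine("workarea = " + workarea);  // London & Home Counties Ltd
            Console.WriteLine("sub = " + sub);            // fs&&&FASh*5
        }
    }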

    Read the article

  • Date/time format problem in C#

    - by jestges
    Hi, I'm working on a simple C# application that displays the system date and time. textbox.Text = DateTime.Now.ToString("MM/dd/yyyy"); but it shows the result as 05-12-2010. What is the problem with this code? Or do I need to change anything in the regional settings of my machine? Thank you.
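
    A minimal sketch of the usual explanation: in a custom format string, "/" is not a literal slash but a placeholder for the current culture's date separator, so a machine whose regional settings use "-" prints 05-12-2010. Pinning the culture (or escaping the slash) forces a literal "/".

    using System;
    using System.Globalization;

    class DateFormatDemo
    {
        static void Main()
        {
            // Culture-independent: always uses "/" regardless of regional settings.
            Console.WriteLine(DateTime.Now.ToString("MM/dd/yyyy",
                                                    CultureInfo.InvariantCulture));

            // Alternative: escape the separator inside the format string.
            Console.WriteLine(DateTime.Now.ToString(@"MM\/dd\/yyyy"));
        }
    }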

    Read the article

  • PHP - complete URL parser help

    - by Mark
    I have been trying to find an effective URL parser; PHP's built-in one does not include the subdomain or extension. On php.net a number of users contributed and came up with this:

    function parseUrl($url) {
        $r  = "^(?:(?P<scheme>\w+)://)?";
        $r .= "(?:(?P<login>\w+):(?P<pass>\w+)@)?";
        $r .= "(?P<host>(?:(?P<subdomain>[-\w\.]+)\.)?" . "(?P<domain>[-\w]+\.(?P<extension>\w+)))";
        $r .= "(?::(?P<port>\d+))?";
        $r .= "(?P<path>[\w/]*/(?P<file>\w+(?:\.\w+)?)?)?";
        $r .= "(?:\?(?P<arg>[\w=&]+))?";
        $r .= "(?:#(?P<anchor>\w+))?";
        $r = "!$r!"; // Delimiters
        preg_match($r, $url, $out);
        return $out;
    }

    Unfortunately it fails on paths containing a '-' and I can't for the life of me work out how to amend it to accept '-' in the path name. Thanks
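
    A hedged guess at the fix: only the path line of the function needs to change, with '-' added to the character classes for path and file (listed first so it is not read as a range). The $pathPattern variable below is just for the standalone demo; in the function it stays as one of the $r .= lines.

    <?php
    // Original: "(?P<path>[\w/]*/(?P<file>\w+(?:\.\w+)?)?)?"
    // Amended, allowing '-' in both the path and the file name:
    $pathPattern = "(?P<path>[-\w/]*/(?P<file>[-\w]+(?:\.\w+)?)?)?";
    preg_match("!{$pathPattern}!", "/some-path/my-file.html", $out);
    print_r($out);   // path and file now match despite the hyphens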

    Read the article

  • Suggestions for jQuery-based Date/Time Selector

    - by Jason Palmer
    Hi everyone, I'm in search of a jQuery-based Date/Time Selector. I have found a few that are quite nice, but one of my requirements is that I can provide a JSON/XML/etc. source of available days/times, and the control should only allow selection of those available days/times. Is anyone aware of a plugin that does this, or at least a plugin that could be modified to do this? Thanks!

    Read the article

  • SimpleTest assertTags - loose matching? (for CakePHP)

    - by Arkaaito
    I'd like to use SimpleTest to set up some functionality tests for our project - in particular, we have a very busy page which has some random components and some static components, and I'd like to be able to write a simple test which only confirms the static bits (preferably only the one or two most important ones). In other words, I want to be able to leave out any tags on the page I don't care about, and write something like: $result = "<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"><head><title>...</title><meta .../></head><body><script type="text/javascript">...</script><div class="center-splash"><span>Welcome JohnDoe</span><p>Your progress:</p>...</div><div class="left-column">...</div><div class="right-column">...</div>...</body></html>"; $expects = array('html'=>true,'body'=>true,'div'=>array('class'=>'center_splash'),'span'=>true,'Welcome JohnDoe','/span','/div','/body','/html'); $this->assertTagsButIgnoreExtras($result, $expects); When I try this with assertTags it fails. Is there a version of assertTags which allows this - something either officially part of the SimpleTest or CakePHP project or unofficially put out under the MIT license or similar?

    Read the article

  • Parse one String data using C#

    - by skumar
    I need to parse the following string data and convert it into a C# class object. Please suggest a solution for this:

    Input string: A||B||C
    Output: a class containing a list of 3 string elements, i.e. A, B, C

    Input string: A||{a1||a2||a3}||B||C
    Output: a class containing a list of 3 elements, i.e. A, B, C, with A holding one more list of 3 elements, i.e. a1, a2, a3.

    Here, elements inside braces { .. } represent the child elements of the element that precedes them. Note: child elements can themselves have child elements. Please help me with this.
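
    A sketch under the stated rules (the Node class and all names are my own, not from the question): split on || at the top level and, whenever a { ... } group appears, attach its recursively parsed contents as children of the element just before it.

    using System;
    using System.Collections.Generic;

    class Node
    {
        public string Value;
        public List<Node> Children = new List<Node>();
    }

    static class DelimitedParser
    {
        // "A||{a1||a2||a3}||B||C" -> A (children a1, a2, a3), B, C
        public static List<Node> Parse(string input)
        {
            var nodes = new List<Node>();
            int i = 0;
            while (i < input.Length)
            {
                if (input[i] == '{')
                {
                    // Find the matching '}' (braces may nest) and recurse.
                    int depth = 1, start = ++i;
                    while (i < input.Length && depth > 0)
                    {
                        if (input[i] == '{') depth++;
                        else if (input[i] == '}') depth--;
                        i++;
                    }
                    string inner = input.Substring(start, i - start - 1);
                    if (nodes.Count > 0)
                        nodes[nodes.Count - 1].Children = Parse(inner);
                }
                else if (input[i] == '|')
                {
                    i++;                       // skip delimiter characters
                }
                else
                {
                    int start = i;
                    while (i < input.Length && input[i] != '|' && input[i] != '{') i++;
                    nodes.Add(new Node { Value = input.Substring(start, i - start) });
                }
            }
            return nodes;
        }
    }

    Calling DelimitedParser.Parse("A||{a1||a2||a3}||B||C") yields three top-level nodes, with the first one holding the three children.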

    Read the article

  • input URL, output contents of "view page source", i.e. after javascript / etc, library or command-line

    - by Ryan Berckmans
    I need a scalable, automated method of dumping the contents of "view page source" (the rendered DOM) to a file. Programs such as wget or curl will non-interactively retrieve a set of URLs, but they do not execute javascript or any of that 'fancy stuff'. My ideal solution looks like either of the following (fantasy solutions):

    cat urls.txt | google-chrome --quiet --no-gui --output-sources-directory=~/urls-source
    (fantasy command line, no idea if flags like these exist)

    or

    cat urls.txt | python -c "import some-library; ... use some-library to process urls.txt; output sources to ~/urls-source"

    As a secondary concern, I also need to: dump all included javascript source to file (a la Firebug); dump a pdf/image of the page to file (print to file).
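
    One concrete sketch, under assumptions the question does not make (Selenium plus a headless Chrome/chromedriver installed): load each URL, let the JavaScript run, and dump the rendered DOM to a file.

    #!/usr/bin/env python
    import os
    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)

    outdir = os.path.expanduser("~/urls-source")
    if not os.path.isdir(outdir):
        os.makedirs(outdir)

    with open("urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    for n, url in enumerate(urls):
        driver.get(url)                          # JavaScript has run once this returns
        with open(os.path.join(outdir, "%04d.html" % n), "w") as dump:
            dump.write(driver.page_source)       # the rendered DOM, not the raw HTTP body

    driver.quit()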

    Read the article

  • Python + Expat: Error on &#0; entities

    - by clacke
    I have written a small function which uses ElementTree and XPath to extract the text contents of certain elements in an XML file:

    #!/usr/bin/env python2.5
    import doctest
    from xml.etree import ElementTree
    from StringIO import StringIO

    def parse_xml_etree(sin, xpath):
        """
        Takes as input a stream containing XML and an XPath expression.
        Applies the XPath expression to the XML and returns a generator
        yielding the text contents of each element returned.

        >>> parse_xml_etree(
        ...     StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'),
        ...     '//elem1').next()
        'one'
        >>> parse_xml_etree(
        ...     StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'),
        ...     '//elem2').next()
        'two'
        >>> parse_xml_etree(
        ...     StringIO('<test><null>&#0;</null><elem3>three</elem3></test>'),
        ...     '//elem3').next()
        'three'
        """
        tree = ElementTree.parse(sin)
        for element in tree.findall(xpath):
            yield element.text

    if __name__ == '__main__':
        doctest.testmod(verbose=True)

    The third test fails with the following exception: ExpatError: reference to invalid character number: line 1, column 13 Is the &#0; entity illegal XML? Regardless of whether it is or not, the files I want to parse contain it, and I need some way to parse them. Any suggestions for another parser than Expat, or settings for Expat, that would allow me to do that?
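
    To answer the direct question: yes, &#0; is illegal - XML 1.0 forbids the NUL character entirely, so a conforming parser such as Expat has to reject it. A sketch of a pragmatic workaround (mine, not part of the function above): strip such references from the raw text before handing it to ElementTree.

    import re
    from StringIO import StringIO
    from xml.etree import ElementTree

    def parse_xml_lenient(sin, xpath):
        # Drop numeric references to NUL (&#0; / &#x0;), which XML 1.0 forbids.
        cleaned = re.sub(r'&#x?0+;', '', sin.read())
        tree = ElementTree.parse(StringIO(cleaned))
        for element in tree.findall(xpath):
            yield element.text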

    Read the article

  • strange characters at beginning of file

    - by luca
    There are strange characters at the beginning of a file I'm editing (using TextMate). I don't know when they appeared; they're invisible in TextMate, but my script that reads the file goes crazy. These are the first few chars in the file (as seen with the od command): 0000000 177377 000120 000105 000117 000120 000114 000105 000072 The first two shouldn't be there, I think. Maybe they were caused by some strange Dropbox sync? Or something else. But they tend to reappear (I don't yet know when). My question: what is that 177377, and is there a simple way to remove it in my Ruby script? Thanks
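
    A hedged reading plus a sketch: od prints 16-bit words in octal, and 177377 is the word 0xFEFF, i.e. the raw bytes FF FE - a UTF-16 byte order mark that some editor or sync step has prepended. In Ruby you can read the file in binary mode and drop those two bytes (the file name is a placeholder):

    data = File.open("data.txt", "rb") { |f| f.read }
    data = data[2..-1] if data[0, 2] == "\xFF\xFE"   # strip a UTF-16 BOM if present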

    Read the article

  • Select count() max() Date HELP!!! mysql oracle

    - by DAVID
    Hi guys, I have a table with shift history along with employee ids. I'm using this code to retrieve a list of employees and their total shifts, specifying the range to count over:

    SELECT ope_id, count(ope_id)
    FROM operator_shift
    WHERE ope_shift_date >= to_date('01-MAR-10','dd-mon-yy')
      AND ope_shift_date <= to_date('31-MAR-10','dd-mon-yy')
    GROUP BY ope_id

    which gives:

    OPE_ID  COUNT(OPE_ID)
         1             14
         2              7
         3              6
         4              6
         5              2
         6              5
         7              2
         8              1
         9              2
        10              4

    10 rows selected.

    Now, how do I choose the employee with the highest number of shifts within the specified date range? Please, this is really important.
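
    A sketch of one standard pattern (Oracle flavour, to match the to_date calls above): order the grouped counts descending and keep only the first row. In MySQL the same effect comes from appending ORDER BY ... DESC LIMIT 1 to the grouped query. Note that ties are resolved arbitrarily here.

    SELECT ope_id, shift_count
    FROM (
        SELECT ope_id, COUNT(ope_id) AS shift_count
        FROM operator_shift
        WHERE ope_shift_date >= to_date('01-MAR-10', 'dd-mon-yy')
          AND ope_shift_date <= to_date('31-MAR-10', 'dd-mon-yy')
        GROUP BY ope_id
        ORDER BY shift_count DESC
    )
    WHERE ROWNUM = 1;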

    Read the article

  • sql report link with rs:Command parameters not opening in JSF page

    - by H3wh0s33ks
    I have a report that we need to link to from a JSF project (we have checked that the report link itself works). The link looks like the following: http://www.example.com/report/summary&rs:Command=Render However, when we try to load the page that links to it, we get the following error: The reference to entity "rs:Command" must end with the ';' How can I link to the report within my pages and prevent the page from trying to parse rs:Command?
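
    A sketch of the usual fix: a JSF/Facelets page is parsed as XML, so a bare & in an attribute value starts an entity reference. Writing it as &amp; keeps the page well-formed, and the browser still receives a plain & in the URL. (The markup below is assumed, not taken from the question.)

    <a href="http://www.example.com/report/summary&amp;rs:Command=Render">Summary report</a>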

    Read the article

  • Client side page call/scrape?

    - by Silvre
    Here is the problem: I have a web application - a frequently changing notification system - that runs on a series of local computers. The application refreshes every couple of seconds to display the new information. The computers only display info and do not have keyboards or ANY input device. The issue is that if the connection to the server is lost (say updates are installed and the server must be rebooted), a page-not-found error is displayed. We must then either reboot all computers that are running this app, OR add a keyboard and refresh the browser, OR try to access each computer remotely and refresh the browser. None of these are good options, and they result in a lot of frustration. I cannot change the actual application OR the server environment. So what I need is some way to test the call to the application and, if an error is returned or it times out, continue trying every minute or so until the connection is re-established. My idea is to create a client-side page scraper that makes a JS request to the application (which displays basic HTML) and can run locally on the machine, no server required. If the scrape returns the correct content, it displays it. If not, it continues to request the page until the actual page content is returned. Is this possible? What is the best way to do it?
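
    One sketch of the wrapper-page idea, with made-up names, assuming the wrapper file is allowed to request the application URL (XMLHttpRequest is subject to the same-origin policy, so in practice the wrapper usually has to be served from the same host as the application):

    <html>
    <body>
    <div id="content">Waiting for the notification server...</div>
    <script type="text/javascript">
    var APP_URL = "http://appserver/notifications.html";   // placeholder address

    function refresh() {
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
            if (xhr.readyState !== 4) return;
            if (xhr.status === 200 && xhr.responseText.length > 0) {
                document.getElementById("content").innerHTML = xhr.responseText;
                setTimeout(refresh, 5000);      // normal refresh interval
            } else {
                setTimeout(refresh, 60000);     // unreachable: retry in a minute
            }
        };
        xhr.open("GET", APP_URL, true);
        xhr.send(null);
    }
    refresh();
    </script>
    </body>
    </html>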

    Read the article

  • Parse a special XML in Python

    - by zhaojing
    I have a special XML file like the one below:

    <alarm-dictionary source="DDD" type="ProxyComponent">
      <alarm code="402" severity="Alarm" name="DDM_Alarm_402">
        <message>Database memory usage low threshold crossed</message>
        <description>dnKinds = database
          type = quality_of_service
          perceived_severity = minor
          probable_cause = thresholdCrossed
          additional_text = Database memory usage low threshold crossed
        </description>
      </alarm>
      ...
    </alarm-dictionary>

    I know that in Python I can get the "alarm code" and "severity" attributes of the alarm tag with:

    for alarm_tag in dom.getElementsByTagName('alarm'):
        if alarm_tag.hasAttribute('code'):
            alarmcode = str(alarm_tag.getAttribute('code'))

    And I can get the text in the message tag like this:

    for messages_tag in dom.getElementsByTagName('message'):
        messages = ""
        for message_tag in messages_tag.childNodes:
            if message_tag.nodeType in (message_tag.TEXT_NODE, message_tag.CDATA_SECTION_NODE):
                messages += message_tag.data

    But I also want to get the values like dnKinds (database), type (quality_of_service), perceived_severity (minor), probable_cause (thresholdCrossed) and additional_text (Database memory usage low threshold crossed) from the description tag. That is, I also want to parse the name = value content inside that tag. Could anyone help me with this? Thanks a lot!
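
    A sketch continuing the code above (and assuming, as the file suggests, that each name = value pair in the description sits on its own line): collect the description text the same way as the message text, then split it into a dictionary.

    for desc_tag in dom.getElementsByTagName('description'):
        text = ""
        for node in desc_tag.childNodes:
            if node.nodeType in (node.TEXT_NODE, node.CDATA_SECTION_NODE):
                text += node.data
        fields = {}
        for line in text.splitlines():
            if '=' in line:
                name, value = line.split('=', 1)
                fields[name.strip()] = value.strip()
        # e.g. fields['dnKinds'] == 'database', fields['probable_cause'] == 'thresholdCrossed'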

    Read the article

  • Parsing HTML to find specific links (without keywords)

    - by Brett Powell
    I posted about this sort of thing earlier, but I am not sure how to post back to my original question, as I can only comment on or answer my own questions. Anyway, I need to get 4 links from a website within my C++ application: the latest stable build links for Windows and Linux, and the latest development build links for Windows and Linux (4 links total). I can download the page (http://www.sourcemod.net/snapshots.php) with libcurl, which is already implemented in the project, but after that I am not sure. I have been looking at parsers, but I can't think of how I am going to tell one link from another. Obviously, using a parser I could get the first link from each table, but this does not seem efficient and would only give me the links to the Windows builds. It looks like the links I need will be the fourth in both tables, but I am just not very familiar with a good way to go about this, so any help would be appreciated.
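
    A sketch only, since the page layout is an assumption and this needs a compiler with C++11 <regex> support: pull every href out of the downloaded HTML, then pick the links you need by position (e.g. the fourth entry of each table) or by matching the file name.

    #include <regex>
    #include <string>
    #include <vector>

    std::vector<std::string> extract_links(const std::string& html) {
        std::vector<std::string> links;
        std::regex href_re("href=\"([^\"]+)\"");
        for (auto it = std::sregex_iterator(html.begin(), html.end(), href_re);
             it != std::sregex_iterator(); ++it) {
            links.push_back((*it)[1].str());   // capture group 1 is the URL itself
        }
        return links;
    }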

    Read the article

  • Java / Android parser problem

    - by Kano
    I am making a very simple app with an RSS reader. The reader works great, but it only gives me the title, and I want the description too. I'm very new to Android and I have tried a lot of things, but I can't get it to work. I've found a lot of parsers, but they are too complicated for me to understand, so I was hoping to find a simple solution, since it's only the title and description I want. Can anyone help me?

    import java.io.IOException;
    import java.net.MalformedURLException;
    import java.net.URL;

    import javax.xml.parsers.ParserConfigurationException;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;

    import org.xml.sax.Attributes;
    import org.xml.sax.InputSource;
    import org.xml.sax.SAXException;
    import org.xml.sax.XMLReader;
    import org.xml.sax.helpers.DefaultHandler;

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;

    public class NyhedActivity extends Activity {

        String streamTitle = "";

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.nyheder);

            TextView result = (TextView) findViewById(R.id.result);

            try {
                URL rssUrl = new URL("http://tv2sport.dk/rss/*/*/*/248/*/*");
                SAXParserFactory mySAXParserFactory = SAXParserFactory.newInstance();
                SAXParser mySAXParser = mySAXParserFactory.newSAXParser();
                XMLReader myXMLReader = mySAXParser.getXMLReader();
                RSSHandler myRSSHandler = new RSSHandler();
                myXMLReader.setContentHandler(myRSSHandler);
                InputSource myInputSource = new InputSource(rssUrl.openStream());
                myXMLReader.parse(myInputSource);
                result.setText(streamTitle);
            } catch (MalformedURLException e) {
                e.printStackTrace();
                result.setText("Cannot connect RSS!");
            } catch (ParserConfigurationException e) {
                e.printStackTrace();
                result.setText("Cannot connect RSS!");
            } catch (SAXException e) {
                e.printStackTrace();
                result.setText("Cannot connect RSS!");
            } catch (IOException e) {
                e.printStackTrace();
                result.setText("Cannot connect RSS!");
            }
        }

        private class RSSHandler extends DefaultHandler {

            final int stateUnknown = 0;
            final int stateTitle = 1;
            int state = stateUnknown;

            int numberOfTitle = 0;
            String strTitle = "";
            String strElement = "";

            @Override
            public void startDocument() throws SAXException {
                strTitle = "Nyheder fra ";
            }

            @Override
            public void endDocument() throws SAXException {
                streamTitle = "" + strTitle;
            }

            @Override
            public void startElement(String uri, String localName, String qName,
                    Attributes attributes) throws SAXException {
                if (localName.equalsIgnoreCase("title")) {
                    state = stateTitle;
                    strElement = "";
                    numberOfTitle++;
                } else {
                    state = stateUnknown;
                }
            }

            @Override
            public void endElement(String uri, String localName, String qName)
                    throws SAXException {
                if (localName.equalsIgnoreCase("title")) {
                    strTitle += strElement + "\n" + "\n";
                }
                state = stateUnknown;
            }

            @Override
            public void characters(char[] ch, int start, int length)
                    throws SAXException {
                String strCharacters = new String(ch, start, length);
                if (state == stateTitle) {
                    strElement += strCharacters;
                }
            }
        }
    }
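
    A sketch of the smallest change I can think of (names like stateDescription are my own): give the handler a second state for <description> elements and append their text the same way the titles are appended.

    // Inside RSSHandler, alongside stateUnknown and stateTitle:
    final int stateDescription = 2;

    @Override
    public void startElement(String uri, String localName, String qName,
            Attributes attributes) throws SAXException {
        if (localName.equalsIgnoreCase("title")) {
            state = stateTitle;
            strElement = "";
            numberOfTitle++;
        } else if (localName.equalsIgnoreCase("description")) {
            state = stateDescription;
            strElement = "";
        } else {
            state = stateUnknown;
        }
    }

    @Override
    public void endElement(String uri, String localName, String qName)
            throws SAXException {
        if (localName.equalsIgnoreCase("title")
                || localName.equalsIgnoreCase("description")) {
            strTitle += strElement + "\n\n";
        }
        state = stateUnknown;
    }

    @Override
    public void characters(char[] ch, int start, int length) throws SAXException {
        String strCharacters = new String(ch, start, length);
        if (state == stateTitle || state == stateDescription) {
            strElement += strCharacters;
        }
    }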

    Read the article

  • Will I use HtmlDocument even if I want to parse an HTML string using HtmlAgilityPack?

    - by skhan
    Hi everyone, I'm working in C#. I'm trying to extract the first instance of the img tag from an HTML string (which is actually post data). This is my code:

    private string GrabImage(string htmlContent)
    {
        String firstImage;
        HtmlAgilityPack.HtmlDocument htmlDoc = new HtmlAgilityPack.HtmlDocument();
        htmlDoc.LoadHtml(htmlContent);
        HtmlAgilityPack.HtmlNode imageNode = htmlDoc.DocumentNode.SelectSingleNode("//img");
        if (imageNode != null)
        {
            return firstImage = imageNode.ToString();
        }
        else
            return firstImage = " ";
    }

    But it gets null in htmlDoc. Should I use the HtmlDocument type even if I'm trying to parse the HTML from a string? P.S. By the way, is this the correct way of grabbing the first instance of the image tag from my HTML string?
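
    A sketch of what I would check: HtmlDocument is the right type for strings too (LoadHtml exists precisely for that), so if SelectSingleNode comes back null the likely culprit is that the post data is HTML-encoded (&lt;img&gt; instead of <img>) and needs decoding first - that part is my assumption about the input, not something the question states. For the return value, OuterHtml or GetAttributeValue is usually more useful than ToString().

    private string GrabImage(string htmlContent)
    {
        // If the post data arrives HTML-encoded, decode it first (assumption):
        // htmlContent = System.Web.HttpUtility.HtmlDecode(htmlContent);

        var htmlDoc = new HtmlAgilityPack.HtmlDocument();
        htmlDoc.LoadHtml(htmlContent);              // fine for in-memory strings

        var imageNode = htmlDoc.DocumentNode.SelectSingleNode("//img");
        if (imageNode == null)
            return "";                              // no <img> found

        // imageNode.GetAttributeValue("src", "") would give just the URL instead
        return imageNode.OuterHtml;                 // the whole "<img ... />" tag
    }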

    Read the article
