Search Results

Search found 4222 results on 169 pages for 'dtd parsing'.

  • RapidXML, reading and saving values

    - by Layne
    Hello, I've worked myself through the rapidXML sources and managed to read some values. Now I want to change them and save them to my XML file: Parsing file and set a pointer void SettingsHandler::getConfigFile() { pcSourceConfig = parsing->readFileInChar(CONF); cfg.parse<0>(pcSourceConfig); } Reading values from XML void SettingsHandler::getDefinitions() { SettingsHandler::getConfigFile(); stGeneral = cfg.first_node("settings")->value(); /* stGeneral = 60 */ } Changing values and saving to file void SettingsHandler::setDefinitions() { SettingsHandler::getConfigFile(); stGeneral = "10"; cfg.first_node("settings")->value(stGeneral.c_str()); std::stringstream sStream; sStream << *cfg.first_node(); std::ofstream ofFileToWrite; ofFileToWrite.open(CONF, std::ios::trunc); ofFileToWrite << "<?xml version=\"1.0\"?>\n" << sStream.str() << '\0'; ofFileToWrite.close(); } Reading file into buffer char* Parser::readFileInChar(const char* p_pccFile) { char* cpBuffer; size_t sSize; std::ifstream ifFileToRead; ifFileToRead.open(p_pccFile, std::ios::binary); sSize = Parser::getFileLength(&ifFileToRead); cpBuffer = new char[sSize]; ifFileToRead.read( cpBuffer, sSize); ifFileToRead.close(); return cpBuffer; } However, it's not possible to save the new value. My code is just saving the original file with a value of "60" where it should be "10". Rgds Layne
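    A minimal sketch of one way to apply the change and write it back with RapidXML (assuming rapidxml 1.13 with rapidxml_print.hpp; the file and element names here are illustrative, not taken from the poster's project). Two details worth noting: RapidXML parses a writable, zero-terminated buffer in place, and with the default parse flags the serialized text of an element comes from its data-node child, so updating only the element's value() is typically not reflected in the output. This is a sketch under those assumptions, not a verified fix for the code above.

        // Load config.xml, change the text of <settings>, and write the file back.
        #include <fstream>
        #include <iterator>
        #include <vector>
        #include "rapidxml.hpp"
        #include "rapidxml_print.hpp"

        int main()
        {
            // Read the whole file into a zero-terminated, writable buffer
            // (rapidxml parses in place and expects a trailing '\0').
            std::ifstream in("config.xml", std::ios::binary);
            std::vector<char> buf((std::istreambuf_iterator<char>(in)),
                                  std::istreambuf_iterator<char>());
            buf.push_back('\0');

            rapidxml::xml_document<> doc;
            doc.parse<0>(buf.data());

            if (rapidxml::xml_node<>* settings = doc.first_node("settings")) {
                const char* new_value = doc.allocate_string("10");
                settings->value(new_value);            // element's own value
                if (settings->first_node())            // and the data node that actually gets printed
                    settings->first_node()->value(new_value);
            }

            // Serialize the whole document back to disk.
            std::ofstream out("config.xml", std::ios::trunc);
            out << "<?xml version=\"1.0\"?>\n" << doc;
            return 0;
        }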

  • How to parse a PDF document on iPhone or iPad?

    - by JohnWhite
    I have implemented a PDF parsing application for iPad, but the content of the PDF is not displayed clearly and I don't know why this is happening. Can you help me with this problem? Edit: I have used the code below for PDF parsing, but the text and images are not sharp. - (id)initWithFrame:(CGRect)frame { if (self = [super initWithFrame:frame]) { // Initialization code /* Demo PDF printed directly from Wikipedia without permission; all content (c) respective owners */ CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), CFSTR("2008_11_mp.pdf"), NULL, NULL); pdf = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL); CFRelease(pdfURL); counts+=1; self.pageNumber = counts; self.backgroundColor = nil; self.opaque = NO; self.userInteractionEnabled = NO; } return self; } - (void)drawRect:(CGRect)rect { // Drawing code CGContextRef context = UIGraphicsGetCurrentContext(); // PDF page drawing expects a Lower-Left coordinate system, so we flip the coordinate system // before we start drawing. // Grab the first PDF page CGPDFPageRef page = CGPDFDocumentGetPage(pdf, pageNumber); CGContextTranslateCTM(context, 0, self.bounds.size.height); CGContextScaleCTM(context, 1, -1); // We're about to modify the context CTM to draw the PDF page where we want it, so save the graphics state in case we want to do more drawing CGContextSaveGState(context); // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a PDF page. It will scale down to fit, including any // base rotations necessary to display the PDF page correctly. CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox, self.bounds, 0, true); // And apply the transform. CGContextConcatCTM(context, pdfTransform); // Finally, we draw the page and restore the graphics state for further manipulations! CGContextDrawPDFPage(context, page); CGContextRestoreGState(context); } Please help me solve this problem.

  • Building error in Eclipse with build.xml

    - by Zachary
    I am working on a Java project with Eclipse. This project requires a second project (not mine), named sams in its build-path. The sams is provided with a build.xml file and it should generate some code using Apache CXF when building it. When I use Apache ANT on Eclipse and run the cxf.generated command from its build file I get the following error: Buildfile: C:\Docs\ZacRocha\Desktop\sams\build.xml cxf.generated: [echo] Generating code using Apache CXF wsdl2java... [java] 16-Jun-2010 16:04:08 org.apache.cxf.binding.corba.CorbaConduit prepare [java] SEVERE: Could not resolve target object [java] 16-Jun-2010 16:04:08 org.apache.cxf.binding.corba.CorbaConduit prepare [java] SEVERE: Could not resolve target object [java] WSDLToJava Error: org.apache.cxf.wsdl11.WSDLRuntimeException: Fail to create wsdl definition from : file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivesoftware.wsdl%7d [java] Caused by : WSDLException: faultCode=PARSER_ERROR: Problem parsing 'file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivesoftware.wsdl%7d'.: java.io.FileNotFoundException: C:\Docs\ZacRocha\Desktop\sams\${archivesoftware.wsdl} (The system cannot find the file specified) [java] 16-Jun-2010 16:04:10 org.apache.cxf.binding.corba.CorbaConduit prepare [java] SEVERE: Could not resolve target object [java] 16-Jun-2010 16:04:10 org.apache.cxf.binding.corba.CorbaConduit prepare [java] SEVERE: Could not resolve target object [java] WSDLToJava Error: org.apache.cxf.wsdl11.WSDLRuntimeException: Fail to create wsdl definition from : file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivehardware.wsdl%7d [java] Caused by : WSDLException: faultCode=PARSER_ERROR: Problem parsing 'file:/C:/Docs/ZacRocha/Desktop/sams/$%7barchivehardware.wsdl%7d'.: java.io.FileNotFoundException: C:\Docs\ZacRocha\Desktop\sams\${archivehardware.wsdl} (The system cannot find the file specified) BUILD SUCCESSFUL Total time: 4 seconds I am used to program on Eclipse and I know very little about building with Apache ANT. Can someone tell me where exactly the problem may be? Thanks in advance!

  • mod_rewrite to nginx rewrite rules

    - by Andrew Bestic
    I have converted most of my Apache HTTPd mod_rewrite rules over to nginx's HttpRewrite module (which calls PHP-FPM via FastCGI on every dynamic request). Simple rules which are defined by hard locations work fine: location = /favicon.ico { rewrite ^(.*)$ /_core/frontend.php?type=ico&file=include__favicon last; } I am still having trouble with regular expressions, which are parsed in mod_rewrite like this (note that I am accepting trailing slashes within the rules, as well as appending the query string to every request): mod_rewrite # File handler RewriteRule ^([a-z0-9-_,+=]+)\.([a-z]+)$ _core/frontend.php?type=$2&file=$1 [QSA,L] # Page handler RewriteRule ^([a-z0-9-_,+=]+)$ _core/frontend.php?route=$1 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/$ _core/frontend.php?route=$1 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)$ _core/frontend.php?route=$1/$2 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/$ _core/frontend.php?route=$1/$2 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)$ _core/frontend.php?route=$1/$2/$3 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/$ _core/frontend.php?route=$1/$2/$3 [QSA,L] I have come up with the following server configuration for the site, but I am met with unmatched rules after parsing a request (eg; GET /user/auth): attempted nginx rewrite location / { # File handler rewrite ^([a-z0-9-_,+=]+)\.([a-z]+)?(.*)$ /_core/frontend.php?type=$2&file=$1&$3 break; # Page handler rewrite ^([a-z0-9-_,+=]+)(\/*)?(.*)$ /_core/frontend.php?route=$1&$2 break; rewrite ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)(\/*)?(.*)$ /_core/frontend.php?route=$1/$2&$3 break; rewrite ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)(\/*)?(.*)$ /_core/frontend.php?route=$1/$2/$3&$4 break; } What would you suggest for dealing with my File Handler (which is just filename.ext), and my Page Handler (which is a unique route request with up to 3 properties defined by a forward slash)? As I haven't gotten a response from this yet, I am also unsure if this will override my PHP parser which is defined with location ~ \.php {}, which is included before these rewrite rules. Bonus points if I can solve the parsing issues without the need to use a new rule for each number of route properties.

  • J2ME web service client

    - by Wasim
    Hi all, I started to use the NetBeans 6.8 web service client wizard. I created a web service (a .NET web service) which returns a Data object. The Data class contains fields of many types: int, string, double, a Person object and an array of Person objects. I created the J2ME client code through the wizard and everything seems OK; I can see the service and the stub in the NetBeans project. When I try to call my web service method, say Data GetData();, I have a problem with parsing the data returned from the web service into the client data object, as follows. After the call to the web service method, in the stub code: public Data helloWorld() throws java.rmi.RemoteException { Object inputObject[] = new Object[] { }; Operation op = Operation.newInstance( _qname_operation_HelloWorld, _type_HelloWorld, _type_HelloWorldResponse ); _prepOperation( op ); op.setProperty( Operation.SOAPACTION_URI_PROPERTY, "http://tempuri.org/HelloWorld" ); Object resultObj; try { resultObj = op.invoke( inputObject ); } catch( JAXRPCException e ) { Throwable cause = e.getLinkedCause(); if( cause instanceof java.rmi.RemoteException ) { throw (java.rmi.RemoteException) cause; } throw e; } return Data_fromObject((Object[])resultObj); } private static Data Data_fromObject( Object obj[] ) { if(obj == null) return null; Data result = new Data(); result.setIntData(((Integer )obj[0]).intValue()); result.setStringData((String )obj[1]); result.setDoubleData(((Double )obj[2]).doubleValue()); return result; } I debugged the code and at run time resultObj has one element which is an array of the Data object, so when Data_fromObject parses the values it expects separate cells, as in the code (obj[0], obj[1], obj[2]), but at run time the data is actually at obj[0][0], obj[0][1], obj[0][2]. What could be the problem, and how can I check for a code-generation issue? Is it a problem on the web service side? Please help. Thanks in advance.

  • uninstall string

    - by Sakhawat Ali
    Hi experts, I am developing a desktop application using VB.NET, similar to Add/Remove Programs. Everything was working fine until I started working on the uninstall feature. What I am doing now is getting the uninstall string of the specific application from the registry and using System.Diagnostics.Process to run the UninstallString: Dim proc As New Process() proc.StartInfo.FileName = UninstallString proc.Start() proc.WaitForExit() proc.Close() Later I found that it only works for straight file paths, i.e. with no command-line arguments, like: C:\program files\someApp\uninstall.exe I made a list of all the UninstallStrings of the applications installed on my machine and found a few variations: applications installed using MSI, some using rundll32, and a few with a straight file path plus command-line arguments. For example, my Silverlight SDK UninstallString (MSI example): MsiExec.exe /X{2012098D-EEE9-4769-8DD3-B038050854D4} My JetAudio UninstallString (RunDll32 example): RunDll32 C:\PROGRA~1\COMMON~1\INSTAL~1\engine\6\INTEL3~1\Ctor.dll,LaunchSetup "C:\Program Files\InstallShield Installation Information{91F34319-08DE-457A-99C0-0BCDFAC145B9}\Setup.exe" -l0x9 My Google Chrome UninstallString (straight file path with a command-line argument): "C:\Program Files\Google\Chrome\Application\5.0.375.55\Installer\setup.exe" -uninstall The code I mentioned above does not work for these, so I did some string parsing to separate two things from the UninstallString: the filename and the arguments. For MSI the filename is MSIEXEC.EXE and the argument is the rest of the string, and the same goes for RunDLL32 and for a straight file path with arguments. The problem I am facing is that every two or three days I discover yet another kind of UninstallString that does not work, because it is a new format (maybe abc C:\program files\someapp.exe -ddd), so I have to parse that too. Is there any better way of doing this rather than parsing the string?

  • LPX-00607 for ora:contains in java but not sqlplus

    - by Windle
    Hey all, I am trying to doing some sql querys out of Oracle 11g and am having issues using ora:contains. I am using spring's jdbc impl and my code generates the sql statement: select * from view_name where column_a = ? and column_b = ? and existsNode(xmltype(clob_column), 'record/name [ora:contains(text(), "name1") 0]', 'xmlns:ora="http://xmlns.oralce.com/xdb"') = 1 I have removed the actual view / column names obviously, but when I copy that into sqlplus and substitute in random values, the select executes properly. When I try to run it in my DAO code I get this stack trace: org.springframework.jdbc.UncatergorizedSQLException: PreparedStatementCallback; uncatergorizedSQLException for SQL [the big select above]; SQL state [99999]; error code [31011]; ORA-31011: XML parsing failed. ORA-19202: Error occured in XML processing LPX-00607: Invalid reference: 'contains' ;nested exception is java.sql.SQLException: ORA-31011: XML parsing failed ORA-19202: Error occured in XML processing LPX-00607: Invalid reference: 'contains' (continues on like this for awhile....) I think it is worth mentioning that I am using maven and it is possible I am missing some dependency that is required for this. Sorry the post is so long, but I wanted to err on the side of too much info. Thanks for taking the time to read this at least =) -Windle

  • Handling Incoming Mail to Multiple Recipients in PHP

    - by Navarr
    Alright, this may take a moment or two to explain: I'm working on creating an Email<SMS Bridge (like Teleflip). I have a few set parameters to work in: Dreamhost Webhosting PHP 5 (without PEAR) Postfix What I have right now, is a catch-all email address that forwards the email sent to a shell account. The shell account in turn forwards it to my PHP script. The PHP script reads it, strips a few Email Headers in an effort to make sure it sends properly, then forwards it to the number specified as the recipient. [email protected] of course sends an SMS to +1 (555) 123-4567. This works really well, as I am parsing the To field and grabbing just the email address it is sending to. However, what I realized that I did not account for is multiple recipients. For example, an email sent to both 5551234567 and 1235554567 (using the To line, the CC line, or any combination of those). The way email works of course, is I get two emails received, end up parsing each of them separately, and 5551234567 ends up getting the same message twice. What is the best way to handle this situation, so that each number specified in TO and CC can get one copy of the message. In addition, though I doubt its possible: Is there a way to handle BCC the same way?

  • iPhone: variable type returned by yajl

    - by Luc
    Hello, I'm quite new to iphone programming and I want to do the following stuff: get data from a JSON REST web server parse the received data using YAJL Draw a graph with those data using core-plot So, 1th item is fine, I use ASIHttpRequest which runs as espected 3rd is almost fine (I still have to learn how to tune core-plot). The problem I have is regarding 2nd item. I use YAJL as it seems to be the faster parser, so why not give it a try :) Here is the part of code that gets the data from the server and parse them: // Get server data response_data = [request responseData]; // Parse JSON received self.arrayFromData = [response_data yajl_JSON]; NSLog(@"Array from data: %@", self.arrayFromData); The parsing works quite well in fact, the NSLog output is something like: 2010-06-14 17:56:35.375 TEST_APP[3733:207] Array from data : { data = ( { val = 1317; date = "2010-06-10T15:50:01+02:00"; }, { val = 1573; date = "2010-06-10T16:20:01+02:00"; }, ........ { val = 840; date = "2010-06-11T14:50:01+02:00"; }, { val = 1265; date = "2010-06-11T15:20:01+02:00"; } ); from = "2010-06-10T15:50:01+02:00"; to = "2010-06-11T15:20:01+02:00"; max = "2590"; } According to th yajl-objc explanations http://github.com/gabriel/yajl-objc, the parsing returns a NSArray... The thing is... I do not know how to get all the values from it as for me it looks more like a NSDictionary than a NSArray... Could you please help ? Thanks a lot, Luc edit1: it happens that this object is actually a NSCFDictionary (!), I am still not able to get value from it, when I try the objectFromKey method (that should work on a Dictionary, no ?) it fails.

  • Python minidom and UTF-8 encoded XML with hash references

    - by Jakob Simon-Gaarde
    Hi I am experiencing some difficulty in my home project where I need to parse a SOAP request. The SOAP is generated with gSOAP and involves string parameters with special characters like the danish letters "æøå". gSOAP builds SOAP requests with UTF-8 encoding by default, but instead of sending the special chatacters in raw format (ie. bytes C3A6 for the special character "æ") it sends what I think is called character hash references (ie. &#195;&#166;). I don't completely understand why gSOAP does it this way as I can see that it has marked the incomming payload as being UTF-8 encoded anyway (Content-Type: text/xml; charset=utf-8), but this is besides the question (I think). Anyway I guess gSOAP probably is obeying transport rules, or what? When I parse the request from gSOAP in python with xml.dom.minidom.parseString() I get element values as unicode objects which is fine, but the character hash references are not decoded as UTF-8 character codes. It unescapes the character hash references, but does not decode the string afterwards. In the end I have a unicode string object with UTF-8 encoding: So if the string "æble" is contained in the XML, it comes like this in the request: "&#195;&#166;ble" After parsing the XML the unicode string in the DOM Text Node's data member looks like this: u'\xc3\xa6ble' I would expect it to look like this: u'\xe6ble' What am I doing wrong? Should I unescape the SOAP XML before parsing it, or is it somewhere else I should be looking for the solution, maybe gSOAP? Thanks in advance. Best regards Jakob Simon-Gaarde

  • Slowdowns when reading from an urlconnection's inputstream (even with byte[] and buffers)

    - by user342677
    Ok so after spending two days trying to figure out the problem, and reading about dizillion articles, i finally decided to man up and ask to for some advice(my first time here). Now to the issue at hand - I am writing a program which will parse api data from a game, namely battle logs. There will be A LOT of entries in the database(20+ million) and so the parsing speed for each battle log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0. (see source code if using chrome, it doesnt display the page right). It has 1000 hit entries, followed by a little battle info(lastpage will have <1000 obviously). On average, a page contains 175000 characters, UTF-8 encoding, xml format(v 1.0). Program will run locally on a good PC, memory is virtually unlimited(so that creating byte[250000] is quite ok). The format never changes, which is quite convenient. Now, I started off as usual: //global vars,class declaration skipped public WebObject(String url_string, int connection_timeout, int read_timeout, boolean redirects_allowed, String user_agent) throws java.net.MalformedURLException, java.io.IOException { // Open a URL connection java.net.URL url = new java.net.URL(url_string); java.net.URLConnection uconn = url.openConnection(); if (!(uconn instanceof java.net.HttpURLConnection)) { throw new java.lang.IllegalArgumentException("URL protocol must be HTTP"); } conn = (java.net.HttpURLConnection) uconn; conn.setConnectTimeout(connection_timeout); conn.setReadTimeout(read_timeout); conn.setInstanceFollowRedirects(redirects_allowed); conn.setRequestProperty("User-agent", user_agent); } public void executeConnection() throws IOException { try { is = conn.getInputStream(); //global var l = conn.getContentLength(); //global var } catch (Exception e) { //handling code skipped } } //getContentStream and getLength methods which just return'is' and 'l' are skipped Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long ,and what doesnt. The call to this method takes only 200ms on avg public InputStream getWebPageAsStream(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 " + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)"); wobj.executeConnection(); l = wobj.getContentLength(); // global variable return wobj.getContentStream(); //returns 'is' stream } 200ms is quite expected from a network operation, and i am fine with it. BUT when i parse the inputStream in any way(read it into string/use java XML parser/read it into another ByteArrayStream) the process takes over 1000ms! 
for example, this code takes 1000ms IF i pass the stream i got('is') above from getContentStream() directly to this method: public static Document convertToXML(InputStream is) throws ParserConfigurationException, IOException, SAXException { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(is); doc.getDocumentElement().normalize(); return doc; } this code too, takes around 920ms IF the initial InputStream 'is' is passed in(dont read into the code itself - it just extracts the data i need by directly counting the characters, which can be done thanks to the rigid api feed format): public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException { // Point A BufferedReader br = new BufferedReader(new InputStreamReader(is)); LinkedList ll = new LinkedList(); String str = br.readLine(); while (str != null) { ll.add(str); str = br.readLine(); } if (((String) ll.get(1)).indexOf("error") != -1) { return new parsedBattlePage(null, null, true, -1); } //Point B Iterator it = ll.iterator(); it.next(); it.next(); it.next(); it.next(); String[][] hits_arr = new String[1000][4]; String t_str = (String) it.next(); String tmp = null; int j = 0; for (int i = 0; t_str.indexOf("time") != -1; i++) { hits_arr[i][0] = t_str.substring(12, t_str.length() - 11); tmp = (String) it.next(); hits_arr[i][1] = tmp.substring(14, tmp.length() - 9); tmp = (String) it.next(); hits_arr[i][2] = tmp.substring(15, tmp.length() - 10); tmp = (String) it.next(); hits_arr[i][3] = tmp.substring(18, tmp.length() - 13); it.next(); it.next(); t_str = (String) it.next(); j++; } String[] b_info_arr = new String[9]; int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13}; for (int i = 0; i < space_nums.length; i++) { tmp = (String) it.next(); b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1); } //Point C return new parsedBattlePage(hits_arr, b_info_arr, false, j); } I have tried replacing the default BufferedReader with BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000); This didnt change much. My second try was to replace the code between A and B with: Iterator it = IOUtils.lineIterator(is, "UTF-8"); Same result, except this time A-B was 0ms, and B-C was 1000ms, so then every call to it.next() must have been consuming some significant time.(IOUtils is from apache-commons-io library). And here is the culprit - the time taken to parse the stream to string, be it by an iterator or BufferedReader in ALL cases was about 1000ms, while the rest of the code took 0ms(e.g. irrelevant). This means that parsing the stream to LinkedList, or iterating over it, for some reason was eating up a lot of my system resources. question was - why? Is it just the way java is made...no...thats just stupid, so I did another experiment. In my main method I added after the getWebPageAsStream(): //Point A ba = new byte[l]; // 'l' comes from wobj.getContentLength above bytesRead = is.read(ba); //'is' is our URLConnection original InputStream offset = bytesRead; while (bytesRead != -1) { bytesRead = is.read(ba, offset - 1, l - offset); offset += bytesRead; } //Point B InputStream is2 = new ByteArrayInputStream(ba); //Now just working with 'is2' - the "copied" stream The InputStream-byte[] conversion took again 1000ms - this is the way many ppl suggested to read an InputStream, and stil it is slow. 
And guess what - the 2 parser methods above (convertToXML() and convertBattlePagetoXMLWithoutDOM(), when passed 'is2' instead of 'is' took, in all 4 cases, under 50ms to complete. I read a suggestion that the stream waits for connection to close before unblocking, so i tried using HttpComponentsClient 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. e.g. this code: public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet(url); HttpParams p = new BasicHttpParams(); HttpConnectionParams.setSocketBufferSize(p, 250000); HttpConnectionParams.setStaleCheckingEnabled(p, false); HttpConnectionParams.setConnectionTimeout(p, 5000); httpget.setParams(p); HttpResponse response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); l = (int) entity.getContentLength(); return entity.getContent(); } took even longer to process(50ms more for just the network) and the stream parsing times remained the same. Obviously it can be instantiated so as to not create HttpClient and properties every time(faster network time), but the stream issue wont be affected by that. So we come to the center problem - why does the initial URLConnection InputStream(or HttpClient InputStream) take so long to process, while any stream of same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I cant see any good reasong why it is processed so slowly compared to when a same stream is just created from a byte[]. Considering I have to parse million of entries and thousands of pages like that, a total processing time of almost 1.5s/page seems WAY WAY too long. Any ideas? P.S. Please ask in any more code is required - the only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the perfomance is ok ~ 200ms/1000entries, prb could be optimized with more cache but I didnt look into it much.

  • Could not determine the size of this expression.

    - by Søren
    Hi, I have resently started to use MATLAB Simulink, and my problem is that i can't implement an AMDF function, because simulink compiler cannot determine the lengths. Simulink errors: |--------------------------------------------------------------------------------------- Could not determine the size of this expression. Function 'Embedded MATLAB Function2' (#38.728.741), line 33, column 32: "1:flength-k+1" Errors occurred during parsing of Embedded MATLAB function 'Embedded MATLAB Function2'(#38) Embedded MATLAB Interface Error: Errors occurred during parsing of Embedded MATLAB function 'Embedded MATLAB Function2'(#38) . |--------------------------------------------------------------------------------------- MY CODE: |--------------------------------------------------------------------------------------- persistent sLength persistent fLength persistent amdf % Length of the frame flength = length(frame); % Pitch period is between 2.5 ms and 19.5 ms for LPC-10 algorithm % This because this algorithm assumes the frequencyspan is 50 and 400 Hz pH = ceil((1/min(fspan))*fs); if(pH flength) pH = flength; end; pL = ceil((1/max(fspan))*fs); if(pL <= 0 || pL = flength) pL = 0; end; sLength = pH - pL; % Normalize the frame frame = frame/max(max(abs(frame))); % Allocating memory for the calculation of the amdf %amdf = zeros(1,sLength); %%%%%%%% amdf = 0; % Calculating the AMDF with unbiased normalizing for k = (pL+1):pH amdf(k-pL) = sum(abs(frame(1:flength-k+1) - frame(k:flength)))/(flength-k+1); end; % Output of the AMDF if(min(amdf) < lvlThr) voiced = 1; else voiced = 0; end; % Output of the minimum of the amdf minAMDF = min(amdf); |---------------------------------------------------------------------------------------- HELP Kind regards Søren

  • Passing a Java Mail message object between applications

    - by jezhilvalan
    I'm using java mail api 1.4.1 to obtain new emails. Two classes are being used to obtain emails and then parsing it. "GetMail" class communicates with mail server(Gmail,yahoo etc) and obtains the message object. Then the message object is passed to yet another class "MailFormatter" class, which then parses the message object, obtains the email headers (From,To,Subject etc) and then it parses the Multipart content to obtain the main body and attachments.Since both "Mail getting" and "Mail formatting" process are very resource intensive, these classes are going to be implemented as separate web applications.This application is going to monitor new emails for numerous email ids.If these ("GetMail" and "MailFormatter") are implemented as separate web applications, how can I pass the message object from "GetMail" app to "MailFormatter" app ? Is there a way through which I can persist the obtained message object in a certain location (a location which is common to both "GetMail" and "MailFormatter" applications), so that "GetMail" can persist the message object in that location, and then "MailFormatter" app can read "Message" objects from that location and carry out the parsing process. Message objects cannot be serialized. If they cannot be serialized how can I persist the state of java mail message object? please do help me to resolve this issue.

  • Issue with XSLT Processing on PHP

    - by monksy
    I'm getting a few errors from XSLTProcessor: XSLTProcessor::transformToDoc() [<a href='function.XSLTProcessor-transformToDoc'>function.XSLTProcessor-transformToDoc</a>]: Invalid or inclomplete context XSLTProcessor::transformToDoc() [<a href='function.XSLTProcessor-transformToDoc'>function.XSLTProcessor-transformToDoc</a>]: XSLTProcessor::transformToDoc() [<a href='function.XSLTProcessor-transformToDoc'>function.XSLTProcessor-transformToDoc</a>]: xsltValueOf: text copy failed in Which is parsing this XSLT Line: <xsl:apply-templates select="page/sections/section" mode="subset"/> The section is: <xsl:template match="page/sections/section" mode="subset"> <a href="#{shorttitle}"> <xsl:value-of select="title"/> </a> <xsl:if test="position() != last()"> | </xsl:if> </xsl:template> The XML that the section is parsing is: <shorttitle>About</shorttitle> <title>#~ About</title> The PHP XSLT Code is: $xslt = new XSLTProcessor(); $XSL = new DOMDocument(); $XSL->load( $xsltFile, LIBXML_NOCDATA); $xslt->importStylesheet( $XSL ); print $xslt->transformToXML( $XML ); My suspicion about the the errors is due to content. I'm not getting these errors with Firefox's XSLT rendering, nor am I getting an invalid XML document on the backend.I'm not getting errors on the load, its just on the transformToXML function. Does anyone have a clue on how to solve this? This is with PHP5.

  • Recursively MySQL Query

    - by Rachel
    How can I implement recursive MySQL Queries. I am trying to look for it but resources are not very helpful. Trying to implement similar logic. public function initiateInserts() { //Open Large CSV File(min 100K rows) for parsing. $this->fin = fopen($file,'r') or die('Cannot open file'); //Parsing Large CSV file to get data and initiate insertion into schema. $query = ""; while (($data=fgetcsv($this->fin,5000,";"))!==FALSE) { $query = $query + "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES (" + $data[0] + ", " + $data[1] + ", " + $data[2] + ", " + $data[3] + ")"; } $stmt = $this->prepare($query); // Execute the statement $stmt->execute(); $this->checkForErrors($stmt); } @Author: Numenor Error Message: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '0' at line 1 This Approach inspired to look for an MySQL recursive query approach.

  • Nested parsers in happy / infinite loop?

    - by McManiaC
    I'm trying to write a parser for a simple markup language with happy. Currently, I'm having some issues with infinit loops and nested elements. My markup language basicly consists of two elements, one for "normal" text and one for bold/emphasized text. data Markup = MarkupText String | MarkupEmph [Markup] For example, a text like Foo *bar* should get parsed as [MarkupText "Foo ", MarkupEmph [MarkupText "bar"]]. Lexing of that example works fine, but the parsing it results in an infinite loop - and I can't see why. This is my current approach: -- The main parser: Parsing a list of "Markup" Markups :: { [Markup] } : Markups Markup { $1 ++ [$2] } | Markup { [$1] } -- One single markup element Markup :: { Markup } : '*' Markups1 '*' { MarkupEmph $2 } | Markup1 { $1 } -- The nested list inside *..* Markups1 :: { [Markup] } : Markups1 Markup1 { $1 ++ [$2] } | Markup1 { [$1] } -- Markup which is always available: Markup1 :: { Markup } : String { MarkupText $1 } What's wrong with that approach? How could the be resolved? Update: Sorry. Lexing wasn't working as expected. The infinit loop was inside the lexer. Sorry. :) Update 2: On request, I'm using this as lexer: lexer :: String -> [Token] lexer [] = [] lexer str@(c:cs) | c == '*' = TokenSymbol "*" : lexer cs -- ...more rules... | otherwise = TokenString val : lexer rest where (val, rest) = span isValidChar str isValidChar = (/= '*') The infinit recursion occured because I had lexer str instead of lexer cs in that first rule for '*'. Didn't see it because my actual code was a bit more complex. :)

  • problem with fork()

    - by john
    I'm writing a shell which forks, with the parent reading the input and the child process parsing and executing it with execvp. Pseudocode of the main method: do{ pid = fork(); print pid; if (p<0) { error; exit; } if (p>0) { wait for child to finish; read input; } else { call function to parse input; exit; } }while condition return; What happens is that I never seem to enter the child process (the pid printed is always positive; I never enter the else). However, if I don't call the parse function and just have the else exit, I do correctly alternate between parent and child. Full code: int main(int argc, char *argv[]){ char input[500]; pid_t p; int firstrun = 1; do{ p = fork(); printf("PID: %d", p); if (p < 0) {printf("Error forking"); exit(-1);} if (p > 0){ wait(NULL); firstrun = 0; printf("\n> "); bzero(input, 500); fflush(stdout); read(0, input, 499); input[strlen(input)-1] = '\0'; } else exit(0); else { if (parse(input) != 0 && firstrun != 1) { printf("Error parsing"); exit(-1); } exit(0); } }while(strcmp(input, "exit") != 0); return 0; }
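    For reference, a minimal sketch of the usual fork/execvp/waitpid loop, written as plain C-style code that also compiles as C++. It is illustrative only and deliberately does not reproduce the poster's parse() function:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            char line[512];

            for (;;) {
                fputs("> ", stdout);
                fflush(stdout);
                if (fgets(line, sizeof line, stdin) == NULL)
                    break;                         /* EOF */
                line[strcspn(line, "\n")] = '\0';
                if (strcmp(line, "exit") == 0)
                    break;

                pid_t pid = fork();
                if (pid < 0) {
                    perror("fork");
                    break;
                } else if (pid == 0) {             /* child: tokenize and exec the command */
                    char *argv[64];
                    int argc = 0;
                    for (char *tok = strtok(line, " "); tok && argc < 63; tok = strtok(NULL, " "))
                        argv[argc++] = tok;
                    argv[argc] = NULL;
                    if (argc > 0)
                        execvp(argv[0], argv);
                    perror("execvp");              /* only reached if exec failed */
                    _exit(127);
                } else {                           /* parent: wait for the child */
                    int status;
                    waitpid(pid, &status, 0);
                }
            }
            return 0;
        }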

  • Is my method for avoiding dynamic_cast<> faster than dynamic_cast<> itself ?

    - by ereOn
    Hi, I was answering a question a few minutes ago and it raised another one for me: In one of my projects, I do some network message parsing. The messages are in the form of: [1 byte message type][2 bytes payload length][x bytes payload] The format and content of the payload are determined by the message type. I have a class hierarchy, based on a common class Message. To instantiate my messages, I have a static parsing method which gives back a Message* depending on the message type byte. Something like: Message* parse(const char* frame) { // This is sample code, in real life I obviously check that the buffer // is not NULL, and the size, and so on. switch(frame[0]) { case 0x01: return new FooMessage(); case 0x02: return new BarMessage(); } // Throw an exception here because the message type is unknown. } I sometimes need to access the methods of the subclasses. Since my network message handling must be fast, I decided to avoid dynamic_cast<> and I added a method to the base Message class that gives back the message type. Depending on this return value, I use a static_cast<> to the right child type instead. I did this mainly because I was told once that dynamic_cast<> was slow. However, I don't know exactly what it really does and how slow it is; thus, my method might be just as slow (or slower) but far more complicated. What do you guys think of this design? Is it common? Is it really faster than using dynamic_cast<>? Any detailed explanation of what happens under the hood when one uses dynamic_cast<> is welcome!
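    A small, self-contained C++ sketch of the two dispatch styles being compared; the class names here are made up and are not the poster's real hierarchy:

        #include <cassert>

        enum class MessageType : unsigned char { Foo = 0x01, Bar = 0x02 };

        struct Message {
            virtual ~Message() = default;
            virtual MessageType type() const = 0;
        };

        struct FooMessage : Message {
            MessageType type() const override { return MessageType::Foo; }
            int fooField = 0;
        };

        struct BarMessage : Message {
            MessageType type() const override { return MessageType::Bar; }
        };

        void handle(Message& m)
        {
            // Tag-based dispatch: one virtual call plus a static_cast. The cast is only
            // valid because type() is guaranteed to match the dynamic type.
            if (m.type() == MessageType::Foo) {
                FooMessage& foo = static_cast<FooMessage&>(m);
                foo.fooField = 42;
            }

            // dynamic_cast does the same check through RTTI and returns nullptr on a
            // mismatch, so it stays correct even if a type() override is wrong.
            if (FooMessage* foo = dynamic_cast<FooMessage*>(&m)) {
                assert(foo->type() == MessageType::Foo);
            }
        }

        int main()
        {
            FooMessage msg;
            handle(msg);
            return 0;
        }

    The measured difference between the two depends heavily on the compiler's RTTI implementation and the depth of the hierarchy, so profiling on the real message mix is the only reliable comparison.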

  • Why am I getting the same values for different JSON date values?

    - by Maxood
    I do not know the reason why am i getting same values of different JSON date values. Here is my code for parsing date values in JSON date format: package com.jsondate.parsing; import java.text.SimpleDateFormat; import java.util.Date; import android.app.Activity; import android.os.Bundle; import android.util.Log; import android.widget.TextView; public class JSONDateParsing extends Activity { /** Called when the activity is first created. */ String myString; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); TextView tv = new TextView(this); //Date d = (new Date(1266029455L)); //Date d = (new Date(1266312467L)); Date d = (new Date(1266036226L)); //String s = d.getDate() + "-" + d.getMonth() + "-" + d.getYear() + d.getHours() + d.getMinutes() + d.getSeconds(); // SimpleDateFormat sdf=new SimpleDateFormat("yyyy MMM dd @ hh:mm aa"); //Toast.makeText(this, d.toString(), Toast.LENGTH_SHORT); Log.e("Value:", d.toString()); myString = d.toString(); String []words = myString.split(" "); for(int i = 0;i < words.length; i++) Log.e("Value:", words[i]); myString = words[2] + "-" + words[1] + "-" + words[5] + " " + words[3]; tv.setText(myString); setContentView(tv); } }

  • Indentation control while developing a small python like language

    - by sap
    Hello, I'm developing a small Python-like language using flex, byacc (for lexing and parsing) and C++, but I have a few questions regarding scope control. Just as in Python, it uses white space (or tabs) for indentation. Not only that, but I want to implement index breaking: for instance, if you type "break 2" inside a while loop that's inside another while loop, it would break not only from the inner loop but from the outer loop as well (hence the number 2 after break), and so on. Example: while 1 while 1 break 2 'hello world'!! #will never reach this. "!!" outputs with a newline end 'hello world again'!! #also will never reach this. again "!!" used for cout end #after break 2 it would jump right here But since I don't have an "anti" tab character to check when a scope ends (in C, for example, I would just use the '}' char), I was wondering if this method would be the best: I would define a global variable, like "int tabIndex", in my yacc file that I would access in my lex file using extern. Then every time I find a tab character in my lex file I would increment that variable by 1. When parsing in my yacc file, if I find a "break" keyword I would decrement the tabIndex variable by the amount typed after it, and if I reach EOF after compiling with tabIndex != 0 I would output a compilation error. Now the problem is: what's the best way to see if the indentation got reduced? Should I read \b (backspace) chars from lex and then reduce the tabIndex variable (when the user doesn't use break), or is there another method to achieve this? Also, just another small question: I want every executable to have its starting point in a function called start(); should I hardcode this into my yacc file? Sorry for the long question; any help is greatly appreciated. Also, if someone could provide a yacc file for Python it would be nice as a guideline (I tried looking on Google and had no luck). Thanks in advance.
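    One common alternative to counting raw tab characters is to track indentation line by line against a stack of open levels, the way Python's tokenizer emits INDENT/DEDENT. A rough sketch in plain C++, independent of flex/byacc; the token names and the tab-only indentation rule are illustrative assumptions:

        #include <iostream>
        #include <sstream>
        #include <stack>
        #include <string>

        void tokenizeIndentation(std::istream& in)
        {
            std::stack<int> levels;
            levels.push(0);

            std::string line;
            while (std::getline(in, line)) {
                int width = 0;
                while (width < (int)line.size() && line[width] == '\t')
                    ++width;
                if (width == (int)line.size())
                    continue;                       // blank line: no indentation change

                if (width > levels.top()) {         // deeper than before: open a scope
                    levels.push(width);
                    std::cout << "INDENT\n";
                }
                while (width < levels.top()) {      // shallower: close scopes until it matches
                    levels.pop();
                    std::cout << "DEDENT\n";
                }
                std::cout << "LINE: " << line.substr(width) << "\n";
            }
            while (levels.top() > 0) {              // close anything still open at EOF
                levels.pop();
                std::cout << "DEDENT\n";
            }
        }

        int main()
        {
            std::istringstream src("while 1\n\twhile 1\n\t\tbreak 2\nprint\n");
            tokenizeIndentation(src);
        }

    With this approach a "break N" statement only has to pop N loop scopes at parse time; the lexer never needs an explicit end-of-scope character.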

  • Implementing eval and load functions inside a scripting engine with Flex and Bison.

    - by Simone Margaritelli
    Hi guys, I'm developing a scripting engine with flex and bison and now I'm implementing the eval and load functions for this language. Just to give you an example, the syntax is like: import std.*; load( "some_script.hy" ); eval( "foo = 123;" ); println( foo ); So, in my lexer I've implemented the function: void hyb_parse_string( const char *str ){ extern int yyparse(void); YY_BUFFER_STATE prev, next; /* * Save current buffer. */ prev = YY_CURRENT_BUFFER; /* * yy_scan_string will call yy_switch_to_buffer. */ next = yy_scan_string( str ); /* * Do actual parsing (yyparse calls yylex). */ yyparse(); /* * Restore previous buffer. */ yy_switch_to_buffer(prev); } But it does not seem to work. Well, it does, but when the string (loaded from a file or evaluated directly) is finished, I get a SIGSEGV: Program received signal SIGSEGV, Segmentation fault. 0xb7f2b801 in yylex () at src/lexer.cpp:1658 1658 if ( YY_CURRENT_BUFFER_LVALUE->yy_buffer_status == YY_BUFFER_NEW ) As you may notice, the SIGSEGV is generated by the flex/bison code, not mine... Any hints, or at least an example of how to implement these kinds of functions? PS: I've successfully implemented the include directive, but I need eval and load to work not at parsing time but at execution time (something like PHP's include/require directives).
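    For comparison, a hedged sketch of an eval-style entry point that frees the string buffer with yy_delete_buffer once parsing is done. The yy_* declarations mirror what a flex-generated scanner provides (normally you would link against the generated code rather than redeclare them); the hyb_ name is illustrative:

        extern int yyparse(void);

        typedef struct yy_buffer_state* YY_BUFFER_STATE;
        extern YY_BUFFER_STATE yy_scan_string(const char* str);
        extern void yy_switch_to_buffer(YY_BUFFER_STATE buffer);
        extern void yy_delete_buffer(YY_BUFFER_STATE buffer);

        int hyb_eval_string(const char* code)
        {
            // yy_scan_string allocates a new buffer and makes it current.
            YY_BUFFER_STATE state = yy_scan_string(code);

            int result = yyparse();

            // Free the string buffer; leaving it around, or switching back to a stale
            // buffer pointer captured before the call, is a common source of crashes
            // once the string is exhausted.
            yy_delete_buffer(state);

            return result;
        }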

  • How to fix IllegalStateException error when trying to update a listview?

    - by Michael Vetrano
    Hi guys, I am trying to get this code to run, but I get an IllegalStateException when I run this code saying that the content of the listview wasn't notified, yet I have a notification upon updating the data. This is a custom listview adapter. Here is the relevant part of my code: class LoadingThread extends Thread { public void run() { int itemsOriginallyLoaded = 0; synchronized( items ) { itemsOriginallyLoaded = items.size(); } for( int i = itemsOriginallyLoaded ; i < itemsToLoad ; ++i ) { Log.d( LOG_TAG, "Loading item #"+i ); //String item = "FAIL"; //try { String item = "FAIL"; try { item = stockGrabber.getStockString(dataSource.get(i)); } catch (ApiException e) { Log.e(LOG_TAG, "Problem making API request", e); } catch (ParseException e) { Log.e(LOG_TAG, "Problem parsing API request", e); } //} catch (ApiException e) { // Log.e(TAG, "Problem making API request", e); //} catch (ParseException e) { // Log.e(TAG, "Problem parsing API request", e); //} synchronized( items ) { items.add( item ); } itemsLoaded = i+1; uiHandler.post( updateTask ); Log.d( LOG_TAG, "Published item #"+i ); } if( itemsLoaded >= ( dataSource.size() - 1 ) ) allItemsLoaded = true; synchronized( loading ) { loading = Boolean.FALSE; } } } class UIUpdateTask implements Runnable { public void run() { Log.d( LOG_TAG, "Publishing progress" ); notifyDataSetChanged(); } }

  • Altering an embedded truetype font so it will be useable by Windows GDI

    - by Ritsaert Hornstra
    I am trying to render PDF content to a GDI device context (a 24bit bitmap to be exact). Parsing the PDF stream into PDF objects and rendering the PDF commands from the content dictionary works well, inclduing font rendering. Embedded fonts are decompressed from their FontFile streams and "loaded" using AddFontMemResourceEx. Now some embedded fonts remove some TrueType tables that are needed by GDI, like the NAME table. Because of this, I tried to modify the font by parsing the TrueType subset font into it's tables and modify those tables that have data missing / missing tables are regenerated with as correct information as possible. I use the Microsoft Font Validator tool to see how "correct" the generated font is. I still get a few errors, like for the maxp table the max values are usually too large (it is a subset) or The xAvgCharWidth field does not equal the calculated value of the OS/2 table is not correct but this does not stop other embedded fonts to be useable.The fonts embedded using PDFCreator are the ones that are problematic. Question: - How can I determine what I need to change to the font file in order for GDI to be able to use it? - Are there any other font validation tools that might give me insight into what is still wrong with the fontfile? If needed: I can make an original fontfile and an altered fontfile available for download somewhere.

  • Boost Binary Endian parser not working?

    - by Hai
    I am studying how to use the Boost Spirit Qi binary endianness parsers. I wrote a small test parser program based on the documentation and the basic examples, but it doesn't work properly; it gives me the message "Error: no match". Here is my code. #include "boost/spirit/include/qi.hpp" #include "boost/spirit/include/phoenix_core.hpp" #include "boost/spirit/include/phoenix_operator.hpp" #include "boost/spirit/include/qi_binary.hpp" // parsing binary data in various endianness template <typename P, typename T> void binary_parser( char const* input, P const& endian_word_type, T& voxel, bool full_match = true) { using boost::spirit::qi::parse; char const* f(input); char const* l(f + strlen(f)); bool result1 = parse(f,l,endian_word_type,voxel); bool result2 =((!full_match) || (f ==l)); if ( result1 && result2) { //doing nothing, parsing data is pass to voxel alreay } else { std::cerr << "Error: not match!!" << std::endl; exit(1); } } typedef boost::uint16_t bs_int16; typedef boost::uint32_t bs_int32; int main ( int argc, char *argv[] ) { namespace qi = boost::spirit::qi; namespace ascii = boost::spirit::ascii; using qi::big_word; using qi::big_dword; boost::uint32_t ui; float uf; binary_parser("\x01\x02\x03\x04",big_word,ui); assert(ui=0x01020304); binary_parser("\x01\x02\x03\x04",big_word,uf); assert(uf=0x01020304); return 0; } I almost copied the example, so why doesn't this binary parser work? I am using Mac OS X 10.5.8 and the GCC 4.0.1 compiler.
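    For reference, a stripped-down sketch of the same API call that parses a 32-bit big-endian value with qi::big_dword (big_word is the 16-bit parser) over an explicitly bounded byte range, since strlen() would stop at any embedded zero byte in real binary data. This is a usage sketch, not a drop-in fix for the code above:

        #include <boost/cstdint.hpp>
        #include <boost/spirit/include/qi.hpp>
        #include <boost/spirit/include/qi_binary.hpp>
        #include <cassert>

        int main()
        {
            namespace qi = boost::spirit::qi;

            char const input[] = "\x01\x02\x03\x04";
            char const* first = input;
            char const* last  = input + 4;          // explicit length, no strlen()

            boost::uint32_t value = 0;
            bool ok = qi::parse(first, last, qi::big_dword, value);

            assert(ok && first == last);            // full match: all four bytes consumed
            assert(value == 0x01020304u);
            return 0;
        }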
