Search Results

Search found 3284 results on 132 pages for 'parser generator'.


  • XML::LibXML::XPathContext - Question

    - by sid_com
    This script works with and without XPathContext. Why should I use it with XPathContext?

    #!/usr/bin/env perl
    use warnings;
    use strict;
    use XML::LibXML;
    use 5.012;

    my $parser = XML::LibXML->new;
    my $doc = $parser->parse_string(<<EOT);
    <?xml version="1.0"?>
    <xml>
    Text im Dokument
    <element id="myID" name="myname" style="old" />
    <object objid="001" objname="Object1" />
    <element id="002" name="myname" />
    </xml>
    EOT

    # without XPathContext
    my $nodes = $doc->findnodes( '/xml/element[@id=002]' );

    # with XPathContext
    #my $root = $doc->documentElement;
    #my $xc = XML::LibXML::XPathContext->new( $root );
    #my $nodes = $xc->findnodes( '/xml/element[@id=002]' );

    for my $node ( $nodes->get_nodelist ) {
        say "Node: ", $node->nodeName;
        print "Attribute: ";
        print $_->getName, '=', $_->getValue, ' ' for $node->attributes;
        say "";
    }
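
    The usual reason to reach for XML::LibXML::XPathContext shows up once the document declares namespaces. XPath 1.0 never applies a default namespace to unprefixed names in an expression, so against a document with a default xmlns the plain findnodes call above would match nothing; an XPathContext lets you bind a prefix of your own choosing to that URI with registerNs (it can also register XPath variables and extension functions). A minimal sketch, using a made-up namespace URI purely for illustration:

    use strict;
    use warnings;
    use XML::LibXML;

    my $doc = XML::LibXML->new->parse_string(
        '<xml xmlns="http://example.com/ns"><element id="002" name="myname"/></xml>'
    );

    # '/xml/element[@id=002]' finds nothing here: the unprefixed names in the
    # expression are in no namespace, while the document's names are.
    my $xc = XML::LibXML::XPathContext->new( $doc->documentElement );
    $xc->registerNs( x => 'http://example.com/ns' );

    for my $node ( $xc->findnodes('/x:xml/x:element[@id="002"]') ) {
        print $node->nodeName, "\n";
    }

    For a namespace-free document like the one in the question, both calls behave the same, which is why the script works either way.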

  • Render User Controls and call their Page_Load

    - by David Murdoch
    The following code will not work because the controls (page1, page2, page3) require that their "Page_Load" event gets called.

    using System;
    using System.IO;
    using System.Web;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using iTextSharp.text;
    using iTextSharp.text.html.simpleparser;
    using iTextSharp.text.pdf;

    public partial class utilities_getPDF : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // add user controls to the page
            AddContentControl("controls/page1");
            AddContentControl("controls/page2");
            AddContentControl("controls/page3");

            // set up the response to download the rendered PDF...
            Response.ContentType = "application/pdf";
            Response.ContentEncoding = System.Text.Encoding.UTF8;
            Response.AddHeader("Content-Disposition", "attachment;filename=FileName.pdf");
            Response.Cache.SetCacheability(HttpCacheability.NoCache);

            // get the rendered HTML of the page
            System.IO.StringWriter stringWrite = new StringWriter();
            System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite);
            this.Render(htmlWrite);
            StringReader reader = new StringReader(stringWrite.ToString());

            // write the PDF to the OutputStream...
            Document doc = new Document(PageSize.A4);
            HTMLWorker parser = new HTMLWorker(doc);
            PdfWriter.GetInstance(doc, Response.OutputStream);
            doc.Open();
            parser.Parse(reader);
            doc.Close();
        }

        // the parts below are probably irrelevant to the question...
        private const string contentPage = "~/includes/{0}.ascx";

        private void AddContentControl(string page)
        {
            content.Controls.Add(myLoadControl(page));
        }

        private Control myLoadControl(string page)
        {
            return TemplateControl.LoadControl(string.Format(contentPage, page));
        }
    }

    So my question is: how can I get the controls' HTML?
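
    One way to get fully processed markup out of the controls is to run each one through a page lifecycle of its own: load the .ascx into a throwaway Page, hand that page to Server.Execute with a StringWriter, and the control's Page_Load, data binding and rendering all fire before the captured HTML is read back. A sketch of that idea, assuming the same ~/includes/{0}.ascx layout as above (method and variable names are illustrative, not a drop-in implementation):

    private string RenderControlToHtml(string page)
    {
        // A throwaway page hosts the control so it gets a real lifecycle;
        // the HtmlForm (System.Web.UI.HtmlControls) is there because many
        // server controls refuse to render outside a runat="server" form.
        Page host = new Page();
        HtmlForm form = new HtmlForm();
        host.Controls.Add(form);
        form.Controls.Add(host.LoadControl(string.Format(contentPage, page)));

        StringWriter output = new StringWriter();
        // Server.Execute runs the host page (Init, Load, Render, ...) and
        // captures its output instead of writing it to the live response.
        HttpContext.Current.Server.Execute(host, output, false);
        return output.ToString();
    }

    The concatenated strings returned by RenderControlToHtml can then be fed to the StringReader that HTMLWorker parses, instead of calling this.Render on the hosting page.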

  • Parsing unicode XML with Python SAX on App Engine

    - by Derek Dahmer
    I'm using xml.sax with unicode strings of XML as input, originally entered from a web form. On my local machine (Python 2.5, using the default expat xmlreader, running through App Engine), it works fine. However, the exact same code and input strings fail on the production App Engine servers with "not well-formed". For example, it happens with the code below:

    from xml import sax

    class MyHandler(sax.ContentHandler):
        pass

    handler = MyHandler()

    # Both of these unicode strings return 'not well-formed'
    # on App Engine, but work locally
    sax.parseString(u"<a>b</a>", handler)
    sax.parseString(u"<!DOCTYPE a[<!ELEMENT a (#PCDATA)> ]><a>b</a>", handler)

    # Both of these work, but output unicode
    sax.parseString("<a>b</a>", handler)
    sax.parseString("<!DOCTYPE a[<!ELEMENT a (#PCDATA)> ]><a>b</a>", handler)

    resulting in the error:

    File "<string>", line 1, in <module>
    File "/base/python_dist/lib/python2.5/xml/sax/__init__.py", line 49, in parseString
      parser.parse(inpsrc)
    File "/base/python_dist/lib/python2.5/xml/sax/expatreader.py", line 107, in parse
      xmlreader.IncrementalParser.parse(self, source)
    File "/base/python_dist/lib/python2.5/xml/sax/xmlreader.py", line 123, in parse
      self.feed(buffer)
    File "/base/python_dist/lib/python2.5/xml/sax/expatreader.py", line 211, in feed
      self._err_handler.fatalError(exc)
    File "/base/python_dist/lib/python2.5/xml/sax/handler.py", line 38, in fatalError
      raise exception
    SAXParseException: <unknown>:1:1: not well-formed (invalid token)

    Any reason why App Engine's parser, which also uses Python 2.5 and expat, would fail when given unicode input?
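
    parseString ultimately hands the buffer to expat as bytes, and how a unicode object gets turned into bytes can differ between pyexpat builds, which is one plausible reason the same code behaves differently locally and in production. The portable move is to encode explicitly before parsing, so every environment sees identical UTF-8 bytes. A small sketch of that workaround:

    from xml import sax

    class MyHandler(sax.ContentHandler):
        pass

    def parse_unicode(text, handler):
        # Encode explicitly so local and production expat see the same bytes;
        # if the XML declaration names another encoding it has to match.
        if isinstance(text, unicode):
            text = text.encode('utf-8')
        sax.parseString(text, handler)

    parse_unicode(u"<a>b</a>", MyHandler())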

  • Parsing some particular statements with antlr3 in C target

    - by JCD
    Hello all! I have some questions about ANTLR3 with a tree grammar in the C target. I have almost finished my interpreter (functions, variables, boolean and math expressions all work) and I have kept the most difficult statements for the end (if, switch, etc.).

    1) I would like to interpret a simple loop statement:

    repeat: ^(REPEAT DIGIT stmt);

    I've seen many examples, but nothing about the tree walker (only a topic here using the MARK()/REWIND(m) macros plus @init/@after, and that does not work: I get ANTLR errors such as "unexpected node at offset 0"). How can I interpret this statement in the C target?

    2) The same question for a simple if statement:

    if: ^(IF condition stmt elseifstmt* elsestmt?);

    The problem is how to skip the statement if the condition is false and test the other elseif/else statements.

    3) I have some statements which can stop the script (like "break" or "exit"). How can I interrupt the tree walker and skip the following tokens?

    4) When a lexer or parser error is detected, ANTLR returns an error, but I would like to produce my own error messages. How can I get the line number where the parser failed?

    Ask me if you want more details. Thank you very much (and I apologize for my poor English).

  • Java 1.4 singleton containing a mutable field

    - by Philippe
    Hi, I'm working on a legacy Java 1.4 project, and I have a factory that instantiates a CSV file parser as a singleton. In my CSV file parser, however, I have a HashSet that will store objects created from each line of my CSV file. All of this will be used by a web application, and users will be uploading CSV files, possibly concurrently. Now my question is: what is the best way to prevent my list of objects from being modified by two users at once? So far, I'm doing the following:

    final class MyParser {
        private File csvFile = null;
        private static Set myObjects = Collections.synchronizedSet(new HashSet());

        public synchronized void setFile(File file) {
            this.csvFile = file;
        }

        public void parse() {
            FileReader fr = null;
            try {
                fr = new FileReader(csvFile);
                synchronized (myObjects) {
                    myObjects.clear();
                    while (...) {
                        // for each line of my CSV, create a "MyObject"
                        myObjects.add(new MyObject(...));
                    }
                }
            } catch (Exception e) {
                //...
            }
        }
    }

    Should I leave the lock only on the myObjects Set, or should I declare the whole parse() method as synchronized? Also, how should I synchronize both the setting of the csvFile and the parsing? I feel like my current design is broken because threads could change the csv file several times while a possibly long parse is running. I hope I'm being clear enough, because I am myself a bit confused about these multi-synchronization issues. Thanks ;-)
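
    Locking can be made to work, but the root of the problem is the singleton plus the static Set: every upload shares one mutable collection and one csvFile field. A sketch of the alternative, assuming callers can take the parsed objects as a return value instead (class and constructor names here are illustrative, and it stays Java 1.4-friendly):

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;

    final class MyParser {
        // No shared state: each call works on its own local Set and its own File,
        // so two users parsing different CSV files at the same time cannot interfere.
        public Set parse(File csvFile) throws IOException {
            Set objects = new HashSet();
            BufferedReader reader = new BufferedReader(new FileReader(csvFile));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    objects.add(new MyObject(line)); // MyObject(String) is assumed
                }
            } finally {
                reader.close();
            }
            return objects;
        }
    }

    With no shared fields there is nothing left to synchronize, which sidesteps the setFile/parse race entirely.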

  • NULL value when using NSDateFormatter after setting NSDate property via XML parsing

    - by David A Gibson
    Hello, I am using the following code to try to display a time in a table cell.

    TimeSlot *timeSlot = [timeSlots objectAtIndex:indexPath.row];

    NSDateFormatter *timeFormat = [[NSDateFormatter alloc] init];
    [timeFormat setDateFormat:@"HH:mm:ss"];

    NSLog(@"Time: %@", timeSlot.time);
    NSDate *mydate = timeSlot.time;
    NSLog(@"Time: %@", mydate);

    NSString *theTime = [timeFormat stringFromDate:mydate];
    NSLog(@"Time: %@", theTime);

    The log output is this:

    2010-04-14 10:23:54.626 MyApp[1080:207] Time: 2010-04-14T10:23:54
    2010-04-14 10:23:54.627 MyApp[1080:207] Time: 2010-04-14T10:23:54
    2010-04-14 10:23:54.627 MyApp[1080:207] Time: (null)

    I am new to developing for the iPhone, and since it all compiles with no errors or warnings I am at a loss as to why I am getting (null) in the log. Is there anything wrong with this code? Thanks.

    Further info: I used the code exactly from your answer, lugte098, just to check, and I was getting dates, which leads me to believe that my TimeSlot class can't have a date correctly set in its NSDate property. So my question becomes: how do I set an NSDate property from XML? I have this code (abbreviated):

    -(void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
        if ([currentElement isEqualToString:@"Time"]) {
            currentTimeSlot.time = string;
        }
    }

    Thanks
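
    The log output is the give-away: an NSDate printed with NSLog normally looks like "2010-04-14 10:23:54 +0000", while "2010-04-14T10:23:54" is the raw XML text, so the property is almost certainly still holding an NSString, and stringFromDate: returns nil when handed something that is not an NSDate. The string needs to be converted in the parser callback. A sketch, assuming the XML always carries that "yyyy-MM-dd'T'HH:mm:ss" shape:

    - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
        if ([currentElement isEqualToString:@"Time"]) {
            NSDateFormatter *xmlFormat = [[NSDateFormatter alloc] init];
            [xmlFormat setDateFormat:@"yyyy-MM-dd'T'HH:mm:ss"];  // matches the XML text
            currentTimeSlot.time = [xmlFormat dateFromString:string];
            [xmlFormat release];
        }
    }

    (If the element text can arrive in several foundCharacters: calls, accumulate it in a string and do the conversion in the didEndElement: callback instead.)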

  • What is the fastest way to pull a few element values out of XML files in Perl?

    - by Anon Guy
    I have a bunch of XML files that are about 1-2 megabytes in size. Actually, more than a bunch, there are millions. They're all well-formed and many are even validated against their schema (confirmed with libxml2). All were created by the same app, so they're in a consistent format (though this could theoretically change in the future). I want to check the values of one element in each file from within a Perl script. Speed is important (I'd like to take less than a second per file) and as noted I already know the files are well-formed. I am sorely tempted to simply 'open' the files in Perl and scan through until I see the element I am looking for, grab the value (which is near the start of the file), and close the file. On the other hand, I could use an XML parser (which might protect me from future changes to the XML formatting) but I suspect it will be slower than I'd like. Can anyone recommend an appropriate approach and/or parser? Thanks in advance.
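
    Since the element sits near the start of each file and the files are already known to be well-formed, an early-exit scan is hard to beat for raw speed; a full DOM parse of millions of 1-2 MB files is where the time would go. A sketch of the scan (target_element is a placeholder for whatever element is actually being read, and the regex assumes the value contains no nested markup or entities):

    use strict;
    use warnings;

    sub read_value {
        my ($path) = @_;
        open my $fh, '<', $path or die "cannot open $path: $!";
        while (my $line = <$fh>) {
            if ($line =~ m{<target_element>([^<]*)</target_element>}) {
                close $fh;
                return $1;    # stop reading as soon as the value is found
            }
        }
        close $fh;
        return undef;         # element not found
    }

    If the format might drift later, a pull parser such as XML::LibXML::Reader keeps most of the early-exit behaviour while still honouring real XML rules, at the cost of a little speed.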

  • Grails + GAE - Issue using app.servlet.version=2.5

    - by Taylor L
    Updating the servlet version in application.properties to 2.5 has no effect on the generated web.xml; the generated web.xml is still version 2.4.

    app.servlet.version=2.5

    Also, if I try to execute "run-app" I get the exception below:

    Running Grails application..
    Starting AppEngine generated indices thread.
    Starting reload monitor thread.
    [java] Jan 26, 2010 5:27:05 AM com.google.apphosting.utils.jetty.JettyLogger warn
    [java] WARNING: Failed startup of context com.google.apphosting.utils.jetty.DevAppEngineWebAppContext@4178460d{/,C:\Users\Taylor Leese\workspace\test-gae\web-app}
    [java] java.lang.IllegalStateException: No such servlet: grails
    [java]     at org.mortbay.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:953)
    [java]     at org.mortbay.jetty.servlet.ServletHandler.setServletMappings(ServletHandler.java:1037)
    [java]     at org.mortbay.jetty.webapp.WebXmlConfiguration.initialize(WebXmlConfiguration.java:305)
    [java]     at org.mortbay.jetty.webapp.WebXmlConfiguration.configure(WebXmlConfiguration.java:222)
    [java]     at org.mortbay.jetty.webapp.WebXmlConfiguration.configureWebApp(WebXmlConfiguration.java:180)
    [java]     at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1215)
    [java]     at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500)
    [java]     at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448)
    [java]     at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
    [java]     at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117)
    [java]     at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
    [java]     at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117)
    [java]     at org.mortbay.jetty.Server.doStart(Server.java:217)
    [java]     at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
    [java]     at com.google.appengine.tools.development.JettyContainerService.startContainer(JettyContainerService.java:188)
    [java]     at com.google.appengine.tools.development.AbstractContainerService.startup(AbstractContainerService.java:120)
    [java]     at com.google.appengine.tools.development.DevAppServerImpl.start(DevAppServerImpl.java:217)
    [java]     at com.google.appengine.tools.development.DevAppServerMain$StartAction.apply(DevAppServerMain.java:162)
    [java]     at com.google.appengine.tools.util.Parser$ParseResult.applyArgs(Parser.java:48)
    [java]     at com.google.appengine.tools.development.DevAppServerMain.<init>(DevAppServerMain.java:113)
    [java]     at com.google.appengine.tools.development.DevAppServerMain.main(DevAppServerMain.java:89)
    [java] The server is running at http://localhost:8080/

    Any ideas how to resolve these issues?

  • In-document schema declarations and lxml

    - by shylent
    As per the official documentation of lxml, if one wants to validate an XML document against an XML Schema document, one has to:

    - construct the XMLSchema object (basically, parse the schema document)
    - construct the XMLParser, passing the XMLSchema object as its schema argument
    - parse the actual XML document (instance document) using the constructed parser

    There can be variations, but the essence is pretty much the same no matter how you do it: the schema is specified "externally" (as opposed to being specified inside the actual XML document). If you follow this procedure, the validation occurs, sure enough, but if I understand it correctly, it completely ignores the whole idea of the xsi schemaLocation and noNamespaceSchemaLocation attributes. This introduces a whole bunch of limitations, starting with the fact that you have to manage the instance<-schema relation all by yourself (either store it externally or write some hack to retrieve the schema location from the root element of the instance document), you cannot validate the document using multiple schemata (say, when each schema governs its own namespace), and so on.

    So the question is: am I missing something completely trivial, or am I doing it wrong? Or are my statements about lxml's limitations regarding schema validation true? To recap, I'd like to be able to:

    - have the parser use the schema location declarations in the instance document at parse/validation time
    - use multiple schemata to validate an XML document
    - declare schema locations on non-root elements (not of extreme importance)

    Maybe I should look for a different library? Although that would be a real shame - lxml is the de-facto XML processing library for Python and is regarded by everyone as the best one in terms of performance/features/convenience (and rightfully so, to a certain extent).
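
    lxml indeed does not act on the xsi hints automatically, but the root element carries everything needed to wire this up by hand. A sketch for the single-schema noNamespaceSchemaLocation case (resolving the location relative to the instance document's directory is an assumption about how the files are laid out):

    import os
    from lxml import etree

    XSI = 'http://www.w3.org/2001/XMLSchema-instance'

    def validate_against_declared_schema(doc_path):
        doc = etree.parse(doc_path)
        location = doc.getroot().get('{%s}noNamespaceSchemaLocation' % XSI)
        if location is None:
            raise ValueError('document declares no schema location')
        # resolve a relative location against the instance document's directory
        schema_path = os.path.join(os.path.dirname(os.path.abspath(doc_path)), location)
        schema = etree.XMLSchema(etree.parse(schema_path))
        return schema.validate(doc), schema.error_log

    ok, errors = validate_against_declared_schema('instance.xml')

    xsi:schemaLocation is a whitespace-separated list of namespace/location pairs and needs the same treatment per pair; one workable approach is to assemble a small wrapper schema that xs:import-s each declared location and validate against that, which also covers the multiple-schemata case.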

  • How do I detect whether a similar document is already stored in a Lucene index?

    - by Jenea
    Hi. I need to exclude duplicates in my database. The problem is that duplicates are not exact matches but rather similar documents. For this purpose I decided to use FuzzyQuery as follows:

    var fuzzyQuery = new global::Lucene.Net.Search.FuzzyQuery(
        new Term("text", queryText),
        0.8f,
        0);
    hits = _searcher.Search(query);

    The idea was to set the minimal similarity to 0.8 (which I think is high enough) so that only similar documents would be found, excluding those that are not sufficiently similar. To test this code I decided to see if it finds an already existing document, so the variable queryText was assigned a value that is stored in the index. The code above found nothing; in other words, it doesn't even detect an exact match. The index was built by this code:

    doc.Add(new global::Lucene.Net.Documents.Field(
        "text",
        text,
        global::Lucene.Net.Documents.Field.Store.YES,
        global::Lucene.Net.Documents.Field.Index.TOKENIZED,
        global::Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS));

    I followed the recommendations from below, and the results are: TermQuery doesn't return any result. A query constructed with

    var _analyzer = new RussianAnalyzer();
    var parser = new global::Lucene.Net.QueryParsers.QueryParser("text", _analyzer);
    var query = parser.Parse(queryText);
    var _searcher = new IndexSearcher(Settings.General.Default.LuceneIndexDirectoryPath);
    var hits = _searcher.Search(query);

    returns several results: the document that has an exact match gets the maximum score, and several other documents have similar content.

  • Is this a valid XPath expression?

    - by sid_com
    Is this a valid XPath expression? (It does what it should.)

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use 5.012;
    use XML::LibXML;

    my $string =<<EOS;
    <result>
      <cd>
        <artists>
          <artist class="1">Pumkinsingers</artist>
          <artist class="2">Max and Moritz</artist>
        </artists>
        <title>Hello, Hello</title>
      </cd>
      <cd>
        <artists>
          <artist class="3">Green Trees</artist>
          <artist class="4">The Leons</artist>
        </artists>
        <title>The Shield</title>
      </cd>
    </result>
    EOS

    my $parser = XML::LibXML->new();
    my $doc = $parser->load_xml( string => $string );
    my $root = $doc->documentElement;

    my $xpath = '/result/cd[artists[artist[@class="2"]]]/title';
    my @nodes = $root->findnodes( $xpath );

    for my $node ( @nodes ) {
        say $node->textContent;
    }

  • asp.Net 2005 "Could not load type"

    - by Melody Friedenthal
    Although I have seen dozens of forum questions relating to "Could not load type", none of the advice in them seemed to apply to my situation. I wrote a new web application using ASP.NET (VB, 2005). It is tiny, with just 2 pages, 1 of which has no code-behind. It runs fine in the IDE, but when I installed it on my PC (and also when installed on the production server) and tried to run it, this error came up:

    Parser Error
    Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.

    Parser Error Message: Could not load type 'EMTTrainingDatabase.pageMain'.

    Source Error:
    Line 1: <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="pageMain.aspx.vb" Inherits="EMTTrainingDatabase.pageMain" %>
    Line 2:
    Line 3: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

    Source File: /EMTTrainingDatabase/pageMain.aspx  Line: 1

    Version Information: Microsoft .NET Framework Version: 2.0.50727.3603; ASP.NET Version: 2.0.50727.3082

    I checked the web site properties in IIS and the correct ASP.NET version is specified: 2.0.50727. I checked the virtual path and it looks correct too: /EMTTrainingDatabase. The pageMain source code header looks like:

    <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="pageMain.aspx.vb" Inherits="EMTTrainingDatabase.pageMain" %>

    Some posters suggest that the bin is in the wrong folder or that the bin doesn't contain the right contents. I don't have enough knowledge to evaluate this. Can someone help? Thank you.

  • Yacc and Lex inclusion confusion

    - by thejohnmeister
    I am wondering how to correctly compile a program with a Makefile that has calls to yyparse in it. This is what I do: I have a Makefile that compiles all my regular files, and they have no connection to y.tab.c or lex.yy.c (am I supposed to have one?). I do this at the top of my code:

    #include "y.tab.c"
    #include "lex.yy.c"
    #include "y.tab.h"

    This is what happens when I try to make the program: when I just type in "make", it gives me many warnings. Some examples are shown below.

    utils.o: In function `yywrap':
    /src/parser.y:12: multiple definition of `yywrap'
    server.o:/src/parser.y:12: first defined here
    utils.o: In function `yyparse':
    /src/y.tab.c:1175: multiple definition of `yyparse'
    server.o:/src/y.tab.c:1175: first defined here

    I get many similar errors referring to different yy_* names. I have compiled successfully with multiple calls to yyparse in the past, but this time seems to be different. It seems an awful lot like an inclusion problem somewhere, but I can't tell what it is. All my header files have ifndef guards as well. Thanks for your help!
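
    The multiple-definition errors come straight from those #include lines: every .c file that includes y.tab.c and lex.yy.c gets its own full copy of yyparse, yylex and friends, and the linker then finds them defined in server.o, utils.o, and so on. The usual layout is to generate and compile those files exactly once and let the other files share only declarations. A sketch of that arrangement (file names mirror the error messages; everything else is illustrative, and recipe lines must be indented with tabs):

    # Makefile fragment: generate the parser/lexer once, compile them once,
    # and link the single copies into the program.
    y.tab.c y.tab.h: parser.y
        $(YACC) -d parser.y

    lex.yy.c: lexer.l y.tab.h
        $(LEX) lexer.l

    OBJS = main.o server.o utils.o y.tab.o lex.yy.o

    prog: $(OBJS)
        $(CC) $(CFLAGS) -o $@ $(OBJS)

    Files that merely call the parser then declare it instead of including generated source, e.g. int yyparse(void); (plus #include "y.tab.h" where the token definitions are needed), and the #include "y.tab.c" / #include "lex.yy.c" lines go away.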

  • Bison input analyzer - basic question on optional grammar and input interpretation

    - by kumar_m_kiran
    Hi all, I am very new to Flex/Bison, so this is a very naive question; pardon me if so. It may look like a homework question, but I need to implement a project based on the concept below. My question has two parts.

    Question 1

    In a Bison parser, how do I provide rules for optional input? For instance, I need to parse the statement:

    -country='USA' -state='INDIANA' -population='100' -ratio='0.5' -comment='Census study for Indiana'

    Here the ratio token can be optional. Similarly, if many tokens are optional, how do I write the grammar in the parser for that? My code looks like:

    %start program
    program : TK_COUNTRY TK_IDENTIFIER TK_STATE TK_IDENTIFIER TK_POPULATION TK_IDENTIFIER ...

    where all the tokens are defined in the lexer. Since there are many optional tokens, if I use "|" there will be many different possible combinations of input.

    Question 2

    There is a good chance that the comment might have quotes as part of the input, so I have added a -tag token which the user can provide to interpret it, for example:

    -country='USA' -state='INDIANA' -population='100' -ratio='0.5' -comment='Census study for Indiana$'s population' -tag=$

    Now, I need to reinterpret Indiana$'s as Indiana's, since -tag=$.

    Please provide any input or related material for understanding these topics. Thanks for your input in advance.
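
    For Question 1, the usual trick is not to enumerate combinations with "|", but to describe the input as a list of independent option items, each of which may appear or not (and in any order); whether a required option is missing is then checked in an action or after the parse. A sketch of that shape (token names follow the ones above, TK_RATIO and TK_COMMENT are assumed names, and semantic actions are omitted):

    %start program

    program
        : option_list
        ;

    option_list
        : /* empty */                  /* every option is optional */
        | option_list option
        ;

    option
        : TK_COUNTRY TK_IDENTIFIER
        | TK_STATE TK_IDENTIFIER
        | TK_POPULATION TK_IDENTIFIER
        | TK_RATIO TK_IDENTIFIER       /* assumed token for -ratio */
        | TK_COMMENT TK_IDENTIFIER     /* assumed token for -comment */
        ;

    For Question 2, the unescaping is easier in the lexer (or in a pass over the comment text once -tag is known) than in the grammar: dropping the tag character before the quote there keeps the parser itself unchanged.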

  • Filtering Wikipedia's XML dump: error on some accents

    - by streetpc
    I'm trying to index Wikipedia dumps. My SAX parser makes Article objects for the XML with only the fields I care about, then sends them to my ArticleSink, which produces Lucene Documents. I want to filter special/meta pages like those prefixed with Category: or Wikipedia:, so I made an array of those prefixes and test the title of each page against this array in my ArticleSink, using article.getTitle.startsWith(prefix). In English, everything works fine: I get a Lucene index with all the pages except for the matching prefixes.

    In French, the prefixes with no accent also work (i.e. they filter the corresponding pages), some of the accented prefixes don't work at all (like Catégorie:), and some work most of the time but fail on some pages (like Wikipédia:), yet I cannot see any difference between the corresponding lines (in less). I can't really inspect all the differences in the file because of its size (5 GB), but it looks like correct UTF-8 XML. If I take a portion of the file using grep or head, the accents are correct (even on the problematic pages, the <title>Catégorie:something</title> is correctly displayed by grep). On the other hand, when I recreate a wiki XML by tail/head-cutting the original file, the same page (here Catégorie:Rock par ville) gets filtered in the small file, but not in the original… Any idea?

    Alternatives I tried:

    Getting the file (commented lines were tried without success):

    FileInputStream fis = new FileInputStream(new File(xmlFileName));
    //ReaderInputStream ris = ReaderInputStream.forceEncodingInputStream(fis, "UTF-8" );
    //(custom function opening the stream, reading it as UTF-8 into a Reader and returning another byte stream)
    //InputSource is = new InputSource( fis ); is.setEncoding("UTF-8");
    parser.parse(fis, handler);

    Filtered prefixes:

    ignoredPrefix = new String[] {
        "Catégorie:", "Modèle:", "Wikipédia:",
        "Cat\uFFFDgorie:", "Mod\uFFFDle:", "Wikip\uFFFDdia:",   // invalid char
        "CatÃ©gorie:", "ModÃ¨le:", "WikipÃ©dia:",               // UTF-8 read as ISO-8859-1
        "Image:", "Portail:", "Fichier:", "Aide:", "Projet:"};
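
    One classic cause of "fails only on some pages" filtering with SAX is worth ruling out before blaming the encoding: ContentHandler.characters() may deliver an element's text in several chunks, and the chunk boundary can land anywhere, including right after "Cat" and before "égorie", in which case a startsWith test on a single chunk misses. Whether that is what bites here is only a guess, but accumulating the text and testing it at endElement costs little. A sketch:

    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    class ArticleTitleHandler extends DefaultHandler {
        private final StringBuilder titleBuffer = new StringBuilder();
        private boolean inTitle = false;

        public void startElement(String uri, String localName, String qName, Attributes atts) {
            if ("title".equals(qName)) { inTitle = true; titleBuffer.setLength(0); }
        }

        public void characters(char[] ch, int start, int length) {
            // may be called several times per element, so only append here
            if (inTitle) titleBuffer.append(ch, start, length);
        }

        public void endElement(String uri, String localName, String qName) {
            if ("title".equals(qName)) {
                inTitle = false;
                String title = titleBuffer.toString();
                // apply the ignoredPrefix startsWith() filter on the complete title
            }
        }
    }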

  • Dojo: How do you re-initialize form elements after an ajax call?

    - by Dural
    I'm trying to do a couple of things using Zend Framework & Dojo Toolkit, and any help would be appreciated. Here's the problem: I have a form that is rendered with the Zend Framework form class, which has an ajax radio button selection. Clicking one of these radio buttons sends an ajax request to another controller, which has no layout, just a rendered form. The ajax request then populates a div with the new form options. The problem is that when I replace the innerHTML of the div with the ajax response, none of the form inputs and elements inherit the Dojo styling and form validation. I was wondering if there is a way to re-initialize form elements after an ajax call?

    I have tried the code attached, which I found and modified slightly for this example, but it did not work. If I use the line dojo.parser.parse( div ); nothing changes (rg_address in the example is the ID of a form element that is placed in the DOM). Here is the console.log of rg_address:

    <input type="text" dojotype="dijit.form.ValidationTextBox" required="1" invalidmessage="The first name of the recipient" value="" name="rg_address" id="rg_address" class="textbox"/>

    onClick='
    dojo.xhrGet( {
        url: "/transfer/newrecipient/",
        handleAs: "text",
        timeout: 10000, // Time in milliseconds

        // The LOAD function will be called on a successful response.
        load: function(response, ioArgs) {
            $("#newRecipient").html(response);
            $("#newPayMethod").html("");
            $("#newPayDetail").html("");
            var div = dojo.byId("rg_address");
            console.log( div );
            dojo.parser.parse( div );
            return response;
        },

        // The ERROR function will be called in an error case.
        error: function(response, ioArgs) {
            $("#newRecipient").html("Error loading registration form");
            $("#newPayMethod").html("");
            $("#newPayDetail").html("");
            return response;
        }
    });'

    Thanks, Dural
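
    dojo.parser.parse(node) instantiates the dojoType widgets found under the node you give it, so handing it the bare input element usually achieves nothing; the usual pattern is to parse the container that just received the new markup, after destroying any widgets left over from a previous load. A sketch of the load callback along those lines (assumes Dojo 1.3+ for dijit.findWidgets; container id taken from the snippet above):

    load: function(response, ioArgs) {
        var container = dojo.byId("newRecipient");

        // tear down widgets from a previous parse, otherwise dijit complains
        // about duplicate widget ids when the same form comes back again
        dojo.forEach(dijit.findWidgets(container), function(widget) {
            widget.destroyRecursive(false);
        });

        container.innerHTML = response;

        // parse the container so every child carrying a dojoType attribute
        // is turned into a real dijit (styling and validation included)
        dojo.parser.parse(container);
        return response;
    }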

  • Extracting specific nodes from XML using XML::Twig

    - by pratz
    I was trying to extract a particular set of nodes from the following XML structure using XML::Twig, but have been stuck ever since. I need to extract the 'player' nodes from the following structure and do a string match/replace on each of these node values.

    <pep:record>
      <agency>
        <subrecord type="scout">
          <isnum>123XXX (print)</isnum>
          <isnum>234YYY (mag)</isnum>
        </subrecord>
        <subrecord type="group">
        </subrecord>
      </agency>
    </record>

    I tried using the following code, but I get pointed to a hash reference rather than the actual string.

    my $parser = XML::Twig->new(twig_handlers => {
        isnum => sub { print $_->text."::" },
    });

    foreach my $rec (split(/::/, $parser->parse($my_xml))) {
        if ($rec =~ m/print/) {
            ($print = $rec) =~ s/( \(print\))//;
        }
        elsif ($rec =~ m/mag/) {
            ($mag = $rec) =~ s/( \(mag\))//;
        }
    }
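
    The hash reference shows up because $parser->parse($my_xml) returns the XML::Twig object itself, not whatever the handlers printed, so the split runs over a stringified reference. Collecting the values inside the handler avoids the round-trip through print entirely. A sketch along those lines, still keyed on the isnum elements shown above and assuming $my_xml holds the XML string as in the question:

    use strict;
    use warnings;
    use XML::Twig;

    my ($print, $mag);
    my $twig = XML::Twig->new(
        twig_handlers => {
            isnum => sub {
                my ($t, $elt) = @_;        # each handler gets the twig and the element
                my $text = $elt->text;
                if    ($text =~ /print/) { ($print = $text) =~ s/ \(print\)//; }
                elsif ($text =~ /mag/)   { ($mag   = $text) =~ s/ \(mag\)//;   }
            },
        },
    );
    $twig->parse($my_xml);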

  • Objective-C Implementation Pointers

    - by Dwaine Bailey
    Hi, I am currently writing an XML parser that parses a lot of data, with a lot of different nodes (the XML isn't designed by me, and I have no control over the content...). Anyway, it currently takes an unacceptably long time to download and read in (about 13 seconds), so I'm looking for ways to increase the efficiency of the read. I've written a function to create hash values, so that the program no longer has to do a lot of string comparison (just NSUInteger comparison), but this still isn't reducing the complexity of the read-in...

    So I thought maybe I could create an array of IMPs so that I could then do something like:

    for (int i = 0; i < [hashValues count]; i++) {
        if (currHash == [[hashValues objectAtIndex:i] unsignedIntValue]) {
            [impArray objectAtIndex:i];
        }
    }

    Or something like that. The only problem is that I don't know how to actually make the call to the IMP function. I've read that I perform the selector that an IMP defines by going:

    IMP tImp = [impArray objectAtIndex:i];
    tImp(self, @selector(methodName));

    But if I need to know the name of the selector anyway, what's the point? Can anybody help me out with what I want to do, or even just suggest some more ways to increase the efficiency of the parser?
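
    An IMP is just a C function pointer whose first two arguments are the receiver and the selector, so calling one does need the SEL of the method it was taken from; the point is that the SEL is fixed when the table is built, not rediscovered per node, and the dispatch then skips the objc_msgSend lookup (and the string comparisons) each time. A sketch of building and calling such a table, with NSValue boxing the pointers (handler names are illustrative):

    typedef void (*NodeHandler)(id, SEL);   // IMPs of handlers shaped like -(void)handleX

    // building the table (once):
    SEL sel = @selector(handleTitleNode);
    IMP imp = [self methodForSelector:sel];
    [impArray addObject:[NSValue valueWithPointer:(void *)imp]];
    [selArray addObject:[NSValue valueWithPointer:sel]];

    // dispatching (per node):
    NodeHandler handler = (NodeHandler)[[impArray objectAtIndex:i] pointerValue];
    SEL handlerSel = (SEL)[[selArray objectAtIndex:i] pointerValue];
    handler(self, handlerSel);              // direct call, no objc_msgSend lookup

    That said, thirteen seconds is usually dominated by the download and by the parsing itself rather than by message dispatch, so profiling before optimizing this path is worth the few minutes it takes.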

  • DQL delete from multiple tables (doctrine)

    - by singer
    Need to perform a DQL delete from multiple related tables. In SQL it is something like this:

    DELETE r1, r2
    FROM ComRealty_objects r1, com_realty_objects_phones r2
    WHERE r1.id IN (10,20) AND r2.id_object IN (10,20)

    I need to perform this statement using DQL, but I'm stuck on this :(

    <?php
    $dql = Doctrine_Query::create()
        ->delete('phones, comrealtyobjects')
        ->from('ComRealtyObjects comrealtyobjects')
        ->from('ComRealtyObjectsPhones phones')
        ->whereIn("comrealtyobjects.id", $ids)
        ->whereIn("phones.id_object", $ids);
    echo($dql->getSqlQuery());
    ?>

    But the DQL parser gives me this result:

    DELETE FROM `com_realty_objects_phones`, `ComRealty_objects` WHERE (`id` IN (?) AND `id_object` IN (?))

    Searching Google and Stack Overflow I found this (useful) topic: http://stackoverflow.com/questions/2247905/what-is-the-syntax-for-a-multi-table-delete-on-a-mysql-database-using-doctrine But this is not exactly my case - there the delete was from a single table. Is there a way to override the DQL parser behaviour? Or maybe some other way to delete records from multiple tables using Doctrine?

    Note: If you are using Doctrine behaviours (Doctrine_Record_Generator) you first need to initialize those tables using Doctrine_Core::initializeModels() in order to perform DQL operations on them.

  • How do I filter my Lucene search results?

    - by Andrew Bullock
    Say my requirement is "search for all users by name, who are over 18". If I were using SQL, I might write something like:

    SELECT * FROM [Users]
    WHERE ([firstname] LIKE '%' + @searchTerm + '%' OR [lastname] LIKE '%' + @searchTerm + '%')
      AND [age] >= 18

    However, I'm having difficulty translating this into Lucene.Net. This is what I have so far:

    var parser = new MultiFieldQueryParser(new[] { "firstname", "lastname" }, new StandardAnalyzer());
    var luceneQuery = parser.Parse(searchterm);
    var query = FullTextSession.CreateFullTextQuery(luceneQuery, typeof(User));
    var results = query.List<User>();

    How do I add in the "where age >= 18" bit? I've heard about .SetFilter(), but it only accepts Lucene queries, not IQueries. If SetFilter is the right thing to use, how do I make the appropriate filter? If not, what do I use, and how do I do it? Thanks!

    P.S. This is a vastly simplified version of what I'm trying to do, for clarity; my WHERE clause is actually a lot more complicated than shown here. In reality I need to check whether ids exist in subqueries and check a number of unindexed properties. Any solutions given need to support this. Thanks

  • Memory allocation in detached NSThread to load an NSDictionary in background?

    - by mobibob
    I am trying to launch a background thread to retrieve XML data from a web service. I developed it synchronously - without threads - so I know that part works. Now I am ready to make the service non-blocking by spawning a thread to wait for the response and parse it. I created an NSAutoreleasePool inside the thread and release it at the end of the parsing. The code to spawn, and the thread itself, are as follows.

    Spawn from main-loop code:

    [NSThread detachNewThreadSelector:@selector(spawnRequestThread:) toTarget:self withObject:url];

    Thread (inside 'self'):

    -(void) spawnRequestThread: (NSURL*) url {
        NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
        parser = [[NSXMLParser alloc] initWithContentsOfURL:url];
        [self parseContentsOfResponse];
        [parser release];
        [pool release];
    }

    The method parseContentsOfResponse fills an NSMutableDictionary with the parsed document contents. I would like to avoid moving the data around a lot and allocate it back in the main loop that spawned the thread, rather than making a copy. First, is that possible, and if not, can I simply pass in an allocated pointer from the main thread and allocate with the 'dictionaryWithDictionary' method? That just seems so inefficient. Are there preferred designs?
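
    There is no need to copy: an NSMutableDictionary built on the background thread is just an object on the heap, and handing it to the main thread with performSelectorOnMainThread:withObject:waitUntilDone: retains the argument until the selector has run, so ownership transfers cleanly without touching the data again. A sketch of that shape (parsedResults, parsingFinished: and latestResults are illustrative names, not existing ones):

    - (void)spawnRequestThread:(NSURL *)url {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        NSXMLParser *parser = [[NSXMLParser alloc] initWithContentsOfURL:url];
        NSMutableDictionary *parsedResults = [NSMutableDictionary dictionary];
        // ... run the parse and fill parsedResults ...

        // The argument is retained by the runtime until -parsingFinished: has run,
        // so no copy is needed and the thread's autorelease pool can drain safely.
        [self performSelectorOnMainThread:@selector(parsingFinished:)
                               withObject:parsedResults
                            waitUntilDone:NO];

        [parser release];
        [pool release];
    }

    // Runs on the main thread: keep the dictionary by assigning it to a retained property.
    - (void)parsingFinished:(NSMutableDictionary *)results {
        self.latestResults = results;   // assumes a retained property named latestResults
    }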

  • Boost causes an invalid block while overloading new/delete operators

    - by user555746
    Hi, I have a problem which appears to be an invalid memory block that occurs during a Boost call to boost::runtime::cla::parser::~parser. When the global delete is called on that object, the C runtime asserts that the memory block is invalid:

    dbgdel.cpp(52): /* verify block type */ _ASSERTE(_BLOCK_TYPE_IS_VALID(pHead->nBlockUse));

    My investigation revealed that the problem happens because of a global overloading of the new/delete operators. Those overloads are placed in a separate DLL. I discovered that the problem happens only when that DLL is compiled in Release while the main application is compiled in Debug. So I thought that the Release/Debug build flavors might have created a problem like this in Boost/CRT when overloading new/delete. I then tried to explicitly call _malloc_dbg and _free_dbg within the overloads, even in release mode, but it didn't solve the invalid heap block problem.

    Any idea what the root cause of the problem is? Is the situation solvable? I should stress that the problem began only when I started to use Boost; before that, the CRT never complained about any invalid memory block. So could it be an internal Boost bug? Thanks!

  • Global.asax error

    - by asksuperuser
    I've suddenly got this error:

    Server Error in '/' Application.
    Parser Error
    Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.

    Parser Error Message: Could not load type 'MyApp.Global'.

    Source Error:
    Line 1: <%@ Application Codebehind="Global.asax.cs" Inherits="MyApp.Global" Language="C#" %>

    Source File: /global.asax  Line: 1

    Version Information: Microsoft .NET Framework Version: 4.0.30319; ASP.NET Version: 4.0.30319.1

    I have not done anything to Global.asax. I tried to change the namespace from MyFramework to MyApp, but that didn't work either, so I put it back the way it is below:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Security;
    using System.Web.SessionState;

    namespace MyFramework
    {
        public class Global : System.Web.HttpApplication
        {
            void Application_Start(object sender, EventArgs e)
            {
                // Code that runs on application startup
            }

            void Application_End(object sender, EventArgs e)
            {
                // Code that runs on application shutdown
            }

            void Application_Error(object sender, EventArgs e)
            {
                // Code that runs when an unhandled error occurs
            }

            void Session_Start(object sender, EventArgs e)
            {
                // Code that runs when a new session is started
            }

            void Session_End(object sender, EventArgs e)
            {
                // Code that runs when a session ends.
                // Note: The Session_End event is raised only when the sessionstate mode
                // is set to InProc in the Web.config file. If session mode is set to StateServer
                // or SQLServer, the event is not raised.
            }
        }
    }
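
    The mismatch worth checking first is between the two names: the directive in global.asax asks ASP.NET for MyApp.Global, while the code-behind shown defines Global inside namespace MyFramework, so there is no MyApp.Global type to load. A sketch of the directive that would match the code-behind as written (assuming the class really is meant to stay in MyFramework):

    <%@ Application Codebehind="Global.asax.cs" Inherits="MyFramework.Global" Language="C#" %>

    Whichever name is kept, the project also has to be rebuilt so the assembly deployed in bin actually contains a type by that name.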

  • Why can't I access elements inside an XML file with XPath in XML::LibXML?

    - by John
    I have an XML file, part of which looks like this:

    <wave waveID="1">
      <well wellID="1" wellName="A1">
        <oneDataSet>
          <rawData>0.1123975676</rawData>
        </oneDataSet>
      </well>
      ... more wellID's and rawData continue here...

    I am trying to parse the file with Perl's XML::LibXML and output the wellName and the rawData using the following:

    use XML::LibXML;

    my $parser = XML::LibXML->new();
    my $doc = $parser->parse_file('/Users/johncumbers/Temp/1_12-18-09-111823.orig.xml');
    my $xc = XML::LibXML::XPathContext->new( $doc->documentElement() );
    $xc->registerNs('ns', 'http://moleculardevices.com/microplateML');

    my @n = $xc->findnodes('//ns:wave[@waveID="1"]'); # xc is the XPathContext
    # should find a tree from the node representing everything beneath waveID 1
    foreach $nod (@n) {
        my @c = $nod->findnodes('//rawData'); # element inside the tree
        print @c;
    }

    It is not printing anything right now, and I think I have a problem with my XPath statements. Please can you help me fix them, or show me how to troubleshoot the XPath statements? Thanks.
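
    Two things work against the inner query: rawData lives in the same microplateML namespace as wave, so it needs the ns: prefix too, and an XPath beginning with // always searches from the document root rather than from $nod, so a relative './/' form is wanted. Evaluating the relative expression through the namespace-aware context, with the wave node passed as the context node, covers both. A sketch:

    foreach my $wave ( $xc->findnodes('//ns:wave[@waveID="1"]') ) {
        foreach my $well ( $xc->findnodes('.//ns:well', $wave) ) {
            my ($raw) = $xc->findnodes('.//ns:rawData', $well);
            printf "%s: %s\n",
                $well->getAttribute('wellName'),
                defined $raw ? $raw->textContent : 'no rawData';
        }
    }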

  • Python/YACC: Resolving a shift/reduce conflict

    - by Rosarch
    I'm using PLY. Here is one of my states from parser.out:

    state 3

        (5) course_data -> course .
        (6) course_data -> course . course_list_tail
        (3) or_phrase -> course . OR_CONJ COURSE_NUMBER
        (7) course_list_tail -> . , COURSE_NUMBER
        (8) course_list_tail -> . , COURSE_NUMBER course_list_tail

      ! shift/reduce conflict for OR_CONJ resolved as shift

        $end              reduce using rule 5 (course_data -> course .)
        OR_CONJ           shift and go to state 7
        ,                 shift and go to state 8

      ! OR_CONJ           [ reduce using rule 5 (course_data -> course .) ]

        course_list_tail  shift and go to state 9

    I want to resolve this as:

    if OR_CONJ is followed by COURSE_NUMBER:
        shift and go to state 7
    else:
        reduce using rule 5 (course_data -> course .)

    How can I fix my parser file to reflect this? Do I need to handle a syntax error by backtracking and trying a different rule?
