Search Results

Search found 14253 results on 571 pages for 'css parsing'.

Page 152/571 | < Previous Page | 148 149 150 151 152 153 154 155 156 157 158 159  | Next Page >

  • Is there any open source tool that automatically 'detects' email threading like Gmail?

    - by Chris W.
    For instance, if the original message (message 1) is... Hey Jon, Want to go get some pizza? -Bill And the reply (message 2) is... Bill, Sorry, I can't make lunch today. Jonathon Parks, CTO Acme Systems On Wed, Feb 24, 2010 at 4:43 PM, Bill Waters wrote: Hey John, Want to go get some pizza? -Bill In Gmail, the system (a) detects that message 2 is a reply to message 1 and turns this into a 'thread' of sorts and (b) detects where the replied portion of the message actually is and hides it from the user. (In this case the hidden portion would start at "On Wed, Feb..." and continue to the end of the message.) Obviously, in this simple example it would be easy to detect the "On <Date>, <Name> wrote:" line or the ">" character prefixes. But many email systems have many different styles of marking replies (not to mention HTML emails). I get the feeling that you would have to have some damn smart string parsing algorithms to get anywhere near how good Gmail's is. Does this technology already exist in an open source project somewhere? Either in some library devoted to this exclusively or perhaps in some open source email client that does similar message threading? Thanks.
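
    One rough sketch (not taken from any existing library; the helper name and sample text are just illustrations) of the two detection steps the question describes, in Python: find the "On <date>, <name> wrote:" attribution line or the first ">"-quoted line, and treat everything from there on as the hidden, quoted portion. A real system needs many more heuristics (client-specific markers, "-----Original Message-----", HTML mail), which is exactly why a maintained library would be preferable.

      import re

      ATTRIBUTION = re.compile(r'^On .+ wrote:\s*$', re.MULTILINE)
      QUOTE_PREFIX = re.compile(r'^\s*>', re.MULTILINE)

      def split_reply(body):
          """Return (new_text, quoted_text); quoted_text is what a client would hide."""
          match = ATTRIBUTION.search(body) or QUOTE_PREFIX.search(body)
          if match:
              return body[:match.start()].rstrip(), body[match.start():]
          return body, ""

      reply = ("Bill,\nSorry, I can't make lunch today.\n\n"
               "On Wed, Feb 24, 2010 at 4:43 PM, Bill Waters wrote:\n"
               "> Hey John,\n> Want to go get some pizza?\n> -Bill")
      new_text, quoted = split_reply(reply)   # quoted starts at the "On Wed, ..." line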

    Read the article

  • Perl XML SAX parser emulating XML::Simple record for record

    - by DVK
    Short Q summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple. Details: We have a large code infrastructure which depends on processing records one-by-one and expects each record to be a data structure in the format produced by XML::Simple, since it has used XML::Simple since the early Jurassic era. An example simple XML is: <root> <rec><f1>v1</f1><f2>v2</f2></rec> <rec><f1>v1b</f1><f2>v2b</f2></rec> <rec><f1>v1c</f1><f2>v2c</f2></rec> </root> And example rough code is: sub process_record { my ($obj, $record_hash) = @_; # do_stuff } my $records = XML::Simple->XMLin(@args)->{root}; foreach my $record (@$records) { $obj->process_record($record) }; As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog - due to being a DOM parser and needing to build/store 100% of the data in memory. So, it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, re-writing the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one based on the XML pictured above, so that they can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is - e.g. whether I need to call next_record() or give it a callback coderef accepting a record.
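
    On CPAN, XML::Twig is the module most often suggested for this pattern (a per-element handler plus purge() to release finished records), though whether its hashrefs match XML::Simple's exactly would need checking. Purely to illustrate the record-by-record streaming idea - not Perl, and process_record here is a stand-in for the existing methods - a minimal sketch using Python's iterparse:

      import xml.etree.ElementTree as ET

      def stream_records(path, process_record):
          # Build one small dict per <rec>, hand it off, then discard it,
          # so memory stays flat no matter how many records the file holds.
          for event, elem in ET.iterparse(path, events=("end",)):
              if elem.tag == "rec":
                  record = {child.tag: child.text for child in elem}  # e.g. {'f1': 'v1', 'f2': 'v2'}
                  process_record(record)
                  elem.clear()   # free the finished record

      stream_records("records.xml", lambda rec: print(rec))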

    Read the article

  • Extracting images from a PDF

    - by sagar
    My Query I want to extract only images from a PDF document, using Objective-C in an iPhone Application. My Efforts I have gone through the info on this link, which has details regarding different operators on PDF documents. I also studied this document from Apple about PDF parsing with Quartz. I also went through the entire PDF reference document from the Adobe site. According to that document, for each image there are the following operators: q Q BI EI I have created a table to get the image: myTable = CGPDFOperatorTableCreate(); CGPDFOperatorTableSetCallback(myTable, "q", arrayCallback2); CGPDFOperatorTableSetCallback(myTable, "TJ", arrayCallback); CGPDFOperatorTableSetCallback(myTable, "Tj", stringCallback); I use this method to get the image: void arrayCallback2(CGPDFScannerRef inScanner, void *userInfo) { // THIS DOESN'T WORK // CGPDFStreamRef stream; // represents a sequence of bytes // if (CGPDFDictionaryGetStream (d, "BI", &stream)){ // CGPDFDataFormat t=CGPDFDataFormatJPEG2000; // CFDataRef data = CGPDFStreamCopyData (stream, &t); // } } This method is called for the operator "q", but I don't know how to extract an image from it. What should be the solution for extracting the images from the PDF documents? Thanks in advance for your kind help.

    Read the article

  • Metamorphs Messing Up CSS in Ember.js Views

    - by Austin Fatheree
    I'm using Ember.js / handlebars to loop through a collection and spit out some items that I'd like bootstrap to handle nice and responsive like. Here is the issue: The bootstrap-responsive css has some declarations in it like: .row-fluid > [class*="span"]:first-child { margin-left: 0; } and .row-fluid:before, .row-fluid:after { display: table; content: ""; } These rules seem to target the first children. When I loop through my collection in handlebars I end up with a bunch of metamorph code around my items: <div class="row-fluid"> {{#each restaurantList}} {{view GS.vHomePageRestList content=this class="span6"}} {{/each}} </div> Here is what is produced: <div class="row-fluid"> <script id="metamorph-9-start" type="text/x-placeholder"></script> <script id="metamorph-104-start" type="text/x-placeholder"></script> <div id="ember2527" class="ember-view span6"> My View </div> <script id="metamorph-104-end" type="text/x-placeholder"></script> <script id="metamorph-105-start" type="text/x-placeholder"></script> <div id="ember2574" class="ember-view span6"> My View 2 </div> <script id="metamorph-105-end" type="text/x-placeholder"></script> <script id="metamorph-9-end" type="text/x-placeholder"></script> </div> So my question is this: 1. How can I tell css to ignore script tags? or 2. How can I edit the css bindings so that they skip over script tags when selecting the first or first child? or 3. How can I structure this so that Ember uses fewer/no metamorph tags? Here is a fiddle: http://jsfiddle.net/skilesare/SgwsJ/

    Read the article

  • How to parse text fragments located outside tags (inbetween tags) by simplehtmldom?

    - by moogeek
    Hello! I'm using simplehtmldom to parse html and I'm stuck in parsing plaintext located outside of any tag (but between two different tags): <div class="text_small"> <b>Adress:</b> 7 Hange Road<br> <b>Phone:</b> 415641587484<br> <b>Contact:</b> Alex<br> <b>Meeting Time:</b> 12:00-13:00<br> </div> Is it possible to get these values of Adress, Phone, Contact, Meeting Time? I wonder if there is an opportunity to pass CSS Selectors into nextSibling/previousSibling functions... foreach($html->find('div.text_small') as $div_descr) { foreach($div_descr->find('b') as $b) { if ($b->innertext=="Adress:") {//someaction } if ($b->innertext=="Phone:") { //someaction } if ($b->innertext=="Contact:") { //someaction } if ($b->innertext=="Meeting Time:") { //someaction } } } What should I use instead of "someaction"? upd. Yes, I don't have access to edit the target page. Otherwise, would it be worth it? :)
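
    Each label is followed by a bare text node, so the value is simply the next sibling of each <b> element (simplehtmldom exposes a next_sibling() method that should serve here). The same walk, sketched with Python's BeautifulSoup purely for illustration - the HTML is the snippet from the question:

      from bs4 import BeautifulSoup

      html = """<div class="text_small">
      <b>Adress:</b> 7 Hange Road<br>
      <b>Phone:</b> 415641587484<br>
      <b>Contact:</b> Alex<br>
      <b>Meeting Time:</b> 12:00-13:00<br>
      </div>"""

      fields = {}
      for b in BeautifulSoup(html, "html.parser").select("div.text_small b"):
          label = b.get_text(strip=True).rstrip(":")
          value = b.next_sibling                 # the text node right after </b>
          fields[label] = value.strip() if value else ""
      # fields -> {'Adress': '7 Hange Road', 'Phone': '415641587484', ...}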

    Read the article

  • Problem using NSXMLParser with NOAA data on iPhone

    - by Amagrammer
    Can anyone help me see why NSXMLParser is not causing these methods parser:didStartElement:namespaceURI:qualifiedName:attributes: parser:didEndElement:namespaceURI:qualifiedName: to fire for the <dwmlOut> part of the following data: <?xml version="1.0" encoding="ISO-8859-1"?><SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"><SOAP-ENV:Body><ns1:NDFDgenResponse xmlns:ns1=""><dwmlOut xsi:type="xsd:string"><?xml version="1.0"?> <dwml version="1.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.nws.noaa.gov/forecasts/xml/DWMLgen/schema/DWML.xsd"> (body excluded) </dwml> </dwmlOut></ns1:NDFDgenResponse></SOAP-ENV:Body></SOAP-ENV:Envelope> I'm not an XML expert, but to me, the <dwmlOut> part looks like just a regular element, to be parsed just like the parts before it. I do get two parser:parseErrorOccurred: errors, #200 and #201, but they occur during the parsing of the <SOAP-ENV:Body> element, not the <dwmlOut> element, so I'm not sure if they are relevant. Thanks for any help you can give me.

    Read the article

  • Line Break in XML?

    - by ew89
    Hello, I'm a beginner in web development, and I'm trying to insert line breaks in my XML file. This is what my XML looks like: <song> <title>Song Title</title> <lyric>Lyrics</lyric> <song> <title>Song Title</title> <lyric>Lyrics</lyric> <song> <title>Song Title</title> <lyric>Lyrics</lyric> I want to have line breaks in between the sentences for the lyrics. I tried everything from /n, and other codes similar to it, PHP parsing, etc., and nothing works! Have been googling online for hours and can't seem to find the answer. I'm using the XML to insert data into an HTML page using Javascript. Does anyone know how to solve this problem? Thanks before :)

    Read the article

  • Why does 12:20 PM parse to 0:20 on the next day?

    - by Hanno Fietz
    I'm using java.text.SimpleDateFormat to parse string representations of date/time values inside an XML document. I'm seeing all times that have an hour value of 12 shifted by 12 hours into the future, i. e. 20 minutes past noon gets parsed to mean 20 minutes past midnight the following day. I wrote a unit test which seems to confirm that the error is made upon parsing (I checked the return values from getTime() with the linux shell command date). Now I'm wondering: is there a bug in the parse() method? is there something wrong with the input string? am I using the wrong format string for the input? The input data is taken from Yahoo's YWeather service. Here's the test and its output: public class YWeatherReaderTest { public static final String[] rgDateSamples = { "Thu, 08 Apr 2010 12:20 PM CEST", "Thu, 08 Apr 2010 12:20 AM CEST" }; public void dateParsing() throws ParseException { DateFormat formatter = new SimpleDateFormat("EEE, dd MMM yyyy K:m a z", Locale.US); for (String dtsSrc : YWeatherReaderTest.rgDateSamples) { Date dt = formatter.parse(dtsSrc); String dtsDst = formatter.format(dt); System.out.println(dtsSrc); System.out.println(dtsDst); System.out.println(); } } } Thu, 08 Apr 2010 12:20 PM CEST Fri, 09 Apr 2010 0:20 AM CEST Thu, 08 Apr 2010 12:20 AM CEST Thu, 08 Apr 2010 0:20 PM CEST The second output line of the second iteration is slightly weird, because 00:20 isn't PM. The milliseconds value of the Date object, however, corresponds to the (wrong) time of 20 minutes past noon.

    Read the article

  • Can't read some attributes with SAX

    - by akappa
    Hi all, I'm trying to parse this document with SAX: <scxml version="1.0" initialstate="start" name="calc"> <datamodel> <data id="expr" expr="0" /> <data id="res" expr="0" /> </datamodel> <state id="start"> <transition event="OPER" target="opEntered" /> <transition event="DIGIT" target="operand" /> </state> <state id="operand"> <transition event="OPER" target="opEntered" /> <transition event="DIGIT" /> </state> </scxml> I read all the attributes well, except "initialstate" and "name"... I get the attributes with the startElement handler, but the size of the attribute list for scxml is zero. Why? How can I overcome that problem? Edit: public void startElement(String uri, String localName, String qName, Attributes attributes){ System.out.println(attributes.getValue("initialstate")); System.out.println(attributes.getValue("name")); } that, when parsing the first tag, doesn't work (prints "null" two times). In fact, attributes.getLength(); evaluates to zero. Thanks

    Read the article

  • Can Haskell's Parsec library be used to implement a recursive descent parser with backup?

    - by Thor Thurn
    I've been considering using Haskell's Parsec parsing library to parse a subset of Java as a recursive descent parser as an alternative to more traditional parser-generator solutions like Happy. Parsec seems very easy to use, and parse speed is definitely not a factor for me. I'm wondering, though, if it's possible to implement "backup" with Parsec, a technique which finds the correct production to use by trying each one in turn. For a simple example, consider the very start of the JLS Java grammar: Literal: IntegerLiteral FloatingPointLiteral I'd like a way to not have to figure out how I should order these two rules to get the parse to succeed. As it stands, a naive implementation like this: literal = do { x <- try (do { v <- integer; return (IntLiteral v)}) <|> (do { v <- float; return (FPLiteral v)}); return(Literal x) } Will not work... inputs like "15.2" will cause the integer parser to succeed first, and then the whole thing will choke on the "." symbol. In this case, of course, it's obvious that you can solve the problem by re-ordering the two productions. In the general case, though, finding things like this is going to be a nightmare, and it's very likely that I'll miss some cases. Ideally, I'd like a way to have Parsec figure out stuff like this for me. Is this possible, or am I simply trying to do too much with the library? The Parsec documentation claims that it can "parse context-sensitive, infinite look-ahead grammars", so it seems like I should be able to do something along these lines here.

    Read the article

  • How to make a small engine like Wolfram|Alpha?

    - by Koning WWWWWWWWWWWWWWWWWWWWWWW
    Let's say I have three models/tables: operating_systems, words, and programming_languages: # operating_systems name:string created_by:string family:string Windows Microsoft MS-DOS Mac OS X Apple UNIX Linux Linus Torvalds UNIX UNIX AT&T UNIX # words word:string defenitions:string window (serialized hash of defenitions) hello (serialized hash of defenitions) UNIX (serialized hash of defenitions) # programming_languages name:string created_by:string example_code:text C++ Bjarne Stroustrup #include <iostream> etc... HelloWorld Jeff Skeet h AnotherOne Jon Atwood imports 'SORULEZ.cs' etc... When a user searches hello, the system shows the definitions of 'hello'. This is relatively easy to implement. However, when a user searches UNIX, the engine must choose: word or operating_system. Also, when a user searches windows (small letter 'w'), the engine chooses word, but should also show Assuming 'windows' is a word. Use as an <a href="etc..">operating system</a> instead. Can anyone point me in the right direction with parsing and choosing the topic of the search query? Thanks. Note: it doesn't need to be able to perform calculations as WA can.
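
    A toy sketch of the disambiguation step being asked about - the table contents here are invented stand-ins, and a real implementation would query the models rather than dicts: look the lower-cased term up in every table, answer from the first hit, and offer the other interpretations Wolfram|Alpha-style.

      words = {"window": "an opening in a wall...", "hello": "a greeting", "unix": "see operating system"}
      operating_systems = {"windows": "Microsoft", "unix": "AT&T", "linux": "Linus Torvalds"}
      TABLES = [("word", words), ("operating system", operating_systems)]

      def lookup(query):
          term = query.lower()
          hits = [(label, table[term]) for label, table in TABLES if term in table]
          if not hits:
              return "No results for %r" % query
          (label, value), rest = hits[0], hits[1:]
          answer = "%r as a %s: %s" % (query, label, value)
          for other_label, _ in rest:
              answer += " | Assuming %r is a %s; use it as a(n) %s instead?" % (query, label, other_label)
          return answer

      print(lookup("UNIX"))     # found in both tables, so the alternative is offered
      print(lookup("windows"))  # the lower-cased form only matches operating_systems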

    Read the article

  • XML: what processing rules apply for values intertwined with tags?

    - by iCE-9
    I've started working on a simple XML pull-parser, and as I've just defuzzed my mind on what's correct syntax in XML with regards to certain characters/sequences, ignorable whitespace and such (thank you, http://www.w3schools.com/xml/xml_elements.asp), I realized that I still don't know squat about what can be sketched up as the following case (which Validome happily finds well-formed; note that I only want to use xml files for data storage, no entities, DTD or Schemas needed): <bookstore> <book id="1"> <author>Kurt Vonnegut Jr.</author> <title>Slapstick</title> </book> We drop a pie here. <book id="2">Who cares anyway? <author>Stephen King</author> <title>The Green Mile</title> </book> And another one here. <book id="3"> <author>Next one</author> <title>This time with its own title</title> </book> </bookstore> "We drop a pie here." and "And another one here." are values of the 'bookstore' element. "Who cares anyway?" is a value related to the second 'book' element. How are these processed, if at all? Will "We drop a pie here." and "Another one here." be concatenated to form one value for the 'bookstore' element, or are they treated separately, stored somewhere, affecting the outcome of the parsing of the element they belong to, or...?
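
    This is what XML calls mixed content, and well-formedness allows it. Typical parsers do not concatenate the pieces: a SAX/pull parser reports each run of character data as its own event between the element events, and tree builders keep them as separate text nodes. As a concrete illustration of one common parser's behaviour (using a trimmed copy of the bookstore example), Python's ElementTree hangs the fragments on the .text/.tail slots of the surrounding elements:

      import xml.etree.ElementTree as ET

      doc = """<bookstore>
        <book id="1"><author>Kurt Vonnegut Jr.</author></book>
        We drop a pie here.
        <book id="2">Who cares anyway?<author>Stephen King</author></book>
        And another one here.
      </bookstore>"""

      root = ET.fromstring(doc)
      for book in root:
          print(book.get("id"), repr(book.text), repr(book.tail))
      # book 1: text is None, tail holds "We drop a pie here."
      # book 2: text holds "Who cares anyway?", tail holds "And another one here."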

    Read the article

  • BeautifulSoup can't parse a webpage?

    - by JLTChiu
    I am using Beautiful Soup for parsing a webpage now. I've heard it's very famous and good, but it doesn't seem to work properly. Here's what I did: import urllib2 from bs4 import BeautifulSoup page = urllib2.urlopen("http://www.cnn.com/2012/10/14/us/skydiver-record-attempt/index.html?hpt=hp_t1") soup = BeautifulSoup(page) print soup.prettify() I think this is kind of straightforward. I open the webpage and pass it to BeautifulSoup. But here's what I got: Warning (from warnings module): File "C:\Python27\lib\site-packages\bs4\builder\_htmlparser.py", line 149 "Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help.")) ... HTMLParseError: bad end tag: u'</"+"script>', at line 634, column 94 I thought the CNN website should be well designed, so I am not very sure what's going on though. Does anyone have any idea about this?
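
    The warning in that traceback already points at the likely fix: the page trips up Python's built-in HTMLParser, so hand BeautifulSoup a more forgiving parser explicitly (lxml or html5lib, each installed separately). A minimal sketch of the same script with that one change:

      import urllib2
      from bs4 import BeautifulSoup

      page = urllib2.urlopen("http://www.cnn.com/2012/10/14/us/skydiver-record-attempt/index.html?hpt=hp_t1")
      soup = BeautifulSoup(page, "lxml")   # or BeautifulSoup(page, "html5lib")
      print soup.prettify()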

    Read the article

  • Best and simple way to handle JSON in Django

    - by primal
    Hi, As part of the application we are developing (with an Android client and a Django server), a JSON object which contains the username and password is sent to the server from the Android client as follows HttpPost post = new HttpPost(URL); /*Adding key value pairs */ json.put("username", un); json.put("password", pwd); StringEntity se = new StringEntity(json.toString()); post.setEntity(se); response = client.execute(post); The response is parsed like this result = responsetoString(response.getEntity().getContent()); //Converts response to String jObject = new JSONObject(result); JSONObject post = jObject.getJSONObject("post"); username = post.getString("username"); message = post.getString("message"); Hope everything up to this point is fine. The problem comes when parsing or sending JSON responses on the Django server. What's the best way to do this? We tried using SimpleJSON and it turned out not to be so simple, as we didn't find any good tutorials or sample code for it. Are there any Python functions similar to get, put and opt in Java for JSON? Any help would be much appreciated.
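
    A minimal sketch of the Django side under the setup described above, using the standard json module (the view name and the fields echoed back are assumptions, not the real app; on Django versions before 1.4 the raw body is request.raw_post_data rather than request.body). json.loads and json.dumps play roughly the role that get/put/opt play on the Java side.

      import json
      from django.http import HttpResponse

      def login_view(request):
          data = json.loads(request.body)            # {"username": ..., "password": ...}
          reply = {"post": {"username": data.get("username"), "message": "received"}}
          return HttpResponse(json.dumps(reply), content_type="application/json")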

    Read the article

  • Using JavaCC to infer semantics from a Composite tree

    - by Skice
    Hi all, I am programming (in Java) a very limited symbolic calculus library that manages polynomials, exponentials and expolinomials (sums of elements like "x^n * e^(c x)"). I want the library to be extensible in the sense of new analytic forms (trigonometric, etc.) or new kinds of operations (logarithm, domain transformations, etc.), so a Composite pattern that represents the syntactic structure of an expression, together with a bunch of Visitors for the operations, does the job quite well. My problem arises when I try to implement operations that depend on the semantics more than on the syntax of the Expression (like integrals, for instance: there are a lot of resolution methods for specific classes of functions, but these same classes can be represented with more than a single syntax). So I thought I need something to "parse" the Composite tree to infer its semantics in order to invoke the right integration method (if any). Someone pointed me to JavaCC, but all the examples I've seen deal only with string parsing; so, I don't know if I'm digging in the right direction. Some suggestions? (I hope to have been clear enough!)

    Read the article

  • Technique to remove common words(and their plural versions) from a string

    - by Jake M
    I am attempting to find tags (keywords) for a recipe by parsing a long string of text. The text contains the recipe ingredients, directions and a short blurb. What do you think would be the most efficient way to remove common words from the tag list? By common words, I mean words like: 'the', 'at', 'there', 'their' etc. I have 2 methodologies I can use, which do you think is more efficient in terms of speed and do you know of a more efficient way I could do this? Methodology 1: - Determine the number of times each word occurs (using the collections library) - Have a list of common words and remove all 'Common Words' from the Collection object by attempting to delete that key from the Collection object if it exists. - Therefore the speed will be determined by the length of the variable delims from collections import Counter delims = ['there','there\'s','theres','they','they\'re'] # the above will end up being a really long list! word_freq = Counter(recipe_str.lower().split()) for delim in set(delims): del word_freq[delim] return word_freq.most_common() Methodology 2: - For common words that can be plural, look at each word in the recipe string, and check if it partially contains the non-plural version of a common word. E.g. for the string "There's a test" check each word to see if it contains "there" and delete it if it does. delims = ['this','at','them'] # words that can't be plural partial_delims = ['there','they'] # words that could occur in many forms word_freq = Counter(recipe_str.lower().split()) for delim in set(delims): del word_freq[delim] # really slow for delim in set(partial_delims): for word in list(word_freq): if word.find(delim) != -1: del word_freq[word] return word_freq.most_common()

    Read the article

  • Why am I getting a ParseException when using SimpleDateFormat to format a date and then parse it?

    - by Greg
    I have been debugging some existing code for which unit tests are failing on my system, but not on colleagues' systems. The root cause is that SimpleDateFormat is throwing ParseExceptions when parsing dates that should be parseable. I created a unit test that demonstrates the code that is failing on my system: import java.text.DateFormat; import java.text.ParseException; import java.text.SimpleDateFormat; import java.util.Date; import java.util.TimeZone; import junit.framework.TestCase; public class FormatsTest extends TestCase { public void testParse() throws ParseException { DateFormat formatter = new SimpleDateFormat("yyyyMMddHHmmss.SSS Z"); formatter.setTimeZone(TimeZone.getDefault()); formatter.setLenient(false); formatter.parse(formatter.format(new Date())); } } This test throws a ParseException on my system, but runs successfully on other systems. java.text.ParseException: Unparseable date: "20100603100243.118 -0600" at java.text.DateFormat.parse(DateFormat.java:352) at FormatsTest.testParse(FormatsTest.java:16) I have found that I can setLenient(true) and the test will succeed. The setLenient(false) is what is used in the production code that this test mimics, so I don't want to change it.

    Read the article

  • PyParsing: Not all tokens passed to setParseAction()

    - by Rosarch
    I'm parsing sentences like "CS 2110 or INFO 3300". I would like to output a format like: [[("CS" 2110)], [("INFO", 3300)]] To do this, I thought I could use setParseAction(). However, the print statements in statementParse() suggest that only the last tokens are actually passed: >>> statement.parseString("CS 2110 or INFO 3300") Match [{Suppress:("or") Re:('[A-Z]{2,}') Re:('[0-9]{4}')}] at loc 7(1,8) string CS 2110 or INFO 3300 loc: 7 tokens: ['INFO', 3300] Matched [{Suppress:("or") Re:('[A-Z]{2,}') Re:('[0-9]{4}')}] -> ['INFO', 3300] (['CS', 2110, 'INFO', 3300], {'Course': [(2110, 1), (3300, 3)], 'DeptCode': [('CS', 0), ('INFO', 2)]}) I expected all the tokens to be passed, but it's only ['INFO', 3300]. Am I doing something wrong? Or is there another way that I can produce the desired output? Here is the pyparsing code: from pyparsing import * def statementParse(str, location, tokens): print "string %s" % str print "loc: %s " % location print "tokens: %s" % tokens DEPT_CODE = Regex(r'[A-Z]{2,}').setResultsName("DeptCode") COURSE_NUMBER = Regex(r'[0-9]{4}').setResultsName("CourseNumber") OR_CONJ = Suppress("or") COURSE_NUMBER.setParseAction(lambda s, l, toks : int(toks[0])) course = DEPT_CODE + COURSE_NUMBER.setResultsName("Course") statement = course + Optional(OR_CONJ + course).setParseAction(statementParse).setDebug()
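
    The parse action only receives the tokens matched by the expression it is attached to - here the Optional("or" + course) tail - which is why only ['INFO', 3300] shows up. One way (a sketch, not the only fix) to get the nested per-course output is to wrap each course in Group and let delimitedList handle the "or" keyword, so the full result keeps one sublist per course:

      from pyparsing import Group, Regex, delimitedList

      DEPT_CODE = Regex(r'[A-Z]{2,}')
      COURSE_NUMBER = Regex(r'[0-9]{4}').setParseAction(lambda toks: int(toks[0]))
      course = Group(DEPT_CODE("DeptCode") + COURSE_NUMBER("CourseNumber"))
      statement = delimitedList(course, delim="or")

      print(statement.parseString("CS 2110 or INFO 3300").asList())
      # -> [['CS', 2110], ['INFO', 3300]]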

    Read the article

  • How to parse phpDoc style comment block with php?

    - by Reveller
    Please consider the following code with which I'm trying to parse only the first phpDoc style comment (not using any other libraries) in a file (file contents put in the $data variable for testing purposes): $data = " /** * @file A lot of info about this file * Could even continue on the next line * @author [email protected] * @version 2010-05-01 * @todo do stuff... */ /** * Comment bij functie bar() * @param Array met dingen */ function bar($baz) { echo $baz; } "; $data = trim(preg_replace('/\r?\n *\* */', ' ', $data)); preg_match_all('/@([a-z]+)\s+(.*?)\s*(?=$|@[a-z]+\s)/s', $data, $matches); $info = array_combine($matches[1], $matches[2]); print_r($info) This almost works, except for the fact that everything after @todo (including the bar() comment block and code) is considered the value of @todo: Array ( [file] => A lot of info about this file Could even continue on the next line [author] => [email protected] [version] => 2010-05-01 [todo] => do stuff... / /** Comment bij functie bar() [param] => Array met dingen / function bar() { echo ; } ) How does my code need to be altered so that only the first comment block is being parsed (in other words: parsing should stop after the first "*/" encountered)?
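
    The usual trick is to cut $data down to just the first /** ... */ block (a non-greedy match ending at the first "*/") before running the @tag regex, so nothing after that block can leak into the last tag's value. The same idea sketched with Python's re module, purely for illustration and with a made-up docblock:

      import re

      data = """/**
       * @file A lot of info about this file
       * Could even continue on the next line
       * @version 2010-05-01
       * @todo do stuff...
       */
      function bar($baz) { echo $baz; }"""

      first = re.search(r'/\*\*(.*?)\*/', data, re.S)               # stop at the first */
      body = re.sub(r'\s*\n\s*\*\s?', ' ', first.group(1)).strip()  # drop the leading " * " of each line
      tags = dict(re.findall(r'@([a-z]+)\s+(.*?)(?=\s*@[a-z]+\s|$)', body, re.S))
      print(tags)   # {'file': '... next line', 'version': '2010-05-01', 'todo': 'do stuff...'}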

    Read the article

  • get value from css using document.getElementById().style.height javascript

    - by Jamex
    Hi, Please offer insight into this mystery. I am trying to get the height value from a div box by var high = document.getElementById("hintdiv").style.height; alert(high); I can get this value just fine if the attribute is contained within the div tag, but it returns a blank value if the attribute is defined in the css section. This is fine, it shows 100px as a value. The value can be accessed. <div id="hintdiv" style="height:100px; display: none;"> . . var high = document.getElementById("hintdiv").style.height; alert(high); This is not fine, it shows an empty alert screen. The value is practically 0. #hintdiv { height:100px; display: none; } <div id="hintdiv"> . . var high = document.getElementById("hintdiv").style.height; alert(high); But I have no problem accessing/changing the "display:none" attribute whether it is in the tag or in the css section. The div box displays correctly by both attribute definition methods (inside the tag or in the css section). I also tried to access the value by other variations, but no luck: document.getElementById("hintdiv").style.height.value ----> undefined document.getElementById("hintdiv").height ---->undefined document.getElementById("hintdiv").height.value ----> error, no execution Any solution? TIA.

    Read the article

  • How to parse the second child node from xml page in iphone

    - by Warrior
    I am new to iPhone development. I want to parse a YouTube XML page, retrieve its contents and display them in an RSS feed. My XML page is <entry> <id>xxxxx</id> <title>xxx xxxx xxxx</title> <content>xxxxxxxxxxx</content> <media:group> <media:thumbnail url="http://tiger.jpg"/> </media:group> </entry> To retrieve the content I am using XML parsing. - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict{ currentElement = [elementName copy]; if ([elementName isEqualToString:@"entry"]) { entry = [[NSMutableDictionary alloc] init]; currentTitle = [[NSMutableString alloc] init]; currentcontent = [[NSMutableString alloc] init]; } } - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName{ if ([elementName isEqualToString:@"entry"]) { [entry setObject:currentTitle forKey:@"title"]; [entry setObject:currentcontent forKey:@"content"]; [stories addObject:[entry copy]]; }} - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string{ if ([currentElement isEqualToString:@"title"]) { [currentTitle appendString:string]; } else if ([currentElement isEqualToString:@"content"]) { [currentcontent appendString:string]; } } I am able to retrieve the id, title and content values and display them in a table view. How can I retrieve the tiger image URL (from the media:thumbnail url attribute) and display it in the table view? Please help me out. Thanks.

    Read the article

  • Pull specific information from a long list with Perl

    - by melignus
    The file that I've got to work with here is the result of an LDAP extraction but I need to ultimately get the information formatted over to something that a spreadsheet can use. So, the data is as follows: DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData displayName: John Doe name: ##userName DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData displayName: Jane Doe name: ##userName DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData displayName: Ted Doe name: ##userName The format that I need to export to is: firstName lastName userName firstName lastName userName firstName lastName userName Where the spaces are tabs so I can then import that file into a database. I have experience doing this in VBScript but I'm trying to switch over to using Perl for as much server administration as possible. I'm not sure on the syntax for what I want which is basically while not endoffile{ detect "displayName: " & $firstName & " " & $lastName detect "name: ##" & $userName write $firstName tab $lastName tab $userName to file } Also if someone could point me to a resource specifically on the text parsing syntax that Perl uses, I'd be very grateful. Most of the resources that I've come across haven't been very helpful.
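
    The pseudocode above maps almost line-for-line onto a read-match-print loop; sketched here in Python purely to illustrate the matching logic (the file names are made up, and a Perl version would use the same two regexes inside a while (<$fh>) loop). Each displayName line is remembered until the matching name line arrives, then one tab-separated row is written out.

      import re

      first = last = None
      with open("ldap_export.txt") as src, open("users.tsv", "w") as out:
          for line in src:
              m = re.search(r'displayName:\s+(\S+)\s+(.+)', line)
              if m:
                  first, last = m.group(1), m.group(2).strip()
                  continue
              m = re.search(r'name:\s+##(\S+)', line)
              if m and first:
                  out.write("%s\t%s\t%s\n" % (first, last, m.group(1)))
                  first = last = None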

    Read the article

  • asp.net mvc stand alone ascx control how do i link (css and js) most efficiently

    - by Julian
    Hi, I need some advice. I have developed some asp.net mvc web pages. Each page has a master page, some ascx controls (between 2 and 6) embedded into it, and a js and css file. Up to now everything was fine. In order to improve modularity, flexibility and testability, the ascx's are now expected to be able to work as stand-alone controls. (Each ascx also has its own css and js files; in some cases it has another control inside it.) In order to meet this requirement we call the controller with the relevant parameters and it returns the ascx (partial) directly to the browser without all of the other parts of the original page. In order to get it to display correctly (css) and act correctly (js/jquery), all of the relevant files need to be added (as links or scripts, e.g. href="<%= ResolveUrl(styleSheet)%>") to the user control. This "contradicts" the concept of positioning the files at the most logical place (which could be the master page, for example). How can I overcome this problem? Keep in mind that this is relevant for each "control" ascx file. Any thoughts will be appreciated.

    Read the article

  • JS/CSS include section replacement, Debug vs Release

    - by Bayard Randel
    I'd be interested to hear how people handle conditional markup, specifically in their masterpages between release and debug builds. The particular scenario this is applicable to is handling concatenated js and css files. I'm currently using the .Net port of YUI compress to produce a single site.css and site.js from a large collection of separate files. One thought that occurred to me was to place the js and css include section in a user control or collection of panels and conditionally display the <link> and <script> markup based on the Debug or Release state of the assembly. Something along the lines of: #if DEBUG pnlDebugIncludes.visible = true #else pnlReleaseIncludes.visible = true #endif The panel is really not very nice semantically - wrapping <script> tags in a <div> is a bit gross; there must be a better approach. I would also think that a block level element like a <div> within <head> would be invalid html. Another idea was this could possibly be handled using web.config section replacements, but I'm not sure how I would go about doing that.

    Read the article
