Programmatically grabbing text from a web page that is dynamically generated.

Posted by bstullkid on Stack Overflow
Published on 2010-04-16T17:38:32Z Indexed on 2010/04/16 18:03 UTC


There is a website I am trying to pull information from in Perl. However, the section of the page I need is generated by JavaScript, so all you see in the source is:

<div id="results"></div>

I need to somehow pull out the contents of that div and save it to a file using Perl, proxies, or whatever else works. In other words, the information I want to save is whatever this would return in the browser:

document.getElementById('results').innerHTML;

I am not sure if this is possible, or if anyone has any ideas on how to do it. I was using a lynx source dump for other pages, but since I can't screen-scrape this page in a straightforward way, I came here to ask about it.

If anyone is interested, the page is http://downloadcenter.trendmicro.com/index.php?clk=left_nav&clkval=pattern_file&regs=NABU and the info I am trying to get is the row about the ConsumerOPR.
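One common approach (not part of the original question, just a sketch) is to drive a real browser so the page's JavaScript actually runs, then read the div's generated markup. Below is a minimal Perl sketch using the Selenium::Remote::Driver CPAN module; it assumes a Selenium server is already running on localhost:4444 with Firefox available, and the fixed 5-second wait is a crude placeholder for a proper wait condition. The URL and the `results` element id are taken from the question.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Selenium::Remote::Driver;

# Assumes a Selenium server is listening on localhost:4444
my $driver = Selenium::Remote::Driver->new(
    browser_name => 'firefox',
);

$driver->get('http://downloadcenter.trendmicro.com/index.php?clk=left_nav&clkval=pattern_file&regs=NABU');

# Crude wait so the page's JavaScript can populate the div;
# a real script would poll until the element has content.
sleep 5;

# Run JavaScript in the browser and bring the result back to Perl
my $html = $driver->execute_script(
    "return document.getElementById('results').innerHTML;"
);

# Save the generated markup to a file
open my $fh, '>', 'results.html' or die "Cannot open results.html: $!";
print {$fh} $html;
close $fh;

$driver->quit;
```

Once the markup is in `results.html`, the ConsumerOPR row could be pulled out with an HTML parser such as HTML::TreeBuilder rather than a regex. An alternative worth checking first: open the page with your browser's network inspector and see whether the div is filled from a separate AJAX request; if so, that request's URL can often be fetched directly with plain LWP::UserAgent, with no browser automation at all.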

© Stack Overflow or respective owner
