With the help of two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork.
If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know the bottleneck isn't raw CPU or bandwidth; I'm just limited by the linear, one-request-at-a-time nature of my scraping functions.
Using code I've cobbled together from multiple sources, I have this working test:
<?php

libxml_use_internal_errors(true);
ini_set('max_execution_time', 0);
ini_set('max_input_time', 0);
set_time_limit(0);

$hrefArray = array("http://slashdot.org", "http://slashdot.org", "http://slashdot.org", "http://slashdot.org");

function doDomStuff($singleHref, $childPid) {
    $html = new DOMDocument();
    $html->loadHTMLFile($singleHref);
    $xPath = new DOMXPath($html);

    $domQuery = '//div[@id="slogan"]/h2';
    $domReturn = $xPath->query($domQuery);

    foreach ($domReturn as $return) {
        $slogan = $return->nodeValue;
        echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
    }
}

$pids = array();
foreach ($hrefArray as $singleHref) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("Couldn't fork, error!");
    } elseif ($pid > 0) {
        // We are the parent
        $pids[] = $pid;
    } else {
        // We are the child
        $childPid = posix_getpid();
        doDomStuff($singleHref, $childPid);
        exit(0);
    }
}

foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}

// Clear the libxml buffer so it doesn't fill up
libxml_clear_errors();
Which raises the following questions:
1) My hrefArray contains 4 URLs, but if the array were to contain, say, 1,000 product URLs, this code would spawn 1,000 child processes, right? If so, what is the best way to limit the number of processes to, say, 10, and, again with 1,000 URLs as an example, split the child workload to 100 products per child (10 x 100)? (My best guess at a cap is sketched right after question 2.)
2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed them off to child processes to do the processing, spreading the load across 10 child workers.
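For question 1, the shape I keep coming back to is a rolling cap: the parent blocks with pcntl_wait() whenever $maxChildWorkers children are alive, and only forks the next child once one exits. This is an untested sketch that reuses $hrefArray and doDomStuff() from the working test above:

<?php
$maxChildWorkers = 10;
$running = 0;

foreach ($hrefArray as $singleHref) {
    // At the cap: block until any one child exits before forking again
    if ($running >= $maxChildWorkers) {
        pcntl_wait($status);
        $running--;
    }

    $pid = pcntl_fork();
    if ($pid == -1) {
        die("Couldn't fork, error!");
    } elseif ($pid > 0) {
        // We are the parent: count the new child
        $running++;
    } else {
        // We are the child
        doDomStuff($singleHref, posix_getpid());
        exit(0);
    }
}

// Reap whatever is still running
while ($running-- > 0) {
    pcntl_wait($status);
}

If this is sound, it also seems nicer than fixed 10 x 100 batches, since a fresh child starts as soon as any child finishes, so one slow URL can't hold up a whole batch.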
As for question 2, my brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):
<?php

libxml_use_internal_errors(true);
ini_set('max_execution_time', 0);
ini_set('max_input_time', 0);
set_time_limit(0);

$maxChildWorkers = 10;

$html = new DOMDocument();
$html->loadHTMLFile('http://xxxx');
$xPath = new DOMXPath($html);

$domQuery = '//div[@id="productDetail"]/a';
$domReturn = $xPath->query($domQuery);

$hrefsArray = array();
foreach ($domReturn as $node) {
    $hrefsArray[] = $node->getAttribute('href');
}

function doDomStuff($singleHref) {
    // Do stuff here with each product
}

// To figure out: split $hrefsArray into $maxChildWorkers chunks:
// workArray1, workArray2 ... workArray10.
$pids = array();
foreach ($workArray(1,2,3 ... 10) as $singleHref) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("Couldn't fork, error!");
    } elseif ($pid > 0) {
        // We are the parent
        $pids[] = $pid;
    } else {
        // We are the child
        doDomStuff($singleHref);
        exit(0);
    }
}

foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}

// Clear the libxml buffer so it doesn't fill up
libxml_clear_errors();
But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child processes. Currently, everything I've tried causes loops in the child processes; i.e. my hrefsArray gets built in the master and again in each subsequent child process. The closest I've got to the intended shape is below.
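Since pcntl_fork copies the parent's memory, my understanding is that anything built before the fork already exists in every child, so the trick should be to finish the DOM query first and only fork afterwards. Here is my untested guess at the full shape, using array_chunk() to do the 10-way split (doDomStuff() as stubbed above, and the URL is still just a placeholder):

<?php
libxml_use_internal_errors(true);
set_time_limit(0);

$maxChildWorkers = 10;

// 1) Parent only: gather every product href before any fork happens
$html = new DOMDocument();
$html->loadHTMLFile('http://xxxx');
$xPath = new DOMXPath($html);

$hrefsArray = array();
foreach ($xPath->query('//div[@id="productDetail"]/a') as $node) {
    $hrefsArray[] = $node->getAttribute('href');
}

// 2) Split: 1,000 hrefs / 10 workers = 10 slices of ~100 each
$chunkSize = max(1, (int) ceil(count($hrefsArray) / $maxChildWorkers));
$workArrays = array_chunk($hrefsArray, $chunkSize);

// 3) One child per slice; each child inherits its own copy of $workArray
$pids = array();
foreach ($workArrays as $workArray) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("Couldn't fork, error!");
    } elseif ($pid > 0) {
        // We are the parent: keep forking
        $pids[] = $pid;
    } else {
        // We are the child: grind through our slice, then exit
        foreach ($workArray as $singleHref) {
            doDomStuff($singleHref);
        }
        exit(0);
    }
}

foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}

libxml_clear_errors();

If that's right, the children never re-run the index query at all; each one just inherits its ~100-item slice of the array.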
I am sure I am going about this all totally wrong, so I would greatly appreciate just a general nudge in the right direction.