robots.txt: how effective is it, and how long does it take?
- by Stefan
We recently updated the site to a single-page site that uses jQuery to slide between "pages".
So we now have only index.php.
When you search for the company on engines such as Google, the results show the site along with a listing of its sub-pages, which now lead to outdated pages.
Our hosting plan doesn't allow us to edit the .htaccess file, and the old pages are .html docs, so I cannot use PHP redirects either.
So if I put a robots.txt in place telling the engines not to crawl beyond index.php, how effective will this be in preventing/removing the crawled sub-pages?
And, as a rough guess, how long would it take before the search engines update?
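For reference, a minimal robots.txt along the lines described might look like the sketch below (the `Allow` directive is an extension honored by Google and most major engines, not part of the original robots exclusion standard, so behavior may vary by crawler):

```text
# Block crawling of everything except the root and index.php
User-agent: *
Disallow: /
Allow: /index.php
Allow: /$
```

The file must live at the site root (e.g. example.com/robots.txt) for crawlers to find it.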