Working with Google Webmaster Tools
Posted by com on Pro Webmasters
Published on 2011-11-30T14:24:56Z
Indexed on 2011/11/30 18:16 UTC
google-webmaster-tools
My first question is about Crawl errors in Google Webmaster Tools. Crawl errors is divided into a few sections, one of which is HTTP. I assume all the broken links under HTTP were found by the crawler itself and are not the links from the sitemap. If they were found by scanning the sitemap pages for links, why doesn't it mention the source page, like the Sitemaps section does with its Linked From column? And what is the meaning of Linked From there? I thought that if the section is called Sitemaps, all of its URLs should come from the sitemap, so why is there a Linked From column at all?
My second question: what is the best way to treat search on the site? How come the search result pages are getting indexed? Because all of the search result pages are getting indexed, I end up with too many pages in Linked From. What is the right practice here? (See the sketch below for what I am considering.)
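To illustrate what I am considering, here is a rough sketch of keeping internal search result pages out of the index, either with robots.txt or with a noindex meta tag on the result template (the /search path is just a placeholder for my actual search URL):

    # robots.txt - ask crawlers not to fetch internal search result pages
    User-agent: *
    Disallow: /search

    <!-- or, on the search result template itself: allow crawling but block indexing -->
    <meta name="robots" content="noindex, follow">

Is one of these approaches preferable over the other?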
Question three: in order to improve the response time reported in WMT, can I redirect all of the crawler's requests to a designated spare web server? Is this good practice?
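For context, the kind of routing I have in mind would look roughly like this Apache mod_rewrite sketch; crawler.example.com is a placeholder for the designated server, and matching on the Googlebot user agent is only my assumption of how the crawler would be detected:

    # vhost / .htaccess sketch: proxy requests from Googlebot to a separate backend
    # (requires mod_rewrite and mod_proxy)
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
    RewriteRule ^(.*)$ http://crawler.example.com/$1 [P,L]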
Question four: how should I treat the Google Analytics code (with the PageView and PageLoadTime parameters) when a user requests a non-existent page? Should I render the Google code or not? Right now I include the Google Analytics code in the common page template, so every page, including non-existent pages that show an error message, contains the Google Analytics code, and it seems to have an influence on WMT.
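As a concrete example of what the error template currently renders, here is a sketch based on the classic asynchronous ga.js snippet; UA-XXXXX-Y is a placeholder account ID, and recording the missing URL as a virtual pageview is just one option I have seen suggested:

    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXX-Y']);  // placeholder account ID
      // Option A: record the 404 as a virtual pageview so missing URLs can be reported separately
      _gaq.push(['_trackPageview', '/404.html?page=' + document.location.pathname]);
      // Option B: omit the tracking calls entirely on error pages
      (function() {
        var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
      })();
    </script>

Should error pages send a pageview at all, or is it better to drop the snippet from the error template?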