Search Results

Search found 806 results on 33 pages for 'webmaster'.


  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good-quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings on some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago, with all the basics done: URL rewriting, an XML sitemap submitted to Google, and most importantly, good-quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low-quality links from link farms/directories. My question is: what is the correct way to deal with this? Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have them removed. I've already submitted a request to Google asking that the penalty be removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent-quality, relevant links. Is there anything further I can do?
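    For links that can't be removed at the source, a disavow file is the usual fallback. As a sketch of the plain-text format Google's disavow tool accepts (the domains and URL below are placeholders, not real offenders):

        # Link farms contacted on 2012-06-01, no response
        domain:spam-directory-example.com
        domain:link-farm-example.net
        # Individual low-quality page
        http://another-example.org/low-quality-links.html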

    Read the article

  • Should I, and how do I incorporate microdata into my asp.net website with 47 pages?

    - by Jason Weber
    I have an ASP.NET (VB) website with 47 pages. The problem is that it's in 10 different languages, although 98% of users just use English. I have 5 master pages. I've read the Google Webmaster Tools documentation, but I'm still confounded. Everything I read says microdata is the way to go. Does this mean I should put itemtype and itemprop span and div tags in my master pages, or should I mark up all 47 pages (.resx resource files) separately? The main key phrase I want throughout search results is "machine vision". For instance, the first couple of sentences on my "about.aspx" page are:

        <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in
        <span itemprop="locality">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate
        special machine vision error-proofing products and
        <a href="http://www.ussvision.com/services/" target="_self" itemprop="url">services</a>
        that create lean factories by improving the quality of manufactured products, and by significantly
        reducing manufacturing costs through advanced automation.

    Am I doing this right, and if not, how should I do it? Should I use itemprop="url" or other rich-snippet markup for every link on my website? In other words, do I need to add an itemprop to just about everything, or can I just alter my master pages? Any guidance to help improve my SEO and SERPs would be greatly appreciated!
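    One point worth checking: itemprop attributes only count when they sit inside an element that declares itemscope and an itemtype, so a wrapper in the master page can carry most of the weight. A minimal sketch, assuming the schema.org Organization type (the markup below is illustrative, not taken from the actual site):

        <div itemscope itemtype="http://schema.org/Organization">
            <span itemprop="name">USS Vision Inc.</span>
            <span itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
                <span itemprop="addressLocality">Detroit</span>,
                <span itemprop="addressRegion">Michigan</span>,
                <span itemprop="addressCountry">USA</span>
            </span>
            <a itemprop="url" href="http://www.ussvision.com/">http://www.ussvision.com/</a>
        </div>

    With a wrapper like this in the master page, the per-page .resx content would not need an itemprop on every element, only on the pieces that map to a schema.org property.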

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed:

        www.example.com
        www.example.com/about_us
        www.example.com/products/something
        ...

    and

        example.com
        example.com/about_us
        example.com/products/something
        ...

    For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the "Remove outdated content" screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked-domain URLs now redirect to their www equivalents?
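    For reference, the redirect already described would typically be a rule along these lines on Apache (a sketch only; the domain is a placeholder):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    With a site-wide 301 like this in place, plus the preferred domain set to www in Webmaster Tools, the naked-domain URLs normally drop out of the index on their own as they are recrawled, without per-URL removal requests.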

    Read the article

  • Somehow Google considers a properly 301'd URL as 200 and is still indexing the new content under the old URL?

    - by user2178914
    We redirected all the old URLs to new ones properly using .htaccess. The problem is that Google somehow is still finding content at the old page (which it shouldn't) and stores it in the cache against the old URL rather than the new one. For example:

        Old page: http://www.natures-energies.com/iching.htm
        New page: http://www.natures-energies.com/index.php?option=com_content&view=article&id=760

    If you type the old URL into the browser, it redirects. If you fetch the old URL as Googlebot in Webmaster Tools, the header says 301/permanently redirected. If I try to crawl it as any other bot, it still says 301 redirected. Even if you click the old link in Google, it redirects to the new URL. Only in its cache does Google show the old URL, and moreover it shows the new content in it! I am stumped as to how Google manages to grab the new content and put it under the old URL instead of the new one. One more interesting thing: if I try a cache lookup for the new page, it shows a cache of the new content under the old URL! Any help would be appreciated; I am at my wits' end. I think I have tried almost everything. Is there anything that I'm missing? You can use this search to find the old URLs; maybe you'll spot some patterns that I missed:

        site:www.natures-energies.com inurl:htm -inurl:https|index

    Read the article

  • Meaning of Crawl errors

    - by com
    My question is about the definition of Crawl Errors in Google Webmaster Tools. Crawl errors are divided into a few sections. Let's first consider the HTTP section. I assume that all the broken links in this section were somehow found by the crawler, and that these are not links from the sitemap. If all these links were found by scanning pages from the sitemap for links, why doesn't it mention the source page, like the sitemap section does with its "Linked From" column? Please correct me if I am wrong. The sitemap section: it looks like all those links came from my sitemap. But there is a "Linked From" column; since I already know that all those broken links are from the sitemap, in order to fix the errors I should revise my sitemap. Am I wrong? The "Not followed" section: I don't know what it means. It looks like it accumulates all links that caused a redirect, but for some reason Google considers all those redirects to be wrong redirects. Do you know if there is any set of rules for determining a wrong redirect? Actually, I found where my mistake was: I tried to normalize URLs and redirect them to the right URL, but I did the normalization in the wrong way. The "Not found" section: this is like the HTTP section but with 404 errors, and it has a "Linked From" column. But very often "Linked From" shows "unavailable". What does that mean? Can Google not tell me how it found a non-existent page? And how is this section related to the sitemap section: does it contain all the 404 links from the sitemap too? There are too many 404 links, many more than in the sitemap. I took a look at what we have in "Linked From", and I saw that one link came from the sitemap two months ago. But why does Google keep it indexed? The link is already dead, and the new sitemap doesn't have it. Is there an expiry date for old links? The "Unreachable" section: it looks like this section is for 500 errors. It doesn't have a "Linked From" column. There are too many completely meaningless links; I really don't know where this stuff came from, and without "Linked From" I am not able to figure out how to deal with it. Sorry for such a big topic, but I just want to make clear what every section stands for, because that's crucial in order to deal with all these problems. Hopefully it will be useful not just for me. Thanks!

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere, or does anyone have a good solution? Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are only 2 unique URLs now (like before) but all of the "linked to" pages are shown in one report. Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s, and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.

    Read the article

  • Is the job title of "Webmaster" an anachronism?

    - by Phil.Wheeler
    I've worked with a few people who have the word "webmaster" either as part of their formal job title or as their actual title. The type of work these people do does relate loosely to the web, but I suspect more appropriate titles that accurately reflect the job function would make more sense. Is the "webmaster" moniker still relevant today?

    Read the article

  • Google Webmaster Tools soft 404 - how to update Google search after updating it to 200

    - by Jayapal Chandran
    My site has many modules which are indexed by Google. Recently there was a database problem, so the site was not working well and many links returned 404, I think. Now I have it working again, and all the content that Google previously indexed is back as it was. How do I tell Google that I have corrected this and that the pages which returned 404 now return 200? That is, I want to tell Google that the URLs which returned 404 are now working fine, so that it updates them soon, before it removes them from its index.
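    One common nudge, beyond simply waiting for a recrawl, is to resubmit a sitemap that lists the recovered URLs with a fresh last-modified date. A minimal sketch of such an entry (the URL and date are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
            <url>
                <loc>http://www.example.com/module/page-that-returned-404.html</loc>
                <lastmod>2012-06-15</lastmod>
                <changefreq>weekly</changefreq>
            </url>
        </urlset>

    The Fetch as Googlebot feature in Webmaster Tools can also be used on a handful of the most important URLs to confirm they now return 200.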

    Read the article

  • The Sitemap Paradox

    - by Jeff Atwood
    We use a sitemap on Stack Overflow, but I have mixed feelings about it. Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site. Based on our two years' experience with sitemaps, there's something fundamentally paradoxical about the sitemap: Sitemaps are intended for sites that are hard to crawl properly. If Google can't successfully crawl your site to find a link, but is able to find it in the sitemap it gives the sitemap link no weight and will not index it! That's the sitemap paradox -- if your site isn't being properly crawled (for whatever reason), using a sitemap will not help you! Google goes out of their way to make no sitemap guarantees: "We cannot make any predictions or guarantees about when or if your URLs will be crawled or added to our index" citation "We don't guarantee that we'll crawl or index all of your URLs. For example, we won't crawl or index image URLs contained in your Sitemap." citation "submitting a Sitemap doesn't guarantee that all pages of your site will be crawled or included in our search results" citation Given that links found in sitemaps are merely recommendations, whereas links found on your own website proper are considered canonical ... it seems the only logical thing to do is avoid having a sitemap and make damn sure that Google and any other search engine can properly spider your site using the plain old standard web pages everyone else sees. By the time you have done that, and are getting spidered nice and thoroughly so Google can see that your own site links to these pages, and would be willing to crawl the links -- uh, why do we need a sitemap, again? The sitemap can be actively harmful, because it distracts you from ensuring that search engine spiders are able to successfully crawl your whole site. "Oh, it doesn't matter if the crawler can see it, we'll just slap those links in the sitemap!" Reality is quite the opposite in our experience. That seems more than a little ironic considering sitemaps were intended for sites that have a very deep collection of links or complex UI that may be hard to spider. In our experience, the sitemap does not help, because if Google can't find the link on your site proper, it won't index it from the sitemap anyway. We've seen this proven time and time again with Stack Overflow questions. Am I wrong? Do sitemaps make sense, and we're somehow just using them incorrectly?

    Read the article

  • Upgraded to new Google Admob, now cannot resubmit Google Adsense application

    - by GPS
    I tried to apply for a Google AdSense account some time ago, but it was rejected due to some policy issues. Then I started using a Legacy AdMob account for the same email ID. It was working fine, but Google has now deprecated Legacy AdMob, so I upgraded to the new Google AdMob. Now I want to resubmit my application for AdSense, but whenever I go to the link https://www.google.com/adsense/ it takes me to my homepage, where it shows the older message that my account was not approved. It does not show the option to resubmit the application. The second way it suggests is to go to the My Ads tab, then under "Add AdSense for content", click "Apply now", complete the AdSense application form, and click "Submit my application". But I cannot see "Submit my application" or "Add AdSense for content" in my My Ads tab. Can anybody please tell me what I should do? Thanks.

    Read the article

  • Google Webmaster Tools search queries position

    - by user1592845
    In my website's account on Google Webmaster Tools, some search queries show an average position of 1.0. That makes me expect them to be displayed as the first result, but when I search for these queries I am not able to find my website's page in the results at all. In some cases I navigate to the third or fourth results page and still cannot find it. What factors make my website lose its average position for a search query, and when does Google Webmaster Tools update these values?

    Read the article

  • Can't verify my site on Google (error 403 Forbidden). I have other sites on the same host with no problems whatsoever

    - by Rosamunda Rosamunda
    I can't verify my site on Google. I've done this several times for several sites, all on the same host. I've tried the HTML tag method, HTML file upload, the domain name provider method (I can't find the options that Google tells me I should activate...), and Google Analytics. I always get this response: "Verification failed for http://www.mysite.com/ using the Google Analytics method (1 minute ago). Your verification file returns a status of 403 (Forbidden) instead of 200 (OK)." I've checked the server headers, and I get this result:

        REQUESTING: http://www.mysite.com
        GET / HTTP/1.1
        Connection: Keep-Alive
        Keep-Alive: 300
        Accept: */*
        Host: www.mysite.com
        Accept-Language: en-us
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0)

        SERVER RESPONSE: HTTP/1.1 403 Forbidden
        Date: Wed, 19 Sep 2012 03:25:22 GMT
        Server: Apache/2.2.19 (Unix) mod_ssl/2.2.19 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 PHP/5.2.17
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

        Final Destination Page: (shows my actual homepage)

    What can I do? The hosting is the very same as for my other sites, where I didn't have any issue at all. Thanks for your help! Note: as I have a Drupal 7 site, I've tried a "Drupal solution" first, but haven't found any that solved this issue... How can it be forbidden when I can access the link perfectly fine myself? Is there any solution to this? Thanks!
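    One thing worth ruling out on the server side is a deny rule or security module blocking the verification file or non-browser user agents. A hedged sketch for Apache 2.2 (the verification file name below is a placeholder, not the real one):

        <Files "google1234567890abcdef.html">
            Order allow,deny
            Allow from all
        </Files>

    If the 403 also appears when the verification file is requested directly with a non-browser user agent, the host's mod_security or bot-blocking rules are a plausible culprit and would need to be relaxed by the hosting provider.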

    Read the article

  • The number of soft 404 errors is increasing because of redirects to the home page

    - by Stevie G
    I have an increase in soft 404 errors. Using Apache, in my .htaccess file I have:

        Redirect 301 /test.html? /page/pop/test
        Redirect 301 /about.html? /about

    I have also tried:

        Redirect 301 http://www.example.co.za/test.html? http://www.example.co.za/services/test

    However, whenever I go to:

        http://www.example.co.za/test.html
        http://www.example.co.za/about.html

    it just redirects to the home page. I also have this in .htaccess:

        RewriteRule ^.*$ index.php [NC,L]
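    A hedged sketch of an alternative: mod_alias (Redirect) and mod_rewrite (RewriteRule) are processed independently, so a catch-all RewriteRule can interact badly with Redirect directives, and the trailing "?" in a Redirect path will never match, because the path part of a request does not include the query string. Keeping everything in mod_rewrite, with the specific rules placed before the catch-all, might look like this (paths taken from the question; otherwise an assumption about the intended setup):

        RewriteEngine On
        # Old static pages, redirected permanently before the front-controller catch-all
        RewriteRule ^test\.html$ /page/pop/test [R=301,L]
        RewriteRule ^about\.html$ /about [R=301,L]
        # Existing catch-all that routes everything else to the CMS
        RewriteRule ^.*$ index.php [NC,L]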

    Read the article

  • Changing Google crawl speed doesn't seem to work. Why?

    - by Olivier Pons
    Three days ago I changed the Google crawl rate for my website to 2 requests per second. I got the message in Google Webmaster Tools that the speed change has been taken into account, but after more than three days nothing has happened: it is still one request every ten seconds. My web server is very fast and can handle up to twenty simultaneous connections, and my website is brand new, which means Google is almost the only one crawling it. After more than 30,000 successful requests (no 404s), I think there's something going on... or maybe this is just a bug? Has anyone ever had this problem?

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (the default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots.txt testing tool, and it works. But under HTML Improvements, the previously crawled language-translation URLs remain indexed. The internet says to return a 404 for the removed URLs so that Google knows to de-index them. I checked what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist, since I removed those language translations, but the page loads when it should not! I played around and typed example.com?whatever123: it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's a parameter that needs to be de-indexed.
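    One wrinkle worth noting: while a URL is blocked in robots.txt, Google cannot recrawl it to see a 404, 410, or noindex, so the block itself can keep stale URLs in the index longer. A hedged sketch of the alternative, assuming Apache and that only the fr_FR parameter should survive, would be to drop the "Disallow: /*?l=" line and answer the dead language parameters with 410 Gone instead:

        RewriteEngine On
        # Any l= parameter except the French locale gets 410 Gone
        RewriteCond %{QUERY_STRING} (^|&)l= [NC]
        RewriteCond %{QUERY_STRING} !(^|&)l=fr_FR [NC]
        RewriteRule ^ - [G]

    Once the URLs have been recrawled and dropped, the robots.txt rule could be reinstated if desired.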

    Read the article

  • Pay Per Click Software

    - by Eddy Freeman
    What software do sites like www.shopzilla.com, www.become.com, www.kelkoo.com, etc. use for the pay-per-click product listing campaigns they offer to their retailers? In other words, what kind of software do they use to know that a certain retailer's products have been clicked 50 or 100 times (so that the cost of the clicks can be deducted from his money account), and so on? Can someone point me to that kind of software? EDIT - some explanation: on a site like www.shopzilla.com, retailers upload their products (list their products on the site). Any time a buyer clicks on a product to go to the retailer's website, an amount of money (say $0.20) is deducted from the retailer's account (the money he has deposited in his account with Shopzilla). A retailer can see how many times buyers have clicked on his products and how much money remains in his Shopzilla account. I am looking for the kind of software that comparison sites like Shopzilla use to run this type of campaign. I hope it is clear now.

    Read the article

  • 3 language website using subdomains and mapped domains. Add subdomains or mapped domains to WMT?

    - by Owen Mclaughlin
    I have a new WordPress multisite setup. The main language is Italian, and there are 2 subdomains using en and de for English and German. No auto-translation plugins are being used. The WordPress theme is by Studiopress.com and has SEO built in. I am a little confused about which domains to use in Webmaster Tools. If I use the subdomains (en and de), which have the SEO set up, then Google will index and show en.example.it but won't know about the mapped domains or display them. If I use the mapped domains, then won't Google miss the SEO for the subdomains? I am muddled by this. What should I do?

    Read the article

  • Submitting a sitemap to take care of inherited Google crawler errors

    - by leeand00
    I have an awful lot of Google crawler errors (1,000 or so) after inheriting a site that the previous owner migrated without moving much of its content. Would generating a sitemap of the current site and submitting it to Google help fix this? Is there any quicker, automated way to eliminate the errors other than clicking each and every site error? Note: I have already tried automating this on my own.

    Read the article

  • What constitutes a "substantial, good-faith effort to remove the links"

    - by Luke McCallum
    We engaged the services of a third-party SEO consultant to assist us in managing our meta data and to write regular blogs on our site http://cyberdesignworks.com.au Without our authorisation, the SEO also ran a link-building campaign which has seen us Penguin-slapped, and we no longer appear in Google for a number of our core keywords. Since notification by Google that we have "unnatural links" back in March, we have undertaken a significant campaign to rid ourselves of these dodgy backlinks by a number of methods. I have just received feedback on my 4th or 5th resubmission, which is still advising that we need to make a "substantial, good-faith effort to remove the links" before Google will reconsider us for inclusion. After the effort that I have gone through to get links removed, I am now at a loss as to what else I can do to demonstrate a "substantial, good-faith effort to remove the links". Below is a summary of the actions that we have taken to date. According to http://removem.com we had about 5584 back-linking domains. Of those, we have successfully contacted and had links removed from 344 domains. We ignored links from 625 domains as they were either legitimate press releases, natural backlinks or client websites containing an attribution link in the footer that points back to us. Due to our efforts, or the sites simply becoming defunct, removem.com reports that links from 3262 domains have been removed. We have contacted but are yet to receive feedback from 1666 domains, so we can assume that those backlinks remain. We have configured an automatic 301 redirect for each of the links from these 1666 domains to point to http://redirects.sanscode.com/ which we are calling our Bad Link Catcher (a stroke of genius, I thought), e.g.:

        http://www.mysimplewebdesign.com/create-a-perfect-webpage-with-four-important-tips-from-sydney-web-development-service-companies.php

    As we are a web design agency, we have a large number of client websites which contain an attribution link in their footer that points back to us. We have gone through the vast majority of these and updated the links to replace the anchor text with an image and a rel="nofollow" link, e.g.:

        <a rel="nofollow" target="_blank" href="http://www.cyberdesignworks.com.au/"><img src="https://sessions.sanscode.com/site/assets/media/badges/Badge_CDW_SANSCODE.png"></a>

    (See http://www.milkatwork.com.au/ for an example.) An export from http://removem.com detailing the number of times we have contacted each link and whether it is still found or not was also supplied with each resubmission. The total number of backlinks reported in Google Webmaster Tools has dropped from over 100K to 87K, and I expect it to drop significantly lower once Google re-crawls each back-linking page. Based on all of the above, I am not sure what else I can do to demonstrate a "substantial, good-faith effort to remove the links". I would sincerely appreciate any feedback or suggestions that you may have, as I am out of ideas.

    Read the article

  • Does sitewide html refactoring affect Google traffic?

    - by Name
    Good morning. I recently made a big structural change on my site, and the very next day the number of Google impressions went from 75,000 to 3,000, with a proportional drop in traffic from searches. No URLs were changed, and neither were the page titles or descriptions. Everything is exactly the same, just different-looking, except that the site barely appears on Google anymore. Does anybody have a clue as to why?

    Read the article

  • Migrating from a wordpress.com to wordpress.org blog without harming SEO

    - by kikio
    I've had a WordPress.com weblog for 3 years. Its pages have a good PageRank and are shown on the first pages of search results. Because of the limitations, I need to migrate to my own WordPress installation. How do I migrate safely, with the minimum of SEO problems? (I know how to export content from WordPress.com and import it into a new WordPress.org blog.) Note 1: the link structure and site design are different on the new WordPress blog. (I don't like the WordPress.com link structure :| ) Note 2: as you know, it's not possible to edit the .htaccess file on WordPress.com, so I can't use 301 redirects.

    Read the article

  • Sharp decline in website statistics

    - by Erfan Safarpoor
    Ten months ago my website's statistics were very high. Very high... But after 10 days of server failure, traffic dropped to around 20 times less. I have been lost for a long time, without having made any mistake that I know of... As for the source of links, I have hired a writer, and the final results can be seen. But here is a strange thing: approximately every two months traffic climbs roughly 20 times again, and then drops back down after about 10 days! My website URL: www.sooran.com (food.sooran.com)

    Read the article

  • 410 Responses when your CMS host doesn't support them?

    - by leeand00
    Sending a 410 response for a page that no longer exists should make Google stop crawling that page. The site I am working on was recently migrated, and very little of the content was migrated with it. I've already set up 301 redirects for the existing content (the content that is on both the old and the new site), but now I would like to flush the old content from Google's memory by serving 410 responses when it returns to crawl those URLs and currently finds a 404. However, I asked our CMS host about it, and they said that our CMS does not support 410 responses. Is there some other way to produce a 410 response, for example by 301-redirecting the dead links to a page that returns a 410, or something like a 410 in the form of a meta tag?
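    For what it's worth, there is no meta-tag equivalent of a 410: a meta tag cannot change the HTTP status code, and the closest meta-level signal is robots noindex. If the host allows .htaccess overrides even though the CMS itself can't emit 410s, mod_alias can do it per URL; a hedged sketch with placeholder paths:

        # Tell crawlers these pages are gone for good (HTTP 410)
        Redirect gone /old-section/old-page.html
        Redirect gone /old-section/another-old-page.html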

    Read the article

  • How do search engines segment against locale?

    - by Hope I Helped
    Assume I run a website with multiple language versions. If it had a Spanish section, that section should be included in Spanish-language Google properties such as Google Spain, Google Peru, Google El Salvador, etc., and excluded from the others. Likewise, even though the website would have content in Chinese, multilingual countries such as Singapore should be served content in their main language (English in this case). What is the best approach to ensure the appropriate language is associated with the various geographically segmented search engines?
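    One widely used signal for this is rel="alternate" hreflang annotations in the head of each page, pointing at the equivalent URL for each language or locale. A hedged sketch (the URLs and language codes are placeholders, and the exact set would depend on the site's structure):

        <link rel="alternate" hreflang="en" href="http://example.com/en/page" />
        <link rel="alternate" hreflang="es" href="http://example.com/es/page" />
        <link rel="alternate" hreflang="zh" href="http://example.com/zh/page" />
        <link rel="alternate" hreflang="en-sg" href="http://example.com/en/page" />

    Country-level targeting for a subdomain or subdirectory can also be set under geographic targeting in Webmaster Tools, which complements the per-page annotations.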

    Read the article

  • When will Google restore page rank after I have cleared the "network unreachable" errors?

    - by Jayapal Chandran
    For the past month I was getting "network unreachable" errors. I contacted my web host, and they said that Google's bots had been blocked because they were causing too much traffic; they have now whitelisted the Google bots. The errors no longer appear, but my ranking and search results have dropped by more than 6 pages, or my pages do not appear at all. As of just yesterday, Google is able to read my robots.txt and sitemap again. When will the search results and page rank return to their previous positions, as they were a month ago? Most links do not appear in Google search results at all.

    Read the article
