Search Results

Search found 4781 results on 192 pages for 'seo audit'.

  • Switching to HTTPS - redirect question

    - by seengee
    Following the recent Google announcement about improved rankings for sites served over HTTPS, we have a number of clients asking about this. Is it safe to just 301 redirect all pages to their SSL equivalent, for example in a common PHP include file:

        if ($_SERVER['HTTPS'] != "on") {
            $redirect = "https://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
            header("Location: " . $redirect, true, 301);
            exit();
        }

    I'm aware this is also possible within a .htaccess file, but that cannot be modified in our case. All internal links would be switched to https:// links, but we still need to sort out incoming links from Google and elsewhere. Is this a sound approach? Are there any other gotchas to be aware of?
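
    For reference, the equivalent redirect where a .htaccess file can be modified would look roughly like this (a sketch using standard mod_rewrite directives, not the poster's actual configuration):

        RewriteEngine On
        # Send any plain-HTTP request to the same URL over HTTPS
        RewriteCond %{HTTPS} off
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]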

  • Creating a sitemap for Googlebot - how to mark dynamic content / dynamic subpages?

    - by ojek
    I have a website that is an internet forum. The forum has many categories, and a single category page contains a lot of subpages with listed threads. The forum is brand new; about a week ago I filled it with a few hundred thousand threads. I then looked at the Google Webmaster Tools page to see any changes in indexing, but the index only went up from 300 to about 1200, which means it did not index my added threads (although it did add something). This is what my sitemap.xml contains, which I uploaded on their website (of course there is a lot more of the code; this is just a snippet for a single category, and my real sitemap file lists all the categories like this):

        <url>
          <loc>http://mysite.com/Forums/Physics</loc>
          <changefreq>hourly</changefreq>
        </url>

    Now, I would expect Googlebot to go into http://mysite.com/Forums/Physics, move through all the subpages with thread links, and then get inside each thread and index its content. How can I do this? If this is unclear, I will add a real link to my website.
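
    A sitemap only tells the crawler about URLs it explicitly lists, so listing a category's root page does not by itself get the paginated subpages or the individual threads crawled. A minimal sketch of what fuller entries might look like, assuming hypothetical URL patterns for category subpages and threads (the question does not show the real URL scheme):

        <url>
          <loc>http://mysite.com/Forums/Physics?page=2</loc>
          <changefreq>hourly</changefreq>
        </url>
        <url>
          <loc>http://mysite.com/Forums/Physics/Thread/12345</loc>
          <changefreq>weekly</changefreq>
        </url>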

  • href="x-default" for english version which isn't an auto-redirecting homepage or country selector?

    - by Noam
    For each URL on my site, I auto-redirect according to the Accept-Language header. The site architecture is: English version at http://mydomain.com/page, Spanish version at http://es.mydomain.com/page, etc. The English version is displayed unless the header indicates a specific language other than en that I support, in which case a redirect occurs. Google says this: "For language/country selectors or auto-redirecting homepages, you should add an annotation for the hreflang value x-default as well." My pages aren't language selectors, nor are they the homepage, but I am auto-redirecting. My question is: should my English version be hreflang="x-default", hreflang="en", or both?
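
    For reference, the annotation set under discussion would look something like this on every language version of the page; one common arrangement, sketched from the pattern in Google's documentation using the question's example URLs, points x-default at the English URL:

        <link rel="alternate" hreflang="en" href="http://mydomain.com/page" />
        <link rel="alternate" hreflang="es" href="http://es.mydomain.com/page" />
        <link rel="alternate" hreflang="x-default" href="http://mydomain.com/page" />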

  • Is it safe to Block These URLs with Robots.txt?

    - by Edgar Quintero
    I have a website where all URLs have been optimized and 301 redirected from nasty URLs to clean ones. However, the unclean URLs are still linked everywhere throughout the site: in menus, content, products, etc. Google currently has all the clean URLs indexed, along with a few unclean URLs too. So the old URLs are still linked all over the site (ideally this wouldn't be the case, but this is how it is at the moment). I would like to block the unclean URLs with robots.txt. The question: if I block these unclean URLs with robots.txt while the entire website still links to them (but they all redirect to the clean version), will this affect the indexing status at all?
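
    A minimal sketch of the kind of rule being considered, assuming for illustration that the unclean URLs share a query-string pattern (the question does not show the actual URLs; Google and Bing support the * wildcard):

        User-agent: *
        Disallow: /*?id=

    One detail worth keeping in mind: a URL disallowed in robots.txt cannot be crawled, so search engines that obey the rule never get to see the 301 pointing at the clean version.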

  • Directing crawlers to in-language content per language sub-domain

    - by Noam
    I have a multilingual website with many pages (40M). The site has UGC, and each translation actually applies to the titles: each sub-domain points to the same content with different titles per language. As far as I understand, each sub-domain should be indexed by search engines, meaning they will actually need to crawl 40M x supported-languages pages. So I thought it might be best to direct each subdomain's crawler to pages that are fully in that language (titles + UGC). Is there a way to do this? Should search engines understand this on their own?
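
    One way to steer each subdomain's crawl, sketched here as a possibility rather than a known fix, is to serve every subdomain its own robots.txt whose Sitemap line points at a sitemap listing only that language's fully translated pages (the subdomain and file names are illustrative):

        # robots.txt served on es.example.com
        User-agent: *
        Disallow:
        Sitemap: http://es.example.com/sitemap-es.xml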

  • Google indexed site's address by accident. What do I do now?

    - by AndrejaKo
    I was making a site for a friend of mine, and he wanted to be able to see my progress as I worked on it, so I decided to put the site on a server on my own computer and make it accessible via a domain name registered to me. It turns out that I forgot to set up a robots.txt file for the site, and somehow Google indexed it. My question is: what do I do now? As I understand it, Google doesn't like duplicate content, and my friend could have problems when I upload the new site to his server. Right now his current site, which only has a work-in-progress page, comes first on Google when searching for the relevant keywords, and I really, really don't want to damage that. Is there anything else I need to be concerned about?

  • Ideas to tackle unwanted bad press/review on Google's SERP?

    - by Rob
    After Googling our company name, to our horror we found that someone on Yelp.co.uk has reviewed our company. On the SERP your eye is immediately drawn to the 2-star review some complete stranger has written, which to be honest is pure slander! The most infuriating thing is that the person who reviewed our company has never even been a client/customer. It's a bit like me reviewing a restaurant having never eaten or even been in there! We've sent her a private message on Yelp asking her to remove the review and also sent a complaint to Yelp themselves, but have yet to get a reply. We've resisted going mad at the reviewer and have also requested that she re-review us, having just relaunched our new website (it still riles us that she's not even a client, though!). We've had genuine customers/clients review us on Yelp, yet this 2-star review remains on Google's SERP. Roughly how long would it take for our new reviews to overtake this one? Does anyone have any suggestions as to how we can push the review off the first page of Google's SERP, or any creative ways in which we can tackle this issue?

  • How do I find information on who links to my sites?

    - by bobdobbs
    I'm trying to figure out if there's a free way to get information on backlinks to my site. I've had Google Webmaster Tools and Google Analytics set up for years, but I can't find access to data about site backlinks in either toolset. Webmaster Tools, under 'Traffic' > 'Links to your site', gives me the same message for all of my sites: "No data available". I haven't been able to find anything in GA that gives any information on backlinks. I've heard of using "link:" as an operator in Google search, but for each of my sites this returns either zero or very few results, in cases where I know I have many backlinks; most of the links simply aren't shown. My thinking is that Google maintains a graph of who links to my site, so I figured they might let me see it. But I can't figure out how. I've found this tool on a spammy website: http://www.backlinkwatch.com. It offers more data on my backlinks than Google does, and offers more results in exchange for a paid subscription. The data it offers for free looks good, but the results are limited and the site has popups and obnoxious ads. So, in short: how do I get data on who links to me? Is there a free way?

  • URL rewriting on Plesk using ISAPI_Rewrite 3 Lite

    - by Anusha
    I am using a Plesk-based Windows web server, with Windows Server 2008 and IIS 6, for my e-commerce website. I want to rewrite the URLs for all dynamic pages, so I installed ISAPI_Rewrite 3 Lite on the web server and uploaded a .htaccess file with a basic rule, as follows:

        RewriteEngine on
        RewriteRule ^contact\.html$ contactus.php? [NC,R]

    I have never worked with ISAPI or with URL rewriting before. My question is how to proceed after installation: should I upload a .htaccess file or a httpd.conf file, or should I write the rules in the httpd.conf editor that the ISAPI_Rewrite Manager provides? I have tried all of these steps, but unfortunately none of them worked. Any immediate solution would be appreciated.
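
    A sketch of how the same rule would look at server level; this assumes (as the vendor's edition comparison described) that ISAPI_Rewrite 3 Lite reads only the global httpd.conf, per-directory .htaccess support being a paid-edition feature:

        RewriteEngine on
        # Permanently redirect /contact.html to contactus.php for every site on the server
        RewriteRule ^contact\.html$ /contactus.php [NC,R=301,L]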

  • What are the Consequences of Using Relative Location Headers?

    - by Alan Storm
    According to the spec, Location headers used in a redirect require a server name:

        HTTP/1.1 301 Moved Permanently
        ...
        Location: http://example.com/foo/baz/bar

    However, in 2012, most web browsers will recognize a relative path and redirect you to the new location using the original server name:

        HTTP/1.1 301 Moved Permanently
        ...
        Location: /foo/baz/bar

    Are there any negative/surprising consequences to using relative URLs in Location headers? My particular concern is how Google/search engines will interpret this, but if there's anything else I'm not thinking about, I'd love to hear it.

  • WordPress website issue [duplicate]

    - by David
    This question already has an answer here: "What are the best ways to increase a site's position in Google?" (18 answers)

    My website runs on WordPress. The problem is that if I search Google for keywords related to my webpages, no results for them show up in web search. But if I search Google's blog results, my webpages do show up there. I want to know what the problem is with my webpages. Why do they appear in Google blog search instead of Google web search?

  • Ecommerce item deleted by user: 301 redirect to HOME PAGE or 404 Not Found?

    - by Marco Demaio
    I know this question is somewhat similar to this one, where they recommend using 404, but after reading this other one, where they suggest using 301 when changing site URLs (in that specific case due to a redesign/refactoring), I got a bit confused, and I hope someone could clarify this specific example. Let's say I have an ecommerce site, and the final user inserted some interesting items into the site, so the ecommerce webapp created the item pages at the URLs http://...?id=20, http://...?id=30, etc. Now let's say some of those items attracted many external links from other sites, because people found them very interesting and linked to them. After some years the final user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. no longer exist, but many pages on the web still link to them. What should the ecommerce site do now: just show a 404 page for those items? But wouldn't this lose all the PageRank passed to the item pages by the external links? Isn't it better to 301 redirect to the HOME PAGE, which at least passes the PR to the HOME PAGE? Thanks.

    EDIT: According to the answers, the best thing to do so far is a 404/410. To make this question more complete, I would like to ask about a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones he previously deleted above), maybe changing their names and descriptions a bit, but they are basically the same items. The webapp has no way to know these newly added items are the old items, so it creates them as new items with new URLs http://...?id=100, http://...?id=101. Does it make sense at this point to 301 redirect the old URLs to the new ones?

    MORE EDIT (it would be VERY IMPORTANT TO UNDERSTAND): According to the clever answers received so far, it seems that for the special case explained in my last EDIT I could use 301, since it's not deceptive: the new page is basically a replacement for the old page in terms of content. This is done to keep the PR passed from external links and also for better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from external broken links, why not just always use 301? In my understanding Google dislikes duplicated content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicated content by Google? Google itself suggests 301 redirecting index.html to the document root, so if they considered 301 to be duplicated content, wouldn't that be duplicated content too? Why do they suggest it? Let me provoke you: why not just add a 301 to the HOME PAGE for every not-found page?

    (*1) As a user, when I follow a broken URL from an external link to some website's page, I would stick around longer if I got redirected to the HOME PAGE rather than seeing a 404 page, where I would think the website doesn't even exist anymore and might not even try to visit its HOME PAGE.
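
    A minimal sketch of the scheme the edits converge on, written in PHP with hypothetical helpers (find_item and find_replacement_url are illustrative, not part of any real webapp): serve 410 for items that are gone for good, and 301 only when a deleted item has a known replacement:

        <?php
        $item = find_item($_GET['id']);  // hypothetical: fetch the item, null if deleted
        if ($item === null) {
            // hypothetical: map an old item id to the URL of its re-created replacement
            $newUrl = find_replacement_url($_GET['id']);
            if ($newUrl !== null) {
                header("Location: " . $newUrl, true, 301);  // replaced: permanent redirect
            } else {
                header("HTTP/1.1 410 Gone");                // deleted for good
            }
            exit();
        }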

  • 30x Redirects and Google page ranking

    - by Mechaflash
    I'm building a Drupal site where much of the content is going to be static on specific pages. In Drupal, each piece of content (whether you like it or not) gets its own page (node). To ensure that users do not view these nodes directly, I'm thinking about setting up a 30x redirect on them, or flat out returning 404 Not Found. Will this method affect me negatively with Google? Is there a different method you could propose that may be better?

  • Is it a good idea to add robots "noindex" meta tags to deep, low-content pages, e.g. product model data?

    - by Cognize
    I'm considering adding robots "noindex, follow" meta tags to the very numerous product data pages that are linked from the product style pages in our online store. For example, each product style has a page with full text content on the product:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE

    Then many pages with technical data for each model code are linked from the product style page:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-1
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-2
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-3

    It is these technical data pages that I intend to add the noindex tag to, as I imagine it might stop them from cannibalizing keyword authority from the more important, content-rich pages on the site. Any advice appreciated.
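
    For reference, the tag in question as it would sit in the <head> of each data page (standard robots meta tag syntax):

        <meta name="robots" content="noindex, follow" />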

  • Can I make query strings produce separate pages?

    - by John Smith
    I have a profile page with a URL like this:

        localhost/profile.php?username=Bob

    I was wondering: if I had a separate <title> which changed according to the username, would these URLs produce separate pages in the Google search results? How do I tell Google to only use the username string, or does it search within the whole title? On a similar note, how would I create a separate page per username, like localhost/bob instead of a query string, the way Facebook does? Does that require making a new file for each user?
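
    On the last point, the usual approach is a rewrite rule that maps the pretty URL onto the existing script, so no per-user file is created. A sketch using Apache mod_rewrite, assuming a profile.php that accepts the username parameter as in the question:

        RewriteEngine On
        # Map /bob to /profile.php?username=bob; one script serves every user
        RewriteRule ^([A-Za-z0-9_-]+)/?$ profile.php?username=$1 [L,QSA]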

  • How to show the right country domain in Google Places?

    - by Baumr
    Background: A site has multiple ccTLDs: example.com for the US, example.co.uk for UK users, example.de for Germans, etc. Googling for certain city keywords will return rich snippets with a list of Google Places results (screenshot omitted).

    Problem: When searching on Google Germany, the domain for US users (example.com) appears instead of the corresponding ccTLD (example.de). This is not a good user experience, as users would most likely prefer to book on a site localized for them (e.g. language and currency).

    Question: What solutions are there? Is it possible to return different ccTLDs in rich snippets for Google searches in Germany/UK?

    Ideas: Would implementing the hreflang annotation resolve this? What about entering multiple corresponding URLs in the structured data markup?
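
    The hreflang idea from the question would look roughly like this on each ccTLD's version of a page, using region-qualified language codes (the URLs are the question's examples; the exact codes are an assumption):

        <link rel="alternate" hreflang="en-US" href="http://example.com/page" />
        <link rel="alternate" hreflang="en-GB" href="http://example.co.uk/page" />
        <link rel="alternate" hreflang="de-DE" href="http://example.de/page" />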

  • In Linux, which tools are free to use to make Web site mockups?

    - by user11173
    I am using Ubuntu/Fedora. Which available mockup builders can I use before making a website?

    Follow-up: Adobe AIR for Linux is no longer supported ("To access older, unsupported versions, please read the AIR archive. Different operating system?"). I downloaded Mockups for Desktop from http://www.balsamiq.com/download, which offers these direct links: Cross-Platform: MockupsForDesktop.air; Windows: MockupsForDesktop.exe; Mac OSX: MockupsForDesktop.dmg; Linux 32bit: MockupsForDesktop32bit.deb; Linux 64bit: MockupsForDesktop64bit.deb; Windows with Adobe AIR bundled: MockupsForDesktopInstallerWin.zip (for offline installations).

  • Asterisk in URL?

    - by KajMagnus
    Are there any reasons I shouldn't use an asterisk (*) in a URL? Background: with asterisks, I could provide these nice and user-friendly (or what do you think?) URLs:

        example.com/some/folder/search-phrase*   (search for pages whose names start with "search-phrase", located in /some/folder/)
        example.com/some/**/*search-phrase*      (search for any page with "search-phrase" anywhere in its name)
        example.com/some/folder/*                (list all pages in /some/folder/, rather than showing the /some/folder/index page)

  • Tips for managing internal and external links using WordPress [closed]

    - by keruilin
    So I'm looking for ways to optimize my site for user and search engine purposes. I've read several articles and looked at several different plugins, and to say the least I'm thoroughly confused about the best practices for managing internal and external links. Here is a list of some of my questions:

        - Which internal links should be set to "nofollow"?
        - Which external links should be set to "nofollow"?
        - To what degree does actively managing links contribute to your PR?
        - Should you use "nofollow" blindly on all links in comments?
        - If a link to an external site is broken (404 or whatever), should you "nofollow" that link?
        - What about "noindex"?

    As you can see, lots of questions. I'm hoping that you experienced webmasters can give a newb some best-practice advice.
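
    For reference, the attribute these questions revolve around, as it appears on a link (standard rel="nofollow" markup):

        <a href="http://example.com/" rel="nofollow">some link</a>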

  • How do I control how often search engines visit my site?

    - by Nick
    I've been using the following line in the <head> of my sites for years:

        <meta name="revisit-after" content="3 days" />

    I recently discovered that it's not one of the meta tags that Google understands, which I take to mean that there's no point in including it, and that it's been doing no good at all for years. How often do search engines crawl a website by default, and what reliable ways are there to increase or decrease that frequency?

  • How do I stop Google indexing my main page as https [duplicate]

    - by user2897488
    This question already has an answer here: "https:// search results appearing on Google for purely http:// site" (2 answers)

    Due to historic reasons, we have things set up so that www.mydomain.com redirects to store.mydomain.com. This worked perfectly fine until recently, when Google started sending visitors to https://www.mydomain.com, which doesn't have an SSL certificate (and never has). Strangely, it's only the first link that goes to https://www.mydomain.com; all other links point correctly to http://store.mydomain.com. Because there is no certificate on the www version, users are getting an error message. How do I make Google revert to pointing the main link at http://store.mydomain.com (or even http://www.mydomain.com)? If I remove https://www.mydomain.com from Google Webmaster Tools, will this also remove the redirected page (http://store.mydomain.com)? Thanks.

  • SEO Mapping, Tracking and Reporting

    Linking to the pages of a website is done because search engines become more aware of a site's presence when its pages are found at the other end of industry terms used as anchor text in content at other locations. The number and quality of those links are factors that help promote rankings. When placed for SEO purposes they should be one-way links rather than reciprocal, since reciprocal links earn no ranking brownie points and it is prohibitively time-consuming to administer a thousand of them. This is not to be confused with link exchanges; when you can...

  • How to make Google recognize language for a multilingual website?

    - by Julien Fouilhé
    A few weeks ago, I implemented translation functionality for my company's website. The site is now available in French and English, and I looked on the internet for the best way to do this without losing any rankings or dropping our pages out of Google. Here is what I did:

        - I set a response header: Content-Language: en or Content-Language: fr.
        - My URLs are formatted as http://www.website.com/en/... and http://www.website.com/fr/...
        - My html tag carries a lang attribute: <html lang="en"> or <html lang="fr">.
        - There is a <link rel="alternate" hreflang="en" href="EnglishPageUrl"> on the French pages and a <link rel="alternate" hreflang="fr" href="FrenchPageUrl"> on the English pages.

    But Google keeps returning English pages when I search on the French engine, even though the website was at first only available in English. Is that normal? Do I just have to wait? It has been almost a month now; I thought it would be okay by now. Thank you.

  • Recommendations for a network of student-related content

    - by Javier Marín
    I am running a network of websites with notes, homework, essays, etc., where users share their own content. I'm having real trouble with the latest Google updates (Penguin, Panda, etc.) because the content is mainly low-quality and all on the same topic. For that reason, I want to create more websites and have a better chance of appearing in the SERPs. My question is: does Google analyze related websites in order to exclude them from the results? I've thought about distributing the websites around the world, on different hosting providers, but I'm afraid Google would connect them through their Analytics, Webmaster Tools, or AdSense accounts. Is that possible? What other recommendations do you have?

  • Is there any advantage/disadvantage to using robots.txt to disallow access to legal pages such as terms, privacy policy, etc.?

    - by CaptainCodeman
    As I understand it, repetitive content is a detriment to search engine placement. Given that many websites use similar or even identical "Terms and Conditions" and "Privacy Policy" pages, due to similar legal wording or due to copy and paste from the same source, would it be a good idea to disallow access to these pages via robots.txt, in order to avoid being penalized for non-original content? Or, on the contrary, could the search engines identify this as circumvention and penalize the site for trying to hide content? Or does it not matter?
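
    For reference, the kind of rule being weighed, assuming hypothetical /terms and /privacy paths for the legal pages:

        User-agent: *
        Disallow: /terms
        Disallow: /privacy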
