Search Results

Search found 20852 results on 835 pages for 'local seo'.

Page 23 of 835

  • List of freely available SEO tools (software) for keyword rank checking? [closed]

    - by Craig
    Possible Duplicate: can anyone recommend a Google SERP tracker? Requirements: analysis of a site's positions for a list of keywords in different search engines; tracking keyword positions on search engines (I want to see if my keyword rankings have moved up or down); creating reports. I use Excel plus the Rank Checker addon for Firefox to analyze the site's position in search engines for my keyword list. Are there any tools that are tested and work properly? Thanks.

    Read the article

  • SEO blog Indexing: wordpress.com subdomain vs a registered domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and eCommerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and occasionally Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a dot-WordPress domain for my portfolio. When using a WP installation together with a WP subdomain, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than a generic myportfolio.com domain? I've seen mixed opinions: some people seem to favor a WP subdomain for URL output, while others say it's a moot point and that Google will not favor a WP subdomain over a dot-com domain as long as your meta tags are updated and your content is keyword-optimized. I tend to disagree, and believe a WP subdomain would be more likely to be indexed and to output more URLs than an individual, laconic domain like myportfolio.com. Am I wrong?

    Read the article

  • SEO - PageRank on Facebook pages, but pages have no back links to them?

    - by Marco Demaio
    Have a look at these two pages: 1) http://it-it.facebook.com/jeanchristophe.cataliotti (PageRank 2 from the Google toolbar). Amazingly, it has got NO links to it: http://siteexplorer.search.yahoo.com/search?p=it-it.facebook.com/jeanchristophe.cataliotti&fr=sfp 2) http://www.facebook.com/group.php?gid=18463182878&v=wall&viewas=0 (PageRank 1 from the Google toolbar). Still amazingly, it has got NO links to it: http://siteexplorer.search.yahoo.com/search?p=www.facebook.com/group.php?gid=18463182878&v=wall&viewas=0&fr=sfp How do you explain this? I'm hoping for an explanation that goes beyond just saying that the PR in the Google toolbar is not updated, because that cannot be the reason for this!

    Read the article

  • SEO disasters moving domain for a high traffic website?

    - by chrism2671
    We're looking at moving our website from http://www.wikijob.co.uk to http://www.wikijob.com/uk as we spread our wings internationally. Our .co.uk website has a PR6 and receives around half a million visitors a month, 40% of them international. The wikijob.com domain, while registered for a while, has been neither used nor promoted. I am concerned that moving domain could really haemorrhage our traffic and result in a loss of goodwill from Google, even if we use a 301; but equally, if we could transfer that PageRank to the .com domain, it would give us a massive head start around the world. Should we do it, or should we start over with .com and leave .co.uk as is?

    Read the article

  • SEO: Is there a limit to how long titles/descriptions should be?

    - by jiewmeng
    I am trying to fill in the title and meta description for my Tumblr blog. The way I will do that is via the theme. The title isn't too much of an issue: I can just use the post title, although some post types do not have titles. The main question is about the description. I am thinking of using the start of the body content, but for now there is no way to limit the length of the body content. I wonder if it's OK to have the whole body in the description? I have contacted Tumblr support to suggest that they add a way of limiting text length in their template tags.

    Read the article

  • How can I clone or mirror a site without SEO penalties for duplicate content?

    - by Amanda
    I am a web developer and I want to create clones of the sites I've developed for clients, so that I have an "original copy" on a subdomain of my own website and can showcase my work to new clients. What is the best way to avoid getting my clients' original websites penalised for duplicate content? I am planning to have a robots.txt file that disallows all robots, as well as using <link href="http://www.client-canonical-site.com/" rel="canonical" /> in the <head> of the pages. Is that sufficient? Should I use rel="nofollow" on all the links as well?
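
    A minimal sketch of what that plan could look like on the showcase subdomain (this only illustrates the poster's own proposal; the domain names are placeholders):

        # robots.txt at the root of the showcase subdomain: block all crawlers
        User-agent: *
        Disallow: /

    and, in the <head> of each cloned page:

        <link rel="canonical" href="http://www.client-canonical-site.com/" />

    One caveat worth noting: if robots.txt blocks crawling entirely, search engines may never fetch the pages to see the canonical tag, so the two measures partly overlap rather than reinforce each other.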

    Read the article

  • Question about Headings for Professionals <H1>... <H9> in SEO & Browser Compatibility Differences

    - by Sam
    We all know the importance and significance of headings for professional webmasters. Professional developers know these as <h1>Heading 1</h1>, h2 ... h6. As a daring web developer I lately needed more short headings for a complex structured document, so I thought "what the hell", went ahead, and used in CSS: h1,h2,h3,h4,h5,h6{ } h7{ } h8{ } h9{ } My experiment turned out to pay off, but only in Firefox, Safari, Chrome etc., not in Internet Explorer 8. Q1. Who (and when) decided that headings should go up to h6, and not h4 or h7? Q2. Why do h7-h9 work perfectly in all major browsers except IE8? Q3. What is the significance for Bing, Yahoo and Google in terms of recognition of headings h1 ~ h9? Obviously h1 is more important than h2, but do they differentiate between h5 and h6, or not any more after h3?
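
    For reference on Q2: IE8, like earlier IE versions, ignores CSS on unknown elements such as <h7> unless they have first been created via script, which is the same trick the HTML5 shiv relies on. A hedged sketch:

        <script>
          // IE8 only applies CSS to unknown elements after they have
          // been created once via script (the HTML5-shiv workaround).
          document.createElement('h7');
          document.createElement('h8');
          document.createElement('h9');
        </script>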

    Read the article

  • Using Google Webmaster & Analytics, what data to look at to improve website performance?

    - by Rob
    Using data from Google Analytics and Webmaster Tools, what data should I be looking at to improve my website's performance? I want to improve the SEO, usability and just general performance of my website. EDIT: It's a portfolio website; we've done the initial SEO, optimised all images etc. and made the site as fast as possible. What kind of things should I be looking out for in the Analytics and Webmaster data to improve performance, both for SEO and for each individual page?

    Read the article

  • SEO: 301 for a page which has no mirror path?

    - by Alex
    Hello, I just did a 301 redirect, and the domain and the pages which have a mirror file path are fine. But I have one directory which is not going to be part of the new site, and I don't know how to redirect the old files that were there. I need something like this: oldDomain/oldDir/file.php needs to redirect to newDomain/differentDir/file.php. Is that possible? What is the 301 redirect rule for that? Update: I just added this rule as suggested by @Itai and it didn't work: redirectMatch permanent ^/outdoors/trees/tanoak.php$ http://www.comehike.com/outdoors/trees/129/Tanoak Any idea why?
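
    For the general oldDir-to-differentDir case, a mod_alias sketch along these lines may do it (assuming Apache; the domain and directory names are the question's placeholders, and the $1 backreference carries the file name across):

        # In the old domain's .htaccess or vhost configuration:
        # send everything under /oldDir/ to /differentDir/ on the new domain
        RedirectMatch 301 ^/oldDir/(.*)$ http://newDomain/differentDir/$1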

    Read the article

  • Category to Page and blocking category URLs via robots.txt - good for SEO?

    - by user2952353
    I am using a template which, on pages, allows me to add sidebars and more content below and above the content I pull from a category, which is very helpful. If I create pages to display my categories' content, won't the page URLs conflict with the category URLs? By conflict I mean causing a duplicate content error. What I thought might help was to block the blog's category URLs in robots.txt, e.g. /category/books /category/music Would that be a good practice in order to avoid the duplicate content penalty? Any tips appreciated.
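
    If you do go the robots.txt route, the rules for the two example categories would look something like this (a sketch of the poster's own idea, not an endorsement of it):

        User-agent: *
        Disallow: /category/books
        Disallow: /category/music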

    Read the article

  • Is a 302 redirect to a random URL from the homepage an SEO problem?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content. Not even a <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone suggest a good way to solve this problem?
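
    One direction to explore (a rough sketch only, assuming PHP as the response headers suggest; the id generator and markup are hypothetical) is to answer the homepage request itself with a 200 page that links to the freshly generated note URL, instead of issuing the 302:

        <?php
        // Hypothetical sketch: serve the homepage with a 200 and expose
        // the new note's random URL as a link instead of redirecting to it.
        $id = substr(md5(mt_rand()), 0, 10); // placeholder id generator
        http_response_code(200);             // explicit 200, no Location header
        ?>
        <html>
          <body>
            <p>Your new note: <a href="/<?php echo $id; ?>">/<?php echo $id; ?></a></p>
          </body>
        </html>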

    Read the article

  • Is a subdomain per service a good idea for SEO?

    - by Kennie R.
    I am creating a site with quite a few services, such as a free account service, a subdomain for the site's blog, and then an article base and other related services. Would having them all on subdomains be a good idea? Are there any caveats you are aware of in existing search engines for this? I believe mapping foo.example.com to example.com/foo, to provide an alternative just in case, is a good idea for sitemaps; I like to keep things clean.

    Read the article

  • SEO penalty for "duplicate" content when a site's also accessible via another domain name?

    - by tog22
    While testing searches for keywords on my site, I notice that a mirror of it at http://a8.8d.344a.static.theplanet.com/ sometimes appears as the top result rather than my primary domain. It looks like this is an alternative address for my server. Will the presence of identical content at this domain and at my primary domain result in a Google penalty? If so, what can I do about it? Thanks for any help.
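
    If the alternative hostname cannot be switched off at the server level, one common fix is to 301-redirect every non-primary Host to the primary domain. An Apache mod_rewrite sketch (example.com stands in for the real domain):

        RewriteEngine On
        # Any request whose Host is not the primary domain gets a 301 there
        RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]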

    Read the article

  • What is the proper SEO way to add and remove links to our site and sitemap?

    - by ElHaix
    As content fluctuates on our site, we will obviously add new pages to our sitemap and link list for search engine indexing. Over time, certain links will become less relevant, and I would like to know how to keep them from being crawled. The links themselves may not be removed, but we don't want to dilute the link list with less relevant links as time passes. I'm guessing the status code of the page would change - to what? Should the pages also be removed from the sitemap?
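
    If a page is eventually retired outright, one conventional choice is to return 410 Gone (or 404) and drop it from the sitemap; with Apache's mod_alias that can be a one-liner (the path is a placeholder):

        # Mark a retired page as permanently gone (HTTP 410)
        Redirect gone /old-page.html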

    Read the article

  • How should I study a competitor's off page SEO?

    - by Chris Adragna
    What are the things I need to do, and with what tools, to learn what a competitor has working for him or her off-page (free and paid tools - please suggest both)? First of all, I suppose I want to see all of the sites linking in and what anchor text is used. Is there something that would report on the anchor text linking in, such as counting the keyword phrases used as anchor text? Also, it would be helpful to see where the PageRank is coming from, for example by listing inbound links ordered by the PR of the linking page. Lastly, if I'm missing something here in the way of off-page attributes, please say so.

    Read the article

  • Linking to a competitor with the same keyword I am targeting: good or bad for SEO?

    - by Badal Surana
    I am linking to one of my competitors from my site using the same keyword I am targeting for my own site (my competitor is paying me for that). For example: my competitor and I are both targeting the keyword "foo", and my competitor is paying me to link to his site from my site with the keyword "foo". What I want to know is: if I do that, will my site's position go down in the Google search results, or will it make no difference?

    Read the article

  • What HTML and CSS markup is best for SEO for a list of questions (like on Stack Exchange sites)?

    - by Oleg9
    On Stack Overflow, a question block (in the question list on the index page and so on) is represented by the following HTML:

        <div class="question-summary narrow tagged-interesting" id="question-summary-19832613">
          <div onclick="window.location.href='/questions/19832613/how-to-display-only-transit-routesfor-trains-in-google-maps-api'" class="cp">
            <div class="votes">
              <div class="mini-counts">0</div>
              <div>votes</div>
            </div>
            <div class="status unanswered">
              <div class="mini-counts">0</div>
              <div>answers</div>
            </div>
            <div class="views">
              <div class="mini-counts">3</div>
              <div>views</div>
            </div>
          </div>
          <div class="summary">
            <h3>...</h3>
            <div class="tags t-javascript t-google-maps t-google t-google-maps-api-3">
            </div>
            <div class="started">
              <a href="/questions/19832613/how-to-display-only-transit-routesfor-trains-in-google-maps-api" class="started-link"><span title="2013-11-07 09:52:29Z" class="relativetime">1 min ago</span></a>
              <a href="/users/1309392/shirish">Shirish</a>
              <span class="reputation-score" title="reputation score " dir="ltr">189</span>
            </div>
          </div>
        </div>

    It uses float positioning. My question is: would CSS-styled tables be a better choice? (It is a table, isn't it?) Or does it just depend on what you prefer, with no effect on the technical side (search engines or anything else)? The background information (such as the number of views, votes etc.) comes first in the code, and I know that search engines have a limit on how much of each page they view. So would it be better to place the divs in order of importance and then position them on the page using CSS methods (like negative margins and absolute positioning)? Or is that not so important in this instance?
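
    For comparison, the tabular reading of that block can be expressed in CSS alone, without <table> markup, which is one of the options the question weighs (a sketch reusing the post's class names; the wrapper class is hypothetical):

        /* CSS-table layout: each question summary becomes a row of cells */
        .question-list    { display: table; width: 100%; }
        .question-summary { display: table-row; }
        .cp, .summary     { display: table-cell; vertical-align: top; }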

    Read the article

  • How do I optimize SEO in a multiblog WordPress install?

    - by user35585
    We are about to launch two product pages plus a corporate website. The goal is to keep a blog on all of the sites, but the question is how to do it in a way that keeps everything unified without confusing Google's web crawlers. We considered the following options:

    1. A single blog from which we retrieve two categories with custom CSS, so that one blog splits into two category-dependent blogs; this way we get the feeds and can point to them.
    2. Two product blogs whose posts we retrieve into a bigger, corporate blog.
    3. Three independent blogs.

    Although I was for the first option, so that we only have to address our content from the product pages, I would sincerely like to hear your opinion. We are afraid duplicate content or strange link games may make us lose PageRank. How would you do it?

    Read the article

  • How do sites avoid SEO issues / legalities with subdomain unique ids?

    - by JM4
    I was looking through a few websites recently and noticed a trend I'm not sure I understand. Sites are creating unique referral URLs for customers in the form http://customname.site.com (if somebody were to use http://www.site.com/customname it would function the same way). Using Google Chrome, I can see the sites doing a 302 redirect at some point, apparently via some sort of .htaccess rule that takes the subdomain name (customname), applies it as a referral parameter and keeps it in session during the entire process. However, there must be thousands of these custom URLs that people are typing in. How is each of these "subdomains" not treated as a separate URL which in turn redirects to the same page (in short, generating tons of links all pointing to the same page, which Google would normally frown upon)? Additionally, the links also appear on the sites themselves as clickable links, so I'm not sure how these are not tracked. Similarly, the "unique" URL is never indexed or cached in Google search results. How is this handled? A true example of this (without the referral aspect) is http://sfgiants.com, which does a 302 redirect to the much longer proper San Francisco Giants MLB homepage. I am wondering how sfgiants.com is not indexed (assuming that this shortened link appears on several MLB pages)? 1 - I know these are 302 redirects; I can see this in the site's network flow. 2 - These links do in fact appear on the page itself, because in some areas the bottom of the page may say: "Send this page to a friend! http://name.site.com/", which in turn redirects to something like http://www.site.com?id=name, so the id value can be stored in the session.
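
    A plausible server-side sketch of the pattern being described (Apache mod_rewrite behind a wildcard *.site.com DNS record; every name here is illustrative, not taken from any real site):

        RewriteEngine On
        # Leave the canonical www host alone
        RewriteCond %{HTTP_HOST} !^www\.site\.com$ [NC]
        # Capture the subdomain and forward it as a referral id (302)
        RewriteCond %{HTTP_HOST} ^([^.]+)\.site\.com$ [NC]
        RewriteRule ^$ http://www.site.com/?id=%1 [R=302,L]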

    Read the article

  • How much of a benefit does a changing landing page give in terms of SEO?

    - by Glycan
    I have a friend with a small business and a website. He asked me whether he should put a section on his landing page, below the fold, with his most recent review (or something along those lines). Specifically, he wants to know if that's the most efficient use of his time. Is there a list of the things Google values, weighted against each other, so that these kinds of questions could be answered easily?

    Read the article

  • SEO: Is promoting your backlinks a good strategy for improving search results for your site's name?

    - by user4394
    I run a website in the sports space that's been around for about three years. I rank well for targeted keywords, but searching for the name of my site itself returns very poor results: it shows my site, its Facebook/Twitter pages, and then 15 pages of unrelated spam that happen to contain two words that, when combined, form my website's name. After that, my backlinks begin to show up sporadically. As far as I can tell, I simply don't have enough backlinks, and the backlinks I do have are ranked worse than the spam. (Site Explorer lists 200 external links to any page on our domain and 20 external links directly to the front page.) To counter this, my strategy is to promote my backlinks so they get a better page rank than the spam. Does that make sense? Am I going in the right direction, or should I just focus on getting more backlinks pointing directly to my site? Thanks in advance, and I'd be happy to answer any questions I can (without giving away my site, of course).

    Read the article

  • How to reverse engineer the SEO on a website?

    - by Startup Crazy
    I have read this question, but my question is a bit different. I want to know how I can reverse engineer another website that ranks best for some keywords. For example, some website called www.bla.com ranks high for many keywords, and I want to learn from it how my website can reach the same authority and ranking (or probably a better ranking, if I find something they are missing). Can anyone lay this out as a procedure: how do you reverse engineer a website's SEO?

    Read the article

  • SEO/Google: How should I handle multiple countries and domains?

    - by Valorized
    Hello. I'm the webmaster of an online shop based in Austria (Europe), so we registered example.at. We also own various other domain names like example-shop.com and example.info. Currently all those domains are 301-redirected to the .at one. Still available: example.net and example.org (and .ws/.cc); unfortunately not available: .de/.eu. The .com is currently owned by one of our partners; the contract ends in 2012, but until then we have no chance of getting it. Recently I read more about geo-targeting and noticed ONE big problem: the TLD .at is hardly recognised in Germany (google.de), whereas it is excellently listed in Austria (google.at). And because of the .at, I cannot set the target location manually (or to unlisted). More info: https://www.google.com/support/webmasters/bin/answer.py?answer=62399&hl=en This is a big problem. I looked at Google Analytics and, although Germany is ten times as big as Austria, there are more visits from Austria. So, how should I configure the domains in order to get the best results in both Germany and Austria? I thought of some solutions:

    1. Stop redirecting the .info, so there would be a duplicate of the .at. In Webmaster Tools I could then set the target location of the .info to Germany. As the .at still targets Austria, both would be targeted; however, I don't know whether Google punishes one of them because of the duplicate content.
    2. Same as 1, but with .net or .org (I think .info is not a "nice" domain, and moreover I think search engines prefer .com, .net or .org to .info).
    3. Same as 1 (or 2), but with a rel="canonical" on the new one pointing to the .at. Con: I don't think this will improve the situation, because it still tells Google that the .at is the important one, as in: "if .info points to .at, the target may still be Austria".
    4. A rel="canonical" on the .at pointing to the new domain (.info, .net or .org). However, I fear this will have a negative impact on the listing on google.at, as in: "the well-known .at is not important any more, so let's focus on the .info, which is not well known" - and therefore a bad position in the search results.
    5. Redirect the .at to the new domain (.info, .net or .org) with a 301. Con: this might be worse than 4; we might lose PageRank (or "the value of the page", since Google says PageRank is not important any more). Moreover, this might be even more confusing for customers; in 3 and 4, customers are not redirected and do not see the canonical meta tag.

    So, dear experts, please tell me what the best option would be! Thank you very much for your advice in advance, and please excuse the long question. I really appreciate this network! Please note: it's exactly the same content AND language; in Austria we speak German.
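
    One mechanism worth noting (not raised in the post itself) is the rel="alternate" hreflang annotation, which explicitly tells search engines which same-language variant targets which country. A sketch assuming the .at/.info pairing from option 1, placed in the <head> of both sites:

        <link rel="alternate" hreflang="de-AT" href="http://www.example.at/" />
        <link rel="alternate" hreflang="de-DE" href="http://www.example.info/" />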

    Read the article
