Search Results

Search found 5969 results on 239 pages for 'seo man'.


  • Is there a weight for an HTML link?

    - by Questions
    I would like to know if there is any weight associated with an HTML link (as a backlink) when Google does crawling/indexing. Will 1, 2 and 3 below ever be considered backlinks by Google?

    1. <a href="xyz.com">1</a> // one-character link
    2. <a href="xyz.com"> </a> // blank-space link
    3. <a href="xyz.com">!</a> // special-character link
    4. <a href="xyz.com">keyword</a> // meaningful-word link

    I hope my question is understandable and that this is the right forum; I don't know how to put it in other words. Thanks in advance.

    Read the article

  • Indexing and Page Ranking Issues

    - by user631249
    Hi all, I am on the first page of Google for keywords related to MOVING, but I can't seem to break into the carpet cleaning rankings. I have made changes and additions which haven't been indexed yet. Should I wait for the next crawl, or can someone please give me pointers on the carpet cleaning indexing? Also, I have 53 pages submitted and only 38 indexed; where could the problem be? Is there software to check for indexing hiccups? Thanks.

    Read the article

  • How to handle URLs with diacritic characters

    - by user359650
    I am wondering how to handle URLs which correspond to strings containing diacritics (á, ü, ...). I believe what we see mostly are URLs where diacritic characters have been converted to their closest ASCII equivalent, for instance Rånades på Skyttis i Ö-vik converted to ranades-pa-skyttis-i-o-vik. However, depending on the language, such a conversion might be incorrect. For instance in German, ü should be converted to ue and not just u, as seen in the following URL, which represents the string Bayern München as bayern-muenchen: http://www.bundesliga.de/en/liga/clubs/fc-bayern-muenchen/index.php

    What I've also noticed is that browsers can render non-ASCII characters when they are percent-encoded in the URL, which is the approach Wikipedia has chosen, for instance http://de.wikipedia.org/wiki/FC_Bayern_M%C3%BCnchen, which the browser renders with the ü intact.

    Therefore I'm considering the following approach for creating URL slugs:
    (1) convert strings while replacing non-ASCII characters with their recommended ASCII representation: Bayern München -> bayern-muenchen
    (2) also convert strings to percent-encoding: Bayern München -> bayern-m%C3%BCnchen
    then create a 301 redirect from version (1) to version (2).

    Version (1) URLs could be used for marketing purposes (e.g. mywebsite.com/bayern-muenchen), but the URLs that would end up being displayed in the browser bar would be version (2) URLs (e.g. mywebsite.com/bayern-münchen). Can you foresee particular problems with this approach? (Wikipedia is not doing it and I wonder why, apart from the fact that they don't need to market their URLs.)
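    A minimal sketch of the two slug versions in Python (purely illustrative; the mapping table and function names are assumptions, not from the original post):

      import re
      from urllib.parse import quote

      # Language-aware transliteration table (an assumption: extend per language).
      GERMAN_MAP = {"ä": "ae", "ö": "oe", "ü": "ue", "ß": "ss"}

      def ascii_slug(title):
          """Version (1): 'Bayern München' -> 'bayern-muenchen'."""
          text = title.lower()
          for src, dst in GERMAN_MAP.items():
              text = text.replace(src, dst)
          # Collapse any remaining non-alphanumeric runs into hyphens.
          return re.sub(r"[^a-z0-9]+", "-", text).strip("-")

      def encoded_slug(title):
          """Version (2): 'Bayern München' -> 'bayern-m%C3%BCnchen'."""
          return quote(re.sub(r"\s+", "-", title.lower().strip()))

      print(ascii_slug("Bayern München"))    # bayern-muenchen
      print(encoded_slug("Bayern München"))  # bayern-m%C3%BCnchen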

    Read the article

  • CDN virtual subdomain causes duplicated content

    - by user3474818
    I have created a subdomain and a CNAME record which points to the domain root. The subdomain www.static.example.com is actually a copy of the entire website www.example.com, and it is supposed to act as a CDN and serve static content in order to improve speed. However, all of my content can be accessed via the subdomain as well, so Google has indexed it all and now I am dealing with duplicate content. How could I deny access to crawlers for the subdomain, bearing in mind that I do not have a different subfolder for the subdomain, so I can't create a separate robots.txt file?
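    One common workaround (a sketch only, assuming Apache with mod_rewrite and mod_headers; the file name is made up) is to vary the response on the Host header, since both hosts share one docroot:

      RewriteEngine On
      # Serve a separate robots file when the request arrives on the CDN host.
      RewriteCond %{HTTP_HOST} ^www\.static\.example\.com$ [NC]
      RewriteRule ^robots\.txt$ robots-static.txt [L]

      # Alternatively, mark everything served via the CDN host as noindex.
      SetEnvIfNoCase Host ^www\.static\.example\.com$ cdn_host
      Header set X-Robots-Tag "noindex, nofollow" env=cdn_host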

    Read the article

  • Consolidating multiple domain names

    - by Mike
    I have a client that has three separately hosted copies of their website, each on a separate domain name. The websites are all essentially the same, bar a few discrepancies caused by badly managed updates in the past. I will soon be launching a completely new website for them, at which point all three domain names are to resolve to the same web server. One domain name will become the default domain name that they refer to in all their literature, and the other two will simply be used as catch-alls for old links, bookmarks, and so on. I would like to know what people consider the best route to achieve this. My plan so far is:

    1. Get the new site up and running on the new web server.
    2. Change the relevant A record of the default domain name to point to the new web server.
    3. Either:
       a) keep the existing hosting accounts in operation and create a list of 301 redirects from old page names on the old sites to new page names on the new site; or
       b) configure CNAME records for the non-default domain names, each pointing to the new web server, and create a list of 301 redirects on the new site from old page names to new page names (see the sketch below).

    If my understanding is correct, 3a will help to maintain whatever search engine rankings the sites already have (I know it's not going to be perfect), while at the same time informing search engines that the old domain names are no longer in use. What's a good approach to take here?
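    For option 3b, a hedged sketch of what the catch-all might look like (assuming Apache name-based virtual hosts; every domain name and path here is a placeholder):

      <VirtualHost *:80>
          ServerName old-domain-one.example
          ServerAlias old-domain-two.example
          # Specific old pages first, then a catch-all to the default domain.
          RedirectPermanent /old-page.html http://default-domain.example/new-page/
          RedirectPermanent / http://default-domain.example/
      </VirtualHost>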

    Read the article

  • When will my old page stop appearing on Google?

    - by Bane
    I recently bought a new address for my Blogger blog, moving from yannbane.blogspot.com to www.yannbane.com. However, www.yannbane.com addresses do not appear when they are searched for! Is this normal? How much time will it take for Google to update its index? yannbane.blogspot.com 301-redirects to www.yannbane.com. Both are added to my Webmaster Tools account, but strangely it shows no data for www.yannbane.com. And finally, is there something I could do to speed up the process?

    Read the article

  • Recovering from an incorrectly deployed robots.txt?

    - by Doug T.
    We accidentally deployed a robots.txt from our development site that disallowed all crawling. This has caused traffic to dip dramatically, and Google results to report: "A description for this result is not available because of this site's robots.txt – learn more." We corrected the robots.txt about 1.5 weeks ago, and you can see our robots.txt here. However, search results still report the same robots.txt message, and the same appears to be true for Bing. We've taken the following action:

    - Submitted the site to be recrawled through Google Webmaster Tools.
    - Submitted a sitemap to Google (basically doing everything possible to say "Hey, we're here! And we're crawlable!").

    Indeed, a lot of crawl activity seems to be happening lately, but still no description is crawled. I noticed this question where the problem was specific to a 303 redirect back to a disallowed path. We are 301-redirecting to /blog, but crawling is allowed there. This redirect is due to a site redesign: WordPress paths for posts such as /2012/02/12/yadda yadda have been moved to /blog/2012/02/12. We 301-redirect into WordPress under /blog to keep our Google juice. However, the sitemap we submitted might have /blog URLs; I'm not sure how much this matters. We clearly want to preserve Google juice for URLs linked to us from before our redesign with the /2012/02/... URLs. So perhaps this has prevented some content from getting recrawled? How can we get all of our content, with links pointed to our site from both before and after the redesign, reporting descriptions? How can we resolve this problem and get our search traffic back to where it used to be?
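    For reference, the difference between a disallow-all robots.txt like the one deployed by accident and a fully permissive one is a single character (a generic sketch of the two files; the poster's actual files aren't shown):

      # Blocks all crawlers (the accidental development version):
      User-agent: *
      Disallow: /

      # Allows all crawling (a fully permissive replacement):
      User-agent: *
      Disallow: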

    Read the article

  • Repeat use of Schema / Rich Snippets markup, i.e. LocalBusiness data

    - by bybe
    I am unable to find official wording, and I'm hoping that some Rich Snippets/Schema guru can give me some insight into the proper usage of repeated content when it comes to markup. I'm building a site that wants to use Schema as the markup type, and the owner would like as much usage as possible. The business name, telephone and address will appear on every page: is it valid, or even useful, to use Rich Snippets on every page where this information is displayed? This information appears in the header and footer of every page of the site. To give you an example, my current markup is below:

      <body itemscope itemtype="http://schema.org/LocalBusiness">
        <header>
          <a itemprop="url" href="http://www.domain.co.uk/">
            <img itemprop="logo" src="image.png" alt="Company Name Logo" />
          </a>
          <span itemprop="telephone">01202 000 000</span>
        </header>
        <div>This is where the content will go</div>
        <footer>
          <span itemprop="name">Company Name</span>
          <span itemprop="description">A small little bit about this company</span>
          <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
            <span itemprop="streetAddress">Address goes here</span>
            <span itemprop="addressLocality">Area here</span>,
            <span itemprop="addressRegion">Region here</span>
          </div>
        </footer>
      </body>
      <!-- Local Business Schema now closed -->

    So as you can see above, this information will be displayed on every single page. Is it valid, or bad, to repeat this information in Schema format?

    Read the article

  • How to handle new domain names?

    - by michael
    I have a new product which I'll call a pen ink reloader. I have a website using my product's name, for example www.inkywink.com, which I want to have found by searches for keywords such as "pen ink", "pen out of ink", "ink for pens" etc., since nobody knows that a pen ink reloader exists. I see that it's quite difficult to get on the front page for these keywords since they have lots of competition. However, I notice that the exact phrases I want to rank highly for are available as domains. I purchase "www.penink.com" and "penoutofink.com", which for argument's sake are highly searched and the perfect keywords to get eyes on my money site, www.inkywink.com. Two questions:

    1. What is my best option to leverage those names so that they appear near the top of searches and bring traffic to my money site? Do I just 301-redirect them to inkywink.com, or should I create small original content on each with links to my main site?
    2. If I just have them redirected to inkywink.com, am I able to use keywords in the meta tags and headers for each site separately, or do they all automatically obtain the same headers and tags as the site to which they're redirected?

    Thanks to anyone who can help, as I'm a real newbie to all this.

    Read the article

  • schema.org 'reviewRating' tag not recognized by Google snippet testing tool

    - by saravanak
    I'm trying to add more structural information to my web pages by using the microdata format described at www.schema.org. The procedure seems straightforward, but I'm having issues validating my results in the Google Rich Snippets Testing Tool. On the review page in question, I'm using the 'reviewRating' property to specify rating values for that particular review. I followed the same format as defined at schema.org/Rating, but this markup fails validation in Google's Rich Snippets Testing Tool with the following error info:

      Item Type: http://schema.org/rating
      reviewrating = 5
      ratingvalue = 5
      Warning: Property "reviewrating" was not found.
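    For comparison, a minimal sketch of review markup as schema.org documents it (illustrative only, not the poster's page): reviewRating is defined as a property of Review, not of Rating, and microdata property names are case-sensitive (reviewRating, ratingValue):

      <div itemscope itemtype="http://schema.org/Review">
        <span itemprop="name">Great product</span>
        <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
          <span itemprop="ratingValue">5</span> out of
          <span itemprop="bestRating">5</span>
        </div>
      </div>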

    Read the article

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "Fetch as Google" or "Fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag:

      HTTP/1.1 302 Moved Temporarily
      Server: nginx/1.4.1
      Date: Tue, 01 Oct 2013 04:37:37 GMT
      Content-Type: text/html
      Transfer-Encoding: chunked
      Connection: keep-alive
      X-Powered-By: PHP/5.4.17-1~dotdeb.1
      Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of my page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone suggest a good way to solve this problem?
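    One way out (a minimal sketch in Python/Flask purely for illustration; the poster's actual stack is PHP behind nginx) is to serve the landing page itself with a 200 and only redirect on an explicit "create" action, so crawlers fetching / see real content:

      import random, string
      from flask import Flask, redirect

      app = Flask(__name__)

      def random_note_id(length=10):
          alphabet = string.ascii_letters + string.digits
          return "".join(random.choices(alphabet, k=length))

      @app.route("/")
      def index():
          # Crawlers and users both get indexable content and a 200 here.
          return "<h1>Notepad</h1><a href='/new'>Create a note</a>"

      @app.route("/new")
      def new_note():
          # The 302 now happens only when a note is actually created.
          return redirect("/" + random_note_id(), code=302)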

    Read the article

  • Hijax == sneaky JavaScript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question: Will I get penalised for "sneaky JavaScript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)?

    Goal: I want to implement Hijax to make AJAX content accessible to non-JavaScript users and search engine crawlers.

    Background: I'm working on a static file server (GitHub Pages). No server-side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY: I don't want to repeat the common OUTER template (header, navigation menu, footer, etc.) in all my files, so it will live in the main index.html.

    The Hijax setup: The index.html page contains all the OUTER html/css/js, i.e. the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" HTML, and a navigation menu with a Hijax link to an "about" page. With JavaScript disabled (e.g. a crawler), following the link leads to /about.html. With JavaScript enabled (e.g. most people), the link updates the URL hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");.

    AJAX content: about.html does not contain anything extra to try and cloak content for crawlers. It contains enough HTML/CSS/JavaScript to display /about.html as a standalone page with its own META data, e.g. <html><head><title>About</title>...</head><body></body></html>, but NO OUTER HTML template (header, navigation menu, footer, etc.). Its <body> contains a <div id="inner-container"> which holds the content that is injected into index.html, plus a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page's "inner content", with a link to the index.html page to get the full page layout with menu.

    The (sneaky?) redirect: Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page, there is no OUTER HTML template (header, navigation menu, footer, etc.), so I need to do a JavaScript redirect to get the person over to the /#about page (deep-linking to the "about" page "state" in index.html). I'm thinking of a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template, but the core "page" content is practically identical.

    Known issue with inbound links (e.g. share/bookmarking): It seems that if a user shares the URL /#about on their blog, then when allocating inbound links to my site, Google ignores everything after the # and allocates the value to the / page. See: http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try and minimise this issue by offering "share" buttons on the page with the appropriate URLs, i.e. /about.html.

    Duplicate: Sorry, I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google, then realised it probably belongs more on this Stack Exchange site. Not sure if I should delete the Stack Overflow question, or just leave it on both sites? Please leave a comment.
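    A minimal sketch of the redirect in question (an assumption, not the poster's actual code), as it might appear in about.html:

      <script>
        // Send JS-enabled visitors to the hash-fragment version of this page,
        // so they get the full outer template from index.html.
        if (window.location.pathname === "/about.html") {
          window.location.replace("/#about");
        }
      </script>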

    Read the article

  • Content appearing under multiple categories; anything I can do to prevent duplicate penalty?

    - by dave
    I'm working with a CMS that allows me to post content into multiple categories. So I have this link:

      www.site.com/category/green-cars
        Here are the GREEN cars
        TITLE: A Big green car
        INTRO: this is a great big green car.

    But then I have this link:

      www.site.com/category/big-cars
        Here are the BIG cars
        TITLE: A Big green car
        INTRO: this is a great big green car.

    So essentially, for every item of content, the header and the intro sentence are the same regardless of the category the item appears in. Will a search engine penalise the site for having the same content in this way? I've looked at canonical links, but I don't think they're relevant here: all my content points to the same page, but the content may appear in multiple categories first. Or am I worrying about nothing? Thanks.

    Read the article

  • Searching for a page with a very unique title doesn't find the intended page... Why?

    - by Sam
    Dear folks, a question about appearing in Google search results. A page of mine has this extremely unique page title:

      Ein gutes Logo passt wie ein Handschuh auf Ihre Marke in die Hände

    Now, when I search for that phrase without quotes, all kinds of other irrelevant pages show up containing only one or at best two words from my unique title, although I have searched for the entire phrase! And when I search for the phrase in quotes:

      "Ein gutes Logo passt wie ein Handschuh auf Ihre Marke in die Hände"

    it finds exactly one result, which is my page. What is going on? Why doesn't the unique result show up without the quotes? Your ideas and suggestions are welcome and much appreciated.

    Read the article

  • Pagination and duplicate content

    - by jazz090
    I have an archive page that displays the articles published. Because there were so many, I added a pagination script: 127.0.0.1/archive/2/?p=x&pp=y, where p is the page number and pp is the number of articles to display per page. The pagination looks like this:

      Prev 1 2 3 4 ... 12 Next

    with each item linking to a page like <a href="?p=x">x</a>. I also have an items-per-page setter, 25 | 50 | 100, with links like <a href="?pp=y">y</a>. Now I have a PHP script that stores pp in a session variable. But I am worried about duplicate content (since larger pp values include the content of smaller ones), and also about content not getting indexed because it is not linked from the pagination: in the example above, pages 5-11 will not be indexed. Any ideas on how to fix this?
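    One conventional pattern for cases like this (a sketch under assumptions, not from the original post; the URLs are illustrative) is to canonicalise every ?p/?pp combination onto a fixed page size and describe the sequence with rel prev/next hints, e.g. in the head of page 3:

      <link rel="canonical" href="http://127.0.0.1/archive/2/?p=3&amp;pp=25">
      <link rel="prev" href="http://127.0.0.1/archive/2/?p=2&amp;pp=25">
      <link rel="next" href="http://127.0.0.1/archive/2/?p=4&amp;pp=25">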

    Read the article

  • My blog which gets 300+ daily impressions has stopped appearing on the 1st page of Google

    - by Sangram
    I have had a blog about placement papers since December 2010, and my monthly impressions are around 4,000. For the last 2 days, my blog has disappeared from Google search engine result pages and impressions have reduced drastically (see my stats reports). My blog is still in the index, because when I search site:mydomain.com on Google I can see all my pages indexed there, but my pages which used to appear on the first or second result page of Google no longer appear. For example, if I search with the query GE round 2 code writing test on bing.com or Yahoo search, the first link on the result page is my blog; if you do the same on Google, my URLs do not appear even in the first 3 result pages. I used to get lots of visitors through these search queries.

    Read the article

  • Using an old penalized domain for a new website

    - by MiladSafaei
    I had a website with two domains, firstdomain.com and first-domain.com. The main domain was first-domain.com and the other one was 301-redirected to it. The main domain got a Google Penguin penalty some months ago. I uploaded the site to a new domain and removed the old domain from Google's index by using the remove URL tool in Webmaster Tools. Now I want to use firstdomain.com (which was redirected to the penalized domain) for a new and fresh website with new and perfect content. Is it likely that the history of this domain will affect the new website and harm its ranking?

    Read the article

  • Why is my site not ranking for a particular keyword?

    - by user543087
    My site is only 3 days short of being 6 months old. The website is unique; that is, there is no competitor for this type of site in India: it provides a comparison of payment gateways in India, apart from the payment gateway companies themselves. I've optimized it for the keyword "payment gateway". I've changed the URLs twice, most recently 3 months back, after which Google Webmaster Tools reported plenty of 404s. I corrected the useful 404s and left the meaningless ones as they are. What is the reason it's not ranking well for payment gateways? Even sites with a single page about payment gateways seem to be ranking better than this one. Is it due to:

    1) the lots of outbound links to in-context companies and information, or
    2) the 404s reported in Google Webmaster Tools?

    My other site successfully gets 1,500 unique visitors daily and is up in the Google rankings. I don't know why this one is not!

    Read the article

  • Submitting new site to directories - will Google penalize?

    - by Programmer Joe
    I just started a new site with a forum to discuss stocks. I've already submitted my site to DMOZ. To help promote my site, and to help people who are looking for stock discussion forums find it, I'm thinking of submitting my site to a few more directories, but I'm hesitant because I know Google will penalize a site if it believes the backlinks to the site are spammy and/or low quality. So I have a few questions:

    1) If I submit my site to directories with a PR between 4 and 5, will those backlinks be considered spammy/low quality? I noticed most free directories have a PR between 4 and 5, but I don't know if backlinks from those directories would be considered spammy by Google.
    2) I'm thinking of submitting it to Best of the Web and JoeAnt, but these are paid. Does anybody have any experience with these two paid directories? Are they considered higher quality by Google?

    Read the article

  • Google indexed our pages and they appeared in search, but today everything vanished

    - by ganesh
    We had a robots.txt which disallowed all robots while we were in development. We are live now. We changed the robots.txt as per our requirements a day before, and submitted the site for indexing using the Google Webmaster Tools index status. After this, we could see proper results in search, and Google Images search was working as expected. Suddenly, today, all of this vanished from Google Search, and I can again see the old result, i.e. the under-construction message. I checked the robots.txt in Google Webmaster Tools: it's OK, with no crawling errors. Kindly let me know what exactly happened, and how I can report this issue to Google.

    Read the article

  • Meta Description or Title For Post Contents

    - by Raj
    I have a site that has posts without titles; you can think of them as being a lot like Twitter tweets. Should I put the post contents in the meta title tag or in the description tag? If I put the post contents in one of the tags, what should I put in the other? My challenge is that we have very short pieces of content with no titles, and I want to avoid having too many duplicate titles or descriptions. We do have things like user name, full name, date, etc.
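    A hedged sketch of one possible split (every name and value below is invented for illustration): put the distinguishing metadata plus a short excerpt in the title, and the full short post in the description, so neither tag is duplicated across posts:

      <title>@jdoe: "Shipped the new build tonight..." (2 Mar 2013)</title>
      <meta name="description"
            content="Shipped the new build tonight. Huge thanks to everyone who filed bugs.">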

    Read the article

  • Alternative to nofollow: custom 302 url shortener?

    - by Dogweather
    Here's the scenario: lots of blogging platforms make it tedious to insert nofollow into links within post content, i.e. you need to edit the HTML, format it correctly, etc. I have a client who posts lots of content with links that should be nofollowed, and I thought of a novel way to handle this, since the blogging platform they're using makes it hard. I install a URL-shortener web app on the client's domain. The shortener works as normal, except that it redirects via 302 instead of 301; the PageRank will therefore stay at the shortener's domain and not flow on to the target site. Part 2: in order to get the PageRank to collect meaningfully, say on the site's home page, the shortened URLs would be generated like this: /link?12345 instead of /link/12345, and the path /link would 301 to the home page. This way the id is a param, not a path element, so all the incoming shortened links point to one path, which transfers PageRank to the home page. So that's my idea; I wanted to see if anybody could find problems with it. Thanks!
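    A minimal sketch of the proposed behaviour (in Python/Flask, purely illustrative; the post doesn't specify an implementation, and the link table here is made up):

      from flask import Flask, redirect, request

      app = Flask(__name__)
      LINKS = {"12345": "http://example.com/target-page"}  # id -> destination

      @app.route("/link")
      def shortener():
          link_id = next(iter(request.args), None)  # the id arrives as /link?12345
          if link_id in LINKS:
              # Temporary redirect for visitors: PageRank should not flow to the target.
              return redirect(LINKS[link_id], code=302)
          # The bare /link path permanently redirects, collecting PageRank on the home page.
          return redirect("/", code=301)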

    Read the article

  • Incorrect Dates for Downloadable Files in Google Snippets

    - by alds
    We have a website which creates publications and newsletters. In most (if not all) of the search results for our downloadable files, the Google snippets show dates one to three months earlier than when those files were actually published. That should be impossible, since those files did not even exist before the dates shown, and the dates themselves do not seem to have any significance on our site. Any suggestions as to where the dates come from?

    Read the article

  • Google Webmaster Tools shows invalid data

    - by Altar
    Webmaster Tools shows 1 URL error (a not-found page). The report says that 5 pages are linking to a page (let's call it x) that does not exist (and because it doesn't exist, it returns a soft 404). HOWEVER, I looked at the source code of those 5 pages, and none of them links to the x page. It is as if Google sees an old version of those pages that was indeed pointing to x. What is the problem? How do I know whether Google has cached an old version of those 5 pages?

    Read the article

  • What meta tag or microdata should I use for a dictionary web application?

    - by vonPetrushev
    I have a web application that serves as a dictionary, and it ranks well on Google when searching for a rare word in my language (the dictionary's target language). I want the result to appear for define: some-word queries, as well as in the search results when someone uses the Dictionary filter tool. Should I add some special meta tag in the head of the HTML? How about microdata? Does Google have a special webmaster tool for registering dictionaries like wordnetweb.princeton.edu or en.wiktionary.org?

    Read the article
