Search Results

Search found 5028 results on 202 pages for 'india seo analyst'.


  • limit the countries in the Google Maps autocomplete list to India, the USA and the UK

    - by Manoj Thakur
    This code is not working. Please tell me the exact solution.

        <script src="maps.googleapis.com/maps/api/…" type="text/javascript"></script>
        <script type="text/javascript">
        function initialize() {
            var input = document.getElementById('searchTextField');
            /* restrict to multiple cities? */
            var options = {
                types: ['(cities)'],
                componentRestrictions: { country: ["usa", "uk"] }
            };
            var autocomplete = new google.maps.places.Autocomplete(input, options);
        }
        google.maps.event.addDomListener(window, 'load', initialize);
        </script>
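
    Editor's note: the likely culprit is the country codes - componentRestrictions expects two-letter ISO 3166-1 codes, so "usa" is not a valid value and India ("in") is missing entirely. A minimal sketch of corrected options, assuming the Places library is loaded:

        // Restrict suggestions to India, the USA and the UK
        // ('gb' is the ISO 3166-1 code for the United Kingdom)
        var options = {
            types: ['(cities)'],
            componentRestrictions: { country: ['in', 'us', 'gb'] }
        };
        var autocomplete = new google.maps.places.Autocomplete(
            document.getElementById('searchTextField'), options);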

    Read the article

  • [SEO] sitemap.xml: what is the precision of the priority field?

    - by Christoph
    Unfortunately the specification does not say anything about precision. The XML schema definition states that it is of the type xsd:decimal:

        <xsd:restriction base="xsd:decimal">
            <xsd:minInclusive value="0.0"/>
            <xsd:maxInclusive value="1.0"/>
        </xsd:restriction>

    I have a sitemap generator that uses up to 10 digits after the decimal point, where often only the last few digits differ. These numbers are perfectly valid according to the XSD, yet I found some pages (3, 4) that state that only 0.0, 0.1, 0.2, ..., 1.0 are valid values. How will the search engines react to such a sitemap? Will some just round the value? I know it is unlikely that anyone can answer this definitively unless they work for a search engine, but I think experience reports will also do.
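
    Editor's note: if the stricter interpretation worries you, a generator can simply emit one decimal place, which satisfies both readings. A sketch in JavaScript (the original generator's language is not stated):

        // Clamp a priority to [0, 1] and round to one decimal place
        function sitemapPriority(value) {
            var clamped = Math.min(1, Math.max(0, value));
            return (Math.round(clamped * 10) / 10).toFixed(1);
        }
        console.log(sitemapPriority(0.7333333333)); // "0.7"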

    Read the article

  • SEO - Do Google and other search engines index links within <noscript> tags?

    - by Joe
    I have set up some dropdown menus allowing users to find pages on my website by selecting options across multiple dropdowns, e.g. color of car and year. This generates a link like: mysite.xyz/blue/2010/ The only problem is, because this link is dynamically assembled with JavaScript, I've also had to assemble each possible combination from the dropdowns into a list like:

        <noscript>
            No javascript enabled? Here are all the links:
            <a href='mysite.xyz/blue/2009/'>mysite.xyz/blue/2009/</a>
            <a href='mysite.xyz/blue/2010/'>mysite.xyz/blue/2010/</a>
            <a href='mysite.xyz/red/2009/'>mysite.xyz/red/2009/</a>
            <a href='mysite.xyz/red/2010/'>mysite.xyz/red/2010/</a>
        </noscript>

    My question is: if I put these links in a <noscript> tag like this, will I be penalized by search engines such as Google? I've already been doing this for some navigational elements that required offsets. Now, however, I would be listing a whole set of links here too. I want to provide them mostly so that Google can actually index my pages, but users without JavaScript can still navigate too. Your thoughts? Also, even though some links appear to have been indexed, I am not 100% sure, which is why I'm asking.

    Read the article

  • best SEO method for dates in the URL structure? [closed]

    - by Haroldo
    I'm working on an events website, so dates are very important search terms, e.g. 'whats on on fri 14th september'. I've seen it done in various ways, for example:

        domain/whats-on/city-hall/14-09-2010/event-name.html
        domain/whats-on/city-hall/2010/09/14/event-name.html

    The first is 'shallower'; the second could be clearer for Google to interpret as a date. Has anyone else got any experience or input?

    Read the article

  • How to deal with missing items the SEO way?

    - by Brandon Montgomery
    I am working on a public-facing web site which serves up articles for people to read. After some time, articles become stale and we remove them from the site. My question is this: what is the best way to handle the situation when a search engine visits a URL corresponding to a removed article? Should the app respond with a permanent redirect (301 Moved Permanently) to an "article not found" page, or is there a better way to handle this?
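
    Editor's note: a sketch of the two responses commonly weighed for removed content, assuming a Node/Express app (the question is framework-agnostic, and findArticle is a hypothetical lookup); the 410 option is an editorial addition, not from the question:

        const express = require('express');
        const app = express();

        app.get('/articles/:id', (req, res) => {
            const article = findArticle(req.params.id); // hypothetical database lookup
            if (article) {
                return res.send(article.body);
            }
            // Option A: 301 Moved Permanently to an explanatory page
            // return res.redirect(301, '/article-not-found');
            // Option B: 410 Gone tells crawlers the article was removed deliberately
            res.status(410).send('This article has been removed.');
        });

        app.listen(3000);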

    Read the article

  • SEO problem for new dictionary site: Google hasn't indexed the content

    - by John
    I loaded about 15,000 pages (letters A and B of a dictionary) and submitted a text sitemap to Google. I'm using Google's site search with advertisements as the planned mechanism for getting around my site. Google Webmaster Tools accepted the sitemap as good but then did not index the pages. My index page has been indexed by Google, but at this point it does not link to any of the content pages. So to get Google's search to work I need to get all my content indexed. It appears Google will not index from the sitemap alone, so I was thinking of adding pages that spider in links from the main index page. But I don't want to programmatically create a bunch of pages linking to all the content without knowing whether this has a chance of working. Eventually I plan on having about 150,000 pages, each page being a word or phrase being defined. I wrote a program that pulls these from a dictionary database. I would like to show the content I have to anyone interested, to demonstrate the value of the dictionary in relation to the dictionary software I'm completing. Any suggestions for getting the entire site indexed by Google so I can appear in the search results? Thanks
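
    Editor's note: a sketch of the "pages that spider in links" idea the poster describes (the /define/ URL pattern and the helper are hypothetical): chunk the entries into crawlable index pages, each linked from the main index page:

        // words: array of dictionary slugs, e.g. ['aardvark', 'abacus', ...]
        function buildIndexPages(words, perPage) {
            var pages = [];
            for (var i = 0; i < words.length; i += perPage) {
                var links = words.slice(i, i + perPage)
                    .map(function (w) { return '<a href="/define/' + w + '">' + w + '</a>'; })
                    .join('\n');
                pages.push('<html><body>' + links + '</body></html>');
            }
            return pages; // write each to its own URL and link them all from the home page
        }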

    Read the article

  • Can rel="next" and rel="prev" be ignored in blog listings?

    - by Saahil Sinha
    We have a blog which is currently spread across 9 pages. Every page has a unique title - page 1, page 2, page 3 and so on - and, as it's a blog, every page has 10 unique listing entries. Can rel="prev" and rel="next" be safely ignored, given that these are listing pages and not the content pages of an article? Everything I read through Google Search says that rel="next" and rel="prev" should be applied where content is spread across multiple pages. But, as it's a blog, it has blog listings, and every listing page has unique content. This is the blog: http://www.mycarhelpline.com/index.php?option=com_easyblog&view=latest&Itemid=91. Please advise: by ignoring rel="next" and rel="prev", are we inviting Google to treat the blog listing pages as duplicates?
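
    Editor's note: for reference, the tags under discussion are plain <link> elements in each listing page's head, e.g. on a hypothetical page 2:

        <!-- in the <head> of /blog?page=2 (hypothetical URLs) -->
        <link rel="prev" href="/blog?page=1">
        <link rel="next" href="/blog?page=3">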

    Read the article

  • Can Google “see” this custom JavaScript code which displays links from an external site on mine?

    - by webmasters
    I have JavaScript code on my site that displays links from another site. This is what I have in my source before the page loads:

        <script language="JavaScript" type="text/javascript">showLink(1);</script>

    This is what I copied from my source after the page has loaded:

        <script language="JavaScript" type="text/javascript">showLink(1);</script>
        <a rel="nofollow" target="_blank" class="anc" href="http://x5.external_site.net/sc/out.php?s=5483&amp;o=http%3A%2F%2Fwww.bluetooth.com">Bluetooth Devices</a>

    Can Google see this link?

    Read the article

  • Interesting links week #24 and #25

    - by erwin21
    Below is a list of interesting links that I found this week:

      Interaction:
        • Design Usability and All About It
      Frontend:
        • CSS Lint – CSS Cleaning Tool
        • 10 HTML Entity Crimes You Really Shouldn’t Commit
      Development:
        • OWASP Top 10 for .NET developers part 7: Insecure Cryptographic Storage
        • C#/.NET Fundamentals: Choosing the Right Collection Class
      Mobile:
        • Tips to Design a Website for Mobile
      Marketing:
        • 30 (New) Google Ranking Factors You May Over- or Underestimate
      Other:
        • 5 Little-Known Web Files That Can Enhance Your Website

    Interested in more interesting links? Follow me on Twitter: http://twitter.com/erwingriekspoor

    Read the article

  • Googlebot sees “Sorry, we have no imagery here” on pages with Google Maps

    - by friism
    I have a site with Google Maps on most of the pages. When inspecting content keywords in Google Webmaster Tools, the content keywords identified by Googlebot for the site include "imagery", "sorry" and "here". These turn out to be part of an error message returned by Google Maps: "Sorry, we have no imagery here". I cannot reproduce this error with normal clients, nor does "fetch as Google" show it. The problem is presumably that Googlebot tries to execute some of the Google Maps JavaScript but then shoots itself in the foot and records the error message. A Google search for "Sorry, we have no imagery here" shows that this problem is endemic to sites across the internet, including Yelp and many others. I'd like to convince Google that my site is not about imagery and being sorry, but I'd also like to keep the maps in place. I guess one option would be to transition to static maps, but that's not a great alternative. There's some related discussion on Webmaster World, with no resolution.

    Read the article

  • Should each page of a blog listing have its own title?

    - by RandomBen
    Should example.com/Blog?Page=1, example.com/Blog?Page=2, etc. have the same title? I have done some research on this: SEOmoz's tools say I have duplicate titles, and so does Google Webmaster Tools. But if you look at top-end examples like http://www.seobook.com/blog and http://www.seomoz.org/blog, they both use the same title across all of their ?Page=X URLs. So which is the better choice, or does it even matter?
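
    Editor's note: a sketch of the differentiated alternative (hypothetical markup) - append the page number so each listing page gets a unique title:

        <!-- on example.com/Blog?Page=2 -->
        <title>Blog - Page 2 | Example.com</title>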

    Read the article

  • When the canonical page itself changes URL

    - by lulalala
    This is a continuation of the question "How to handle canonical URL changes like Stack Overflow". Say I have the canon URL:

        questions/11/car <--- canonically linked from --- questions/11/

    What will happen if I want to change the canon URL to questions/11/car-with-sgx? Obviously, questions/11/ will point to the new canon URL. But how should the old questions/11/car change to the new one? There are two ways:

        1. 301 redirect the old canon URL to the new canon URL
        2. have the old canon URL canonically link to the new canon URL

    According to this post: "[By using a canonical link instead of a redirect,] OldPage.html's rankings will drop over time due to fewer internal links, but the canonical tag won't make it disappear entirely. It could theoretically remain in their index until one of the following occurs:

        • it is redirected permanently via 301
        • it returns a 404 for an extended period of time (they will keep checking for a while before dropping a URL)
        • a meta robots "noindex" tag is added"

    If this is true, I really need to redirect from the old canon URL to the new canon URL, which means I need to keep a log of the previous old canon URLs of this content so I know what to redirect. This is a bit of a hassle to do.
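
    Editor's note: option 2 in concrete terms - the old URL keeps serving the page but declares the new canon in its head (relative URL shown for brevity; an absolute URL is usually recommended):

        <!-- served at /questions/11/car, the old canon URL -->
        <link rel="canonical" href="/questions/11/car-with-sgx">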

    Read the article

  • Is Google Analytics Part Of Google's Search Engine Algorithm?

    - by ub3rst4r
    I was wondering if anyone knows whether Google uses the data it receives from Google Analytics to help determine a website's SERP (search engine results page) position. For example, if my website is getting 1,000 users visiting from Canada and only 100 users visiting from the USA, does that mean my website will be ranked higher on Google.ca and lower on Google.com? And if a website is using Google Analytics, will it be ranked higher for its organic search engine keywords?

    Read the article

  • adding slugs to the URLs afterwards

    - by altuure
    We have had a website for the last 5 months, and we did not use slugs on bottom-level elements, so URLs looked like /apps/webmasters/badges/1100. Would it make sense to add the name to the URL at this point and redirect to the new URLs? I am interested in targeting more search terms and increasing page rank.

        /apps/webmasters/badges/1100 - redirected to and served at /apps/webmasters/badges/1100-supporter

    Or should I keep the old URLs as they are and create new URLs with slugs? I would also appreciate some advice on URLs already shared on Facebook or Twitter in those cases. Thanks in advance...
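
    Editor's note: a sketch of the redirect option, assuming a Node/Express front end; badgeSlugs is a hypothetical id-to-slug store:

        const express = require('express');
        const app = express();

        const badgeSlugs = { 1100: 'supporter' }; // hypothetical id -> slug lookup

        // Matches bare numeric ids only, so the slugged URL is served elsewhere
        app.get('/apps/webmasters/badges/:id([0-9]+)', (req, res) => {
            const slug = badgeSlugs[req.params.id];
            if (!slug) return res.sendStatus(404);
            // 301 so old links (and shares on Facebook/Twitter) carry over
            res.redirect(301, `/apps/webmasters/badges/${req.params.id}-${slug}`);
        });

        app.listen(3000);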

    Read the article

  • Can I use <link> tags in the body of an HTML document?

    - by Edward Touw
    Can I use <link> tags in the body of an HTML page? I tried to find the answer to this question but found contradictory information. When adding Schema.org microdata markup to an HTML page, I want to add canonical info in a <link> tag like this:

        <div itemscope itemtype="http://schema.org/Book">
            <span itemprop="name">The Catcher in the Rye</span>—
            <link itemprop="url" href="http://en.wikipedia.org/wiki/The_Catcher_in_the_Rye" />
            by <span itemprop="author">J.D. Salinger</span>
        </div>

    I got the example code above from Schema.org. According to them, this is the way to go for people who want to add a canonical reference to an itemprop but don't want to place a hyperlink on their website. The W3C, however, clearly states that <link> tags should only be placed within the head section, which would make the Schema.org example invalid. If I want to stick to correct markup, which advice should I follow?

    Read the article

  • A frequently updated mixed-bag blog OR several seldom-updated niche sites?

    - by Melanie
    Background: I am a member of the website HubPages, where I have about a hundred articles (and I'm always writing more). HubPages' revenue model is a 40% ad share for them and a 60% ad share for users. While the userbase there is really friendly, the site is really slow and buggy, and there is a ton of content on HubPages that is copied from other sources. Upon flagging these articles it takes a ton of time for mods to remove them, and it's just generally dragging down my stuff. Furthermore, HubPages was hit really hard by Google's Panda update: http://www.google.com/search?hl=en&rlz=1B3GGLL_enUS426US426&tbm=nws&q=google+panda&

    Aside from the temporary problems I would deal with when removing content from HubPages and putting it on my own domain (duplicate content, etc.), I have another problem: which would be best for my articles? I have tons of articles in a wide variety of niches and would like to do what helps them perform best. I'm not a huge niche writer and have received wide criticism from the HubPages community because my articles don't perform as well as they could, since I don't use enough keywords within the text. I prefer to write more naturally, in a way that appeals to an audience, instead of keyword stuffing. Anyway, this is beside the point.

    My question: after removing my articles from HubPages, should I put them on one domain or spread them across multiple domains grouped roughly by topic? For example: a-bunch-of-articles.com OR travel-articles.com, financial-articles.com and knitting-articles.com. (I know those domains aren't available; it's just an example.) Here are the pros and cons of each:

      • A mixed-bag site like a-bunch-of-articles.com may not perform as well because of its mixed-bag nature.
      • A mixed-bag site would be updated far more frequently than several niche sites; some niche sites might be updated so infrequently that a year could pass before a new article appears.
      • A mixed-bag site would be like putting all my eggs in one basket, whereas several niche sites would spread out my portfolio, so to speak.
      • A mixed-bag site would be cheaper: $14 (two-year registration) plus hosting and I'm good to go.
      • A mixed-bag site wouldn't allow me to easily target keywords, but then again, isn't HubPages pretty much a mixed-bag site?

    Read the article

  • Dynamic meta description and keyword tags for your MasterPages

    - by Aamir Hasan
    Today we're going to look at a technique for dynamically inserting meta tags into your master pages. By taking control of the head tag and inserting your own HtmlMeta controls you can easily customise these tags. You might have noticed that when you create a new master page in Visual Studio, your <head> tag gets decorated with a runat="server" attribute. ASP.NET doesn't add this kind of decoration to any other HTML tags (although you are free to add it if you want). So what makes the head tag special? By adding runat="server" you're actually converting the element into an HtmlHead control. That doesn't particularly matter for this tutorial, other than to note that, given a reference to the head control, you get all the extras that come with ASP.NET controls, such as access to its Controls collection. The HtmlMeta control lets us wrap up <meta> tags via ASP.NET code. To add a meta description we need to create an instance, set the Name property, set the Content property, and then add it to the head.

    ASP.NET (C#):

        protected void Page_Init(object sender, EventArgs e)
        {
            // Add meta description tag
            HtmlMeta metaDescription = new HtmlMeta();
            metaDescription.Name = "Description";
            metaDescription.Content = "Short, unique and keyword-rich page description.";
            Page.Header.Controls.Add(metaDescription);

            // Add meta keywords tag
            HtmlMeta metaKeywords = new HtmlMeta();
            metaKeywords.Name = "Keywords";
            metaKeywords.Content = "selected,page,keywords";
            Page.Header.Controls.Add(metaKeywords);
        }

    ASP.NET (VB.NET):

        Protected Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Init
            ' Add meta description tag
            Dim metaDescription As HtmlMeta = New HtmlMeta()
            metaDescription.Name = "Description"
            metaDescription.Content = "Short, unique and keyword-rich page description."
            Page.Header.Controls.Add(metaDescription)

            ' Add meta keywords tag
            Dim metaKeywords As HtmlMeta = New HtmlMeta()
            metaKeywords.Name = "Keywords"
            metaKeywords.Content = "selected,page,keywords"
            Page.Header.Controls.Add(metaKeywords)
        End Sub

    Read the article

  • How does Google handle site traffic in Google Analytics?

    - by Hamidreza
    I have a site at www.example.com and I have put Google Analytics JavaScript snippets on it. I have made an app for my site, and I want every user of the app to visit the site through the app's built-in browser (I am using C# for the application and the .NET web browser control). The app will open www.example.com/appvisit, and I have put the Google Analytics scripts on that page and nothing else. I also want to disallow the /appvisit address in my robots.txt file. Is there any problem with doing this? Will Google crawl the /appvisit directory? Does Google frown on this? And will Google consider this traffic genuine and normal? Thanks
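
    Editor's note: the robots.txt rule being described would look like this (whether it is wise is exactly what the question asks):

        User-agent: *
        Disallow: /appvisit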

    Read the article

  • Is the W3C standard a major factor when Google decides SERP position?

    - by Camran
    I have a dynamic PHP website whose index page alone has around 800 errors according to the W3C validator online. I have tried checking major websites like eBay, Stack Overflow and others too, all with around 400 errors. So my first thought is: what good is that validator when it always displays errors? Secondly, will the errors affect my SERP ranking? That is, will fixing these errors as well as I can increase my Google search position? Thanks

    Read the article

  • Many Stack Overflow users' pages have no Google PageRank and are not indexed - why?

    - by Marco Demaio
    If you go to my user page on Stack Overflow and check it with the Google Toolbar, you can see it has no PageRank at all (this happens for almost any user page, even people with much higher reputation; the only exceptions seem to be the users on page 1, plus some other users who have PR). My user page's PageRank is not only zero, but not calculated at all. When PR is zero or less than 1 but calculated, the Google bar shows white; when the PR is not even calculated, as on my user page, the bar shows grey. I furthermore discovered that my user page is NOT EVEN INDEXED on Google. A simple test is searching on Google for the exact page URL, "http://stackoverflow.com/users/260080/marco-demaio", and you will see no result. The question is: how can this be? This is really weird to me for the following reason: if you search on Google for "Marco Demaio" on Stack Overflow only (you can do this by searching "site:stackoverflow.com Marco Demaio"), the search results show hundreds of asking/answering question pages where I was 'tagged'! Let's check one of these, the first one that appears now (it shows one of the questions I asked). We can be sure this page is indexed in Google because it comes up in a search. Moreover, its PR is calculated; it's probably nearly zero, but still, some PR flows there - the PR bar is not grey, but white. The page shown above has links to my own user page. I checked its source code, and the links are not hidden or set with rel="nofollow"; moreover, I can't see any meta tag excluding the links on the page from being followed. So what's happening? Why does Google not see my user page at all? Did Stack Overflow do something to achieve this? If yes, what did they do? Any explanation is really appreciated (as always). P.S. Obviously I also checked the code of my user page, but I could not find meta tags excluding it from Google search. P.S. 2: In a desperate adventure I also checked Stack Overflow's robots.txt, but it does not seem to exclude user pages.

    UPDATE 1: Following up on some answers, I did some more research. Leaving aside the PR problem for a while (since PR is not science) and looking only at the problem of the Stack Overflow user page NOT BEING INDEXED: pages do not seem to be indexed by Google because of the user's reputation. This user, for instance, has 200 points less reputation than me NOW and his page is indexed (while mine is not). Nor does it seem to be connected with the number of months you have been on Stack Overflow: this user (almost my same reputation) has been there for 3 months only and his page is indexed (while mine is not, and I have been a user for 7 months). It's bizarre!

    UPDATE February 2011: As of today, the page got indexed by Google; at least when you search for "site:stackoverflow.com Marco Demaio" it's the first result. The amazing thing is that it still has NO PageRank at all: the Google toolbar states loud and clear "No PageRank information available". It's odd!

    Read the article

  • Does Google use any “Language” flags / tags set within a PDF file when determining its language?

    - by Ally Ak
    When determining the language of an HTML page, I understand that Google looks at any language declarations that the page owner has set, and then also applies its own language-detection algorithms. But does Google similarly look at language metadata set in PDF files when determining a PDF file's language? (Authors of PDF files can set document-wide properties describing the language, or languages, contained within.) Or does Google rely exclusively on language-detection algorithms and disregard the language flag set within the PDF file? Can anyone shed any light?

    Read the article

  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days - in particular the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward.

    In his presentation Dan introduces the concept of Observability. I haven't had the opportunity to discuss with Dan his specific definition of the term, but in practical retail terms I would say it means that through social media (and other web channels such as search) we can analyze and track processes by measuring indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers rather than what a particular individual "Likes".

    The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly. And you can do this almost in "real time", rather than through infrequent surveys that provide a "rear view" picture of your consumers' behaviour. For instance, you could envision identifying when a particular set of fashion styles is breaking out from the pack and committing a re-buy, or monitoring when the preference for a specific mobile device has declined and hence markdowns should be considered, or seeing how demand for a specific ready-made food typically flows across regions and managing the inventory accordingly. Search, blogging, website and store data may all need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail), but so are the benefits. As Andrea says, for the first time we can start getting insight into "why" the business is performing in a certain way, rather than just reporting on what is happening. And it is not just about the data volumes. Tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the decision-making process that buyers, merchandisers and supply chain managers follow. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance.

    I would love to hear your opinions on these trends and where you think retail is heading to exploit these topics - please email me: [email protected]

    Read the article
