Search Results

Search found 5738 results on 230 pages for 'seo friendly'.

  • Difference between "/" at end of URL and without "/" [closed]

    - by user702325
    Possible Duplicate: Does it make a difference if your url ends in a trailing slash or not? Why treat these as different URLs? I am doing 301 redirects in my WP application using .htaccess and have mapped some of the URLs which have either been removed from the new domain or had their URL structure changed. While doing this, a doubt came up. I have the following rules in my .htaccess file: RewriteCond %{HTTP_HOST} ^old.com$ [OR] RewriteCond %{HTTP_HOST} ^www.old.com$ RewriteRule ^tag/waiting$ http://www.new.com/tag/relationships [R=301,L] While checking this I found that in some places the URL appears as http://www.new.com/tag/relationships, while in others it is http://www.new.com/tag/relationships/. Both refer to the same location, but I am not sure whether this makes any difference to SEO and search engines. Please suggest whether the way I am doing the mapping is correct, or whether I need to modify it to handle both URLs.
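
    A minimal sketch (reusing the old.com/new.com placeholders from the question) of one way to catch both forms with a single rule, by allowing an optional trailing slash in the pattern:

        # Sketch only: match /tag/waiting and /tag/waiting/ with one rule and send
        # both to the same target via a single 301 (domains are placeholders).
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?old\.com$ [NC]
        RewriteRule ^tag/waiting/?$ http://www.new.com/tag/relationships [R=301,L]

    Whichever form is chosen as canonical on the new site, redirecting the other form to it with a 301 is generally enough to keep the two URLs from being treated as duplicates.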

  • Looking to use .htaccess to create SEO friendly URLs

    - by Ray
    For SEO purposes, I need someone to modify my .htaccess file. Here's what I need to do. Current URL: http://www.abc.com/index.php?page=show_type&ord=1 New URL: http://www.abc.com/amazing Please note that if someone types in http://www.abc.com/amazing, they must be served content from the current URL, but the new URL must stay in the address bar. I tried this and it didn't seem to work: RewriteEngine On RewriteRule ^/?amazing/([^/]+)/([^/]+)/([^/]+)/([^/]+)/([^/]+)/ /index.php?page=show_type&ord=1
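
    One observation: the pattern above requires five extra path segments plus a trailing slash, which a request for /amazing never has, so the rule never matches. A hedged sketch of a simpler rule for this case; in per-directory .htaccess context the pattern carries no leading slash, and leaving out the R flag keeps the rewrite internal so the address bar continues to show /amazing:

        # Sketch only: internally map /amazing to the real script; no R flag means
        # no external redirect, so the browser keeps /amazing in the address bar.
        RewriteEngine On
        RewriteRule ^amazing/?$ /index.php?page=show_type&ord=1 [L,QSA]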

  • Which netbook is exceptionally hackintosh friendly?

    - by GeneQ
    I'm a happy owner of a hackintosh-ed Dell Mini 9. I would love to hear the success stories from other people who have managed to install Mac OS X successfully on other brands and models of netbooks. "Exceptionally friendly" here is defined as having, at minimum, fully functioning wireless, sound and graphics when installed with an unmodified version of OS X Leopard.

  • Technical/Programming/Non-SEO Pros and Cons of WWW or no-WWW?

    - by Ingenutrix
    What are the technical/programming/non-SEO pros and cons of www vs. no-www, for domains as well as subdomains? From Jeff Atwood's Twitter at http://twitter.com/codinghorror/status/1637428313: "sort of regretting the no-www choice because it causes full cookie submission to ALL subdomains. :(" What does this mean? Is there a blog post or article detailing this? What other specific issues, and the reasons behind them, should be considered for www vs. no-www? Update: On searching for more info on this topic, I found the following helpful (in addition to Laurence Gonsalves' answer): Dropping the WWW Prefix; impact on search results: Jivlain's and Isaac Lin's comments; Use Cookie-free Domains for Components; on Stack Overflow: Should I default my website to www.foo or not?; on Stack Overflow: When should one use a 'www' subdomain?
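
    To make the cookie point concrete, a hedged sketch of the usual arrangement when choosing the www form: 301 the naked host onto www so that cookies can be scoped to www.example.com instead of example.com, and are therefore not sent to every subdomain (example.com is a placeholder):

        # Sketch: redirect the naked domain onto www so session cookies can be set
        # for www.example.com rather than the whole *.example.com cookie space.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]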

  • How Does DotNetNuke Stack Up For SEO? E-Commerce?

    - by user326502
    I've heard that DotNetNuke takes a bit of a hit for search engine optimization. I'm not criticizing the platform, by the way; I love DNN. This is just what I've heard. As I understand it, the impact comes from repetitive content, table-based layouts, and lots of extra markup. I've got a friend who would like to start an e-commerce site using DNN and some modules from Snowcovered. I was just wondering whether DNN would be a good platform to choose. The idea is attractive because of the ease with which a DNN commerce system can be deployed. Search-engine-friendly URLs aren't the problem - the modules handle that; the question is whether DNN as a whole would be a good platform for this. Thanks very much for any help or advice.

  • CSS-Friendly Menu adapter that emits the same markup as .NET 4.0

    - by Joe
    For .NET 2.x/3.x there exists a CSS-Friendly Adapter on CodePlex that emits the markup for an ASP.NET Menu control as a ul. The .NET 4.0 Menu control will also emit a ul, but the CSS class names are different from those emitted by the CSS-Friendly Adapter 1.0 on CodePlex. In the interests of having a single version of the CSS for .NET 2/3/4 sites, I want to create a version of the CSS-Friendly menu adapter that emits the same markup as the .NET 4.0 Menu control. Before doing so, I thought I'd ask here to see whether it's already been done, so I don't reinvent the wheel. Anyone?

  • How bad is it for SEO to "redirect" a user depending on his browser locale?

    - by bgy
    For a personal page I use the MultiViews option in Apache to determine which page a visitor should see depending on their locale. Here is what I do: Options MultiViews AddLanguage fr .fr AddLanguage en .en <IfModule mod_negotiation.c> LanguagePriority fr en </IfModule> I am wondering whether this is bad for SEO, since Googlebot will probably land on the 'fr' or 'en' version but not both. Would it be fixed if I added a link inside my page to the other language version?
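
    For reference, a sketch of the same negotiation block written out, with ForceLanguagePriority added so a request that matches several (or no) acceptable variants still receives one concrete page instead of a 300/406 response. This does not by itself settle the SEO question, which mainly comes down to whether crawlable links to each language variant exist:

        # Sketch of the negotiation setup from the question, plus ForceLanguagePriority
        # so ambiguous or unmatched Accept-Language requests still get a single page.
        Options +MultiViews
        AddLanguage fr .fr
        AddLanguage en .en
        <IfModule mod_negotiation.c>
            LanguagePriority fr en
            ForceLanguagePriority Prefer Fallback
        </IfModule>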

  • h3 tag text/image replacement, does this hurt SEO?

    - by Mike007
    I'm trying to "replace" text with an image in all of my h3 tags. I want the image to be in the HTML to avoid multiple h3 classes, as this is being done for a portfolio and there will be about 10 h3 tags on the page. My question is: will this be viewed by Google as an attempt to hide or stuff keywords for SEO purposes? If it will, does anyone know a better way to accomplish this? CSS: h3 { display: block; width: 156px; height: 44px; overflow: hidden; } HTML: <h3><img src="images/project001.png" alt="Recent Projects" />Recent Projects</h3>

  • Is it bad for SEO to have an 'article' published under two URLs?

    - by Alichad
    Hi, on our new website we publish an article once and can tag it to appear in several sections, e.g. blahblah.com/insight/10-05-21/Buzzcity-releases-mobile-game-library.aspx blahblah.com/international_media/10-05-21/Buzzcity-releases-mobile-game-library.aspx Is it better for SEO to have the two different URLs, which include important keywords like 'insight' and 'international media', or is it better to have a single generic URL, e.g. blahblah.com/articles/10-05-21/Buzzcity_releases_mobile_game_library.aspx? I read somewhere that Google doesn't like the same content 'duplicated' in two (or three) places - I am not a techie. Thanks

  • How bad is it to use display: none in CSS?

    - by Andy
    I've heard many times that it's bad to use display: none for SEO reasons, as it can look like an attempt to push in irrelevant popular keywords. A few questions: Is that still received wisdom? Does it make a difference if you're only hiding a single word, or perhaps a single character? If you should avoid any use of it, what are the preferred techniques for hiding (in situations where you need it to become visible again under certain conditions)? Some references I've found so far: Matt Cutts from 2005, in a comment: If you're straight-out using CSS to hide text, don't be surprised if that is called spam. I'm not saying that mouseovers or DHTML text or have-a-logo-but-also-have-text is spam; I answered that last one at a conference when I said "imagine how it would look to a visitor, a competitor, or someone checking out a spam report. If you show your company's name and it's Expo Markers instead of an Expo Markers logo, you should be fine. If the text you decide to show is 'Expo Markers cheap online discount buy online Expo Markers sale ...' then I would be more cautious, because that can look bad." And in another comment on the same article: We can flag text that appears to be hidden using CSS at Google. To date we have not algorithmically removed sites for doing that. We try hard to avoid throwing babies out with bathwater. (My emphasis) Eric Enge said in 2008: The legitimate use of this technique is so prevalent that I would rarely expect search engines to penalize a site for using the display: none attribute. It’s just very difficult to implement an algorithm that could truly ferret out whether the particular use of display: none is meant to deceive the search engines or not. Thanks in advance, Andy

  • Sitemaps - do I need to submit each sitemap in sitemap_index.xml to Google Webmaster tools?

    - by iSumitG
    I have a WordPress blog on my CentOS server. There is no sitemap.xml in the root directory, but there is a sitemap_index.xml file in the root directory which contains the following code: <?xml-stylesheet type="text/xsl" href="http://mywebsite.com/wp-content/plugins/wordpress-seo/css/xml-sitemap-xsl.php"?> <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> <sitemap> <loc> http://mywebsite.com/post-sitemap.xml </loc> <lastmod> 2012-12-18T19:47:47+00:00 </lastmod> </sitemap> <sitemap> <loc> http://mywebsite.com/page-sitemap.xml </loc> <lastmod> 2012-12-18T17:32:49+00:00 </lastmod> </sitemap> </sitemapindex> My question: which sitemaps should I submit to Google Webmaster Tools? The options are: only sitemap_index.xml; only post-sitemap.xml and page-sitemap.xml; all 3 (sitemap_index.xml, post-sitemap.xml and page-sitemap.xml); or, if there is another option, please let me know.

  • Weird entry for robots.txt on a Naked Domain in Google Webmaster Tools

    - by Metalshark
    We own a .co.uk address and use an Internet hosting company that has made mistakes around DNS in the past. Our main site is hosted on www., and their reluctance to allow editing of AAAA records online means our naked domain does not resolve. Currently, when we attempt to reach the naked version, there is no entry for the browser to go to and it displays an unreachable page (nslookup just says Name: name of domain, with no further entries such as an IP or canonical name). We recently added the relevant TXT records to verify us to view both the www. version and the naked version of the domain in Google Webmaster Tools (in anticipation of the requests to our Internet host coming to fruition). Imagine our shock when double-checking Site configuration > Crawler access and finding an (admittedly failing) robots.txt with a dynamically generated HTML page (full of crude pop-up JavaScript) with references to 3 of our most prominent competitors. What could cause this to happen? As we are in the UK, I am assuming some DNS server is serving Google bad information. We are going to contact the Internet hosting company to fix our A and AAAA records once and for all, then check that they work in the US (using something like OpenDNS). Should we be doing more though, for instance informing Google (through Webmaster Tools) that we are now aware there is something currently wrong with our naked domain? UPDATE: We have fixed our A records (not AAAA) and that has resolved the issue. But if there are further actions we should take for effectively having had a parking page hosted on our active, visitor-heavy, SEO-rich domain that advertised our competitors to US visitors, what would they be?

  • Multiple stores for the same niche

    - by pandronic
    I started developing a new niche of products in my country about 3 years ago. That's when I opened my first store. Everything went fine until a year ago, when someone I thought was a friend secretly stole my idea and made his own competing store. I was pretty upset when I caught him and decided to make it as difficult as possible for him, so I made another 4 stores, trying to push him as low as possible in the search results. The new sites have similar products (although not 100% identical), slightly different titles, images and prices. They look different and are built on different e-commerce platforms. They are all hosted on the same server, have roughly the same backlinks, use the same Google account for Analytics, have the same support phone numbers, etc. I wasn't thinking that I was doing something fishy, so I didn't try to hide anything. The trouble is that those sites, after doing fine for a few months, dropped like bricks in the search results, almost to the point that they can't be found at all. At the moment, the only site that ranks relatively well is the original one, plus a couple of unimportant secondary pages from one of the other sites. How did this happen? Does Google have something against this practice? Did they take action by themselves when they realized that I was trying to monopolize this niche, or did my competitor report me for some kind of webspam? And more importantly, what do I do now? Do I shut down all but my original site and 301-redirect users to it from the others? Can I report my competitor for engaging in the same practice? (He fought back and now has 3-4 sites, some of which still rank kind of OKish; he has no idea about web development, SEO or marketing, he just crudely copies what I do and is slowly but surely starting to do better than me.)
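
    If the extra stores are retired, the usual consolidation step is a site-wide 301 from each secondary domain to the original store; a minimal sketch, with placeholder domain names:

        # Sketch only: send every URL on a retired duplicate store to the same path
        # on the original store with a permanent redirect (domains are placeholders).
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?secondstore\.example$ [NC]
        RewriteRule ^(.*)$ http://www.originalstore.example/$1 [R=301,L]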

  • How do you find all the links to disavow for a Google reconsideration request? [duplicate]

    - by QF_Developer
    This question already has an answer here: How to identify spammy domains giving backlinks to my site (to submit in disavow links in WMT) 2 answers A few months ago I received the following notification in Google Webmaster Tools for a website I look after. Unnatural links to your site—impacts links Google has detected a pattern of unnatural, artificial, deceptive, or manipulative links pointing to pages on this site. Some links may be outside of the webmaster’s control, so for this incident we are taking targeted action on the unnatural links instead of on the site’s ranking as a whole. Learn more. The question here is: should we actively attempt to disavow these links, given that the action is seemingly targeted at just a bunch of keywords? I've downloaded the inbound links sample from Google Webmaster Tools and so far I've been through the disavow and reconsideration request process 6 times, each round taking 2-3 weeks, only to be supplied with just 2 more links that Google doesn't approve of. At this rate it will take me the rest of my natural life to clean up all these spammy links! It seems disavowing is futile, as they haven't implemented broad actions against the website as a whole and (from what I can gather) have already nullified the value of those offending links. Under the quoted statement above, however, is a reconsideration request button that seems to imply I should be actively doing something here? UPDATE 14th October -- I have since created a small .NET application into which you can feed the CSV sample links file from Google Webmaster Tools. What this tool does is crawl all the links and look for specific linking patterns, as per some configurable match strings. I realised that many of the links Google is taking issue with were created by a rogue SEO firm we hired several years ago. All the links are appended with 1 of 5 different descriptions. The application I built uses some regexes to isolate any link sources with these matching appendages and automatically builds the disavow txt file. In the end it had to come down to an algorithm, as manually disavowing links on this scale would take weeks! I will post the app here once I've cleaned it up.

  • Average SPA weight [on hold]

    - by Emmanuel Istace
    First, sorry for my noob questions, but I'm mainly a Windows developer and not a web developer :) I'm developing a single-page application with a lot of CSS and JavaScript. For now the page is 1.3 MB, composed of 5 sections. Here are the rounded stats: Document: 10 kB; Style: 60 kB; Images: 450 kB (already compressed, including a big gallery of thumbnails); JavaScript: 700 kB (600 kB of "framework" - jQuery, jQuery UI, Bootstrap, Modernizr, Waypoints, ... - plus 100 kB of custom JS); Fonts: 125 kB. And the site is not finished yet (it will include the Google Maps API, and some others...). My questions are: Do you have any statistics about the average weight of an SPA? As this is the whole website, do you think it's acceptable? Is lazy loading (for images) a solution? What would the impact on SEO be? Is Google's "200kb rule" still relevant? Do you know of good tools to detect which JavaScript code is not used during the execution of a page, and hence where these 700 kB of framework JS could be trimmed? Can a caching strategy be an answer? Thank you in advance for your help! Best regards

  • Can prefixing a dash reduce the search engine rating?

    - by LeoMaheo
    Hi anyone! If I prefix a dash to GUIDs in my URLs on my web site, in this manner: example.com/some/folders/-35x2ne5r579n32/page-name will my SEO rating be affected? Background: On my site, people can look up pages by GUID and by path. For example, both example.com/forum/-3v32nirn32/eat-animals-without-friends and example.com/forum/eat-animals-without-friends could map to the same page. To indicate that 3v32nirn32 is a GUID and not a page name, I thought I could prefix a - and then my webapp would understand. But I wouldn't want my search engine rating to drop, and prefixing a dash in this manner seems weird, so perhaps Googlebot would lower my rating. Hence my question: do you know if my search engine rating might drop? (Today or in the future?) (I could also e.g. prefix id-, so the URL becomes example.com/forum/id-3v32nirn32, but then people cannot create pages that start with the word "id".) Kind regards, Magnus
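
    A hedged sketch of how the dash convention could be routed on the server side, if Apache sits in front of the webapp; show.php and the guid/name parameters are made-up names for illustration, not anything from the question:

        # Sketch only: a leading dash marks a GUID lookup, anything else is treated
        # as a page name. Script path and query parameters are illustrative.
        RewriteEngine On
        RewriteRule ^forum/-([A-Za-z0-9]+)(?:/.*)?$ /forum/show.php?guid=$1 [L,QSA]
        # Only treat the request as a page name if it is not an existing file/dir.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^forum/([^/-][^/]*)$ /forum/show.php?name=$1 [L,QSA]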

  • How to get search engines to properly index an ajax driven search page

    - by Redtopia
    I have an ajax-driven search page that will allow users to search through a large collection of records. Each search result points to index.php?id=xyz (where xyz is the id of the record). The initial view does not have any records listed, and there is no interface that allows you to browse through all records. You can only conduct a search. How do I build the page so that spiders can crawl each record? Or is there another way (outside of this specific search page) that will allow me to point spiders to a list of all records? FYI, the collection is rather large, so dumping links to every record in a single request is not a workable solution. Outputting the records must be done in multiple requests. Each record can be viewed via a single page (e.g. "record.php?id=xyz"). I would like all the records indexed, but without anything being indexed from the sitemap page that shows where the records exist, for example: <a href="/result.php?id=record1">Record 1</a> <a href="/result.php?id=record2">Record 2</a> <a href="/result.php?id=record3">Record 3</a> <a href="/seo.php?page=2">next</a> Assuming this is the correct approach, I have these questions: How would the search engines find the crawl page? Is it possible to prevent the search engines from indexing the words "Record 1", etc. and "next"? Can I output only the links? Or maybe something like:

  • WordPress page title repeated in SOME pages

    - by cmykrgbb
    I have created a WordPress site and the titles were working just fine. Then, some time and several plugins later, I noticed that on SOME pages I get the title repeated twice. Example of a wrong page title: Contact - NAME | NAME Example of a normal title: Our Services | NAME Now, if I go to General Settings and change the title, it changes both, so no improvement. SEO by Yoast has an option to reset page titles, but that just removes all titles, leaving the current URL as the page title, so that's no good either. Here is the code I originally had: <title><?php wp_title(''); ?><?php if(wp_title('', false)) { echo ' | '; } ?><?php bloginfo('name'); ?></title> Here is the code I am using now: <title><?php wp_title('|'); ?></title> To sum up, I think somewhere in the database there's a wp_title repeated: once using '-' as the separator, and another (the current one) using '|'. Any help will be most appreciated, thanks!

  • Merging multiple top-level domains into a single domain

    - by user23089
    My client had multiple top-level domains. Each one represented an insurance program within a specific vertical. Across the sites at these alternate domains there was a 30/70 mix of duplicate vs. original content. Some of the alternate domains ranked very well for their target keyphrase groups, while others were absent from results pages. We advised the client to merge the multiple domains into their existing main domain, for usability and SEO reasons. We recently ran the merger. Here was our process: (1) on the main domain, transfer the content such that it matches the content on the various alternate domains 1-for-1; (2) set up Google Webmaster Tools on the main domain; (3) push the new content on the main domain live and submit a corresponding sitemap to Google; (4) establish 301 redirects on the alternate domains, such that each alternate-domain URL points to its respective page on the main domain. We did this 12 days ago, and pages (previously on the alternate domains) that had ranked well on Google have now plummeted or disappeared entirely. Did we do the right thing by merging multiple top-level domains into a single domain? Is this initial dip in rankings normal? How soon should we expect to see the previous rankings return?
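
    For step 4, a hedged sketch of what the per-URL mapping on an alternate-domain vhost might look like; the paths and domain names are placeholders, not the client's actual URLs:

        # Sketch only (placeholder paths/domains): each alternate-domain URL gets a
        # permanent redirect to its matching page on the main domain. mod_alias uses
        # the first matching directive, so the catch-all goes last.
        Redirect 301 /auto-program http://www.maindomain.example/programs/auto-program
        Redirect 301 /home-program http://www.maindomain.example/programs/home-program
        # Anything left unmapped falls back to the main domain's home page.
        RedirectMatch 301 ^/.*$ http://www.maindomain.example/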

  • Do any CDN services offer multiple urls (or aliases) for your files?

    - by Jakobud
    Let's say a company has multiple commercial web properties that happen to use a lot of the same images on each site. For SEO reasons, the sites must not appear to be related to each other in any way. This means that the sites can't all link to the same image, even though they all use the same one. Therefore, an image is uploaded to each site and served from each site separately. In order to improve maintainability and latency, let's say the company wanted to use a CDN service. What I'm wondering is: if you upload a file, like an image, to a CDN, is there basically one single URL that you access that image at? Or do some (or all) CDN services offer alias URLs so that you can access the same resource from multiple URLs? Example of undesirable situation: Both sites link to the same file URL Site ABC links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/> Site XYZ links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/> Example of DESIRABLE situation: Both sites link to the same file via different URLs Site ABC links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/> Site XYZ links to <img src="http://123.cdnservice.com/some-alias-path/myimage.jpg"/> So in the end, there is only one single file, myimage.jpg, on the CDN server, but it is accessible from multiple URLs. Is this possible with CDN services? I know this would make browsers cache the same image twice, but at least it would be better for maintainability. Only one file would ever have to be uploaded.

  • Webmaster Tools, www and no-www, duplicate content and subdomains

    - by Jay
    After many hours of research on many websites, I have not come to any conclusive answer on the specific issue I am trying to figure out. My company has two websites: a main one at www.example.com, and our self-hosted blog at subdomain.example.com, which is a subdomain of the first. The way Google sees these, with the www or without (called naked from now on), is that the www and naked versions are actually different URLs. I completely understand this. It is also advised that both should be set up in Google Webmaster Tools, which I have done. Correct me if I am wrong about having both set up. Now, it appears that we can set a preferred domain in Webmaster Tools only at the root domain level. The subdomain cannot have this and actually shows the message "Restricted to root level domains only". So it appears that the subdomain should follow what the root domain says, and our preferred setting there is to display www.example.com and not the naked version. That is one issue I have: one displays one way and the other displays another. Is it that we have the wrong redirects in place for the subdomain? Another question: does this have any effect on SEO, in regard to duplicate content on the web, given how we have set this up?

  • Reset / Remove - Google Keywords

    - by Herr Kaleun
    Summary: My site is ranking for filthy keywords and I would like to remove them from Google's rankings/keywords. Background: My server was hacked using the timthumb exploit/security vulnerability; apparently I was the last person on earth to read the news about the exploit, several months after it appeared. Anyway, the "hacker" was friendly enough to modify the index.php file in such a fashion that it generated random sexually oriented keywords when the website was fetched as Googlebot. So if you fetched it as Googlebot (i.e. as it gets indexed), you would get randomly generated keywords like: sex videos teenager teen sex adult sex preteen A LINK TO A RANDOM CONTENT OF MY WEBPAGE anime sex videos - a rough list, something similar to that, about 180-200 per page. I discovered it far too late, so Google had indexed me for the word "sex" and certain adult-oriented keywords, roughly 2000 of them. I removed all the content, took the site down, replaced index.php with a static HTML page, and added an "ERROR 410" title to the website to signal that the content is no longer here and has been removed permanently. I also applied for a manual review of my website about 1.5 months ago, but the keywords are still there, and strangely, some of the keyword rankings actually "improve" over time. Here are some screenshots from Webmaster Tools: Question: How can I remove these filthy keywords and get my website re-ranked as a "normal" website as fast as possible? I want to "REMOVE" the keywords if possible. Please help me or point me in a direction. Thank you
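
    One caveat worth making concrete: a page whose title says "ERROR 410" still returns HTTP 200 unless the server actually sends a 410 status, and it is the status code that tells Google the content is gone. A hedged sketch of forcing a real 410 Gone for everything except the replacement page (the index.html name is an assumption):

        # Sketch only: answer every request except the root and the replacement
        # static page with a real "410 Gone" status, instead of a 200 page whose
        # title merely says "ERROR 410".
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/(index\.html)?$
        RewriteRule ^ - [G]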
