Search Results

Search found 5738 results on 230 pages for 'seo friendly'.

  • How can I get an AdWords ad to show up for a specific term ASAP?

    - by Eric
    I have a very specific situation... I have a client who has a site, backed by a celebrity, selling a common product... so imagine my site is all about "Martha Stewart used cars" (that's not it, but you get the idea). My client wants to see their site show up ASAP in Google search results. While I'm waiting for organic search to kick in, recognize my site, index it properly, etc., I want to buy some AdWords ads for keywords like "Martha Stewart used cars" and "Martha Stewart used car" and so forth, and have the ads show up on the first page of search results. I've done this. The problem is that many, many other advertisers have set up ads on the keyword "used cars", so my Martha-specific ads are never shown. Even when I bid specifically on the keyword phrase "Martha Stewart used cars" and enter exactly that into Google, it doesn't show my ad. SO MY QUESTION... how/what can I do to get my ads to show... or really, can I do anything else to get my client's site to show on the results page? (I'm not interested in anything black-hat or illegal; I'm just trying to throw some resources at this situation so that folks looking SPECIFICALLY for "Martha Stewart used cars" get to the site quickly.) thanks-- Eric
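
    For reference, AdWords keyword match types control which searches can trigger an ad, and bidding on the exact long-tail phrase rather than the crowded head term is usually the first lever to check. A sketch of the match-type notation (the keyword is just the question's own example):

      [martha stewart used cars]     exact match: shown only for this exact query
      "martha stewart used cars"     phrase match: shown for queries containing this phrase
      martha stewart used cars       broad match: shown for loosely related queries too, competing with everyone bidding on "used cars"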

    Read the article

  • How can I tell GoogleBot that a subdirectory is now a subdomain? [migrated]

    - by cwd
    I had about a million pages of a catalog indexed under a subdirectory, and now that's moved to a subdomain. GoogleBot is crawling each one of them and getting a 301 redirect to the new location. Even though I have set up the redirect rule in the Apache sites-enabled configuration file (i.e. Apache does the redirect early on; PHP is not even getting loaded), the server isn't handling the load well. GoogleBot is making around 5 requests per second, and on top of my normal traffic that is hiking up the CPU for a few hours at a time. I checked Webmaster Tools and the corresponding documentation for a way to let Google know that the content had moved from a subdirectory to a subdomain, but with little luck. Basically the most helpful thing I saw said to just send 301 headers for the new location. How can I tell GoogleBot that a subdirectory is now a subdomain? If that is not an option, how can I more efficiently send 301 redirects out for a particular subdomain? I was thinking perhaps Nginx, but I'm not sure that I can run both Apache and Nginx side by side on port 80 for different subdomains.
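
    For context, a minimal sketch of the Apache-level redirect described above, assuming the catalog moved from example.com/catalog/ to catalog.example.com (all names hypothetical):

      # in the vhost for example.com, e.g. /etc/apache2/sites-enabled/example.conf
      <VirtualHost *:80>
          ServerName example.com
          # mod_alias answers the redirect before PHP is ever loaded
          RedirectMatch 301 ^/catalog/(.*)$ http://catalog.example.com/$1
      </VirtualHost>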

    Read the article

  • How long does it take Google to update all links after an R=301 redirect?

    - by romant
    I just changed the location of my blog and have done the appropriate redirects. Does anyone have knowledge of, or experience with, the delay before Google updates all the links across its index? The reason I ask: I wish to change the A record, which will eliminate the .htaccess file and thus render the redirect null and void. How long must I wait before undertaking this? Thank you.

    Read the article

  • Do URL shorteners affect Google page rank?

    - by DLux
    With the number of people passing around shortened URLs (through goo.gl, bit.ly, etc), I was wondering how these shortening services affect page rank in Google. Do they count as inbound links to your content or are they completely ignored by Google and other search engines?

    Read the article

  • How to create robots.txt for a domain that contains international websites in subfolders?

    - by aaandre
    Hi, I am working on a site that has the following structure:

      site.com/us - US version
      site.com/uk - UK version
      site.com/jp - Japanese version
      etc.

    I would like to create a robots.txt that points the local search engines to a localized sitemap page and has them exclude everything else from the local listings. So, google.com (US) would index ONLY site.com/us and take into consideration site.com/us/sitemap.html, google.co.uk would index only site.com/uk and site.com/uk/sitemap.html, and the same for the rest of the search engines, including Yahoo, Bing, etc. Any idea on how to achieve this? Thank you!
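
    One caveat worth noting: robots.txt cannot distinguish google.com from google.co.uk, because every regional Google crawls with the same Googlebot; per-region targeting is normally configured through geo-targeting in Webmaster Tools instead. What robots.txt can do is advertise one sitemap per region. A minimal sketch, assuming XML sitemaps exist at these hypothetical paths:

      User-agent: *
      Disallow:

      Sitemap: http://site.com/us/sitemap.xml
      Sitemap: http://site.com/uk/sitemap.xml
      Sitemap: http://site.com/jp/sitemap.xml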

    Read the article

  • Set all Apache sites offline with temporary static cached original pages

    - by rubo77
    I would like to take all virtual hosts on my server down for maintenance for some time. The temporary page should contain something like: "Sorry, the page www.xxx.com is down for maintenance. You can see the cached version here: ..." Then the trick: as long as the server is down, the user should see the cached version of the requested page from a cache such as Google's cache or similar. This would show the correct content for pages that are static anyway, and give the visitor the content they need in many cases, while I can shut down MySQL and other services that would usually be needed to render those pages. How can I set up a global page on all virtual hosts that receives the originally requested URL in PHP?
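
    A minimal sketch of one way to wire this up, using a hypothetical maintenance.php: mod_rewrite hands it the original host and path, and the script links to Google's cache of that URL:

      # included from every vhost while in maintenance mode
      RewriteEngine On
      RewriteCond %{REQUEST_URI} !^/maintenance\.php
      RewriteRule ^(.*)$ /maintenance.php?host=%{HTTP_HOST}&uri=$1 [L,QSA]

      <?php // maintenance.php - hypothetical sketch
      $url   = 'http://' . $_GET['host'] . '/' . ltrim($_GET['uri'], '/');
      $cache = 'http://webcache.googleusercontent.com/search?q=cache:' . urlencode($url);
      header('HTTP/1.1 503 Service Unavailable');
      echo 'Sorry, the page ' . htmlspecialchars($url) . ' is down for maintenance. ';
      echo 'You can see the <a href="' . htmlspecialchars($cache) . '">cached version here</a>.';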

    Read the article

  • [Disallow: /index.php] seems to block /my-beautiful-sef-url-123

    - by Jaroslav Záruba
    Hello, I have a robots.txt that looks like this:

      User-agent: *
      Disallow: /system/
      Disallow: /admin/
      Disallow: /index.php

    The obvious goal has been to prevent all the ugly URLs from being indexed, as they all begin with "/index.php". But for some reason, all URLs like /my-beautiful-sef-url-123 are listed under Crawl errors in Google Webmaster Tools with "URL restricted by robots.txt". (When I test such a URL it yields Allowed for both Googlebot and Googlebot-Mobile.) Can anyone help, please?

    Read the article

  • How can I return a 503 status in Apache without invoking external scripts?

    - by dan mackinlay
    I need to return a 503 status code from one of my sites while it's down for maintenance, in the time-honoured, search-engine-friendly fashion. I can't seem to work out how to do this without invoking external scripts, which I'd rather avoid. Is there an Apache directive that will allow me to return an arbitrary HTTP status code without resorting to hacks like invoking a PHP script that sets the status header?
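
    A minimal sketch using only mod_rewrite and mod_headers, with no external scripts (the maintenance page path is hypothetical):

      RewriteEngine On
      # serve the static maintenance page itself normally
      RewriteCond %{REQUEST_URI} !=/maintenance.html
      # everything else gets a 503
      RewriteRule ^ - [R=503,L]
      ErrorDocument 503 /maintenance.html
      # hint to crawlers when to retry, in seconds (requires mod_headers)
      Header always set Retry-After "3600"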

    Read the article

  • Check keyword popularity of 2000 phrases?

    - by Mark
    I just found a list of about 2000 car manufacturers which I want to put into a drop-down list... but 2000 is probably a bit too many, so I want to filter it down to maybe the top 100 most popular cars. I figure I can just use Google search popularity to give me a rough estimate of how popular the car is... but I can't find a tool that will let me query 2000 keywords. Anyone know of one?

    Read the article

  • Subdomain is preventing my search results from rising as expected in page rank

    - by culov
    My problem is that I have a site which requires a dedicated page for every city I choose to support. Early on, I decided to use subdomains rather than a directory after my domain (i.e. I used la.truxmap.com rather than truxmap.com/la). I realize now that this was a major mistake, because Google seems to treat la.truxmap.com as a completely different site from ny.truxmap.com. So for instance, if I search "la food truck map" my site will be near the top; however, if I search "nyc food truck map" I'm nowhere in sight, because ny.truxmap.com wouldn't be very high in the page rank by itself, and it doesn't have the boost that it ought to be getting from the better-known la.truxmap.com. So a mistake I made a year ago is now haunting my page rank. I'd like to know what the most painless way of resolving my dilemma might be. I have received so much press at la.truxmap.com that I can't just kill the site, but could I redirect all requests at la.truxmap.com to truxmap.com/la, and do the same for all cities supported, without trashing the current, satisfactory page rank results I'm getting from la.truxmap.com? EDIT: I left out some critical information. I am using Google Apps to manage my domain (that is, to add the subdomains) and Google App Engine to host my site. Thus, Google Apps provides a simple mechanism to mask truxmap.appspot.com (the App Engine domain) as la.truxmap.com, but I don't see how I can mask it as truxmap.com/la. If I can get this done, then I can just 301 redirect la.truxmap.com to truxmap.com/la as suggested below. Thanks so much!

    Read the article

  • Why is my site not on Google? [closed]

    - by RD
    I wanted to post a link here, but some people might see that as advertising, so instead I'm going to phrase my question like this: what can I do to make sure my site appears on Google? I have already done the following:

      - Submitted my sitemap
      - Added my site at www.google.com/addurl
      - Added Analytics to my site
      - Checked in Webmaster Tools whether there are any crawler errors

    But still, after about three or four days, the crawler hasn't crawled my site. What am I missing?

    Read the article

  • .htaccess file?

    - by user368993
    Hello guys, how do I create a .htaccess file for a 301 permanent redirect? I am looking for the exact code to put in the file. Looking forward to hearing from you all. Thanks in advance.
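
    A minimal sketch of a .htaccess 301 redirect, with hypothetical domains and paths; the first form redirects a whole site, the second a single page:

      # redirect everything to a new domain (requires mod_rewrite)
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?olddomain\.com$ [NC]
      RewriteRule ^(.*)$ http://www.newdomain.com/$1 [R=301,L]

      # or redirect a single path (requires mod_alias)
      Redirect 301 /old-page.html http://www.newdomain.com/new-page.html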

    Read the article

  • Any mobile-friendly Credit Card billing solutions for mobile sites similar to Bango?

    - by Programmer
    Are there any mobile-friendly Credit Card billing solutions for mobile sites similar to Bango? The advantages of Bango I have seen, compared to regular Credit Card solutions, that make it considerably "mobile-friendly" are:

    1) It does not require the user to enter their full name and billing address to make a payment. The user is only required to enter their Credit Card number, expiration date, and CVC code (if they are in the U.S., they will also have to enter their Zip Code). That is significantly less input than is normally required for Credit Card payments, which is a big plus on small mobile key pads.

    2) After a user makes an initial Credit Card payment, their details are stored by Bango, and the next time the user needs to make a payment with the same Credit Card, they just have to click a single link and it processes the payment on their stored Credit Card. Needless to say, this is very convenient for mobile users, as it is analogous to Direct Carrier Billing as far as the user is concerned, since they won't need to input any details.

    The downside with Bango is that their fees are higher than others, all payments must be processed via their site and branding, there is a high minimum ($1.99) and a low maximum ($30) on how much you can charge users, and you need to pay a monthly fee on top of the high transaction costs. It is due to the downsides mentioned above that I am looking for an alternative solution that also offers advantages 1) and 2) above. Is there anything like that? I looked at JunglePay and they do neither 1) nor 2).

    Read the article

  • Canonical Link as a Way of Fighting Scrapers?

    - by James D
    Hi, Let's say several external sites are scraping/harvesting your content and posting it as their own. Let's also say that you maintain a single unique/permanent URL for each piece of content, so that content aliasing (on your site) is never an issue. Is there any value from an SEO perspective to including a canonical link in your header anyway, such that when your site is "scraped", the canonical indication is injected into whatever site is stealing your content (assuming they harvest the raw HTML rather than going in through RSS etc.)? I've heard different things about the behavior of cross-site canonical links, from "they're ignored" to "behavior undefined" to "it can't hurt" to "sure that's exactly what canonical is intended for". My impression was that canonical was a good way of dealing with intra-site but not necessarily inter-site aliasing. Thanks~
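
    For reference, the self-referencing canonical tag discussed above is a single line in each page's head (URL hypothetical):

      <link rel="canonical" href="http://www.example.com/posts/my-unique-post-url" />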

    Read the article

  • Prevent bots from crawling certain areas of a site

    - by Skoder
    Hey, I don't know much about SEO or how web spiders work, so forgive my ignorance here. I'm creating a site (using ASP.NET MVC) which has areas that display information retrieved from the database. The data is unique to the user, so there's no real server-side output caching going on. However, since the data can contain things the user may not wish to have displayed in search engine results, I'd like to prevent any spiders from accessing the search results page. Are there any special actions I should take to ensure that the search results directory isn't crawled? Also, would a spider even crawl a page that's dynamically generated, and would any actions preventing certain directories from being crawled mess up my search engine rankings? Edit: I should add, I'm reading up on the robots.txt protocol, but it relies on cooperation from the web crawler. However, I'd also like to prevent any data-mining users who will ignore the robots.txt file. I appreciate any help!
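
    For the cooperative crawlers, a minimal robots.txt sketch (the /Search/ path is hypothetical and should match the route of the results page):

      User-agent: *
      Disallow: /Search/

    Crawlers that ignore robots.txt can only be stopped server-side, e.g. by requiring authentication for the results pages or rate-limiting by IP.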

    Read the article

  • Is a negative text-indent considered cloaking?

    - by John Isaacks
    I am using the negative text-indent technique I learned to show a text-image to the user while hiding the corresponding actual text. This way the user sees the fancy styled text while search engines can still index it. However, I am starting to think this sounds like cloaking, since I am serving different content to the user vs. the spider. That said, I am not using this in a deceitful way, and it seems to be a popular technique. So is it SEO-safe, or is it cloaking? Thanks!
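
    For reference, a minimal sketch of the technique in question (the selector and image are hypothetical):

      h1.site-logo {
          text-indent: -9999px;  /* push the real text off-screen */
          background: url('images/logo.png') no-repeat;
          width: 200px;
          height: 60px;
      }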

    Read the article

  • What is the best way to generate a sitemap?

    - by Zakaria
    Hi everybody, I need to build a sitemap for my website. The URL will be "www.example.com/mysitemap.html". I know that there are some tools that automatically generate an XML file containing the reachable URLs, which also improves SEO. So my questions are: how can I build this HTML page from the generated XML? Or am I wrong, and is this kind of HTML page built manually? If not, how do we integrate the XML and convert it into the website? Thank you very much. Regards.
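
    For reference, a generated XML sitemap is just a list of URLs in the sitemaps.org format; an HTML sitemap page can then be produced from it with a simple transform (e.g. XSLT) or written by hand. A minimal sketch of the XML (URLs hypothetical):

      <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <url>
          <loc>http://www.example.com/</loc>
          <lastmod>2010-06-01</lastmod>
        </url>
        <url>
          <loc>http://www.example.com/about.html</loc>
        </url>
      </urlset>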

    Read the article

  • noindex, follow on list views?

    - by Fabrizio
    On one of our clients' websites we have lots of list views with links to detail views (imagine a blog with the posts overview and the single pages). The detail views don't change, but the list views change when new items come up. The pages displaying the list view don't contain any other valuable content. So my question is: does it make sense to define meta "noindex, follow" on the list view pages (and of course "index, follow" on the detail views) to prevent search engines from pointing to the list views when the keyword is found in the title or teaser of the list view? By the time the visitor clicks on the list view search result it might have changed and the content may not be visible anymore, whereas if they go directly to the single view they will definitely find what they were searching for. Related question: the start page also contains mainly a list view. Is it a bad idea to have the start page not indexed? Any SEO gurus here? :) Thanks, Fabrizio.
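
    For reference, the meta tag in question goes in the head of each list-view page:

      <!-- on list views: keep the page out of the index, but still follow links to the detail views -->
      <meta name="robots" content="noindex, follow" />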

    Read the article

  • Map a domain to an MVC area

    - by Simon_Weaver
    Anybody got any experience mapping a domain to an MVC area? Here's our situation.

    Old system (still active, but will soon redirect to the new store):

      - www.example.com - our main site, where we send traffic
      - store.example.com - our store site, a completely separate site that is indexed in Google

    New system:

      - www.example.com - same site as before
      - www.example.com/store - new store site, built in an ASP.NET MVC area

    Because the store is a separate domain, Google gives it a separate entry in the search results. I'd like to keep this benefit in future, but I'm wondering whether there is a good way to map a domain (store.example.com) to the MVC area, or if it's just going to be more trouble than it's worth. PS. I'm not trying to keep existing indexing - it's a completely separate store, so that's not possible. I just want to redirect to the corresponding page in the new store. I'm just trying not to lose the benefit of two domains for SEO purposes.

    Read the article

  • Webpage layout order for my webapp - does it matter if the sidebar is programmatically displayed before the main content?

    - by Jack W-H
    OK, that's the worst title I could ever possibly think up, but I'm not too sure how to phrase it! What I mean is: is it inefficient for the browser, search engine optimisation, or any other important factors, if programmatically my float:right sidebar appears in the markup before the main content div, which is set to float:left? To the user, the main content appears on the left and the sidebar on the right. In the source code it appears like so:

      <div id="sidebar">This is where my sidebar goes</div>
      <div id="content">This is where my content goes</div>

    Will this affect SEO or other factors on my page?
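
    For reference, a minimal sketch of the CSS implied above, which renders the content on the left and the sidebar on the right regardless of source order (widths hypothetical):

      #sidebar { float: right; width: 300px; }
      #content { float: left;  width: 620px; }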

    Read the article

  • Is this a "valid" css image replacement technique?

    - by user278457
    I just came up with this; it seems to work in all modern browsers. I just tested it on IE8 (and compatibility mode), Chrome, Safari and Mozilla.

    HTML:

      <img id="my_image" alt="my text" src="images/small_transparent.gif" />

    CSS:

      #my_image {
          background-image: url('images/my_image.png');
          width: 100px;
          height: 100px;
      }

    Pros:

      - image alt text is best practice for accessibility/SEO
      - no extra HTML markup, and the CSS is pretty minimal too
      - gets around the CSS-on/images-off issue, where "text-indent" techniques hide text from low-bandwidth users

    The biggest disadvantage that I can think of is the CSS-off/images-on situation, because you'll only send a transparent gif. I'd like to know: who uses images without stylesheets? Some kind of mobile phone or something? I'm making some sites for clients in regional Australia (hundreds of km from the nearest city), where many users will be suffering from dial-up connections, and often outdated browsers too, so the "images off" issue is an important consideration. Are there any other side effects of this technique that I haven't considered?

    Read the article

  • How can I explain to a programmer that CSS positioning has many benefits over table based layouts?

    - by Pat
    I have a friend who wishes to work as a freelance web developer, but insists that tables are the way forward for layouts. Several points he maintains in favour of tables:

      1. This is what was taught at the beginning of 10 years of programming & computer science degrees.
      2. Large companies use tables to achieve 'technical' things.
      3. It saves time.

    I have coded him some examples of CSS exactly matching table-based layouts, and provided many links to articles explaining the SEO and accessibility benefits. From the perspective of a client, I have been explaining to him that I wouldn't hire someone using outdated methods as their main strategy for layout. As he is my friend and I wish him every success, I believe it is important for him to gain the best start when pitching for work. The question again: how can I explain to a programmer that CSS positioning has many benefits over table-based layouts?

    Read the article
