Search Results


  • Google analytics and 301 redirects

    - by Ilian Iliev
    We have a multi-language website, and the first page redirects to the appropriate language page using a 301 redirect based on some logic. For example, http://mysite.com/ redirects to http://mysite.com/en/. The problem is that these redirects obscure the original request, so we do not get correct traffic-source results in GA. How do you handle this case? Is there something we can do? Any ideas will be appreciated.
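
    One common, partial mitigation (a minimal sketch, assuming the language redirect is done in PHP and that the detection logic has already chosen $lang; both are assumptions, not the poster's actual code): carry the original query string across the 301 so that any utm_* campaign tags survive the hop, since losing them is one reason GA misattributes traffic sources.

        <?php
        // Hedged sketch: 301 language redirect that preserves the original
        // query string, so utm_* campaign parameters are not lost in the hop.
        $lang = 'en';  // assume the language-detection logic already ran
        $qs = isset($_SERVER['QUERY_STRING']) ? $_SERVER['QUERY_STRING'] : '';
        $target = '/' . $lang . '/' . ($qs !== '' ? '?' . $qs : '');
        header('Location: ' . $target, true, 301);
        exit;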

    Read the article

  • Google crawling the site but refusing to index dynamic content

    - by Omeoe
    I am trying to get Google to index an AJAX site (davidelifestyle.com). It is crawlable with JavaScript turned off, and I have also recently implemented the _escaped_fragment_ snapshot mechanism, but all that gets indexed is the home page and PDF files that are not directly available from the home page. Also, when I use Fetch as Google in Webmaster Tools, it downloads the dynamic page but does not index it ("Submit to Index" just reloads the page). Any ideas what might be wrong?
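
    For reference, a minimal sketch of the snapshot mechanism the question refers to, assuming a PHP front controller and a snapshots/ directory of pre-rendered pages (both hypothetical, not the poster's actual setup): when a page carries <meta name="fragment" content="!">, Google re-requests it as ?_escaped_fragment_=<state> and expects a static HTML snapshot of what the JavaScript would have rendered.

        <?php
        // Serve a pre-rendered HTML snapshot to the AJAX-crawling request.
        if (isset($_GET['_escaped_fragment_'])) {
            $state = $_GET['_escaped_fragment_'];  // client-side route, "" for the home page
            $file  = __DIR__ . '/snapshots/' . basename($state === '' ? 'home' : $state) . '.html';
            if (is_file($file)) {
                readfile($file);
            } else {
                http_response_code(404);
            }
            exit;
        }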

    Read the article

  • Track sales and commission with third-party tool

    - by Andrew
    I have a clothing website where I link to various clothing retailers. I have reached an agreement with one of the retailers whereby they will pay us a commission on every sale they make from traffic referred by our site. I need a mechanism for tracking how much commission should be paid to us that involves as little work as possible to implement on their side. We both have Google Analytics.

    Option 1: They record a goal in their GA account whenever someone makes a purchase on their site, see how many completed goals are marked as referral traffic from our site, and calculate commission accordingly. The problem with this is that the whole process of calculating and paying commission will be manual: they will need to check frequently how many sales were generated by referral traffic from our site, and we will probably have to chase them for commission payments. Also, since we won't have access to their GA data, we will need to trust that they report all sales accurately.

    Option 2: Sign them up to an affiliate network like Commission Junction or Google Affiliate Network and connect to them through this network. The problem with this solution is that it seems too heavyweight; ideally we don't want to ask a retailer to go through a whole sign-up process just to deal with us and pay us commission.

    I am assuming there must be some lightweight service that tracks the number of sales made by one site and pays commission accordingly to the other site, where the sign-up and installation procedure is simple and fast.

    Read the article

  • Suggestions for live chat software on websites for customer support?

    - by Munish Goyal
    Recommendations needed. We want to get in touch with customers via live chat. Requirements:
    - chat window customisable to blend in with the website theme (colours etc.)
    - preferably the window should sit within the web page, not only as a pop-up
    - easy for the customer to use and minimally intrusive
    - triggers/alerts on the back-end side; for example, if a user is unable to fill in the signup form, we should be able to offer help and have the chat window shown to them automatically
    What is the cost?

    UPDATE: After some research we narrowed the choice down to Comm100 and LivePerson, and we will go with Comm100. LivePerson is the best commercial solution, but Comm100 is a good free one, so we will use Comm100 as a starting point. However, it sometimes shows exception alerts in the dashboard. Can it be configured so that chat invitations are triggered on certain conditions? Currently only a time-based trigger is available. Are there any other major differences between the two?

    Read the article

  • Buy internet country domain name: .fr .co.uk .de .com.au .sg etc

    - by user700580
    I already bought a .com domain name on GoDaddy for my company. I would like to reserve the same name with country-specific domain extensions, but I am not sure where to buy them or how to do it. Here are the ones I would like to buy: Europe: .fr, .co.uk, .de, .ch, .es, .it, .nl, .se, .no, .ru; Australia: .com.au; Asia: .sg. GoDaddy has all of them except one of the European ones. Should I find a single registrar that sells all of them, or should I buy some on GoDaddy and the others elsewhere? Any suggestions where to buy them? Until now I have only ever bought .com domain names, so I am not sure how to go about it. Thanks

    Read the article

  • Two different websites in one remote hosting

    - by Kor
    My client wants a website hosted on one server (with its own domain pointing there) to also serve, from a specific directory, a second domain that does not point there. For example: http://www.foo.com, hosted at GoDaddy, contains the full website, and http://www.bar.com, hosted at Bluehost, needs to serve http://www.foo.com/bar as if it were http://www.bar.com's root. So if anybody visits http://www.bar.com, it should internally load http://www.foo.com/bar without visibly changing the URL. I am not sure whether this is possible using .htaccess or something like it. Could anybody shed some light on this? Thanks in advance

    Read the article

  • Enabling/disabling proftpd accounts with PHP and WHM

    - by Brett G
    I have a VPS with WHM/cPanel which is used just by me, and it runs proftpd. I'd like to enable/disable a specific FTP account via a PHP script. I've done this by having PHP call a bash script which adds/removes the user's account line in the /etc/proftpd/USERNAME password file. However, in order to do this I have to give "other" write permissions on /etc/proftpd/USERNAME. This isn't ideal, and I'd be willing to do it another way. It also seems that WHM automatically resets these permissions on a regular basis. Does anybody have any ideas for a better way to deal with this?
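
    A hedged sketch of one alternative to making the password file world-writable: keep the file root-owned and let the web user run a single whitelisted helper through sudo. The helper path, its name, and the sudoers entry below are assumptions for illustration, not part of the original setup.

        <?php
        // Toggle an FTP account by delegating to a root-owned helper script.
        // Assumes a sudoers entry restricted to this one command, e.g.:
        //   webuser ALL=(root) NOPASSWD: /usr/local/sbin/ftp-toggle
        function setFtpAccountEnabled($account, $enabled) {
            $cmd = 'sudo /usr/local/sbin/ftp-toggle '
                 . escapeshellarg($account) . ' '
                 . ($enabled ? 'enable' : 'disable');
            exec($cmd, $output, $status);  // the helper edits /etc/proftpd/<user> as root
            return $status === 0;
        }

        setFtpAccountEnabled('deploy', false);  // example: disable the hypothetical "deploy" account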

    Read the article

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe this is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "Fetch as Google" or "Fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it up to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but that had no effect either. Could someone suggest a good way to solve this problem?
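
    A minimal sketch of one possible cause, assuming the entry script is PHP and that a snapshot branch exists at all (the file and ID-generation details below are hypothetical): if the 302 is sent before the _escaped_fragment_ check, the crawler never reaches the snapshot code. Handling the crawler parameter first returns a 200 snapshot to "Fetch as Google" while normal visitors still get the random-URL redirect.

        <?php
        // 1) Crawler request: answer with a static snapshot instead of redirecting.
        if (isset($_GET['_escaped_fragment_'])) {
            readfile(__DIR__ . '/snapshots/home.html');  // hypothetical snapshot file
            exit;
        }

        // 2) Normal visitor: create a fresh note and redirect to its random URL.
        $alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
        $id = substr(str_shuffle($alphabet), 0, 10);  // illustrative only, not cryptographically strong
        header('Location: /' . $id, true, 302);
        exit;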

    Read the article

  • Optimizing perceived load time for social sharing widgets on a page?

    - by Lucka
    I have placed the Facebook "Like" button and links to some other social bookmarking sites (Google Buzz, Digg, Twitter, etc.) on my blog. I just noticed that it takes a while to load my blog page, as it needs to fetch data from the social networking sites (such as the number of likes). How can I place the links efficiently so that my blog content loads first and the data from these websites loads afterwards? In other words, these sharing widgets should not hold up my blog page while waiting for data from external sites.

    Read the article

  • displaying first few pages of a pdf on a page = duplicate content?

    - by Ace
    I am embedding Scribd PDFs on my website. These are exam-paper PDFs which are also available on other websites. Since the Scribd document is an embed/iframe, I think Google considers my page to be empty, with no content; Google does not count iframe content as part of my page, right? So I decided to display the first pages of the PDF as text on the page for Google. Then, for user experience, I hide the text and replace it with the Scribd embed code using JavaScript. I have two worries about this method. Firstly, I am displaying the first pages of the PDF, and the same PDF may be hosted on other websites; will this be considered duplicate content? Secondly, I am hiding the text and replacing it with the Scribd embed using JavaScript; is that considered bad by Google?

    Read the article

  • Can other domain registrars view non-public whois information?

    - by user3188544
    If my domains are registered with a registrar (let's say Gandi, for example) and have privacy protection on the whois information, can another ICANN-accredited registrar (GoDaddy, for example) still view my actual information behind the privacy guard? I.e., I don't have a GoDaddy account, but since they are ICANN-accredited, could they access the real whois info without the privacy protection?

    Read the article

  • Automatically Creating New Sites When New Users Sign Up [closed]

    - by Eddy Freeman
    I would like to know how hosted e-commerce services like www.shopify.com, www.3dCart.com, etc. automatically create new sites when new users sign up. What kind of tools do they use to create those sites under the user's profile? I have tried googling but couldn't find an answer. Do any of you have knowledge or experience you can share with me, or a tutorial you can point me to? I hope my question is clear. Thanks for your help.
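
    For context, a hedged sketch of one common multi-tenant pattern behind such services (an assumption about the general technique, not a statement of how Shopify or 3dCart actually work): a single codebase answers a wildcard subdomain and picks the shop from the Host header, so "creating a site" at signup is just inserting a row, not copying files. The names, domain, and schema below are all hypothetical.

        <?php
        // Multi-tenant front controller: every shop shares this code; the shop
        // shown is chosen from the subdomain, e.g. alice.example-shops.com.
        $host     = isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '';
        $parts    = explode('.', $host);
        $shopSlug = $parts[0];  // "alice"

        $pdo  = new PDO('mysql:host=localhost;dbname=platform', 'user', 'pass');
        $stmt = $pdo->prepare('SELECT id, theme FROM shops WHERE slug = ?');
        $stmt->execute(array($shopSlug));
        $shop = $stmt->fetch(PDO::FETCH_ASSOC);

        if (!$shop) {
            http_response_code(404);
            exit('No such shop');
        }
        // ...render the storefront for $shop['id'] with $shop['theme']...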

    Read the article

  • What should a page's minimum word count be in order to be effectively indexed?

    - by ZakGottlieb
    I'm seeding a new site with hundreds of (high-quality) posts, but since I am paying per word written, I'm wondering whether anybody in the community has any anecdotal evidence as to how many words of content a page now needs in order to be counted the same as, say, a 700+ word post. I know there are always examples of pages ranking well with 50 words or less of content, but does anyone have strong evidence on what the minimum count should be, or has anyone read anything informative on this issue? Thanks a lot in advance!

    Read the article

  • SEO: disallowing Google from indexing forms in iframes or not?

    - by Marco Demaio
    I usually place forms in iframes (i.e. order forms, request-assistance forms, contact forms, etc.). Just the forms; I never place other content or pages in iframes. From an SEO point of view, would you exclude these forms from being indexed/crawled by Google or not? My forms hardly ever contain keywords/keyphrases, and I obviously leave empty title/meta description tags in the pages shown in the iframes, because those titles are never displayed in the browser title bar. So I'm wondering what the point is of letting Google index them. Moreover, I think these form pages might drain PageRank from the other pages that are more valuable for SEO. If your answer is "yes, I would exclude them from indexing", would you simply use robots.txt to exclude them? Thanks!

    Read the article

  • Which of these URL scenarios is best for big link menus? [seo /user friendly urls]

    - by Sam
    Hi folks, a question about URLs. A good friend and I are exploring three possible scenarios for a website where each page has a menu system with about 130 links:

    Scenario 1: the menu system uses SHORT, non-descriptive hyperlinks and a SHORT canonical URL: <a href="design">dutch design</a>, with the page's canonical URL pointing to e.g. "design".

    Scenario 2: the menu system uses SHORT, non-descriptive hyperlinks with LONG canonical URLs: <a href="design">dutch design</a>, with the page's canonical URL pointing to "dutch-design-crazy-yes-but-always-honest".

    Scenario 3: the menu system uses LONG, descriptive hyperlinks with LONG canonical URLs: <a href="dutch-design-crazy-yes-but-always-honest">dutch design</a>, with the page's canonical URL pointing to "dutch-design-crazy-yes-but-always-honest".

    Currently we have scenario 2; should we move to scenario 3? All three work fine and point, via mod_rewrite, to the same page, which is fetched behind the scenes. My question is which of these is better in terms of: user-friendliness (page loading times, whether the full URL is visible in the address bar), SEO-friendliness (proper indexing thanks to URLs containing descriptive, relevant terms), and any other concerns we have forgotten, such as possible penalties for so many words in link hrefs. Thanks very much for your suggestions: much appreciated!

    Read the article

  • Rewriting a URL for Tomcat through an AJP connection

    - by StudentKen
    I've made several attempts to resolve this, but all have come to naught. Currently I have Apache set up to forward all URLs at and beyond /portal/ to Tomcat. Unfortunately, Tomcat receives these requests under /portal/appName, a subdirectory of webapps, rather than the webapps root directory where my WARs are deployed. Is there a simple solution to this that I'm not seeing? I've been trying to use mod_rewrite to rewrite ^/portal/ to /, but that doesn't yield the expected results (perhaps I'm doing it wrong?).

    Read the article

  • Is there a way I can filter traffic by page type based upon URL structure in Google Analytics or Google Webmaster Tools?

    - by Felix
    I have a local business directory site. I'm trying to segment my incoming traffic by page type so that I can find out what percentage of traffic goes exclusively to zip-code pages and what percentage goes to city/state-level pages. Basically, I want to filter by URL structure to find out what percentage of total traffic the zip-code pages account for. I am also wondering whether Google Tag Manager can help with this. Here are the two URL paths: http://www.example.com/ny/new-york/10011/ and http://www.example.com/ny/new-york

    Read the article

  • E-mail solution recommendation?

    - by Brownsithily Smith
    Do you currently use email marketing as part of your online marketing strategy for new prospects, customers and clients? If yes: what is your single biggest problem or challenge with email marketing, and what is your single most important question about it? If no: what is stopping you, and do you plan to use email marketing for your online business or e-commerce? Any experience or recommendations?

    Read the article

  • How can I exclude content in my notifications bar from being indexed?

    - by Liam E-p
    Of course I want my content to be indexed quickly by search engines, but not my notifications bar. The notifications bar contains the last 30 changes to content on the site, and I don't want it to show up in my pages' search snippets. The notifications are generic, so they rarely provide any relevant information: if an article named "123" was created, the bar shows a notification such as "Article "123" was created by xxx at 12:00AM". I'm now wondering if this is a content-design problem, as only a third of that information (the title and what happened) is actually relevant to users. Basically, what I am wondering is how I can optimise this so that search engines don't show this generic text.

    Read the article

  • Can I include a robots meta tag outside of the head in HTML snippets intended to be SSIed?

    - by Dan
    I have a number of files on my site which are not intended for independent viewing, but rather to be AJAXed into content within the site. As independent entities they obviously don't meet HTML standards (no body, head, etc.). I would like to prevent search engines from indexing these pages, but I do not have access to /robots.txt (which would be much more ideal). My question is: could I include the following at the top of these partial HTML files and get the desired result? <meta name="robots" content="noindex, noarchive"> I guess there are two parts to this question: will this cause any rendering issues in any browsers, and will search engines (at least Google and Bing) interpret it as intended?
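
    One well-known alternative that sidesteps the "meta tag outside the head" question entirely is to send the noindex directive as an HTTP header on those partial responses. A minimal sketch, assuming the partials are served through PHP (that serving mechanism is an assumption; the X-Robots-Tag header itself is standard and understood by Google and Bing):

        <?php
        // Sent before any output for each partial/AJAX fragment: asks crawlers
        // not to index the response, without needing a <head> or <meta> tag.
        header('X-Robots-Tag: noindex, noarchive');
        ?>
        <div class="fragment">
            <!-- ...partial markup as before... -->
        </div>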

    Read the article

  • Why is Google still not indexing my #! website?

    - by Zubair
    I have been working on a website which uses #! URLs (2minutecv.com), but even after six weeks of the site being up and running and conforming to the Google hashbang guidelines stated here, you can see that Google still hasn't indexed the site. For example, if you use Google to search for "2MinuteCV.com benefits", it does not find this page, which is referenced from the homepage. Can anyone tell me why Google isn't indexing this website? Update: Thanks for all the help with this. Just to make sure I understand what is wrong: according to the answers, Google never actually indexes the pages after the JavaScript has run. I need to create a "shadow site" which Google indexes (which Google calls HTML snapshots). If I am right in thinking this, then I can pick a winner for the bounty.

    Read the article

  • Transition to new site

    - by James Hill
    I'm almost finished rewriting the website for a non-profit organization. The existing site receives ~5,000 a month. The new site is being written in ASP.NET, and the existing site is PHP. The current hosting provider does not support .NET hosting, so I'll be switching providers. My question revolves around the transition from the old site to the new one. I would really like to put the new site up at the new hosting provider and do thorough testing before changing the DNS records for the domain. Question: how can I put the new site up, test it, and make any changes or additions necessary before updating the domain's DNS to point to the new IP, without Google indexing the content? Also, what SEO repercussions should I be aware of when making such a drastic change to the content that exists under the domain name?

    Read the article

  • After changing web host, I get a 'file does not exist' error

    - by Jordan
    I run a WordPress blog and have recently changed web hosts. When changing hosts, I copied all the files and exported/imported the database, etc., as explained in the many tutorials easily found on Google. The blog home page works fine. What goes wrong: when I click on any link from the home page, the browser gets stuck in a redirect loop. Looking at the error log, I see: File does not exist: /usr/local/apache/htdocs/index.php. The directory /usr doesn't even exist for my website, so perhaps it is looking for a file that was present with my old web host and is no longer present with my new one? What is going on, and how might I resolve it?
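
    A hedged note and sketch, not a diagnosis of this specific error: /usr/local/apache/htdocs is a typical default Apache document root, which can indicate that some requests are being answered by the server's default virtual host rather than the site's. Separately, when WordPress's stored siteurl/home options do not match the address the site is actually served from, redirect loops are a common symptom; WordPress allows pinning them in wp-config.php (the domain below is hypothetical).

        <?php
        // Excerpt for wp-config.php (sketch): override the 'home' and 'siteurl'
        // options stored in the database so they match the current host.
        define('WP_HOME',    'http://www.example-blog.com');
        define('WP_SITEURL', 'http://www.example-blog.com');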

    Read the article

  • Storing User-uploaded Images

    - by Nyxynyx
    What is the usual practice for handling user-uploaded photos and storing them in the database and on the server?

    For a user profile image:
    - After receiving the image file from the user, rename the file to <image_id>_<username>
    - Move the image to /images/userprofile
    - Add the image filename to a users table containing profile details such as first_name, last_name, age, gender, birthday

    For an image attached to a review written by a user:
    - After receiving the image file from the user, rename the file to <image_id>_<review_id>
    - Move the image to /images/reviews
    - Add the image filename to a reviews table containing details such as review_id, review_content, user_id, score

    Question 1: How should I store the image filenames if the user can upload multiple photos for a particular review? Serialize them?
    Question 2: Or should I have another table, review_images, with columns review_id, image_id, image_filename, just for tracking images? Will doing a JOIN when retrieving the image_filename from this table slow down performance noticeably?
    Question 3: Should all the images be stored in a single folder? Will there be a problem when we have 100K photos in the same folder? Is there a more efficient way to go about this?
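
    A minimal sketch of the approach described in Question 2, combined with a hashed directory layout that addresses Question 3 (so no single folder accumulates 100K files). The table layout, paths, and function name are assumptions for illustration, not a recommendation from the original post.

        <?php
        // Store one review image: write the file under a two-level hashed
        // directory and record it in a separate review_images table.
        // Assumed table: review_images(image_id, review_id, image_filename).
        function storeReviewImage(PDO $pdo, $reviewId, $tmpPath, $origName) {
            $ext      = strtolower(pathinfo($origName, PATHINFO_EXTENSION));
            $hash     = sha1_file($tmpPath);
            $filename = $hash . '.' . $ext;
            $dir      = __DIR__ . '/images/reviews/' . substr($hash, 0, 2)
                                . '/' . substr($hash, 2, 2);  // e.g. images/reviews/ab/cd/
            if (!is_dir($dir)) {
                mkdir($dir, 0755, true);
            }
            move_uploaded_file($tmpPath, $dir . '/' . $filename);  // $tmpPath from $_FILES[...]['tmp_name']

            $stmt = $pdo->prepare(
                'INSERT INTO review_images (review_id, image_filename) VALUES (?, ?)'
            );
            $stmt->execute(array($reviewId, $filename));
            return $pdo->lastInsertId();  // image_id
        }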

    Read the article
