Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.


  • SSL issue and redirects from https to http

    - by Asghar
    I have a site, www.example.com, for which I purchased and installed an SSL certificate, and it was working fine. I also have a subdomain, app.example.com, which was not on SSL. Both www.example.com and app.example.com are on the same IP address. Later we decided to put SSL only on app.example.com, so I configured SSL for app.example.com and it worked fine. The issue now is that Google has indexed my site as https://www.example.com/, and when users hit that URL an invalid-certificate warning is shown; if they click through the warning, they are served the contents of app.example.com. Note: my SSL configuration lives in /etc/httpd/conf.d/ssl.conf; its contents are here: http://pastebin.com/GCWhpQJq NOTE: I tried solutions in .htaccess, such as 301 redirects, but none of them worked.
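
    One approach that is often suggested for this situation (a sketch only, not from the question, assuming Apache with mod_rewrite; the certificate paths are placeholders): keep a port-443 VirtualHost that answers for www.example.com and have it send visitors back to plain HTTP with a permanent redirect. Note that a redirect can only be served after the TLS handshake, so without a certificate that actually covers www.example.com the browser warning itself cannot be avoided; the redirect only stops visitors from landing on the app.example.com content afterwards.

        # Sketch: answer HTTPS requests for the www host and 301 them back to plain HTTP.
        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /path/to/www-cert.pem    # placeholder path
            SSLCertificateKeyFile /path/to/www-key.pem     # placeholder path
            RewriteEngine On
            RewriteRule ^(.*)$ http://www.example.com$1 [R=301,L]
        </VirtualHost>

    Getting Google to drop the https:// URLs from its index is then mostly a matter of the 301s being crawled over time.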


  • Abandonment to blame for the last JavaScript file not always being loaded?

    - by Larsenal
    I have a code snippet for an app that users load as a third-party script on their sites. The general sequence is as follows: the site loads http://www.example.com/foo.js; foo.js does its work; then, 1 to 2 seconds later, foo.js loads bar.js. Now, in a perfect world I'd expect matching counts for the calls to foo.js and bar.js. However, bar.js loads only about 94% of the time. I'm wondering how much of this discrepancy might be attributable to site abandonment, given that bar.js is delayed by 1 or 2 seconds. I posted here instead of Stack Overflow since I think this is really a question about typical time-on-page when users abandon a page.
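
    A minimal sketch of the kind of instrumentation that can separate the two effects (all names here are hypothetical; foo.js is assumed to control the delayed load, and the logging endpoint does not exist in the question):

        // Inside foo.js: inject bar.js after the delay, and send a best-effort
        // beacon if the visitor leaves the page before bar.js was even requested.
        var barRequested = false;
        setTimeout(function () {
            barRequested = true;
            var s = document.createElement('script');
            s.src = 'http://www.example.com/bar.js';
            document.getElementsByTagName('head')[0].appendChild(s);
        }, 2000);
        window.onbeforeunload = function () {
            if (!barRequested) {
                // Hypothetical 1x1 logging pixel; unload beacons are not fully
                // reliable, so treat the resulting count as a lower bound.
                (new Image()).src = 'http://www.example.com/abandoned.gif';
            }
        };

    Comparing the "abandoned before bar.js" count with the missing 6% would give at least a rough idea of how much of the gap is abandonment rather than blocking or script errors.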


  • Google indexed site's address by accident. What do I do now?

    - by AndrejaKo
    I was making a site for a friend of mine and he wanted to be able to see my progress as I worked on the site, so I decided to put the site on a server on my computer and enable access by a domain name registered to me. It turns out that I forgot to set up a robots.txt file for the site and somehow Google indexed the site. My question is: What do I do now? As I understand it, Google doesn't like duplicate content and my friend could have problems when I upload the new site to his server. Right now his current site, which only has a work in progress page, is first on Google when searching for relevant keywords and I really really don't want to damage that. Is there anything else I need to be concerned about?
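
    For the staging host itself, the usual combination is simple (a sketch; this robots.txt goes at the web root of the temporary domain only, never on your friend's live server), optionally followed by a removal request for that host in Google Webmaster Tools if the pages need to disappear quickly:

        # robots.txt on the temporary/staging host - blocks all well-behaved crawlers
        User-agent: *
        Disallow: /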


  • How to get CloudFlare to cache images whose URLs are served to the client as JSON?

    - by Askar Ibragimov
    I am using a gallery on my website that gets its list of images from JSON returned by a PHP script. The JavaScript gallery calls the PHP backend, which replies with a complex JSON structure in which images are specified as object fields. These fields do not necessarily contain full URLs, merely paths to the needed images. I'd like to use CloudFlare and want these images to be cached there. How can I tell whether they are being cached, and how do I make sure they will be cached and not treated as some sort of dynamic content?
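
    One way this is commonly handled (a sketch, assuming the image paths in the JSON point at files Apache serves directly and that mod_expires is available; whether the edge actually keeps them also depends on your CloudFlare settings): send long-lived caching headers for the image types, then request one image URL directly and inspect its response headers to see whether the edge reports a cache hit.

        # Sketch: long-lived Expires headers for static images, so an edge cache
        # such as CloudFlare is willing to keep them.
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/jpeg "access plus 1 month"
            ExpiresByType image/png  "access plus 1 month"
            ExpiresByType image/gif  "access plus 1 month"
        </IfModule>

    The JSON response itself can stay uncached and dynamic; it only carries the paths, and it is the image requests that benefit from edge caching.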


  • PPC Affiliate networks for web-applications [on hold]

    - by machete
    I want to run a browser-based social media (Twitter, Instagram,...) account management tool (justunfollow.com) which monetizes through ads. But many affiliate networks like Google AdSense or media.net require your website to have "high quality content". AdSense explicitly states: "Google ads, search boxes or search results may not be: * Integrated into a software application (does not apply to AdMob) of any kind, including toolbars. * Placed on any non-content-based page. (Does not apply to AdSense for search, mobile AdSense for search, or AdMob.)" Are there serious & trustworthy affiliate networks which allow ads to be published on a web application?


  • Subdomain redirect to WWW

    - by manix
    I have the domain example.com and the subdomain test.example.com running on an Apache server. For some reason, when I visit test.example.com it is redirected to www.test.example.com, and as a consequence a "Server not found" error is displayed in the browser. Both .htaccess files (in the root and in the subdomain folder) are empty. Additional facts: I have another subdomain, xyz.example.com, pointed at the public_html/xyz directory with some content inside (an index.html with a "hello world" message), and it works fine as long as I use xyz.example.com instead of www.xyz.example.com. Can you point me in the right direction? I have a VPS and am able to change any file that is required. Below you can find my virtual host configuration.

        <VirtualHost xx.xxx.xxx:80>
            ServerName test.example.com
            ServerAlias www.test.example.com
            DocumentRoot /home/example/public_html/test
            ServerAdmin [email protected]
            UseCanonicalName Off
            CustomLog /usr/local/apache/domlogs/test.example.com combined
            CustomLog /usr/local/apache/domlogs/test.example.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            ScriptAlias /cgi-bin/ /home/example/public_html/test/cgi-bin/
            # To customize this VirtualHost use an include file at the following location
            # Include "/usr/local/apache/conf/userdata/std/2/example/test.example.com/*.conf"
        </VirtualHost>


  • Using cracked software and tools [closed]

    - by Lena Aslo
    I am seeing people complaining about expensive tools such as Dreamweaver or Photoshop. I am just wondering about that, because everyone knows that they can get this software running for free (if it is done illegally). Why don't they just use a cracked version? Is it so likely to get caught? I feel that nowadays a lot of people are using cracked software but whenever the topic is mentioned, they ALL say PSSST!!! or start criticizing it, even though they are doing it themselves...


  • Redirect/Rewrite Subdomain to Subfolder

    - by Laurent Ho
    I'm trying to redirect a subdomain to a subfolder, e.g. forums.domain.com to www.domain.com/forums. Note that I started the forums in the subfolder format, but I'm worried that members might mistakenly try to access the forums using the subdomain format.

        RewriteCond %{HTTP_HOST} ^(www\.)?forums\.domain\.com
        RewriteRule .* /forums [L]

    From what I've read, the code above should work from .htaccess, but do I still need to create a DNS A record pointing to the IP address of the server?
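
    On the DNS half: yes, forums.domain.com needs its own record (an A record, or a CNAME to www.domain.com) resolving to the same server, otherwise the request never reaches this .htaccess at all. For the rewrite half, a sketch (not from the question) of the variant that visibly sends visitors to the subfolder URL and keeps the rest of the path, as an external 301 rather than an internal rewrite, assuming the rules live in the document root's .htaccess:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?forums\.domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/forums/$1 [R=301,L]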


  • What percent of visitors should click on the next page before you enable prefetching?

    - by Kevin Burke
    Mozilla Firefox and Google Chrome support prefetching via an HTML tag:

        <!-- in chrome -->
        <link rel="prerender" href="http://example.org/index.html">

    I suppose it is always worthwhile to include this tag if 100% of users on a page click on the "Next Page" button or similar, and never worthwhile if only 2% or 3% of users visit the following page. At what percentage of clicks should you turn on prefetching of the next page? 65%? Also, does the calculus change if the current page is HTTP and the next page is HTTPS?
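
    One rough way to frame the threshold (a back-of-envelope sketch, not from the question): if p is the fraction of visitors who actually go to the next page, prefetching pays off roughly when p times the latency saved per correct prefetch outweighs (1 - p) times the bandwidth and server cost per wasted prefetch, i.e. when p > cost / (cost + benefit). A fixed figure like 65% only falls out once you put numbers on both sides: a heavy next page served to mobile visitors pushes the break-even point up, a small cheap page pushes it down. Moving from an HTTP page to an HTTPS next page changes both sides a little, since the prerender has to set up its own TLS connection: a miss wastes slightly more work, while a hit hides correspondingly more latency.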


  • Googlebot requesting an invalid URL

    - by Rob Walker
    I have a web app which emails me exceptions automatically. This morning there was an error relating to the URL /Catalog/LiveCatalog?id=ylwpfqzts. The id is invalid (it should be a GUID) and caused a parsing error. Everything was handled correctly and an error page was returned. What is odd is that the user agent reported itself as Googlebot and the IP address is registered to Google. The URL could never have been generated by my web app, but it doesn't look particularly malicious. Has anyone ever seen anything like this?


  • Rewrite img and link paths with .htaccess and serve the file from the rewritten path?

    - by frequent
    I have a static mockup page which I want to "customize" by switching a variable used in image src and link href attributes. The paths look like this:

        <img src="/some/where/VARIABLE/img/1.jpg" alt="" />
        <link rel="some" href="/some/where/VARIABLE/stuff/foo.bar" />

    I'm setting a cookie with the VARIABLE value on the preceding page and now want to modify the paths accordingly by replacing VARIABLE with the cookie value. I'm an .htaccess newbie. This is what I have (it doesn't work):

        <IfModule mod_rewrite.c>
        # get cookie value
        RewriteCond %{HTTP_COOKIE} client=([^;]*)
        # rewrite/redirect to correct file
        RewriteRule ^/VARIABLE/(.+)$ /%1/$1 [L]
        </IfModule>

    My thinking was that the first line gets the cookie value and stores it in %1, and that on the second line I match VARIABLE, replace it with the cookie value, and keep whatever comes after VARIABLE in $1. Thanks for shedding some light on what I'm doing, what I'm doing wrong, and whether I can do this at all using .htaccess.

    EDIT: I'm sort of halfway through, but it's still not working... maybe someone can apply the finishing touches:

        <IfModule mod_rewrite.c>
        # check for client cookie
        RewriteCond %{HTTP_COOKIE} (?:^|;\s*)client=([^;]*)
        # check if an image was requested
        RewriteCond %{REQUEST_FILENAME} \.(jpe?g|gif|bmp|png)$
        # exclude these folders
        RewriteCond %{REQUEST_URI} !some/members/logos
        # grab everything before the variable folder and everything afterwards
        # replace this with first bracket/cookie_value/second bracket
        RewriteRule (^.+)/VARIABLE/(.+)$ $1/%1/$2 [L]
        </IfModule>

    Still can't get it to work, but I think this is the correct way of doing it. Thanks for any help!
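
    One detail worth knowing here (a sketch of a possible fix, not a tested answer): %N backreferences in the substitution come from the last RewriteCond that matched, not the first, so with the cookie condition listed first the captured value may well be lost by the time the rule fires. Putting the cookie capture last, and dropping the leading slash that per-directory .htaccess patterns never see, gives something like:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            # folder exclusion and image check first ...
            RewriteCond %{REQUEST_URI} !some/members/logos
            RewriteCond %{REQUEST_FILENAME} \.(jpe?g|gif|bmp|png)$
            # ... cookie capture last, so %1 holds the cookie value when the rule runs
            RewriteCond %{HTTP_COOKIE} (?:^|;\s*)client=([^;]+)
            RewriteRule ^(.+)/VARIABLE/(.+)$ $1/%1/$2 [L]
        </IfModule>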


  • Is redirecting mydomain.eu & mydomain.net to mydomain.com using .htaccess spammy?

    - by sam
    A client has asked me to develop their site. They already own three domains, mydomain.eu, mydomain.net and mydomain.com, and they want all the traffic from .eu and .net to redirect to .com. I've explained to them that it isn't that important, since people will find them through search engines rather than by typing in the domain, but they would still like me to do it. As far as I know this is fine to do from an SEO point of view, but I thought I'd double-check.
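
    For reference, a sketch of how the redirect itself is usually done (assuming all three domains already resolve to the same Apache document root; the permanent 301 status is what keeps this clean rather than spammy in a search engine's eyes):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.(eu|net)$ [NC]
        RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]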


  • Drop-down menu for Blogger with automatic link updates

    - by Payo
    My mom has a recipe blog on Blogger with many categories ("Cakes", "Celebration Day", etc.), each filled with numerous recipe posts. I now want to organize the blog in a better way and make a drop-down menu for all the categories (I already have labels), but I don't want to keep updating the menu whenever a new post is added; I want it to update automatically. Is there any way this can be done with CSS or HTML code?
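
    Plain HTML can give you the drop-down and the navigation, but not the automatic part by itself. A sketch of the markup (the label names and URLs below are only examples; they use Blogger's /search/label/ URL pattern):

        <select onchange="if (this.value) window.location.href = this.value;">
            <option value="">Recipes by category...</option>
            <option value="/search/label/Cakes">Cakes</option>
            <option value="/search/label/Celebration%20Day">Celebration Day</option>
        </select>

    Keeping that option list in sync automatically still needs something that reads the blog's labels, for example a Blogger labels gadget or a small script fed by the blog's posts feed, which goes beyond pure CSS/HTML.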


  • Searching for a Calendar Application That Meets the Following Criteria

    - by Navarr
    My stepfather is starting his own company, and he gets the wonderful life of having to deal with multiple clients. He needs a calendar system that can do the following: (1) it can be updated from iCal (I'm pretty sure this just means CalDAV support); (2) it can show each client the details of what's on his calendar for them, while showing other clients' arranged times only as busy; and (3) it presents this in a professional way, preferably on his own website. I was looking at Google Calendar, but alas it doesn't seem to actually support this like I thought it would. He was looking at MobileMe, but that is simply public. Is there a calendar app or PHP application that can be installed on his server and do this?


  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and its associated caption appears in several galleries. For instance, a photo of a goldfinch taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors (user A may want to see goldfinch photos, whereas user B wants to see photos from New Mexico), but I am concerned about the SEO implications. The typical suggestions for dealing with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery; for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), every photo on my site would be found exactly once. However, Google's content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?
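
    For what it's worth, the robots.txt idea described above would look roughly like this (a sketch only, using the example file names from the question; every non-species gallery page would need its own Disallow line, which is part of why the approach scales poorly):

        User-agent: *
        Disallow: /finches.php
        Disallow: /New_Mexico_2008.php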


  • Custom domain pointing to a Tumblr blog

    - by Julius
    My domain mydomain.com is registered with GoDaddy. I wish to host my Tumblr blog on this domain, with hosting at NearlyFreeSpeech.NET. My active nameservers at GoDaddy already point to my authoritative ones at NFS.net, and that is working. However, I'm baffled by the correct configuration for pointing the domain at my Tumblr. Preferably I'd like (A) my domain http://mydomain.com to host the blog, with http://www.mydomain.com redirecting to http://mydomain.com. If this is too difficult, my next preference is (B) to have http://www.mydomain.com host the blog while http://mydomain.com redirects to http://www.mydomain.com. My third preference is (C) a subdomain like http://tumblr.mydomain.com to host the blog, and I guess have http://mydomain.com and http://www.mydomain.com both redirect to it. I've tried having two aliases, mydomain.com and www.mydomain.com, pointing to my permanent NFS IP at mydomain.nfshost.com, and when I try to add: (1) an A record pointing mydomain.com to the IP 66.6.44.4, as per Tumblr's instructions, it tells me I already have the bare domain as an alias, so I can't do that; (2) the A record on the www.mydomain.com alias. I can do this with either www.mydomain.com set as an alias or not, but when I tried it with mydomain.com set as the canonical name, the result when visiting either mydomain.com or www.mydomain.com was that they both kept redirecting to each other until an error was thrown. So I was wondering if there is a ninja who could save me some hair-pulling and tell me the correct way to configure A, or else B, or else C.
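
    In plain DNS terms, option (A) usually reduces to two records (a sketch; names and layout are illustrative, and the 66.6.44.4 address is simply the one quoted from Tumblr's instructions above). Whether the www host then ends up redirecting to the bare domain depends on what answers for it, so that part still needs to be verified:

        mydomain.com.       IN  A      66.6.44.4        ; bare domain answered by Tumblr
        www.mydomain.com.   IN  CNAME  mydomain.com.    ; www follows the bare domain

    The redirect loop described in (2) suggests the two hostnames are currently pointing at each other's redirects rather than one of them terminating at Tumblr.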


  • Google Analytics Visitors drop-off for certain region of site only

    - by crmpicco
    I have an issue with tracking on my site: I have seen a dramatic drop-off of visitors from one region. I have four regions on the site at the moment: UK, EU, US and RoW (Rest of the World). The UK, EU and US regions are unaffected; only the RoW region suffers this drop-off. I have included a screenshot from my GA account which shows this effect. My GA code, which is included on every page of the site, is below (I have changed the UA account number intentionally for this example). There have been no changes to the GA account or the tracking code in the live environment for some considerable time, but for some reason I am seeing the drop-off for this region only. In the code below I do not track page views on certain pages, as I have event tracking set up for those pages.

        <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-18721873-5']);
        _gaq.push(['_setCookiePath', '/row/']);
        if (typeof(p_page) != 'undefined') {
            // do nothing if user is on above pages
            // N.B. there are a series of conditions in this if statement checking that we are not on a particular page
        } else {
            _gaq.push(['_trackPageview']);
        }
        </script>


  • If my URLs are static but then parsed by JavaScript, does that make them crawlable?

    - by Talasan Nicholson
    Let's say I have a link:

        <a href="/about/">About Us</a>

    But JavaScript (or jQuery) catches the click and then sets the hash based on the href attribute:

        $('a').click(function(e) {
            e.preventDefault();
            // Extremely oversimplified..
            window.location.hash = $(this).attr('href');
        });

    We then use a hashchange event to do the general 'magic' of Ajax requests. This allows the actual href to be seen by crawlers, but gives client-side users with JS enabled an Ajax-based website. Does this help with the general SEO issues that come along with hash navigation? I know hashbangs are 'OK', but as far as I know they aren't reliable?
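
    For completeness, a minimal sketch of the hashchange half described above (assumptions not in the question: jQuery is loaded and the page has a #content container to swap):

        // React to the hash changing and pull in the real static page via Ajax.
        $(window).bind('hashchange', function () {
            var path = window.location.hash.replace(/^#/, '');
            if (path) {
                // Fetch the crawlable page and swap only its #content region in.
                $('#content').load(path + ' #content');
            }
        });

    Since the href values are real, crawlable URLs, search engines can index the static pages directly; the hash trickery only affects users with JavaScript enabled.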


  • Yahoo Media Player not working with Ruby on Rails

    - by luca590
    I have a Yahoo Media Player embedded in my web page. I am currently using Ruby on Rails to create and edit the page. When I click the play button next to a track, the player waits a while and then goes to the next track without playing the first one. I then get a warning on my second (and last) track that its file could not be found. Does anyone have a recommendation for a better audio player, or a way to fix this one?


  • Issue with image lightbox and enlargement / jQuery Mobile

    - by Matt
    I'm working on a redesign of my weather website using jQuery Mobile. I have it set up so that you drill down through a series of content containers to get to the weather info (each group of info opens in a dialog display). Everything has worked well, but I've run into an issue with my images. I have them sized so that they fit a mobile device's screen nicely, but because of that, when you look at them in a desktop browser you can't really make out what the image is. I've tried several image lightbox/enlargement solutions, but for some reason none of them have worked: either nothing happens or the images open in a new window. I thought this might be caused by jQuery Mobile somehow overriding the scripts and CSS of the lightboxes and enlargements I've tried. I'm not completely sure that this is the case, though, or how I can get around it to enlarge the images to their original size, preferably on click. Here is a working (for the most part; still some kinks to work out) example. If you look under the "Tropical" section at the "Satellite-Derived Products", you'll see what I mean. http://www.suncoaststormwatch.com/Beta/Index.html


  • Looking for free, specific Ip2Location Database

    - by Andresch Serj
    I am searching for a free db (like an updated XML or CSV file) that relates IP addresses to specific locations. I want more information than just the country. I want some sort of region or city reference, even if that ends up to be a number that makes no sense to me. Doesn't have to be super correct or always up to date either. It is just to distinguish between user groups and not to monitor or spy on them.


  • Google Analytics: tracking subdomains for a profile defined for a subdomain

    - by Alex G
    Hope you can help. We have set up a single property under our Google Analytics account. That property's default URL is set to subdomain1.example.com. We would now like to track multiple subdomains of example.com under the same property. Seems easy enough: we just need to add _gaq.push(['_setDomainName', 'example.com']); to our tracking code, right? But my question is: does it matter that a) we don't need to track www.example.com (this is tracked under a separate account and property) and b) the default URL for our property is set to subdomain1.example.com? Will either of these have any impact on data collection?
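
    For reference, a sketch of the classic (ga.js) snippet with cross-subdomain tracking switched on; the account ID is a placeholder:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXXX-1']);
        _gaq.push(['_setDomainName', 'example.com']);   // cookies set on .example.com, shared by all subdomains
        _gaq.push(['_trackPageview']);

    _setDomainName('example.com') widens the tracking cookies to .example.com, so its main effect is on visits that cross between subdomains; it does not by itself start reporting www traffic into this property.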


  • Location-Based redirection and duplication in sub-directories affecting SEO

    - by Joshua
    I currently own the website www.xyz.com. The website has a sub-directory for each of the three target countries: .../en-US/ (United States), .../es-MX/ (Mexico), and .../es-DO/ (Dominican Republic). I have two main questions about this setup. (1) Currently, the main domain/root (xyz.com) contains a blank index.php file, but I would like a user to be redirected to one of the sub-directories based on their regional location. What is the best way to accomplish this? I have looked at browser-language-based redirection, but how would I know whether to send a user to the MX or the DO site if the browser language is set to Spanish? Is there a way to detect a user's geographic location? (2) Also, the three websites are practically identical, except that they have three unique color schemes and the US site is in English while the MX and DO sites are in Spanish. My problem is that I believe Googlebot is penalizing/banning my site because the Spanish text on the MX and DO pages is nearly identical and is thus marked as duplicate content/spam. Is there a way to avoid this?
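
    On the first question, a geographic lookup (rather than the Accept-Language header) is what lets you tell Mexico and the Dominican Republic apart. A sketch of what that might look like in the blank index.php (assuming an IP-to-country lookup such as the PECL geoip extension is available; the mapping below is illustrative):

        <?php
        // Sketch only: send the visitor to a country sub-directory based on IP geolocation.
        $country = @geoip_country_code_by_name($_SERVER['REMOTE_ADDR']);
        switch ($country) {
            case 'MX':
                header('Location: /es-MX/');
                break;
            case 'DO':
                header('Location: /es-DO/');
                break;
            default:
                header('Location: /en-US/');
                break;
        }
        exit;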


  • Why does googling for keycaptcha give results about reCAPTCHA? [closed]

    - by vgv8
    EDIT: I'd like to change this title to: How do I stop Google's manipulation of the search results presented to the general public?

    I google frequently, and more and more often, when searching for a particular software product, I am given results for Google's own products instead. For example, if I google for the keyword keycaptcha over the "Past 24 hours" (after clicking "Show search tools" and then "Past 24 hours" in the left sidebar of the browser), Google's results show only results about reCAPTCHA. (An image illustrating this was uploaded later.) If, however, I put keycaptcha in quotes, the results are "correct" (well, kind of, since they are still distorted in comparison with other search engines). I have checked this over a few months from different domains at different ISPs, on different operating systems and from a dozen browsers, and the results are the same. Why is this, and how can it be corrected?

    My related posts: "How Gmail spam filter works?" and "IP addresses blacklisting".

    Update: It is impossible for me to use google.com directly, because Google's "auto-detect location" convenience always redirects me from google.com to google.ru based on my IP address. Google's help says that location auto-detection cannot be switched off because it is a very helpful feature. There is a workaround, google.com/ncr, that prevents the redirection from google.com (does anybody know what "ncr" stands for?), but even then all results are exactly the same. OK, I can search for the quoted "keycaptcha"; I am already accustomed to these Google quirks. But the question arises: why bother spending time promoting someone's product if Google shows its own interests/brands (reCAPTCHA) instead, and what can be done about it? The general user will not understand that he has been cheated and will just pick up the first (wrong) results.

    Update 2: Note that this behaviour is independent of whether I am logged into (or out of) a Google account, and of which account, browser (I tried Opera, Chrome, Firefox, IE of different versions, Safari), OS or domain. There are many such cases, but I deliberately picked one concrete, restricted example to avoid wandering between unrelated details and peculiarities. @Michael, first, that is not true, and this text contains two links to real and significant results. I also wrote that this is just one concrete example of many, based on many months of experience; these distortions happen when clicking on "Past 24 hours", "Past week", "Past month" or "Past year", with many other keywords and in other search configurations. Second, the absence of results is itself a result, and there is no point in sneakily substituting an unsolicited one for it; that is the definition of spam and scam. Third, the question is not about workarounds such as how to write search queries or which other search engines to use; the question is how to straighten out Google's results so that they stop disorienting the general public.

    Update: I don't understand: can nobody reproduce the behaviour I describe (i.e. when I click the "Past 24 hours" link in a Google search for keycaptcha, the results presented are only about reCAPTCHA)?

    Update: And the same happens for "Past week".


  • Is it a good idea to add robots "noindex" meta tags to deep, low-content pages, e.g. product model data?

    - by Cognize
    I'm considering adding robots "noindex, follow" tags to the very numerous product data pages that are linked from the product style pages in our online store. For example, each product style has a page with full text content on the product:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE

    Then many data pages with technical data for each model code are linked from the product style page:

        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-1
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-2
        http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-3

    It is these technical data pages that I intend to add the noindex tag to, as I imagine this might stop them from cannibalizing keyword authority from more important, content-rich pages on the site. Any advice appreciated.
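
    In concrete terms, the tag under discussion is just one line in the <head> of each model-data page (and only those pages); shown here for completeness, since this part is standard markup rather than anything specific to the question:

        <meta name="robots" content="noindex, follow">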

