Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.


  • Should I rely on externally-hosted services?

    - by Mattis
    I am wondering about the dangers and difficulties of using external services like Google Chart on my production website. By external services I mean ones you can't download and host on your own server.
    (-) The Google service could potentially be down while my site is up.
    (+) I don't have to develop those particular systems for new browser technologies; hopefully Google will do that for me.
    (-) Extra latency while my site fetches the data from the Google servers.
    What else? Is it worth spending time and money to develop my own systems to be more in control of things?


  • Can I remove the visible referrer from a link?

    - by Andreas
    I use referrer info to track which of my campaigns works best. So instead of <a href="someweb.com">someweb</a> I have a link like <a href="http://someweb.com?utm_source=john&utm_medium=email&utm_content=NAME&utm_campaign=campaing">someweb</a>. Now when a user clicks "someweb", the whole URL string is shown in the address bar. Is it possible to mask/hide this somehow? Maybe via .htaccess? Thanks in advance.
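    For illustration, a minimal sketch of one way to do this, assuming the landing page runs PHP and the tracking is done server-side (if an analytics script reads the utm_* parameters client-side, redirecting like this would hide them from that script too); the log file name is made up:

        <?php
        // Hypothetical landing-page snippet: record the campaign tags, then
        // redirect to the bare URL so visitors never see the utm_* parameters.
        if (!empty($_GET['utm_source'])) {
            $entry = sprintf("%s\t%s\t%s\t%s\n",
                date('c'),
                $_GET['utm_source'],
                $_GET['utm_medium'] ?? '',
                $_GET['utm_campaign'] ?? '');
            file_put_contents(__DIR__ . '/campaigns.log', $entry, FILE_APPEND);
            // Strip the query string and reload the clean address.
            header('Location: ' . strtok($_SERVER['REQUEST_URI'], '?'), true, 302);
            exit;
        }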


  • Highly SEO optimised forum posts

    - by Tom Gullen
    Given the following forum post:
        Basics of how internals of Construct work
        I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much into it. So I know my way around programming etc... My question is, how does Construct work internally? I know it allows Python scripting, which itself is "technically" interpreted, though Python is pretty fast as far as being interpreted goes. But what about the rest? Is the executable that gets cre...
    The forum software will take the first 150 characters of the first post as the page meta description, and the title will be the thread title. All OK. So in Google it will appear as:
        Basics of how internals of Construct work
        I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much...
        http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html
    The problem (not so much with this thread, but with other ones) is that the first 150 characters don't always make the best meta description. Is it worth my time to cherry-pick threads and manually set their description/title tags so they read like:
        Internal workings of Construct 2
        Events aren't converted to any other language. The runtime is a standalone compiled EXE application, which is optimised and actually very fast. Your events...
        http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html
    The H1 on the page is still the original title, but we have overridden the title and description to look friendlier in search results. Is this advantageous, setting aside the obvious time cost?


  • Gracefully terminate a request-based service on the server

    - by Jatin
    In our web application, a lot of computation happens on the back end for each HTTP request. The computation can take anywhere from 10 seconds to 1 hour. While it runs, "Waiting..." is shown on the website for the respective user. But it can happen that a user abandons the request partway through. So what can be done on the back end so that the computation can be stopped midway to save resources? What different tactics can be applied here? Ideally, rather than killing the thread directly, a graceful termination policy would work wonders.
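    As a minimal sketch of the cooperative-cancellation idea, assuming a PHP back end (the work loop below is only a stand-in for the real computation): the worker checks between work units whether the client has disconnected and shuts down cleanly instead of being killed mid-step:

        <?php
        ignore_user_abort(false);          // let PHP notice when the client disconnects
        set_time_limit(0);                 // long-running request
        $workChunks = range(1, 600);       // stand-in for the real work units
        foreach ($workChunks as $chunk) {
            echo ' ';                      // a write is needed for the abort to be detected
            flush();
            if (connection_aborted()) {
                // stop cleanly: release locks, remove temp files, log the abort, etc.
                exit;
            }
            usleep(100000);                // stand-in for the real per-chunk computation
        }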


  • Handling SEO for infinite pages that make slow external API calls

    - by Noam
    I have an 'infinite' number of pages on my site which rely on an external API. Generating each page takes time (about 1 minute). Links on the site point to such pages, and when a user clicks one it is generated while they wait. Since I cannot pre-create them all, I am trying to figure out the best SEO approach to handle these pages. Options:
    1. Create really simple pages for the web spiders, so that only real users fetch the data and generate the full page. I'm a little afraid Google will see this as low-quality content, which might also look duplicated.
    2. Put them under a directory on my site (e.g. /non-generated/) and disallow it in robots.txt. The problem here is I don't want users to have to deal with a different URL when they want to share the page or make sense of it. I thought about redirecting real users from this URL back to the regular hierarchy, thereby 'fooling' Google into not reaching them, but again I'm not sure Google would like me for that.
    3. Let Google crawl these pages. The main problem is that I can't control the rate of the API calls, and my site would seem slower than it should from a spider's perspective (if it only crawled the already-generated pages, it would think the site is much faster).
    Which approach would you suggest?


  • Apache HTTPS ProxyPass certificate location

    - by oz1cz
    I'm trying to set up an Apache server that uses ProxyPass to pass HTTPS requests on to another server. Let's call the proxy server ALPHA and the target server BETA. ALPHA does not run HTTPS, but BETA does. I first tried using this virtual host specification on ALPHA:
        <VirtualHost *:443>
            ServerName mysite.com
            ProxyPass / https://192.168.1.105/          # BETA's IP address
            ProxyPassReverse / https://192.168.1.105/   # BETA's IP address
            ProxyPreserveHost On
            ProxyTimeout 600
            SSLProxyEngine On
            RequestHeader set Front-End-Https "On"
            CacheDisable *
        </VirtualHost>
    But when I tried this, Apache complained, saying "[error] Server should be SSL-aware but has no certificate configured [Hint: SSLCertificateFile]". I had to copy the SSL certificate from BETA to ALPHA and add these lines to the host specification on ALPHA:
        SSLEngine on
        SSLCertificateKeyFile /usr/local/ssl/private/BETA_private.key
        SSLCertificateFile /usr/local/ssl/crt/BETA_public.crt
        SSLCertificateChainFile /usr/local/ssl/crt/BETA_intermediate.crt
    Now the system works, but I have a feeling that I have done something wrong or unnecessary. I have the web site's private key and certificate lying on both ALPHA and BETA. Is that necessary? Should I have done it differently?
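    For context, a hedged sketch of why this happens: browsers make their TLS connection to ALPHA, so ALPHA has to terminate SSL and present a certificate valid for mysite.com; it does not have to be BETA's copy, and BETA's certificate only matters on the internal proxy hop. Something along these lines (file paths are made up):

        <VirtualHost *:443>
            ServerName mysite.com

            # TLS terminates here, so ALPHA needs a certificate for mysite.com
            SSLEngine on
            SSLCertificateKeyFile /usr/local/ssl/private/mysite.key
            SSLCertificateFile    /usr/local/ssl/crt/mysite.crt

            # Re-encrypt traffic on the internal hop to BETA; on newer Apache
            # versions, proxying to an IP whose certificate doesn't match may
            # also need SSLProxyCheckPeerName off
            SSLProxyEngine On
            ProxyPass        / https://192.168.1.105/
            ProxyPassReverse / https://192.168.1.105/
            ProxyPreserveHost On
        </VirtualHost>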


  • Google Author information still hasn't displayed my details in search results

    - by Jayapal Chandran
    I followed the instructions below, but I'm still not clear whether I completely understood them.
    http://www.google.com/support/webmasters/bin/answer.py?answer=1408986
    http://www.labnol.org/internet/author-profile-in-google/19775/
    I did the above last week and I did not find my picture in Google search results. First I added a Google+ link to certain web pages, and in my Google+ profile I added those pages which carried the Google+ anchor link with the rel=author tag. After updating, I used the following to verify:
    http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fvikku.info%2Fcodesnippets%2Fphp%2F&view=
    You can see that my picture appears at the right (here is a screenshot). So what am I missing? Why is it not in the search results? The author of labnol.org said it would take 3 days for my profile photo link to appear. Google has stated the following: "Note that there is no guarantee that a Rich Snippet will be shown for this page on actual search results. For more details, see the FAQ (http://knol.google.com/k/google-rich-snippets-tips-and-tricks#Frequently_Asked_Questions)." Fingers crossed.


  • What is the right way to do this in HTML: a header with a linked icon

    - by Hell Awaits
    I know how to make these examples look and behave the same, but I would like to know which is the right way to build the HTML structure.
    <a><img><h1></h1></a> - looks wrong because a block element sits inside an inline element
    <a><img></a><h1><a></a></h1> - the same a element is defined twice; also I'm not sure about markup inside headers
    Any other solutions?
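    For illustration, one common pattern (file name and link target are made up): keep the heading as the outer block and wrap both the icon and the text in a single anchor inside it. Note also that HTML5 permits an a element around block-level content, so the first variant validates there as well.

        <h1>
          <a href="/some-page">
            <img src="icon.png" alt=""> Page title
          </a>
        </h1>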


  • Web hosting locally

    - by Pradyut Bhattacharya
    I have made a website and hosted it on my local computer using a static IP. Where can I buy a domain name such as www.something.com and have it point to my static IP, so that a page like http://localhost/index.jsp can be accessed as http://www.something.com/index.jsp? Does it matter whether I run the server locally or buy a managed web hosting server from a big company if the traffic on my site is low? Thanks.


  • Python platform

    - by LazyTiberius
    I'm looking for a Python platform or environment, something like or similar to EasyPHP or XAMPP, so I can try out and learn some CMSes. I've found the Mezzanine CMS (http://mezzanine.jupo.org/) and Skeletonz (http://orangoo.com/skeletonz/). Usually I use and know the Apache environment, but Python is new to me, and I'm a noob with these two CMSes (Mezzanine and Skeletonz). My OS is Windows 7 and Windows 8. I need something easy to simulate a Python hosting environment. Thanks all for your help.


  • 301 URL rewrite loop

    - by anyvendetta
    I need to do a 301 rewrite to force all URLs to become lowercase, so I put this in .htaccess (with RewriteMap lc int:tolower in httpd.conf):
        RewriteCond %{REQUEST_URI} [A-Z]
        RewriteRule . ${lc:%{REQUEST_URI}} [R=301,L]
    Everything works just fine except for URLs with subcategories, which in this case look like /category-1256-Product-page-example.html (the number 1256 refers to a "subcategory"). When I try to access /category-1256-Product-page-example.html I get a redirect loop error. I think other rewrite rules are causing the loop, but I don't know how to fix it, because it is only these rewrite rules that don't work with the rule above:
        RewriteRule ^main-site-url/category-([0-9]*)-([-_a-zA-Z0-9]*)\.html$ /subcategories.php?idcategory_main=1&idcategory=$1&category=$2 [L]
        RewriteRule ^main-site-url/([0-9]*)-([-_a-zA-Z0-9]*)-([0-9]*)\.html$ /file.php?idcategory_main=1&idsubcategory=$1&product=$2&idproduct=$3 [L]
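    As a hedged sketch of one common way to break this kind of loop (the exact cause here isn't certain): make the lowercase redirect apply only to the original client request, not to URIs that have already been rewritten internally, and keep it above the .php rewrites:

        # Skip the redirect once an internal rewrite has already happened
        RewriteCond %{ENV:REDIRECT_STATUS} ^$
        RewriteCond %{REQUEST_URI} [A-Z]
        RewriteRule . ${lc:%{REQUEST_URI}} [R=301,L]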


  • Stop bots from crawling old links with extensions

    - by Jared
    I've recently switched to MVC3, which uses extension-less URLs, but Google and Bing have a wealth of links that they are crawling which no longer exist. So I'm trying to find out if there is a way to format robots.txt (or use some other method) to tell Google/Bing that any link that ends in an extension isn't a valid link. Is this possible? On pages that I'm concerned a user may have saved as a favorite, I'm displaying a 404 page that lists the links to follow once they are redirected to the new page (I decided not to just redirect them, as I don't want to maintain the redirects forever). For Google's/Bing's sake I do have the canonical tag in the header.
        User-agent: *
        Allow: /
        Disallow: /*.*
    EDIT: I just added the third line (in the text above) and it APPEARS to do what I want: allow a path, but disallow a file. Can anyone confirm this?
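    For reference, a hedged variant of the same idea: Google and Bing both understand the non-standard * and $ wildcards in robots.txt, so individual extensions can be blocked explicitly instead of relying on the broad /*.* pattern (the extensions below are only examples):

        User-agent: *
        Disallow: /*.aspx$
        Disallow: /*.html$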


  • Choosing a CSS grid/framework

    - by jonallard
    There are many grids and frameworks to choose from. A Google search for CSS frameworks will return a dozen articles that themselves list a number of frameworks to choose from. When it comes to choosing one, it's easy to get lost without having an intimate knowledge of all of them. What are the main factors that go into choosing a CSS framework, and how will those choices map to certain frameworks? More generally, how does one choose a CSS framework?
    Note 1: I'm using "grid" and "framework" almost interchangeably here, but there is probably one I should use over the other. Corrections on this are welcome.
    Note 2: I am well aware that some choices will depend on taste and, accordingly, this question can turn into a "best of" contest/subjective topic. I'm trying to keep it as answerable as possible, as I'm pretty sure many have this problem of choosing a framework and an answer would benefit the community. As such, improvements to this question are welcome rather than just closing it.


  • Set up development site on another server/host

    - by Ofeargall
    I'm developing a site for a client. They've got a site now that's hosted at hosting.com. I'm going to move them to my VM hosting solution at edge web, but I want to run some tests and have the client approve the site before changing the name servers to the new site/hosting location. How do I make this happen? I'm running Red Hat/Apache on Linux for the edge web hosting. I don't have control of the domain name (i.e. the client controls that right now). Edgeweb has set up a DNS zone for the domain name so that when the time comes to switch we're ready to go. I'm a web developer and I understand the technologies that make a user experience 'work', but I'm unfamiliar with the server jargon and all that, so please be patient. Thanks in advance.
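    One low-risk way to do this kind of preview, sketched with a made-up IP address and domain: point the domain at the new server only on the machines doing the review, via their local hosts file, so the rest of the world keeps resolving to the old host until the name servers are actually switched:

        # Add to /etc/hosts (Mac/Linux) or C:\Windows\System32\drivers\etc\hosts
        # on the reviewing machine; remove the line after the client approves.
        203.0.113.25    clientdomain.com www.clientdomain.com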


  • Best way to prevent Google from indexing a directory [duplicate]

    - by Gkhan14
    This question already has an answer here: "Stopping Google index some web pages" (5 answers).
    I've researched many methods for preventing Google and other search engines from crawling a specific directory. The two most popular ones I've seen are:
    1. Adding it to the robots.txt file: Disallow: /directory/
    2. Adding a meta tag: <meta name="robots" content="noindex, nofollow">
    Which method would work best? I want this directory to remain "invisible" to search engines so it does not affect any of my site's ranking. In other words, I want this directory to be neutral/invisible and "just there", not affecting any ranking. Which method would be the best way to achieve this?
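    Worth noting alongside those two, as a hedged sketch (assumes Apache with mod_headers): robots.txt blocks crawling, but a blocked URL can still end up indexed if it is linked from elsewhere, while the meta tag is only seen if the page can be crawled; a third option is an X-Robots-Tag response header set from the directory's own .htaccess, which also covers non-HTML files:

        # In /directory/.htaccess - keep the directory crawlable (not blocked
        # in robots.txt) so engines can actually see this header
        Header set X-Robots-Tag "noindex, nofollow"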


  • Can I enable GZIP on Godaddy?

    - by Dave
    One of my sites lives on GoDaddy's bottom-of-the-line cheap hosting. I have the correct code in my .htaccess file, but it's not compressing because mod_deflate is not loaded. How do I enable that? The best I've found is this article, which suggests I use PHP to zip everything (which is going to be more work than just changing hosting companies): "Add the following code to the very top of your Web pages, above the DOCTYPE:"
        <?php
        if (substr_count($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip')) {
            ob_start("ob_gzhandler");
        } else {
            ob_start();
        }
        ?>
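    As a possible shortcut, a hedged sketch (whether the host honours it depends on how PHP is configured there): PHP's own zlib output compression can be switched on in a shared include that runs before any output, instead of pasting the ob_gzhandler block into every page:

        <?php
        // Must run before any output is sent
        if (!ini_get('zlib.output_compression')) {
            ini_set('zlib.output_compression', 'On');
        }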


  • My site works with www.example.com but not example.com

    - by toocool
    This is the site that I am going to develop: http://www.juve-news.com/. It works like this, but it doesn't when I try to open it without the www prefix; it gives me a 400 Bad Request. I have used other web hosts before; now I am trying a new one, and I have to add some DNS entries like in the picture here: http://cloudcontrol.com/developers/documentation/add-ons/aliases/. I don't know what I have done wrong. If anyone knows what the problem could be, please give me a tip.


  • Web Hosting Backup/Disaster Recovery Plan - Which Company?

    - by Harry Muscle
    I've been asked to look after consolidating all of our various company websites onto one host and also to provide a disaster recovery plan in case the chosen host goes down, goes out of business, etc. We're most likely going to go with HostGator as our chosen host; however, I'm not sure who to pick as our backup host. HostGator uses cPanel and can provide regular full (i.e. including configuration) backups of all the sites we host. Ideally I'm looking for a solution where we can provide these backups to another company and, within a short period of time, they restore all the sites onto their servers and we're back up and running. The whole disaster recovery process has to be fairly straightforward in terms of what we need to do in case I am unavailable to assist and no one else overly technical is available (i.e. "take these backup files, send them to this company, and ask them to do this"). Any suggestions on which company would be a good choice for this backup solution would be highly appreciated. Thanks, Harry


  • DNS slows down on development environment

    - by Sequenzia
    I have a local development environment set up on my Mac. I am running an Ubuntu web server inside a VirtualBox VM. I set up a hosts file entry on my Mac that points my dev site to the IP of the Ubuntu virtual server. Everything works well except that a lot of the time (not all of it) it takes more than 5 seconds to load a page. I used Firebug to track down where the problem is, and when it's slow, the DNS part of my request is taking over 5 seconds. Like I said, it's not all the time; sometimes it resolves and loads the page within milliseconds. The same page will load super fast on one click and then take over 5 seconds the next time. It's really slowing me down and I am not sure what is causing it.


  • Bizarre image loading problem from apache2

    - by NateDSaint
    Users have complained a few times about seeing a bizarre set of pink or green stripes on our webpage. At first I thought there was a rash of video card outages, but then someone sent me a screenshot from their browser (IE8). I later saw the same thing, but with slightly different colors, in Chrome. Users have experienced this on their iPads and iPhones (iOS Safari) as well. Because I've optimized the site to cache images, the bad image stays around until you clear your cache, so once you do, it resolves itself. My assumption is that the transmission of the image is being cut off mid-stream and then staying that way, but I can't for the life of me figure out why. Here's what I've checked:
    Header length is being sent, and transmission looks okay (wget sample below):
        wget http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        --2012-04-05 08:46:00--  http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        Resolving www.superiorlivestock.com (www.superiorlivestock.com)... [ip redacted]
        Connecting to www.superiorlivestock.com (www.superiorlivestock.com)|[ip redacted]|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 45926 (45K) [image/jpeg]
        Saving to: `wallbg2.jpg'
    Images are not being served gzipped (Apache conf below):
        SetOutputFilter DEFLATE
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
    The site is www.superiorlivestock.com, and here's a sample of the bad page load (screenshot not included). Is there something obvious I'm missing? Am I saving my images in the wrong format somehow?


  • Software for video subscription service

    - by Clinton Blackmore
    I'd like to sell instructional videos over the web. Primarily, I'd like users to subscribe to the site and get access to videos over the internet. Secondarily, I might sell DVDs for those who have poor internet connections or would like a physical copy, or possibly eBooks and the like in the future. Regarding the subscriptions:
    I'd like a system that automatically sends out e-mails when it is time to renew.
    I'd like to be able to offer free trials.
    Users without a free trial or subscription should not be able to access the content.
    Incidentally, I plan to host videos on my current web host and move them to a CDN when volume (and capital) make this a good idea. While I have no intention of going crazy with DRM, it seems expedient not to link directly to the files; how can I link to them indirectly? It would be nice to support multiple payment processors; specifically, I'd like to avoid a PayPal-only approach. Are there any web applications (or plugins) you'd recommend for something like this? While I've set up and administered several web technologies, I've never done anything with e-commerce. I see there are possibilities like osCommerce, one friend recommends using WordPress with plugins, and it really appears that for any given CMS you can graft on components like this, although I imagine that not all are created equal. As I'm not tied to a particular web application (and, while open source software that can run on a LAMP [P = Perl, Python, PHP] stack is preferable), I'd like to make a good choice at the beginning.
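    On the "link to them indirectly" point, a minimal sketch of the usual approach, assuming PHP and made-up paths and session keys: keep the video files outside the web root and serve them through a script that checks the subscription first, so the real file location is never exposed (a real deployment would also want HTTP range support so players can seek):

        <?php
        session_start();
        if (empty($_SESSION['subscription_active'])) {  // hypothetical auth flag
            http_response_code(403);
            exit('Subscription required.');
        }
        $name = basename($_GET['video'] ?? '');         // strip any path components
        $path = '/srv/videos/' . $name;                 // outside the document root
        if ($name === '' || !is_file($path)) {
            http_response_code(404);
            exit;
        }
        header('Content-Type: video/mp4');
        header('Content-Length: ' . filesize($path));
        readfile($path);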


  • Ubuntu server MySQL remote access from MySQL Workbench

    - by goodseller
    I have a newly installed Ubuntu server running MySQL. After the basic config, I changed the my.cnf file and commented out the bind_address line. I can start the server, and I added an iptables rule for port 3306. I also added privileges on the MySQL server as follows:
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'P@ssw0rd' WITH GRANT OPTION;
        FLUSH PRIVILEGES;
        exit
    However, after connecting from MySQL Workbench, it shows no databases, although the login seems to have worked. Has anyone faced this, or can anyone help? Thanks!
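    Separately, a hedged sketch (the user name, host range and password below are made up): rather than exposing root to every host, the more usual setup once connectivity works is a dedicated remote account limited to one schema:

        CREATE USER 'devuser'@'192.168.1.%' IDENTIFIED BY 'S0mePassw0rd';
        GRANT ALL PRIVILEGES ON myapp.* TO 'devuser'@'192.168.1.%';
        FLUSH PRIVILEGES;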


  • How do I handle having too many links on a webpage because of my menu

    - by RandomBen
    I am developing a website that has a drop-down menu at the top of it. The menu has around 100 links in it that are repeated on every page. Every page also has some number of links below the menu that may or may not be in the menu itself. My issue is that Google says they generally don't like pages with more than 100 links on them. Is there any way to change the links in the menu so that they no longer "count" towards my maximum of 100 links? It seems like there should be an easy way to do this, but there really doesn't seem to be: rel=nofollow still counts towards the number of links on the page, at least according to Google, so what other options do I have? I looked into where the 100 comes from and found that it used to be stated here: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35769#2, but that is no longer the case. I found a more definitive and frankly muddier answer here: http://www.seomoz.org/blog/questions-answers-with-googles-spam-guru, from Matt Cutts in 2007. Long story short, in 2007 they still felt 100 links was a good number, but they stated you could go far beyond that; in fact, they said that pages with high PageRank could have 200-300. It did sound like having many links could reduce the PageRank of the page holding all of the links, or possibly of all the items linked to. Also, I know IIS7's SEO Toolkit 1.0 suggests that pages should have no more than 250 links.


  • AdWords traffic shows as 'not set' in Google Analytics

    - by sam
    In Google Analytics, if I go to Traffic Sources > Search > Paid, all I get is "not set" instead of the usual list of keywords that people have used to find the ad. This makes it really difficult to understand what's going on in the campaign. I've gone into AdWords and turned on auto-tagging, but I still have the same problem. Any idea how I can fix this so that under the Paid tab I get the phrases people have used to find my ad in PPC?


  • How to handle gender and sexual orientation on a form with select boxes more inclusively?

    - by Drew
    The existing question and its answers are not satisfying when one wants to be more inclusive. Gender as Male, Female or No Answer works for some sites, but not others. Taking the view that gender is not the same as sex (which is assigned at birth based on bodily characteristics), would it be more inclusive to include these two options: Transgender Male and Transgender Female? So instead, would more inclusive options be the following?
    Gender Identity: Male, Female, Transgender Male, Transgender Female, No Answer
    Sexual Orientation: Straight, Lesbian, Gay, Bisexual, Asexual, No Answer
    Gender Expression could also be included, but I think most people would find that too confusing (it is defined on GLAAD's website, linked below). I'm going off what I've read in GLAAD's Transgender Glossary of Terms. Thanks!

