Search Results

Search found 3118 results on 125 pages for 'fragment caching'.

Page 18/125

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) on the Cache-control headers of static assets such as logo images. Examples: YouTube Yahoo Twitter BBC But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...) What could be valid technical reasons why they would want to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason. Edit: Please make sure you understand these points before answering: Nobody uses short max-age lifetimes to allow modifying a static asset in future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle-day – and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :) Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself – when it hits its own storage limit, it just starts dropping the lowest priority items to make space for new items. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which is nothing to do with what max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point – the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)
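
    For readers who want the pattern the question takes as a given in concrete form, here is a small illustrative Python sketch (all names and the hash length are invented) of the "new URL per asset version, one-year max-age" approach:

        import hashlib

        ONE_YEAR = 31536000  # seconds

        def fingerprinted_url(path, content):
            # Embed a short content hash in the filename (e.g. logo.3ac4f1b2.png) so any
            # change to the bytes produces a brand-new URL and old cached copies stay valid.
            digest = hashlib.sha256(content).hexdigest()[:8]
            stem, dot, ext = path.rpartition(".")
            return f"{stem}.{digest}.{ext}" if dot else f"{path}.{digest}"

        def cache_headers():
            # With a unique URL per version, a very long lifetime is safe.
            return {"Cache-Control": f"public, max-age={ONE_YEAR}, immutable"}

        print(fingerprinted_url("static/logo.png", b"fake-image-bytes"))
        print(cache_headers())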

    Read the article

  • Android application Database Framework

    - by Marek Sebera
    When creating mobile (especially Android) applications, I usually run into a similar pattern for working with data. Usually I need to fetch some remote data (behind an authorization process) into a local cache, and on the next request: check networking; check the presence of the cache file; check the version of the cache file (if networking is available); get the new version and save it to the cache (if networking is available and the file is not in the cache, or is outdated). The data store is a no-SQL, JSON document-based store (and yes, I know about the Android version of CouchDB, but it doesn't fit my needs yet). The process of authorizing against the data source and the code for checking the version of the local cache are adapted to each application, but the other code (handling networking, saving the cache, handling exceptions, ...) is always the same. Is there any data store helper I can use which provides the functions described above?
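
    As a rough illustration of the workflow described above, here is a small language-agnostic sketch in Python (on Android this would be Java/Kotlin; all names and paths are invented):

        import json, os

        CACHE_DIR = "/tmp/app-cache"

        def get_document(key, remote_version, fetch_remote, have_network):
            """Return the JSON document for `key`, refreshing the cache when possible."""
            os.makedirs(CACHE_DIR, exist_ok=True)
            path = os.path.join(CACHE_DIR, key + ".json")
            cached = None
            if os.path.exists(path):                     # check presence of cache file
                with open(path) as f:
                    cached = json.load(f)
            if have_network:                             # check networking
                if cached is None or cached["version"] < remote_version:   # version check
                    cached = {"version": remote_version, "data": fetch_remote(key)}  # refresh
                    with open(path, "w") as f:
                        json.dump(cached, f)
            if cached is None:
                raise RuntimeError("offline and nothing cached for " + key)
            return cached["data"]

        # Example use with a stubbed remote fetch:
        print(get_document("profile", 3, lambda k: {"name": "demo"}, have_network=True))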

    Read the article

  • Why is the Facebook cache buggy?

    - by IAdapter
    I just started using Facebook and I see that many times when I add something to my profile and visit it later, it's not there. I bet the reason is that the page is cached and not updated very often. Is this on purpose or is it a bug? P.S. For example, I added the music I like and later I saw that I had not added it, but the next day when I visited again it was there. I saw this in two web browsers, so it's a Facebook bug. Does it have something to do with scalability?

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    The term "client" used here does not refer to the client's browser, but to a client server.

    Before-cache workflow:
    1. client makes an HTTP request -->
    2. server processes it -->
    3. server stores the parsed results into memcache for next use (cached indefinitely) -->
    4. server returns the results to the client -->
    5. client gets the result and stores it in the client's local memcache with a TTL

    After-cache workflow:
    1. another client makes an HTTP request -->
    2. memcache hit: server returns the memcached results to the client -->
    3. client gets the result and stores it in the client's local memcache with a TTL

    (TTL = time to live.) It is possible for me to know when the data is updated and to expire the relevant memcache entries accordingly. However, the pitfalls of the client-side cache TTL are: any data update before the TTL expires is not picked up by the client memcache; conversely, where there is no update, the client memcache still expires after the TTL; and the first request (or concurrent requests) after the cache TTL expires is slowed down, as it needs to repeat the before-cache workflow. Where the client requires several HTTP requests on a single web page, this can be very bad for performance. The ideal solution would be for the client to cache indefinitely until further notice. Here are three proposals for that "further notice":

    Proposal 1: Make use of HTTP headers (current implementation)
    1. client sends an HTTP request with a last-modified-time header
    2. server checks: if last data modified time == last cache time, return status 304
    3. client decides further processing based on the header
    Good: saves some parsing for the client; less data transfer.
    Bad: firing an HTTP request is still slow; the server end still needs to process lots of requests.

    Proposal 2: Consistently issue an HTTP request to check the last modified time of all data groups (sketched below)
    1. client fires an HTTP request
    2. server returns the last modified time for all data groups
    3. client compares its local last cache time with the result
    4. if a data group's last cache time < the server's last modified time, request again for that data group only
    Good: only fetch what is not up to date; fewer requests for the server.
    Bad: every web page requires an HTTP request.

    Proposal 3: Tell the client when new data is available (push)
    1. when the server end notices a change in a data group
    2. it notifies the clients of the change
    3. and helps the clients fetch the data again
    4. then the client's local memcache is reset after the data is parsed
    Good: lets the cache act/behave like a true cache.
    Bad: encourages race conditions.

    My preference is proposal 3, and something like Gearman could be ideal: where there is a change, the Gearman server sends the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy.)
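
    For illustration, a toy Python sketch of Proposal 2 above: one cheap request for per-group last-modified times, then re-fetch only the stale groups. Plain dicts stand in for the two memcache tiers; all names are invented.

        server_cache = {"news": {"data": "...", "modified": 100}}
        client_cache = {}   # group -> {"data": ..., "cached_at": server modified time}

        def server_modified_times():
            # Single endpoint listing when each data group last changed.
            return {group: entry["modified"] for group, entry in server_cache.items()}

        def server_fetch(group):
            return server_cache[group]

        def client_get(group):
            stamps = server_modified_times()                 # one small request per page
            entry = client_cache.get(group)
            if entry is None or entry["cached_at"] < stamps[group]:
                fresh = server_fetch(group)                  # re-fetch only if stale
                entry = {"data": fresh["data"], "cached_at": fresh["modified"]}
                client_cache[group] = entry
            return entry["data"]

        print(client_get("news"))          # first call: fetches and caches
        server_cache["news"] = {"data": "updated", "modified": 200}
        print(client_get("news"))          # sees the change without any TTL guesswork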

    Read the article

  • Are HTTP requests cached? [closed]

    - by nischayn22
    Many HTTP requests are sent repeatedly by browsers on almost every page load, such as the request for the jQuery .js file. Since these files are already used on so many sites, don't modern browsers keep a cache for them? I am thinking of a system where the browser has a cached copy of a .js file that is used very frequently. On a new request for the .js file, it sends the server a request for a hash of the .js file (provided the server can reply to that) and compares the returned hash with the cached copy's hash... the rest is intuitive.
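
    The hash check the question describes is close to what HTTP already does with ETag / If-None-Match revalidation; a toy Python simulation of that round trip (no real network, invented payload):

        import hashlib

        JS_BODY = b"console.log('jquery-ish payload');"

        def server(headers):
            etag = '"%s"' % hashlib.sha1(JS_BODY).hexdigest()   # server hashes the file
            if headers.get("If-None-Match") == etag:
                return 304, {"ETag": etag}, b""                  # "you already have it"
            return 200, {"ETag": etag, "Cache-Control": "max-age=86400"}, JS_BODY

        # First request: full download, client remembers the ETag.
        status, resp_headers, body = server({})
        cached_etag, cached_body = resp_headers["ETag"], body

        # Later revalidation: tiny request, tiny 304 response, cached copy reused.
        status, resp_headers, body = server({"If-None-Match": cached_etag})
        print(status, "bytes transferred:", len(body))           # 304 bytes transferred: 0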

    Read the article

  • Distributed cache and improvement

    - by philipl
    I have this question from an interview. A web service function, given x, does roughly:

        static HashMap map (a created singleton)
        if (!map.containsKey(x)) {
            perform some function to retrieve result y
            map.put(x, y);
        }
        return y;

    The interviewer asked a general question: what is wrong with this distributed cache implementation? He then asked how to improve on it, since distributed servers will have different cached key/value pairs in the map. There are simple mistakes to point out about synchronization and the key object, but what really startled me was that he thinks that moving to a database implementation solves the problem that different servers have different map contents, i.e. the situation where value x is cached on server B but not on server A, so the data has to be retrieved redundantly on server A. Does his thinking make any sense? (As I understand it, this is the basic trade-off of a distributed cache against the database model; it seems he does not understand it at all.) And what is the typical solution for the cache growth issue (weak references?) and the sync issue (not knowing which server already has the key cached - use load balancing)? Thanks
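
    For illustration, a Python sketch of the per-server synchronization fix hinted at above (it deliberately does not address the server-A-vs-server-B distribution issue, which needs a shared cache tier; all names are invented):

        import threading

        class LocalCache:
            def __init__(self):
                self._map = {}
                self._lock = threading.Lock()

            def get_or_compute(self, key, compute):
                with self._lock:                      # cheap check under the lock
                    if key in self._map:
                        return self._map[key]
                value = compute(key)                  # expensive work outside the lock
                with self._lock:
                    # Another thread may have raced us; keep whichever value landed first.
                    return self._map.setdefault(key, value)

        cache = LocalCache()
        print(cache.get_or_compute("x", lambda k: k.upper()))
        print(cache.get_or_compute("x", lambda k: "never recomputed"))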

    Read the article

  • Is the UX affected negatively by fully cacheable pages?

    - by ChocoDeveloper
    I want to have fully cacheable pages on my websites, but one cannot do that if they contain user-specific data, like the user bar or things in the UI that can change depending on the permissions the user has. So I was wondering whether it would be possible to pull everything that is user-specific via Ajax and update the UI accordingly. But I'm worried that this might be annoying for the user, and it also might be difficult to develop. What do you think? Is there a pattern or something I can follow to deal with this?
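
    One common way to split this, sketched here with Flask purely as an assumed example stack (not taken from the question): the page itself is fully cacheable, and a small uncacheable endpoint returns the user-specific bits for client-side JavaScript to merge in after load.

        from flask import Flask, jsonify, make_response, session

        app = Flask(__name__)
        app.secret_key = "dev-only"

        @app.route("/article/<int:article_id>")
        def article(article_id):
            # No user data in the page body, so any cache may store it.
            resp = make_response(f"<html>...article {article_id}, no user data...</html>")
            resp.headers["Cache-Control"] = "public, max-age=3600"
            return resp

        @app.route("/api/me")
        def me():
            # Small per-user payload fetched via Ajax; never cached by shared caches.
            resp = jsonify(name=session.get("name"), is_admin=session.get("is_admin", False))
            resp.headers["Cache-Control"] = "private, no-store"
            return resp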

    Read the article

  • Random Cache Expiry

    - by mahemoff
    I've been experimenting with random cache expiry times to avoid situations where an individual request forces multiple things to update at once. For example, a web page might include five different components. If each is set to time out in 30 minutes, the user will have a long wait time every 30 minutes. So instead, you set them all to a random time between 15 and 45 minutes to make it likely at most only one component will reload for any given page load. I'm trying to find any research or guidelines on this topic, e.g. optimal variance parameters. I do recall seeing one article about how Google (?) uses this technique, but can't locate it, and there doesn't seem to be much written about the topic.
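
    A minimal Python sketch of the jitter being described (a fixed 30-minute base with a +/-50% spread; the numbers are just the question's example, not a recommendation):

        import random

        BASE_TTL = 30 * 60          # seconds
        JITTER = 0.5                # +/- 50%, i.e. 15-45 minutes

        def jittered_ttl(base=BASE_TTL, jitter=JITTER):
            return int(random.uniform(base * (1 - jitter), base * (1 + jitter)))

        # e.g. cache.set(key, value, timeout=jittered_ttl())
        print([jittered_ttl() for _ in range(5)])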

    Read the article

  • IE browser caching and the jQuery Form Plugin

    - by Harfleur
    Like so many lost souls before me, I'm floundering in the snake pit that is Ajax form submission and IE browser caching. I'm trying to write a simple script using the jQuery Form Plugin to Ajaxify Wordpress comments. It's working fine in Firefox, Chrome, Safari, et al., but in IE, the response text is cached with the result that Ajax is pulling in the wrong comment.

        jQuery(this).ajaxSubmit({
            success: function(data) {
                var response = $("<ol>"+data+"</ol>");
                response.find('.commentlist li:last').hide().appendTo(jQuery('.commentlist')).slideDown('slow');
            }
        });

    ajaxSubmit sends the comment to wp-comments-post.php, which inelegantly spits back the entire page as a response. So, despite the fact that it's ugly as toads, I'm sticking the response text in a variable, using :last to isolate the most recent comment, and sliding it down in its place. IE, however, is returning the cached version of the page, which doesn't include the new comment. So ".commentlist li:last" selects the previous comment, a duplicate of which then uselessly slides down beneath the original. I've tried setting "cache: false" in the ajaxSubmit options, but it has no effect. I've tried setting a url option and tacking on a random number or timestamp, but it winds up being attached to the POST that submits the comment to the server rather than the GET that returns the response, and so has no effect. I'm not sure what else to try. Everything works fine in IE if I turn off browser caching, but that's obviously not something I can expect anyone viewing the page to do. Any help will be hugely appreciated. Thanks in advance! EDIT WITH A PROGRESS REPORT: A couple of people have suggested using PHP headers to prevent caching, and this does indeed work. The trouble is that wp-comments-post is spitting back the entire page when a new comment is submitted, and the only way I can see to add headers is to put them in the Wordpress post template, which disables caching on all posts at all times--not quite the behavior I'm looking for. Is there a way to set a php conditional--"if is_ajax" or something like that--that would keep the headers from being applied during regular pageloads, but plug them in if the page was called by an Ajax GET?
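
    The "if is_ajax" conditional from the edit, sketched in Python/Flask rather than PHP purely for illustration: jQuery marks its Ajax calls with an X-Requested-With: XMLHttpRequest header, and in PHP the equivalent check would read $_SERVER['HTTP_X_REQUESTED_WITH']. All route and helper names here are invented.

        from flask import Flask, request, make_response

        app = Flask(__name__)

        def is_ajax():
            return request.headers.get("X-Requested-With") == "XMLHttpRequest"

        def render_comments():
            return "<ol><li>...</li></ol>"      # stand-in for the real comment markup

        @app.route("/comments-post", methods=["GET", "POST"])
        def comments_post():
            resp = make_response(render_comments())
            if is_ajax():
                # Only Ajax responses get the anti-caching headers; normal page loads
                # keep whatever caching policy the site already has.
                resp.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
                resp.headers["Pragma"] = "no-cache"
                resp.headers["Expires"] = "0"
            return resp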

    Read the article

  • How to use data caching with a SQL database and ASP.NET & C#.NET

    - by subash
    I have a large database which is updated every now and then. The application has been developed in ASP.NET and C#.NET. I need to fetch data from the database into a GridView on a button click event. I am planning to use data caching so that I can improve the performance of the application. How can I use the data caching mechanism so that I still see the updated results from the database in the GridView?

    Read the article

  • NFS caching on Ubuntu

    - by stream
    We run a bunch of Ubuntu servers (mostly 8.04 LTS) which all mount an NFS share at /nfs. We use the NFS share primarily for two purposes: symlinking config files (such as apache vhosts), and reading & writing uploaded files. This all works great except that it makes us fully dependent on the central NFS server (which is a DRBD cluster with heartbeat failover from primary to secondary, but we've still seen issues). What we'd like is to mount the NFS share through some local caching layer which would keep any file that had previously been read available even if /nfs isn't. Writes could be disabled for this period. Searching around, it looks like cachefilesd may be an option. Unfortunately, it appears to be packaged only for Ubuntu 9.10 & 10.04. I was also looking for a FUSE-based solution which might fit the bill, but haven't found anything yet. Any suggestions would be greatly appreciated!

    Read the article

  • Bind9 as a caching resolver fails with mismatch ID on localhost but not external IP

    - by argibbs
    I'm running Ubuntu 12.04 LTS on a machine on my private network. I have bind9 installed (v9.8.1-P1) via aptitude, so it appears to have put all the bits in the right places and the service starts automatically. I plan on adding some zones later, but first I'm just trying to get it working as a caching resolver. I installed bind, configured it, and starting using it. Initially I thought it was working ok, but then I found some sites weren't being resolved. I've pinned it down to being linked to the size of the result and bind failing-over to TCP mode. So: I'm trying to find out why bind is failing when I query for domain info and the result is 512 bytes (causing a truncation and retry on TCP). Specifically it fails with ID mismatches if I point dig at localhost, but works when I query the machine's own IP (192.168.0.2). This appears to be backwards to the problem that most people have when using bind (fails on external ip, works on localhost). If I do dig @localhost google.com (which has a response of <512 bytes) then it works; I get no warnings, and plenty of output. $ dig @localhost google.com ; <<>> DiG 9.8.1-P1 <<>> @localhost google.com [snip lots of output] ;; Query time: 39 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Oct 17 23:08:34 2013 ;; MSG SIZE rcvd: 495 If I do dig @localhost play.google.com (which has a larger response) then I get back something like: $ dig @localhost play.google.com ;; Truncated, retrying in TCP mode. ;; ERROR: ID mismatch: expected ID 3696, got 27130 This seems to be standard, documented behaviour - when the UDP response is large (here 'large' == 512 bytes) it falls back to TCP. The ID mismatch is not expected though. If I do dig @192.168.0.2 play.google.com then I still get the warning about using TCP mode, but it otherwise works $ dig @192.168.0.2 play.google.com ;; Truncated, retrying in TCP mode. ; <<>> DiG 9.8.1-P1 <<>> @192.168.0.2 play.google.com [snip most of the output] ;; Query time: 5 msec ;; SERVER: 192.168.0.2#53(192.168.0.2) ;; WHEN: Thu Oct 17 23:05:55 2013 ;; MSG SIZE rcvd: 521 At the moment I've not set up any zones in my local instance, so it's just acting as a caching resolver. My options config is pretty much unchanged from standard, I've got the following set: options { directory "/var/cache/bind"; allow-query { 192.168/16; 127.0.0.1; }; forwarders { 8.8.8.8; 8.8.4.4; }; dnssec-validation auto; edns-udp-size 4096 ; allow-transfer { any; }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; And my /etc/resolv.conf is just nameserver 127.0.0.1 search .local The problem definitely seems linked to the failover to TCP mode: if I do dig +bufsize=4096 @localhost play.google.com then it works; no warning about failover to TCP, no ID mismatch, and a standard looking result. To be honest, if there was a way to force bind to use a much larger UDP buffer, that'd probably be good enough for me, but all I've been able to find mention of is max-udp-size 4096 and that doesn't change the behaviour in any way. I've also tried setting edns-udp-size 512 in case the problem is some weird EDNS issue with my router (which seems unlikely since the +bufsize=4096 flag works fine). I've also tried dig +trace @localhost play.google.com; this works. No truncation/TCP warning, and a full result. I've also tried changing the servers used in the forwarder (e.g. to OpenDNS), but that makes no difference. 
There's one last data point: if I repetitively do dig @localhost play.google.com I don't always get an ID mismatch, but sometimes a REFUSED error. I'm much more likely to get a REFUSED error if I dig the non-localhost IP (192.168.0.2) first: $ dig @localhost play.google.com ;; Truncated, retrying in TCP mode. ; <<>> DiG 9.8.1-P1 <<>> @localhost play.google.com ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 35104 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;play.google.com. IN A ;; Query time: 4 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Oct 17 23:20:13 2013 ;; MSG SIZE rcvd: 33 Any insights or things to try would be much appreciated.

    Read the article

  • Chrome browser caching

    - by Kyle B.
    I do a lot of development on my local machine and would like to start using Chrome; however, I cannot seem to do a hard refresh (ctrl+F5) or use any other key combination to get my browser to forcibly refresh all content at http://localhost. I change projects frequently in IIS and this presents a problem, because I see stylesheet and image data from my previous project with no way to get the page to reload without forcibly dumping all cache data from the settings menu. Is there another key combination I am missing, or is there a place I can turn off caching on a site-by-site basis? I prefer not to have to clear out my temporary files in the browser settings as I switch projects frequently. Thanks, Kyle

    Read the article

  • How can I exceed the 60% Memory Limit of IIS7

    - by evilknot
    Pardon if this is more stackoverflow vs. serverfault. It seems to be on the border. We have an application that caches a large amount of product data for an e-commerce application using ASP.NET caching. This is a dictionary object with 65K elements, and our calculations put the object's size at ~10GB. Problem: The amount of memory the object consumes seems to be far in excess of our 10GB calculation. BIGGEST CONCERN: We can't seem to use over 60% of the 32GB in the server. What we've tried so far: In machine.config/system.web (sf doesn't allow the tags, pardon the formatting): processModel autoConfig="true" memoryLimit="80" In web.config/system.web/caching/cache (sf doesn't allow the tags, pardon the formatting): privateBytesLimit = "20000000000" (and 0, the default of course) percentagePhysicalMemoryUsedLimit = "90" Environment: Windows 2008R2 x64 32GB RAM IIS7 Nothing seems to allow us to exceed the 60% value. See attached screenshot of taskman.

    Read the article

  • Caching of the PATH environment variable on Windows?

    - by jwir3
    I'm assisting one of our testers in troubleshooting a configuration problem on a Windows XP SP3 system. Our application uses an environment variable, called APP_HOME, to refer to the directory where our application is installed. When the application is installed, we utilize the following environment variables: APP_HOME = C:\application\ PATH = %PATH%;%APP_HOME%bin Now, the problem comes in that she's working with multiple versions of the same application. So, in order to switch between version 7.0 and 8.1, for example, she might use: APP_HOME = C:\application_7.0\ (for 7.0) and then change it to: APP_HOME = C:\application_8.1\ (for 8.1) The problem is that once this change is made, the PATH environment variable apparently still is looking at the old expansion of the APP_HOME variable. So, for example, after she has changed APP_HOME, PATH still refers to the 7.0 bin directory. Any thoughts on why this might be happening? It looks to me like the PATH variable is caching the expansion of the APP_HOME environment variable. Is there any way to turn this behavior off?

    Read the article

  • Caching without file extensions

    - by Sigurs
    I'm trying to use Varnish to show non-logged-in users a cached version of my website. I'm able to detect perfectly whether the user is logged in or out, but I can't cache pages without extensions. There is no file extension because nginx rewrites the URL to a PHP script (so caching .php does not work). For example, I'd like Varnish to cache: example.com, example.com/forum/, example.com/contact/. I have tried if (req.request == "GET" && req.url ~ "^/") { return(lookup); } if (req.request == "GET" && req.url ~ "") { return(lookup); } if (req.request == "GET" && req.url ~ "/") { return(lookup); } but nothing seems to work... any help?

    Read the article

  • How can I affect DNS caching in a PHP/Memcache application

    - by Niro
    On a very highly loaded Ubuntu/PHP web server I found that the PHP line $memcache->connect("int-aws_ec2.memcached.myapp.net", 11211); sometimes takes ~5 secs. Replacing the URL with the IP address decreases the server load from ~20 to 0. My question is: where are the settings that affect the DNS caching for this? Is it at the server level or in the memcache library? How can I change it? Additional info: Ubuntu 10.04 lucid; PHP 5.3.2-1ubuntu4.10; Apache/2.2.14 (Ubuntu); Amazon EC2. Even more info per Celada's comment: the DNS handling for the memcache server is done by Scalr (the platform I use to manage the cloud resources). They have a client located on the instances and their own DNS servers. /etc/nsswitch.conf - hosts: files dns /etc/resolv.conf: nameserver 172.16.0.23 domain ec2.internal search ec2.internal The domain is not in hosts.conf. To check if I run nscd I used /etc/init.d/nscd stop and received 'no such file', so I guess I don't run nscd. Thanks!

    Read the article

  • http proxy caching headers

    - by David Hagan
    I have a service for which I'm about to upgrade the authentication. However, I'm trying to ensure that I make the right decision about where the encryption algorithms occur. I currently have two options: option 1) the authentication module is deployed to the client as a javascript library over https and executes client-side, so that the client can POST back an encrypted string. option 2) the authentication module is kept server-side so that the client need only POST back an unencrypted string. I know that many http proxies cache/log the query-string (and therefore any query parameters), but does anyone know of any http proxies that cache the headers as well? If the headers are being cached, then I'll clearly want to encrypt the password inside the SSL encryption, because to my understanding the headers of an HTTPS request may not always be encrypted (depending on the capabilities of the browser etcetera). Can anyone shed any light on the caching of headers by http proxies? Do you have one that does, or know of one that does?

    Read the article

  • Perforce Proxy Server: Caching selective files [closed]

    - by fbrereto
    I just set up a Perforce proxy server for work. I'm noticing the cache directory is filling up very quickly -- with files I know I will never need. For example, there is a 'sandbox' directory in the depot where users keep personal branches and other work; a p4 sync is causing the p4 proxy cache to grab these user's sandboxes when I'll never need them. I would create a symbolic link for the sandbox directory to /dev/null but then I wouldn't be caching my sandbox, which I am interested in. Is there any way to tell the perforce proxy something to the effect of "if I haven't had to sync it, please don't cache it?"

    Read the article

  • Rails' FileStore with Linux Disk Caching or RAMdisk?

    - by Yo Ludke
    I have a Ruby on Rails application that stores its cached files on the filesystem (the Rails file-system cache). I was thinking about changing to the memcached store, but a short test shows there isn't a big difference in speed. From linuxatemyram.com I learned a bit about file caching. On the current machine there would be around 40-45GB of RAM left which isn't needed for the application and which could be used by the Linux disk cache for this Rails file cache store. The disk is a RAID10 system with almost 120MB disk performance. How can I tell Linux to use free RAM more deliberately and not to be shy about using it? Do you think it's necessary to adjust a sysctl value here, or would I get performance advantages from putting the file store root directory on a ramdisk? (Losing the cache during a reboot wouldn't be a problem.)

    Read the article

  • How do I turn off caching in IIS7?

    - by jammus
    Hello. I'm developing an ASP classic site under Windows 7 (form a queue, ladies). The problem is that IIS seems to be making heavy use of its cache for both static and dynamic content, which really conflicts with my 'make a small change, alt-tab, hit ctrl-F5' development style. Changes made to .asp files may take two or three refreshes to show up, whereas changes to .js files can take 20 times as many. How do I go about turning caching off on my development machine? Cheers. in b4 stop using asp classic

    Read the article

  • nginx caching per user agent

    - by Tuinslak
    I'm currently using nginx as reverse proxy with caching enabled. However, the main site has two different layouts, depending on the user-agent (mobile or not). I've tried something similar to this: # mobile users if ($http_user_agent ~* '(iPhone|iPod|mobile|Android|2.0\ MMP|240x320|AvantGo|BlackBerry|Blazer|Cellphone|Danger|DoCoMo|Elaine/3.0|EudoraWeb|hiptop|IEMobile)') { set $iphone_request '1'; } if ($iphone_request = '1') { proxy_cache mobile; } if ($iphone_request = '') { proxy_cache site; } proxy_cache_key "$scheme://$host$request_uri"; proxy_pass http://real-site.tld; However, nginx gives an error, stating proxy_cache can't be used in an if-structure. Any other way to serve from a different cache depending on the browser? Thanks, Tuinslak

    Read the article

  • Server caching problem on an ASP.NET MVC page

    - by Rita
    Hi, I have a server caching problem on ASP.NET MVC pages. The scenario is like this: I have two applications, (1) an external Website and (2) an internal Adminsite, both pointing to the same database. There is a page called EditProfile on the external Website where a registered customer can update his profile information, like first name, last name and address, etc. There is similar functionality on the internal Adminsite on a page called CustomerProfile, where the site admin can update all these fields. When the user updates the profile information from the Adminsite, those updates are not reflected back on the Website. I tried restarting the Website in IIS and that didn't help. When I tried both restarting the Website in IIS and opening a new browser, the updates did show up. I am wondering how I can get out of this caching problem without restarting the site and opening a new browser window every time. Are there any IIS settings that could help? This caching is happening on only a couple of tables, and all the updates are showing up in the database. Appreciate your responses. Thanks

    Read the article

  • Mod disk_cache: permanently caching images and disabling recurring header updates

    - by user135532
    I am trying to get mod disk_cache to permanently cache images retrieved from an image server on the webserver using ProxyPass. The image is retrieved correctly from the server and is served from the cache on further requests, but the webserver still calls the image server, causing the cached header to be updated. Because of load concerns, I need to never call the image server for a specific URL again after it has been cached once, or to extend the refresh time as far as possible. The webserver is IHS 7.0. The modules are mod_disk_cache.so, mod_cache.so and mod_proxy.so, version 2.2.8.0. The following is from my httpd.conf:

        ProxyPass /webserver/media/images/ http://imageserver.com/ws/media/images/
        # Caching pictures
        <IfModule mod_cache.c>
          <IfModule mod_disk_cache.c>
            CacheDefaultExpire 2628000
            #CacheDisable
            CacheEnable disk /webserver/media/images/
            CacheIgnoreCacheControl On
            CacheIgnoreHeaders Cookie Referer User-Agent X-Forwarded-For X-Forwarded-Host X-Forwarded-Server Accept-Language Accept Host
            CacheIgnoreNoLastMod On
            CacheIgnoreQueryString Off
            #CacheIgnoreURLSessionIdentifiers
            CacheLastModifiedFactor 10000000.1
            #CacheLock on
            #CacheLockMaxAge 5
            #CacheLockPath
            CacheMaxExpire 1576800
            CacheStoreNoStore On
            CacheStorePrivate On
            CacheDirLength 2
            CacheDirLevels 3
            CacheMaxFileSize 1000000
            CacheMinFileSize 1
            CacheRoot c:/cacheroot2
          </IfModule>
        </IfModule>

    Read the article

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: the HAProxy frontend forwards to apache2 as the default backend (or nginx; it is apache in this local test), and to the node.js app if the url has socket.io in the request AND a domain name. I have something like:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        frontend all 0.0.0.0:80
            timeout client 5000
            default_backend www_backend
            acl is_soio url_dom(host) -i socket.io   #if the request contains socket.io
            acl is_chat hdr_dom(host) -i chaturl     #if the request comes from chaturl.com
            use_backend chat_backend if is_chat is_soio

        backend www_backend
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout server 5000
            timeout connect 4000
            server server1 localhost:6060 weight 1 maxconn 1024 check   #forwards to apache2

        backend chat_backend
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout queue 50000
            timeout server 50000
            timeout connect 50000
            server server1 localhost:5558 weight 1 maxconn 1024 check   #forward to node.js app

    The problem comes when I make a request to something like www.chaturl.com/index.html: it loads perfectly but fails to load the socket.io files (www.chaturl.com/socket.io/socket.io.js). Why does it redirect to apache when it should redirect to the node.js app that serves the files? The weird thing is that if I access the socket.io file directly, after refreshing a few times, it loads, so I suppose it is "caching" the forwarding for the client when it makes the first request and reaches the apache server. Any suggestions on how this can be solved, or what I can try or look at?

    Read the article
