Search Results

Search found 6638 results on 266 pages for 'cache coherence'.

Page 30 of 266

  • Firefox not honoring must-revalidate cache headers returned by jQuery.ajax() request

    - by Oliver Weichhold
    UPDATE 1: Judging by this thread I am not the only one having this problem in FF 12 and only in 12.
    UPDATE 2: The problem does not seem to be limited to Ajax requests. From the looks of it everything that makes it into Firefox 12's cache will be fetched from there. No matter what. The server can specify cache control headers all day long. Bummer!
    What I'm trying to achieve is the following behavior:
    - Browser may cache the response without revalidating for up to 5 minutes
    - I don't care if the browser revalidates on every request (both Chrome and IE9 do, for example)
    - When the expiration is up the browser MUST revalidate (which in my case will result in fresh data)
    Chrome and IE9 exhibit the desired behavior when issuing a jQuery.ajax() request with ifModified: true and cache: true, while Firefox 12 never revalidates, which poses a serious problem. These are the actual response headers:
        HTTP/1.1 200 OK
        Server: nginx
        Date: Sun, 03 Jun 2012 07:13:43 GMT
        Content-Type: text/javascript; charset=UTF-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        Vary: Accept-Encoding
        Cache-Control: private, must-revalidate, max-age=300
        Last-Modified: Sun, 03 Jun 2012 07:07:13 GMT
        Content-Encoding: gzip
    Any suggestions?
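    (As a side note, the revalidation the question asks for can be checked outside the browser with a conditional GET. Below is a minimal Python sketch of that check; the URL is a placeholder, not the asker's endpoint, and a server honoring Last-Modified should answer 304 when the content has not changed.)
        # Sketch: conditional GET revalidation by hand.  The URL is a placeholder.
        import urllib.request
        import urllib.error

        url = "http://example.com/data.js"  # hypothetical resource

        # First request: remember the validator the server sent.
        with urllib.request.urlopen(url) as resp:
            last_modified = resp.headers.get("Last-Modified")

        # Revalidation: a conditional GET should yield 304 Not Modified when the
        # content is unchanged, instead of a full 200 response with a new body.
        req = urllib.request.Request(url, headers={"If-Modified-Since": last_modified or ""})
        try:
            urllib.request.urlopen(req)
            print("200 OK - server sent a new copy")
        except urllib.error.HTTPError as err:
            if err.code == 304:
                print("304 Not Modified - the cached copy can be reused")
            else:
                raise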

    Read the article

  • Hibernate Distributed Cache

    - by DD
    Hi, I'm looking to set up Hibernate with a distributed cache, where I have one application writing to the DB and another one reading from it. Is there an easy way to notify the reading application when the writing one has written through Hibernate? The distributed cache will invalidate the cache, but I need the reading application to know a change has been made so it can refresh its data immediately. Thanks, D

    Read the article

  • Rails passenger production cache definition

    - by mark
    Hi I'm having a bit of a problem with storing data in the Rails cache under production. What I currently have is this, though I have been trying to work this out for hours already:
        # production.rb
        config.cache_store = :mem_cache_store
        if defined?(PhusionPassenger)
          PhusionPassenger.on_event(:starting_worker_process) do |forked|
            if forked
              Rails.cache.instance_variable_get(:@data).reset
            end
          end
        end
    I am using a cron job to (try to) save remote data to the cache for display. It is logged as being written to the cache but reportedly null. If anyone could point me toward a decent current tutorial on the subject or offer guidance I'd be extremely grateful. This is really, really frustrating me. :(

    Read the article

  • Cache in asp.net

    - by newperson
    Cache.Insert("lstDownload", GetListDownload(), null, DateTime.Now.AddDays(1), TimeSpan.Zero); when will cache be exprired? what will we receive when cache expired? http://stackoverflow.com/questions

    Read the article

  • System.Web.Caching vs. Enterprise Library Caching Block

    - by ESV
    For a .NET component that will be used in both web applications and rich client applications, there seem to be two obvious options for caching: System.Web.Caching or the Ent. Lib. Caching Block. What do you use? Why?
    System.Web.Caching
    Is this safe to use outside of web apps? I've seen mixed information, but I think the answer is maybe-kind-of-not-really.
    - a KB article warning against 1.0 and 1.1 non web app use
    - The 2.0 page has a comment that indicates it's OK: http://msdn.microsoft.com/en-us/library/system.web.caching.cache(VS.80).aspx
    - Scott Hanselman is creeped out by the notion
    - The 3.5 page includes a warning against such use
    - Rob Howard encouraged use outside of web apps
    I don't expect to use one of its highlights, SqlCacheDependency, but the addition of CacheItemUpdateCallback in .NET 3.5 seems like a Really Good Thing.
    Enterprise Library Caching Application Block
    - other blocks are already in use so the dependency already exists
    - cache persistence isn't necessary; regenerating the cache on restart is OK
    Some cache items should always be available, but be refreshed periodically. For these items, getting a callback after an item has been removed is not very convenient. It looks like a client will have to just sleep and poll until the cache item is repopulated.
    Memcached for Win32 + .NET client
    What are the pros and cons when you don't need a distributed cache?

    Read the article

  • cache_money only writing to memcached on creates and updates, and seemingly never looking in the cache

    - by Shane Liebling
    I seem to be having some extremely odd cache_money interactions. When I am on the console, and I create a new instance of a class and save it I see the cache misses and cache stores on my memcached console output. Then when the create finishes I see a bunch of cache deletions. If I then try to do any kind of find for the newly created object (or any other objects for that matter) I never see any cache access. This is highly confusing. I could kind of understand if all finds never hit the cache (though that in and of itself would be an issue requiring investigation), but finds do seem to hit the cache when the object is being created (checking for associations and such). Anyone have this experience in the past at all? Any thoughts? AFAIK there isn't really much in the way of configuration options for cache_money, and it certainly doesn't seem like there are any that would be on by default and be creating these kinds of symptoms. My cache_money config is basically straight out of the docs. Any help would be greatly appreciated.

    Read the article

  • "no block given" errors with cache_money

    - by emh
    I've inherited a site that in production is generating dozens of "no block given" exceptions every 5 minutes. The top of the stack trace is:
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:42:in `add'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:33:in `get'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:22:in `call'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:22:in `fetch'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:31:in `get'
    So it appears that the problem is in the cache-money plugin. Has anyone experienced something similar? I've cut and pasted the relevant code below -- can anyone more familiar with blocks discern any obvious problems?
        11  def fetch(keys, options = {}, &block)
        12    case keys
        13    when Array
        14      keys = keys.collect { |key| cache_key(key) }
        15      hits = repository.get_multi(keys)
        16      if (missed_keys = keys - hits.keys).any?
        17        missed_values = block.call(missed_keys)
        18        hits.merge!(missed_keys.zip(Array(missed_values)).to_hash)
        19      end
        20      hits
        21    else
        22      repository.get(cache_key(keys), options[:raw]) || (block ? block.call : nil)
        23    end
        24  end
        25
        26  def get(keys, options = {}, &block)
        27    case keys
        28    when Array
        29      fetch(keys, options, &block)
        30    else
        31      fetch(keys, options) do
        32        if block_given?
        33          add(keys, result = yield(keys), options)
        34          result
        35        end
        36      end
        37    end
        38  end
        39
        40  def add(key, value, options = {})
        41    if repository.add(cache_key(key), value, options[:ttl] || 0, options[:raw]) == "NOT_STORED\r\n"
        42      yield
        43    end
        44  end

    Read the article

  • APC not working as expected?

    - by Alix Axel
    I've the following function:
        function Cache($key, $value = null, $ttl = 60)
        {
            if (isset($value) === true)
            {
                apc_store($key, $value, intval($ttl));
            }
            return apc_fetch($key);
        }
    And I'm testing it using the following code:
        Cache('ktime', time(), 3); // Store
        sleep(1);
        var_dump(Cache('ktime') . '-' . time()); echo '<hr />'; // Should Fetch
        sleep(5);
        var_dump(Cache('ktime') . '-' . time()); echo '<hr />'; // Should NOT Fetch
        sleep(1);
        var_dump(Cache('ktime') . '-' . time()); echo '<hr />'; // Should NOT Fetch
        sleep(1);
        var_dump(Cache('ktime') . '-' . time()); echo '<hr />'; // Should NOT Fetch
    And this is the output:
        string(21) "1273966771-1273966772"
        string(21) "1273966771-1273966777"
        string(21) "1273966771-1273966778"
        string(21) "1273966771-1273966779"
    Shouldn't it look like this:
        string(21) "1273966771-1273966772"
        string(21) "-1273966777"
        string(21) "-1273966778"
        string(21) "-1273966779"
    I don't understand, can anyone help me figure out this strange behavior?

    Read the article

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags, ...) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load.
    I am thinking of doing a brute force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night.
    Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution built from a configuration of open source technologies. Thanks
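    (For the brute-force traversal itself, a very small script is usually enough. Below is a minimal Python sketch under the assumption that the URL list is known in advance; the host and paths are placeholders, not the asker's site.)
        # Sketch: request every known URL once so the HTTP cache in front of the
        # application server (Squid, mod_disk_cache, ...) stores a warm copy.
        # The URL list is a placeholder for a real site map.
        import urllib.request

        urls = [
            "http://app.example.com/",
            "http://app.example.com/products",
            "http://app.example.com/products/1",
        ]

        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    resp.read()  # pull the full body through the cache
                    print(resp.status, url)
            except OSError as err:
                print("failed", url, err)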

    Read the article

  • How do I avoid having JSONP returns cached in an HTML5 offline application?

    - by Kent Brewster
    I had good luck with cached offline apps until I tried including data from JSONP endpoints. Here's a tiny example, which loads a single movie from the new Netflix widget API:
        <!DOCTYPE html>
        <html manifest="main.manifest">
        <head>
            <title>Testing!</title>
        </head>
        <body>
            <p>Attempting to recover a title from Netflix now...</p>
            <script type="text/javascript">
            function ping(r) { alert('API reply: ' + r.catalog_title.title.regular); }
            var cb = new Date().getTime();
            var s = document.createElement('SCRIPT');
            s.src = 'http://movi.es/7Soq?v=2.0&output=json&expand=widget&callback=ping&cacheBuster=' + cb;
            alert('SCRIPT src: ' + s.src);
            s.type = 'text/javascript';
            document.getElementsByTagName('BODY')[0].appendChild(s);
            </script>
        </body>
        </html>
    ... and here's the contents of my manifest, main.manifest, which contains no files and is only there so my browser knows to cache the calling HTML file:
        CACHE MANIFEST
    Yes, I've confirmed that my server is sending the manifest down with the correct content type, text/cache-manifest. The app works fine--meaning both alerts show--the first time I run it, but subsequent runs, even with the attempt at cache-busting in line 10, seem to be attempting to load the script from cache no matter what the query string is. I see the alert showing the script source, but the callback never fires. If I remove the manifest link from line 2 and reset my browser--being Safari and the iPhone Simulator--to clear cache, it works every time. I've also tried alerting the number of SCRIPT tags in the page, and it's definitely seeing both the existing and dynamically-created tag in all cases.

    Read the article

  • Caching static content from Adobe, Microsoft, etc

    - by Tim
    I'm currently running the Apple SUS on a Mac OS X Server in a small office environment. It works well for Apple updates, but I'm still stuck with either manually downloading and installing Adobe/Microsoft updates on each computer or running them through a Squid cache, with the blind faith that Squid will keep the files I actually want to stay cached. What is the best way to cache updates locally for applications like the Adobe Updater or Microsoft AutoUpdate? Ideally cached in such a way that I can tell which files I do or do not have cached. It would also be nice to be able to cache things for other software like Firefox and Sparkle-enabled apps, but these are usually small enough to ignore.

    Read the article

  • Static Content Caching in Sticky Sessions?

    - by Ravi
    Can we do static content caching on sticky-session servers? We use SqlStateServer to store the user's session. Right now we are doing performance tuning in our application, so we decided to cache the static content (images, CSS, JS) for the application so that it loads faster. Is it good to cache static content with sticky sessions? If so, can anyone give me some links where I can read about it? Right now I have made the following settings in my web.config file:
        <staticContent>
            <clientCache cacheControlCustom="public" cacheControlMode="UseMaxAge" cacheControlMaxAge="500.00:00:00" />
        </staticContent>
    Is this good code? Will it affect our sticky-session environment? My goal is to cache the static images, CSS, and JS for the application.

    Read the article

  • How do I tell memcache to ignore the django admin page?

    - by Chris
    I'm running memcache in front of Django without any explicit configuration in my code, i.e. nothing more than
        MIDDLEWARE_CLASSES = (
            'django.middleware.cache.UpdateCacheMiddleware',
            ...
            'django.middleware.cache.FetchFromCacheMiddleware',
        )
    and
        CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
    in my settings.py. This works great, in fact so great that it's caching my admin page, leaving me no way to moderate live actions on the site until the cache refetches the data. Is there a regex I can throw somewhere to let memcached know to leave my admin page alone? (I'm also using nginx and gunicorn.)
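    (One common pattern, sketched here rather than prescribed, is to mark admin responses as uncacheable before UpdateCacheMiddleware stores them, using Django's add_never_cache_headers helper. The class name below is made up for illustration; listed after UpdateCacheMiddleware in MIDDLEWARE_CLASSES, its process_response runs first.)
        # Sketch: keep the site-wide cache from ever storing /admin/ responses.
        # "NeverCacheAdminMiddleware" is a hypothetical name for illustration.
        from django.utils.cache import add_never_cache_headers

        class NeverCacheAdminMiddleware(object):
            def process_response(self, request, response):
                if request.path.startswith('/admin/'):
                    # Sets Cache-Control: max-age=0, so UpdateCacheMiddleware
                    # skips this response instead of writing it to memcached.
                    add_never_cache_headers(response)
                return response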

    Read the article

  • Free / Cached / Available memory on Linux

    - by pkoraca
    I have read that Linux uses free memory for caching, to make the system faster. However, both Nagios and the Paessler PRTG monitoring system show me that my memory usage is critical. I could change the Nagios mem_usage script to sum free and cached memory, but would that be correct information? I doubt that they misunderstood Linux memory usage.
    Let's say I have 8 GB RAM: 5 GB are used, 2 GB are cached, and I have 1 GB of free memory. Should the real available memory be free + cached (3 GB)? If some new application needed an additional 3 GB of RAM, could it take everything from cache and free without using swap, or is there a minimum that should stay in cache?
    Real example:
        $ cat /proc/meminfo
        MemTotal:       5984256 kB
        MemFree:         137052 kB
        Buffers:         140484 kB
        Cached:         3439616 kB
        SwapCached:         244 kB
        Active:         3148824 kB
        Inactive:       2341768 kB
        ...
    My monitoring tools show that I have 137 MB of free RAM, yet I have ~3.5 GB in cache. Thanks!
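    (For reference, summing free, buffers, and cached memory is the same approximation older tools reported as "-/+ buffers/cache" before the kernel exposed MemAvailable. A small Python sketch of that calculation follows; note it slightly overestimates, because parts of Cached, such as tmpfs pages, cannot be reclaimed.)
        # Rough "available" memory as free + buffers + page cache, read from
        # /proc/meminfo.  This is an approximation, not an exact figure.
        def meminfo():
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, rest = line.split(":", 1)
                    info[key] = int(rest.strip().split()[0])  # values are in kB
            return info

        m = meminfo()
        available_kb = m["MemFree"] + m["Buffers"] + m["Cached"]
        print("approx. available: %.1f GB" % (available_kb / 1024.0 / 1024.0))
        # With the numbers from the question:
        # 137052 + 140484 + 3439616 kB = 3717152 kB, i.e. roughly 3.5 GB.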

    Read the article

  • Can I use an SD card as cache instead of a Solid State Drive?

    - by user654628
    I just installed a solid state drive a few days ago and I have been reading about how to preserve its lifespan. I am running Windows 8 and my SSD has 256 GB of storage. I am using a laptop and cannot carry an external hard drive connected to it to hold cache, temp files, etc., so I was wondering if an SD card would work. The idea is to use the SD card to hold cache, temp files and maybe index files from Windows. Would this work and be effective (since I believe SD cards will also wear out)?

    Read the article

  • Cache Busting and Include Files for Nginx

    - by Vince Kronlein
    In Apache you can use the following to cache-bust CSS and JS files and serve them as a single file with Apache's Include mod:
        <FilesMatch "\.combined\.js$">
            Options +Includes
            AddOutputFilterByType INCLUDES application/javascript application/json
            SetOutputFilter INCLUDES
        </FilesMatch>
        <FilesMatch "\.combined\.css$">
            Options +Includes
            AddOutputFilterByType INCLUDES text/css
            SetOutputFilter INCLUDES
        </FilesMatch>
        <IfModule mod_rewrite.c>
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]
        </IfModule>
    I know this is possible with nginx but I can't seem to get the syntax correct.
    -- EDIT --
    Adding some code. The only piece I have thus far is:
        location ~* (.+)\.(?:\d+)\.(js|css)$ {
            ssi on;
            try_files $uri $1.$2;
        }
    What I'm looking for is to be able to combine all JS and CSS files into single files using the combined keyword, with a number for cache busting:
        style.combined.100.css
        javascript.combined.100.js

    Read the article

  • Database Server Hardware components (order of importance): CPU speed vs CPU cache vs RAM vs disk

    - by nulltorpedo
    I am new to the database world and would like to know which hardware specs are crucial when it comes to database performance. I have searched the internet and found this so far (in order of decreasing importance):
    1) Hard disk: get an SSD, basically (much more IOPS than spinners)
    2) Memory: get as much as you can afford
    3) CPU: for the same money spent, prefer a larger cache size over speed
    Are these findings sensible?
    EDIT: I would like to focus on CPU speed vs. CPU cache size.
    EDIT 2: The database is used to store some combination of ints and int arrays with a few text fields. There are a lot of SELECT queries looking for existing entries; if an entry is not found, it is inserted. I would say most of the processing is trying to find a match across a table with 200 columns and 20k rows. The INSERT statements are very few.
    EDIT 3: Also, we have a lot of views (basically SELECT queries).
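    (As a rough sanity check on the working set, assuming, purely as a guess, about 8 bytes per column on average, the hot table described above is tiny by database standards, which suggests memory rather than disk dominates once the data has been read.)
        # Back-of-the-envelope size of the table from the question, assuming
        # (as a guess, not a fact from the question) ~8 bytes per column.
        rows = 20_000
        columns = 200
        bytes_per_column = 8

        table_bytes = rows * columns * bytes_per_column
        print("approx. table size: %.0f MB" % (table_bytes / 1e6))  # ~32 MB
        # ~32 MB fits easily in RAM, and much of it can sit in a large CPU L3
        # cache, so memory capacity usually matters more for this workload
        # than raw disk speed once the data is cached.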

    Read the article

  • Import emails from Claws IMAP cache

    - by calandoa
    I am trying to import an IMAP account composed of many folders from Claws Mail's internal cache. Claws is unfortunately unable to export all the folders by selecting the root account. Looking at the internal Claws cache folder, each mail is a plain text file named as follows:
        base_path/My Account/Folder ABC/1
        base_path/My Account/Folder ABC/2
        base_path/My Account/Folder ABC/3
        base_path/My Account/Folder ABC/4
        base_path/My Account/Folder DEF/1
        base_path/My Account/Folder DEF/2
        base_path/My Account/Folder DEF/3
        base_path/My Account/Folder X/etc...
    I tried to import this structure with different mail readers like KMail and Balsa, but each import failed. I just want all these mails easily accessible and readable. Which tool on Linux can I use to import such a structure?
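    (Since each cached file is apparently a complete message, one possible route, sketched here with Python's standard mailbox module and placeholder paths, is to repack the tree into a single Maildir that most Linux mail clients can open. It assumes every regular file under the cache root really is an RFC 822 message and it flattens the folder structure; back up the cache first.)
        # Sketch: repack Claws' one-file-per-message cache into a Maildir.
        # Paths are placeholders; adapt them before running.
        import email
        import mailbox
        from pathlib import Path

        src = Path("base_path/My Account")          # Claws cache root (placeholder)
        dst = mailbox.Maildir("converted_maildir", create=True)

        for msg_file in sorted(src.rglob("*")):
            if not msg_file.is_file():
                continue
            with open(msg_file, "rb") as f:
                dst.add(email.message_from_binary_file(f))

        print("imported", len(dst), "messages")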

    Read the article

  • How to clear Outlook's Exchange cached address book information

    - by Assaf
    When a new email address is added to our company's Exchange server it doesn't show up immediately on my Outlook, and I suspect that it's because of the "cached mode". When I disable cached mode and restart outlook I see the new address fine. But when I restore cached mode and restart outlook it's missing again. So I guess the cache wasn't updated by this move. I tried deleting the .nk2 file in %appdata%\Microsoft\Outlook, but that didn't help. How can I force Outlook to clear its address book cache?

    Read the article

  • Battery backed write cache behavior upon disk change

    - by Halfgaar
    We use 3ware Inc 9650SE SATA-II PCIe RAID controllers with battery-backed write cache. Our spare hardware has the same controller. I was wondering: are these controllers smart enough not to sync the cache when the disks have been changed? For example, if I deploy one of those spare machines by putting in the disks of another machine and that spare machine still has pending writes, will it be smart enough not to perform those writes on the replaced array?
    Edit: my scenario is not really made clear, so let me give an example:
    1. server1 goes down because of a power supply failure.
    2. I put the disks in server2 and start it.
    3. I repair server1.
    4. I put the disks from server2 back in server1 (it's not relevant right now that in reality I would probably keep server2 running).
    If server1 doesn't have safeguards, it will write to the array, thinking it's simply powering up again, corrupting it.

    Read the article

  • Using nginx and/or varnish to cache server-generated 301 redirects

    - by rlotun
    I'm implementing a sort of url-shortener service. What happens is that I have some backend app server that takes in a request, does some computation and returns a 301 redirected url back upstream to an nginx frontend: request ---> nginx ----> app_server What I want to be able to do is cache this returned 301 url for the same request (a specific url with a "short code"). Does nginx do this caching automatically? Or should I drop in something like varnish in between nginx and the app_server? I can easily cache this in memcache, but that would require hitting the app_server, which I'm sure can be dispensed with after the first request. Thanks.

    Read the article
