Search Results

Search found 6528 results on 262 pages for 'appfabric cache'.

Page 35 of 262

  • Need help profiling .NET caching extension method.

    - by rockinthesixstring
    I've got the following extension:

        Public Module CacheExtensions
            Sub New()
            End Sub

            Private sync As New Object()
            Public Const DefaultCacheExpiration As Integer = 1200 ''# 20 minutes

            <Extension()>
            Public Function GetOrStore(Of T)(ByVal cache As Cache, ByVal key As String, ByVal generator As Func(Of T)) As T
                Return cache.GetOrStore(key, If(generator IsNot Nothing, generator(), Nothing), DefaultCacheExpiration)
            End Function

            <Extension()>
            Public Function GetOrStore(Of T)(ByVal cache As Cache, ByVal key As String, ByVal generator As Func(Of T), ByVal expireInSeconds As Double) As T
                Return cache.GetOrStore(key, If(generator IsNot Nothing, generator(), Nothing), expireInSeconds)
            End Function

            <Extension()>
            Public Function GetOrStore(Of T)(ByVal cache As Cache, ByVal key As String, ByVal obj As T) As T
                Return cache.GetOrStore(key, obj, DefaultCacheExpiration)
            End Function

            <Extension()>
            Public Function GetOrStore(Of T)(ByVal cache As Cache, ByVal key As String, ByVal obj As T, ByVal expireInSeconds As Double) As T
                Dim result = cache(key)
                If result Is Nothing Then
                    SyncLock sync
                        If result Is Nothing Then
                            result = If(obj IsNot Nothing, obj, Nothing)
                            cache.Insert(key, result, Nothing, DateTime.Now.AddSeconds(expireInSeconds), cache.NoSlidingExpiration)
                        End If
                    End SyncLock
                End If
                Return DirectCast(result, T)
            End Function
        End Module

    From here, I'm using the extension in a TagService to get a list of tags:

        Public Function GetTagNames() As List(Of String) Implements Domain.ITagService.GetTags
            ''# We're not using a dynamic Cache key because the list of TagNames
            ''# will persist across all users in all regions.
            Return HttpRuntime.Cache.GetOrStore(Of List(Of String))("TagNamesOnly", Function() _TagRepository.Read().Select(Function(t) t.Name).OrderBy(Function(t) t).ToList())
        End Function

    All of this is pretty much straightforward, except when I put a breakpoint on _TagRepository.Read(): it is getting called on every request, when I thought it should only be called when result Is Nothing. Am I missing something here?
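
    A note on why the repository runs every time: in the two generator overloads above, generator() is invoked eagerly to build the argument before the four-parameter overload ever checks the cache, so the data access happens on every request regardless of a hit. Below is a minimal C# sketch of the same GetOrStore idea with the delegate deferred until after a cache miss; it is illustrative only and not the poster's code.

        using System;
        using System.Web.Caching;

        public static class LazyCacheExtensions
        {
            private static readonly object Sync = new object();

            // The generator delegate is only invoked when the lookup misses,
            // so the expensive data access does not run on every request.
            public static T GetOrStore<T>(this Cache cache, string key,
                                          Func<T> generator, double expireInSeconds)
            {
                object result = cache[key];
                if (result == null)
                {
                    lock (Sync)
                    {
                        result = cache[key];        // re-check inside the lock
                        if (result == null)
                        {
                            result = generator();   // evaluated only on a miss
                            cache.Insert(key, result, null,
                                         DateTime.Now.AddSeconds(expireInSeconds),
                                         Cache.NoSlidingExpiration);
                        }
                    }
                }
                return (T)result;
            }
        }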

  • Why does using ASP.NET OutputCache keep returning a 200 OK, not a 304 Not Modified?

    - by Pure.Krome
    Hi folks, I have a simple aspx page. Here's the top of it:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Foo.aspx.cs" Inherits="Foo" %>
        <%@ OutputCache Duration="3600" VaryByParam="none" Location="Any" %>

    Now, every time I hit the page in Firefox (either hit F5 or hit Enter in the URL bar) I keep getting a 200 OK response. Here's a sample reply from Firebug.

    Request headers:

        GET /sitemap.xml HTTP/1.1
        Host: localhost.foo.com.au
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-au,en-gb;q=0.7,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Cookie: <snipped>
        If-Modified-Since: Tue, 01 Jun 2010 07:35:17 GMT
        If-None-Match: ""
        Cache-Control: max-age=0

    Response headers:

        HTTP/1.1 200 OK
        Cache-Control: public
        Content-Type: text/xml; charset=utf-8
        Expires: Tue, 01 Jun 2010 08:35:17 GMT
        Last-Modified: Tue, 01 Jun 2010 07:35:17 GMT
        Etag: ""
        Server: Microsoft-IIS/7.5
        X-Powered-By: UrlRewriter.NET 2.0.0
        X-AspNet-Version: 4.0.30319
        Date: Tue, 01 Jun 2010 07:35:20 GMT
        Content-Length: 775

    Firebug Cache tab:

        Last Modified   Tue Jun 01 2010 17:35:20 GMT+1000 (AUS Eastern Standard Time)
        Last Fetched    Tue Jun 01 2010 17:35:20 GMT+1000 (AUS Eastern Standard Time)
        Expires         Tue Jun 01 2010 18:35:17 GMT+1000 (AUS Eastern Standard Time)
        Data Size       775
        Fetch Count     105
        Device          disk

    Now, if I try it in Fiddler using the Request Builder (and no extra data) I also keep getting the same 200 OK reply.

    Request headers:

        GET http://localhost.foo.com.au/sitemap.xml HTTP/1.1
        User-Agent: Fiddler
        Host: foo.com.au

    Response headers:

        HTTP/1.1 200 OK
        Cache-Control: public
        Content-Type: text/xml; charset=utf-8
        Expires: Tue, 01 Jun 2010 07:58:00 GMT
        Last-Modified: Tue, 01 Jun 2010 06:58:00 GMT
        ETag: ""
        Server: Microsoft-IIS/7.5
        X-Powered-By: UrlRewriter.NET 2.0.0
        X-AspNet-Version: 4.0.30319
        Date: Tue, 01 Jun 2010 06:59:16 GMT
        Content-Length: 775

    It looks like it's asking to cache it but it's not :( The server is localhost IIS 7.5 on Win7 (as listed in the response data). Can anyone see what I'm doing wrong?
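
    One detail that stands out in the traces is the empty validator (Etag: "" in the response, If-None-Match: "" in the request). As a diagnostic, the validators can also be set explicitly from code-behind instead of relying on the directive alone; a minimal, illustrative C# sketch follows (the class name is assumed from the page directive, and whether this interacts cleanly with the existing OutputCache directive would need to be verified).

        using System;
        using System.Web;
        using System.Web.UI;

        public partial class Foo : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Emit explicit freshness and validator headers so a conditional
                // GET (If-Modified-Since / If-None-Match) has something real to match.
                Response.Cache.SetCacheability(HttpCacheability.Public);
                Response.Cache.SetExpires(DateTime.UtcNow.AddHours(1));
                Response.Cache.SetLastModified(DateTime.UtcNow);   // use a stable timestamp in practice
                Response.Cache.SetETag("\"sitemap-v1\"");          // a non-empty ETag, unlike the "" above
            }
        }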

  • Impact of the L3 cache on performance - is a dual-processor system worth it?

    - by Dan Nissenbaum
    I will be purchasing a new high-end system, and I would like to have a better sense of whether a dual-processor Xeon system (I am looking at the new, high-end Xeon E5-2687W) might, realistically, provide a noticeable performance improvement due to the doubling of the L3 cache (20 MB per CPU). (This is in addition to the occasional added advantage of doubling the cores and RAM.)

    My usage scenario is, roughly, that I have many background applications running at any time - 3 or 4 data compression/backup applications, a low-impact web server, one or two virtual machines (usually fairly idle), and perhaps 20 utility programs that use a noticeable (but small) portion of the CPU cores. In total, when I am not actively using the computer, about 25% of the total CPU power is utilized on my current i7-970 6-core (12-thread) system. When I am doing routine work, the CPU utilization often exceeds 50%, and occasionally hits 75%-80%. The Xeon E5-2687W is not only a second-generation i7 (so it should improve performance for that reason), but also has 8 cores (16 threads) rather than 6, so I expect to run into the 75% CPU range even less frequently. Nonetheless, the ability to double the cores and the RAM is a consideration. In the end, however, I believe this decision comes down to whether doubling the L3 cache will provide a noticeable improvement.

    There are many benchmarks and a lot of discussion regarding CPU power, but I find very little discussion of L3 cache utilization and how increases in the L3 cache (such as doubling it with dual processors) affect performance. For example: if there are only two processes running, but each benefits from a large L3 cache (such as might be the case for background processes that frequently scan the file system), perhaps the overall system performance might noticeably improve with dual CPUs - even if only a single core is active on each CPU - due to each process having double the effective L3 cache. I am hoping that someone has a sense of the benefits of increasing (or doubling) the L3 cache size. Note: the CPU I am considering (the Xeon E5-2687W) has 20 MB of L3 cache, so a system with dual CPUs would have 40 MB of L3 cache.

  • Why are some web clients requesting a page named "cache"?

    - by Toto
    We see errors like this in the Apache error log:

        [Thu May 17 14:32:35 2012] [error] [client 192.168.1.1] File does not exist: /home/www-data/mywebsite.com/r/cache, referer: http://www.mywebsite.com/r/1010

    It is strange because:

        - There is no reference in the code or URLs to a folder/file named "cache".
        - The folder/file "cache" does not exist.
        - The client is randomly trying to access a "cache" folder everywhere on the website.
        - It always tries to access the folder/file "cache" following this pattern:
              /level1/.../levelwhatever/filename (referer)  ->  /level1/.../levelwhatever/cache

    We run LAMP (Debian stable: PHP 5.3.3-7+squeeze9; we also use APC 3.1.3p1). We use Google Analytics and AdSense. We do not know how to reproduce the problem. Note: I replaced the user's IP in the log line above for privacy.

  • Has anyone managed to get jdotnetservices working on Android?

    - by Bert
    I am trying to use jdotnetservices (http://www.jdotnetservices.com/), a Java SDK for Windows Azure AppFabric, in an Android application. I have had to make some tweaks, but only minor ones, because jdotnetservices targets Java 1.6 and Android uses 1.5. I can get it to compile and run OK, but I'm getting errors when I try to access the Service Bus ACS. Specifically, if I try to get a token from the Service Bus ACS I get this: "Hostname mysolution-sb.accesscontrol.windows.net was not verified." Can anyone give me some pointers as to why this might be? I can browse to the URL of the ACS from Android: https://mysolution-sb.accesscontrol.windows.net/wrapv0.9 which gives me a certificate error - could this be why? Any way round this?

  • ASP.NET MVC: unit testing HttpContext.Current.Cache?

    - by Paul Creasey
    Here is the first part of my controller code:

        public class ControlMController : Controller
        {
            IControlMService _controlMservice;

            public IList<User> Users
            {
                get
                {
                    if (System.Web.HttpContext.Current.Cache["users"] == null)
                    {
                        System.Web.HttpContext.Current.Cache["users"] = _controlMservice.GetUsers();
                    }
                    return (IList<User>)System.Web.HttpContext.Current.Cache["users"];
                }
            }

            public ControlMController(IControlMService controlMservice)
            {
                this._controlMservice = controlMservice;
                var users = Users;
                ViewData["Users"] = users;
                ViewData["jqSelectUsers"] = string.Join(";", users.Select(x => x.UserID + ":" + x.Name).ToArray());
            }

    I'm trying to test it, and because I'm caching using the HttpContext, I'm struggling with null reference exceptions. I've tried using MvcContrib.TestHelper; here is my sample test:

        [TestMethod]
        public void EventDetails_Returns_view_with_correct_event()
        {
            var builder = new TestControllerBuilder();
            var controller = builder.CreateController<ControlMController>(
                new ControlMService(
                    new MockControlMRepository()
                ));

            var view = (controller.EventDetails(1) as ViewResult);
            Assert.AreEqual(1, (view.ViewData.Model as Event).EventId);
        }

    (I haven't quite got round to using DI for my tests!) I'm still getting the same null reference exception when the code hits the HttpContext:

        Error 1 TestCase 'SupportTool.Tests.Services.ControlM.ControlMControllerTests.EventDetails_Returns_view_with_correct_event' failed:
        System.NullReferenceException: Object reference not set to an instance of an object.
        at SupportTool.web.Controllers.ControlMController.get_Users()

    Any ideas?
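
    Because the controller reads HttpContext.Current.Cache directly, there is no HttpContext in the test host and the Users getter throws. One common way around this, shown here only as a sketch (the interface and class names are made up, not part of the question's code), is to hide the cache behind a small abstraction the test can replace with an in-memory fake:

        using System.Collections.Generic;
        using System.Web;

        // Abstraction over the cache so the controller no longer depends on HttpContext.
        public interface ICacheProvider
        {
            object Get(string key);
            void Set(string key, object value);
        }

        // Production implementation backed by the ASP.NET cache.
        public class HttpContextCacheProvider : ICacheProvider
        {
            public object Get(string key) { return HttpContext.Current.Cache[key]; }
            public void Set(string key, object value) { HttpContext.Current.Cache[key] = value; }
        }

        // Test double: a plain dictionary, no web context required.
        public class DictionaryCacheProvider : ICacheProvider
        {
            private readonly Dictionary<string, object> _store = new Dictionary<string, object>();
            public object Get(string key) { object v; return _store.TryGetValue(key, out v) ? v : null; }
            public void Set(string key, object value) { _store[key] = value; }
        }

    The controller would then take an ICacheProvider through its constructor alongside IControlMService, and the Users property would call Get/Set on it instead of HttpContext.Current.Cache.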

  • How to cache an HTTP POST response?

    - by KARASZI István
    I would like to create a cacheable HTTP response for a POST request. My implementation currently responds to the POST request with the following:

        HTTP/1.1 201 Created
        Expires: Sat, 03 Oct 2020 15:33:00 GMT
        Cache-Control: private,max-age=315360000,no-transform
        Content-Type: application/x-www-form-urlencoded; charset=UTF-8
        Content-Length: 9
        ETag: 2120507660800737950
        Last-Modified: Wed, 06 Oct 2010 15:33:00 GMT
        .........

    But it looks like the browsers (Safari and Firefox tested) are not caching the response. The corresponding part of the HTTP RFC says:

        Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.

    So I think it should be cached. I know I could set a session variable, set a cookie and do a 303 redirect, but I want to cache the response of the POST request itself. Is there any way to do this? P.S.: I started with a simple 200 OK, and that does not work either. Thanks!

  • InstallExecuteSequence cache interferes with custom action operation

    - by Dima G
    I need to upgrade a product that could be installed in per-user context to a new version that is always in per-machine context. The requirements are:

        - Whether the old version was installed in a per-user (no matter which user) or per-machine context should be completely seamless to the administrator performing the upgrade.
        - The MSI upgrade should succeed without needing the password of the user that originally installed the previous version in a per-user context.
        - The installation should be performed from a single .msi file (no setup.exe is allowed).
        - The installation should be able to run in silent (non-UI) mode.
        - No reboots are allowed during installation.

    My strategy is to find out at the beginning of the installation whether the product is already installed in per-user context, and if so, to transform the registry keys manually to per-machine context (I checked: no additional changes, such as file system changes, are needed apart from this transform). I figured out how to move all the appropriate registry keys from the user settings to the machine settings (pre-loading the appropriate user hive in case it doesn't appear in HKEY_USERS) and created a custom action that does it - and it does work when I run it as a stand-alone executable before running the MSI.

    The problem, however, is that when Windows Installer runs the InstallExecuteSequence, it first creates a 'cached product context' for all products. So when my custom action runs in the course of the InstallExecuteSequence, this cache has already been created. Thus the FindRelatedProducts action, which determines whether an older product with the same upgrade code exists, looks at that cache and ignores the changes my custom action applied. If the previous product was in per-user context before running the MSI, FindRelatedProducts will look at the cache and neither apply the upgrade nor remove the previous version, because the new product is in per-machine context - even though by that time the previous version is already configured to per-machine context in the registry by my custom action.

    What can be done to solve this problem without violating the requirements stated above?

  • newbie hibernate first level cache confusion

    - by Bruce
    Hi all, I'm just getting to grips with Hibernate and I'm a little bit confused. I wanted to watch the operation of the first-level cache, which I understood to batch up queries until the end of the session. But if I create an object, Hibernate saves it immediately, so that when I later update it in the same transaction, it has to do an update too:

        Session session = factory.getCurrentSession();
        session.beginTransaction();

        Test1 test1 = new Test1();
        test1.setName("Test 1");
        test1.setValue(10);

        // Touch it
        session.save(test1);

        System.out.println("At checkpoint 1");

        test1.setValue(20);
        session.getTransaction().commit();

    I see the SQL for the save, then 'At checkpoint 1', then the SQL for the update. Do I have something set up wrong, or am I misunderstanding Hibernate's first-level cache? Is there a good document on the first-level cache? I didn't find anything in the Hibernate docs, but I could easily have missed it. Thanks!

  • Best approach to cache Counts from SQL tables?

    - by pixel3cs
    I would like to develop a forum from scratch, with special needs and customization. I would like to prepare my forum for intensive usage and am wondering how to cache things like user post counts and user reply counts. Having only three tables - tblForum, tblForumTopics, tblForumReplies - what is the best approach to caching the user topic and reply counts?

    Consider a simple scenario: a user clicks a link, opens the Replies.aspx?id=x&page=y page, and starts reading replies. On the HTTP request, the server runs an SQL command which fetches all replies for that page, also inner joining with tblForumReplies to find out the number of replies for each user that replied:

        select tblForumReplies.*, tblFR.TotalReplies
        from tblForumReplies
        inner join
        (
            select IdRepliedBy, count(*) as TotalReplies
            from tblForumReplies
            group by IdRepliedBy
        ) as tblFR on tblFR.IdRepliedBy = tblForumReplies.IdRepliedBy

    Unfortunately this approach is very CPU-intensive, and I would like to hear your ideas on how to cache things like table counts. If I count replies for each user on insert/delete and store the count in a separate field, how do I keep it synchronized when the data is changed manually? Suppose I will delete replies manually in SQL.
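
    If the counts only need to be approximately fresh, one option worth sketching is to compute the per-user totals once and keep them in the ASP.NET cache for a few minutes, so the GROUP BY runs once per interval rather than on every page view. The C# below is illustrative only; the table, column, key and connection names are assumptions, not part of the question.

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Web;
        using System.Web.Caching;

        public static class ReplyCountCache
        {
            // Returns a map of IdRepliedBy -> total replies, cached for five minutes.
            public static IDictionary<int, int> GetTotals(string connectionString)
            {
                var totals = (IDictionary<int, int>)HttpRuntime.Cache["ReplyTotals"];
                if (totals != null)
                    return totals;

                totals = new Dictionary<int, int>();
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "select IdRepliedBy, count(*) from tblForumReplies group by IdRepliedBy", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            totals[reader.GetInt32(0)] = reader.GetInt32(1);
                    }
                }

                HttpRuntime.Cache.Insert("ReplyTotals", totals, null,
                    DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration);
                return totals;
            }
        }

    A short absolute expiration like this also sidesteps the manual-edit synchronization problem, at the cost of the counts being slightly stale for a few minutes.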

  • solaris zpool SSD cache device "faulted"

    - by John-ZFS
    I am trying to get past these SATA SSD errors: the smartctl command failed to read the SATA SSD ("SATA is not supported"). What could be the reason for the errors? Does this mean the SSD has reached EOL and needs to be replaced?

        errors: No known data errors

          pool: zpool1216
         state: DEGRADED
        status: One or more devices are faulted in response to persistent errors.
                Sufficient replicas exist for the pool to continue functioning in a degraded state.
        action: Replace the faulted device, or use 'zpool clear' to mark the device repaired.
          scan: scrub repaired 0 in 0h24m with 0 errors on Fri May 18 14:31:08 2012
        config:

            NAME          STATE     READ WRITE CKSUM
            zpool1216     DEGRADED     0     0     0
              raidz1-0    ONLINE       0     0     0
                c11t10d0  ONLINE       0     0     0
                c11t11d0  ONLINE       0     0     0
                c11t12d0  ONLINE       0     0     0
                c11t13d0  ONLINE       0     0     0
                c11t14d0  ONLINE       0     0     0
                c11t15d0  ONLINE       0     0     0
                c11t16d0  ONLINE       0     0     0
                c11t1d0   ONLINE       0     0     0
                c11t2d0   ONLINE       0     0     0
                c11t3d0   ONLINE       0     0     0
                c11t4d0   ONLINE       0     0     0
                c11t5d0   ONLINE       0     0     0
                c11t6d0   ONLINE       0     0     0
                c11t7d0   ONLINE       0     0     0
                c11t8d0   ONLINE       0     0     0
                c11t9d0   ONLINE       0     0     0
            logs
              c9d0        FAULTED      0     0     0  too many errors
            cache
              c10d0       FAULTED      0    17     0  too many errors

  • Can't access my accelerated hard disk from msdos after installing linux on ssd cache

    - by Chibueze Opata
    I mistakenly installed Linux Mint on my SSD (I forgot my PC actually came with one). When the installer detected a ~31 GiB disk that it wanted to install to, I was a bit confused, since I had freed up 30 GB on my primary disk for it, but I clicked continue. After installation, I tried to boot back into Windows and it brought up some Intel RAID disk utility saying I should disable acceleration on a disk that couldn't be found. I cancelled it, but whatever I tried - recovery tools, setups, etc. - I just couldn't access the drive, which was apparently using the SSD as cache. I've been stuck since then. I tried setting the 'raid' flag on the disk from GParted, but still couldn't get in. I tried the diskraid utility from the Windows recovery disk and it said it couldn't detect any RAID; diskpart sees the partition but doesn't see the volume, and when I remove the raid flag it sees the volume as raw type and I can't access anything. I can, however, mount the drive from a terminal in Mint and access my files, but I don't have any backup media at the moment with which to back everything up and do a factory re-install. How do I go about solving this issue? Specifically, I would like to know how to boot into the drive again. Thanks!

  • Linux ARP cache timeout values

    - by Jak
    I'm trying to configure sane values for the Linux kernel ARP cache timeout, but I can't find a detailed explanation of how they work anywhere. Even the kernel.org documentation doesn't give a good explanation; I can only find recommended values to alleviate overflow. Here is an example of the values I have:

        net.ipv4.neigh.default.gc_thresh1 = 128
        net.ipv4.neigh.default.gc_thresh2 = 512
        net.ipv4.neigh.default.gc_thresh3 = 1024

    Now, from what I've gathered so far:

        - gc_thresh1 is the number of ARP entries allowed before the garbage collector starts removing any entries at all.
        - gc_thresh2 is the soft limit: the number of entries allowed before the garbage collector actively removes ARP entries.
        - gc_thresh3 is the hard limit, where entries above this number are aggressively removed.

    If I understand correctly, when the number of ARP entries goes beyond gc_thresh1 but remains below gc_thresh2, the excess is removed periodically with an interval set by gc_interval. My question is: if the number of entries goes beyond gc_thresh2 but stays below gc_thresh3, or if it goes beyond gc_thresh3, how are the entries removed? In other words, what do "actively" and "aggressively" removed mean exactly? I assume it means they are removed more frequently than what gc_interval defines, but I can't find by how much.

  • How to efficiently serve massive sitemaps in django

    - by mlissner
    I have a site with about 150K pages in its sitemap. I'm using the sitemap index generator to make the sitemaps, but really I need a way of caching them, because building the 150 sitemaps of 1,000 links each is brutal on my server. [1]

    I COULD cache each of these sitemap pages with memcached, which is what I'm using elsewhere on the site... however, there are so many sitemaps that it would completely fill memcached, so that doesn't work. What I think I need is a way to use the database as the cache for these, and to only generate them when there are changes (which, as a result of the sitemap index, means only the latest couple of sitemap pages change, since the rest are always the same). [2] But, as near as I can tell, I can only use one cache backend with Django.

    How can I have these sitemaps ready for when Google comes a-crawlin' without killing my database or memcached? Any thoughts?

    [1] I've limited it to 1,000 links per sitemap page because generating the maximum, 50,000 links, just wasn't happening.

    [2] For example, if I have sitemap.xml?page=1, page=2 ... sitemap.xml?page=50, I only really need to regenerate sitemap.xml?page=50 until it is full with 1,000 links; then I can cache it pretty much forever, and focus on page 51 until it's full, cache it forever, etc.

  • VBScript: Disable caching of response from server to HTTP GET URL request

    - by Rob
    I want to turn off the cache used when a URL call to a server is made from VBScript running within an application on a Windows machine. What function/method/object do I use to do this?

    When the call is made for the first time, my Linux-based Apache server returns a response from the CGI Perl script that it runs. However, subsequent runs of the script seem to be using the same response as the first time, so the data is being cached somewhere. My server logs confirm that the server is not being called on those subsequent occasions, only the first time.

    This is what I am doing, using the following code from within a commercial application (I don't wish to mention which application; it's probably not relevant to my problem):

        With CreateObject("MSXML2.XMLHTTP")
            .open "GET", "http://myserver/cgi-bin/nsr/nsr.cgi?aparam=1", False
            .send
            nsrresponse = .responseText
        End With

    Is there a function/method on the above object to turn off caching, or should I be calling a method/function to turn off caching on a response object before making the URL call?

    I looked here for a solution: http://msdn.microsoft.com/en-us/library/ms535874(VS.85).aspx - not quite helpful enough. And here: http://www.w3.org/TR/XMLHttpRequest/ - very unfriendly and hard to read.

    I am also trying to force not using the cache via HTTP header settings and HTML document header metadata. Snippet of the server-side Perl CGI script that returns the response to the calling client, with expiry set to 0:

        print $httpGetCGIRequest->header(
            -type    => 'text/html',
            -expires => '+0s',
        );

    HTTP header settings in the response sent back to the client:

        <html><head><meta http-equiv="CACHE-CONTROL" content="NO-CACHE"></head>
        <body>
        response message generated from server
        </body>
        </html>

    The above HTTP header and HTML document head settings haven't worked, hence my question.

  • Swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    I'm using Kubuntu Jaunty (i386, 32-bit), kernel 2.6.28-13-generic. I have 4 GB of RAM, of which only 3317 MB are seen by the system (I guess because of the 32-bit system). I'm seeing that the pagecache utilization grows continually, up to the point that the system is unusable (after a few days). This happens even when I don't do anything (all user applications closed and the bare minimum of services enabled). If swap is enabled, the system starts to use swap space (eventually using all of it). Even if swap is disabled, disk activity becomes continuous and the system unresponsive.

    For example, right now the system is working (albeit a tad slow), with only Firefox and Wing IDE running, and I have 2 GB cached with only 45 MB mapped:

        $ free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3247328      99060          0       8416    2117980
        -/+ buffers/cache:    1120932    2225456
        Swap:      2144668     519448    1625220

        $ cat /proc/meminfo
        MemTotal:        3346388 kB
        MemFree:           97128 kB
        Buffers:            7872 kB
        Cached:          2120224 kB
        SwapCached:       413860 kB
        Active:          2304596 kB
        Inactive:         865984 kB
        Active(anon):    2279168 kB
        Inactive(anon):   830236 kB
        Active(file):      25428 kB
        Inactive(file):    35748 kB
        Unevictable:          32 kB
        Mlocked:              32 kB
        HighTotal:       2492940 kB
        HighFree:           5456 kB
        LowTotal:         853448 kB
        LowFree:           91672 kB
        SwapTotal:       2144668 kB
        SwapFree:        1625244 kB
        Dirty:                84 kB
        Writeback:             0 kB
        AnonPages:        629304 kB
        Mapped:            45768 kB
        Slab:              45600 kB
        SReclaimable:      21756 kB
        SUnreclaim:        23844 kB
        PageTables:         4468 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     3817860 kB
        Committed_AS:    3735020 kB
        VmallocTotal:     122880 kB
        VmallocUsed:        9352 kB
        VmallocChunk:      66600 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       4096 kB
        DirectMap4k:       16376 kB
        DirectMap4M:      888832 kB

    If I try to drop the caches, little happens:

        # sync ; echo 3 > /proc/sys/vm/drop_caches ; free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3220580     125808          0       3020    2100600
        -/+ buffers/cache:    1116960    2229428
        Swap:      2144668     519356    1625312

    Right now I have vm.swappiness = 5, but I've also tried 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said, the pagecache eats all memory even with swapping turned off. What is happening? How can I avoid this?

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off:

        Header unset ETag
        FileETag None

    I know that I should use either Expires or Cache-Control, and in addition either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:

        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>

    With this server I am not going to be able to rely on staff to rename images, CSS and JS in web applications, so I do not want to set the expiry far in the future without knowing (with good certainty) that most or all browsers will check to see whether the content has changed. What I do not want is someone calling me to say the website is broken because they replaced an image and it's not showing up. But I do want to take as much advantage of caching and expires as I can while still making sure that nearly all browsers check with the server to see whether components have changed.

    I have access to both .htaccess and the Apache .conf file, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration to achieve my goals for this client's server? Thanks for your help.

  • Conditionally changing MIME type in nginx

    - by Peter
    I'm using nginx as a frontend to Rails. All pages are cached as .html files on disk, and nginx serves these files if they exist. I want to send the correct MIME type for feeds (application/rss+xml), but the way I have it so far is quite ugly, and I'm wondering if there is a cleaner way. Here is my config:

        location ~ /feed/$ {
            types {}
            default_type application/rss+xml;
            root /var/www/cache/;
            if (-f request_filename/index.html) { rewrite (.*) $1/index.html break; }
            if (-f request_filename.html)       { rewrite (.*) $1.html break; }
            if (-f request_filename)            { break; }
            if (!-f request_filename)           { proxy_pass http://mongrel; break; }
        }

        location / {
            root /var/www/cache/;
            if (-f request_filename/index.html) { rewrite (.*) $1/index.html break; }
            if (-f request_filename.html)       { rewrite (.*) $1.html break; }
            if (-f request_filename)            { break; }
            if (!-f request_filename)           { proxy_pass http://mongrel; break; }
        }

    My questions:

        - Is there a better way to change the MIME type? All cached files have .html extensions and I cannot change this.
        - Is there a way to factor out the if conditions in /feed/$ and /? I understand that I can use include, but I'm hoping for a better way; putting part of the config in a different file is not that readable.
        - Can you spot any bugs in the if conditions?

    I'm using nginx 0.6.32 (Debian Lenny). I prefer to use the version in APT. Thanks.

  • Unusual Caching Issue with IE 7/8 and IIS 7

    - by Daniel A. White
    We recently moved a site into production running Server 2008 x64 and IIS 7. The ASP.NET pages apparently load just fine, but with IE 7 and 8 a weird caching issue has cropped up affecting the CSS and JavaScript files on the page.

    On a very sporadic schedule, IE does not get all the files necessary to compose the page (i.e. CSS and JS files). When I manually go to the missing files from the address bar, they come back from the local cache as empty. I hit F5 on these source files and magically they come down properly. I refresh the site after loading a few files and the cache seems to hold.

    This problem has only been reproduced (again, sporadically) on IE 7 and 8 running on XP. Chrome and Firefox appear to be immune. We have set IIS to use server-side kernel caching for CSS, JS and images. We have also set content for the App_Themes and Scripts directories to expire immediately. One initial thought was that it was a SWF loading an FLV on page load. These fixes have not remedied the problem. We had no problems on our staging server, which is using Server 2003 and IIS 6. Any ideas would be greatly appreciated.

    P.S. It sounds similar to this problem: http://serverfault.com/questions/115099/iis-content-length-0-for-css-javascript-and-images - but we do have the Static Content module installed.

  • Output Caching with IIS7 - how to for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx page that gets some query string variables and returns an asset. Each URL corresponds to a unique asset. In RetrieveBlob.aspx a cache profile is set. In Web.config the profile looks like this (under the system.web tag):

        <caching>
          <outputCache enableOutputCache="true" />
          <outputCacheSettings>
            <outputCacheProfiles>
              <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" />
            </outputCacheProfiles>
          </outputCacheSettings>
        </caching>

    OK, this works fine. When I put a breakpoint in the code-behind of RetrieveBlob.aspx, it gets triggered the first time but not on subsequent requests. Now, I throw away the cache profile and instead have this in my Web.config under system.webServer:

        <caching>
          <profiles>
            <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
          </profiles>
        </caching>

    Now the caching doesn't work any more. What am I doing wrong? Is it possible to configure a caching profile for a dynamic .aspx page under the caching tag of system.webServer?
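
    For comparison, output-cache policy for a dynamic page can also be requested from code-behind rather than through configuration. The C# below is a minimal, illustrative sketch (the class name is assumed from the page name, and the values mirror the AssetCacheProfile above); it is not code from the question.

        using System;
        using System.Web;
        using System.Web.UI;

        public partial class RetrieveBlob : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Ask ASP.NET to output-cache this response for 14800 seconds,
                // varying the cache entry by every query string parameter.
                // HttpCacheability.Server keeps the copy in the server output
                // cache only; Public would also allow downstream caching.
                Response.Cache.SetCacheability(HttpCacheability.Server);
                Response.Cache.SetExpires(DateTime.UtcNow.AddSeconds(14800));
                Response.Cache.SetValidUntilExpires(true);
                Response.Cache.VaryByParams["*"] = true;

                // ... write the asset bytes to the response here ...
            }
        }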

  • Curious: What makes some CPUs better than others? [closed]

    - by Zizma
    I have been wondering about this for a long while now and was hoping someone here could answer it pretty easily. If I were looking for the most powerful CPU, what should I really be looking at? There are so many different parameters to a CPU, and I want to know what each thing does and what really matters. Basically this:

        - What is the deal with cores? If I take optimized applications out of the mix, would it theoretically be better to get a quad-core 1.0 GHz CPU or a single-core 4 GHz CPU?
        - What is the difference between, say, a Sandy Bridge CPU and an Ivy Bridge CPU? If they both had the same clock speed and number of cores, would the Ivy Bridge perform better? Does an older Xeon with a clock speed and core count equal to a new i7 really perform worse/slower?
        - Does size matter? Why would I go with a 22nm CPU over a 32nm one when the size difference is so trivial?
        - What about the cache? When does the cache come into play for performance?

  • Offline copies in Windows file sharing

    - by netvope
    I frequently access media files (music or video) on a remote Windows file share. My Internet connection is not very fast, and I find it a waste of bandwidth to repeatedly fetch the same files. For example, I may listen to the same song 30 times in a month. So, I would like to cache the files I've used.

    I know Windows has an "Always available offline" feature, but I think it doesn't suit my needs. I don't want to make the whole share available offline, as the remote Windows file share is huge (terabytes), and making individual files available offline is tedious because the files are scattered across many different directories. It would be much more convenient if I could simply cache those I've used. Also:

        - The files on the share seldom change.
        - Many of the files are rarely used.
        - Some of the files are frequently used.
        - I don't have a list of the most frequently used files.

    So I think the best way is to have a caching proxy for the Windows share. What do you think? I have a Linux box sitting around; perhaps I should try to set up Samba 4?

  • Trying to get dynamic content hole-punched through Magento's Full Page Cache

    - by rlflow
    I am using Magento Enterprise 1.10.1.1 and need to get some dynamic content on our product pages. I am inserting the current time in a block to quickly see if it is working, but can't seem to get through the full page cache. I have tried a variety of implementations found here:

        http://tweetorials.tumblr.com/post/10160075026/ee-full-page-cache-hole-punching
        http://oggettoweb.com/blog/customizations-compatible-magento-full-page-cache/
        http://magentophp.blogspot.com/2011/02/magento-enterprise-full-page-caching.html
        (http://www.exploremagento.com/magento/simple-custom-module.php - custom module)

    Any solutions, thoughts, comments or advice are welcome. Here is my code.

    app/code/local/Fido/Example/etc/config.xml:

        <?xml version="1.0"?>
        <config>
            <modules>
                <Fido_Example>
                    <version>0.1.0</version>
                </Fido_Example>
            </modules>
            <global>
                <blocks>
                    <fido_example>
                        <class>Fido_Example_Block</class>
                    </fido_example>
                </blocks>
            </global>
        </config>

    app/code/local/Fido/Example/etc/cache.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <config>
            <placeholders>
                <fido_example>
                    <block>fido_example/view</block>
                    <name>example</name>
                    <placeholder>CACHE_TEST</placeholder>
                    <container>Fido_Example_Model_Container_Cachetest</container>
                    <cache_lifetime>86400</cache_lifetime>
                </fido_example>
            </placeholders>
        </config>

    app/code/local/Fido/Example/Block/View.php:

        <?php
        /**
         * Example View block
         *
         * @codepool   Local
         * @category   Fido
         * @package    Fido_Example
         * @module     Example
         */
        class Fido_Example_Block_View extends Mage_Core_Block_Template
        {
            private $message;
            private $att;

            protected function createMessage($msg)
            {
                $this->message = $msg;
            }

            public function receiveMessage()
            {
                if ($this->message != '') {
                    return $this->message;
                } else {
                    $this->createMessage('Hello World');
                    return $this->message;
                }
            }

            protected function _toHtml()
            {
                $html = parent::_toHtml();
                if ($this->att = $this->getMyCustom() && $this->getMyCustom() != '') {
                    $html .= '<br />' . $this->att;
                } else {
                    $now = date('m-d-Y h:i:s A');
                    $html .= $now;
                    $html .= '<br />';
                }
                return $html;
            }
        }

    app/code/local/Fido/Example/Model/Container/Cachetest.php:

        <?php
        class Fido_Example_Model_Container_Cachetest extends Enterprise_PageCache_Model_Container_Abstract
        {
            protected function _getCacheId()
            {
                return 'HOMEPAGE_PRODUCTS' . md5($this->_placeholder->getAttribute('cache_id') . $this->_getIdentifier());
            }

            protected function _renderBlock()
            {
                $blockClass = $this->_placeholder->getAttribute('block');
                $template = $this->_placeholder->getAttribute('template');

                $block = new $blockClass;
                $block->setTemplate($template);
                return $block->toHtml();
            }

            protected function _saveCache($data, $id, $tags = array(), $lifetime = null)
            {
                return false;
            }
        }

    app/design/frontend/enterprise/[mytheme]/template/example/view.phtml:

        <?php
        /**
         * Fido view template
         *
         * @see Fido_Example_Block_View
         *
         */
        ?>
        <div>
            <?php echo $this->receiveMessage(); ?>
            </span>
        </div>

    Snippet from app/design/frontend/enterprise/[mytheme]/layout/catalog.xml:

        <reference name="content">
            <block type="catalog/product_view" name="product.info" template="catalog/product/view.phtml">
                <block type="fido_example/view" name="product.info.example" as="example" template="example/view.phtml" />

  • HTTP header 304 and caching?

    - by Royi Namir
    Our company uses these settings (don't ask me why): for every request they want a new request to the server. This is an intranet system which uses only IE. They defined it in the IE settings. We also have Windows authentication (NTLM) in IIS 7. I have two questions, please.

    Question #1: When the browser makes a request for a resource (CSS), it sends it with an If-Modified-Since header (leave the 401 response aside for now - that is how NTLM works). Why is it adding this header? How can I configure it? Why doesn't it use the settings from IE and try to download the file each time, as I showed in the first picture?

    Question #2: The response (after the NTLM negotiation) was Not Modified, i.e. a 304 header, and I assume that is because we sent the request with the If-Modified-Since header. But there is a problem: it actually tells the browser to load from my cache. But I told it explicitly in the IE settings not to load from the cache. What am I missing here? Thanks a lot.

  • Caching all files in varnish

    - by csgwro
    I want my Varnish servers to cache all files. At the backend there is lighttpd hosting only static files, and there is an md5 in the URL in case of a file change (e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf). However, my hit ratio is very poor (about 0.18). My config:

        sub vcl_recv {
            set req.backend=default;

            ### passing health to backend
            if (req.url ~ "^/health.html$") {
                return (pass);
            }

            remove req.http.If-None-Match;
            remove req.http.cookie;
            remove req.http.authenticate;

            if (req.request == "GET") {
                return (lookup);
            }
        }

        sub vcl_fetch {
            ### do not cache wrong codes
            if (beresp.status == 404 || beresp.status >= 500) {
                set beresp.ttl = 0s;
            }
            remove beresp.http.Etag;
            remove beresp.http.Last-Modified;
        }

        sub vcl_deliver {
            set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT";
        }

    I have done some performance tuning:

        DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100"

    The main URL that misses is... /health.html. Is that forward to the backend configured correctly? With health checking disabled, the hit ratio increases to 0.45. Now it is mostly "/crossdomain.xml" that misses (from many domains, as it is a wildcard). How can I avoid that? Should I carry over other headers like User-Agent or Accept-Encoding? I think the default hashing mechanism uses URL + host/IP. Compression is used at the backend. What else can improve performance?
