Search Results

Search found 6412 results on 257 pages for 'cache'.

Page 33 of 257

  • Why does using ASP.NET OutputCache keep returning a 200 OK, not a 304 Not Modified?

    - by Pure.Krome
    Hi folks, I have a simple aspx page. Here's the top of it:

    <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Foo.aspx.cs" Inherits="Foo" %>
    <%@ OutputCache Duration="3600" VaryByParam="none" Location="Any" %>

    Now, every time I hit the page in Firefox (either hit F5 or hit Enter in the URL bar) I keep getting a 200 OK response. Here's a sample capture from Firebug:

    Request headers:
    GET /sitemap.xml HTTP/1.1
    Host: localhost.foo.com.au
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en-au,en-gb;q=0.7,en;q=0.3
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 115
    Connection: keep-alive
    Cookie: <snipped>
    If-Modified-Since: Tue, 01 Jun 2010 07:35:17 GMT
    If-None-Match: ""
    Cache-Control: max-age=0

    Response headers:
    HTTP/1.1 200 OK
    Cache-Control: public
    Content-Type: text/xml; charset=utf-8
    Expires: Tue, 01 Jun 2010 08:35:17 GMT
    Last-Modified: Tue, 01 Jun 2010 07:35:17 GMT
    Etag: ""
    Server: Microsoft-IIS/7.5
    X-Powered-By: UrlRewriter.NET 2.0.0
    X-AspNet-Version: 4.0.30319
    Date: Tue, 01 Jun 2010 07:35:20 GMT
    Content-Length: 775

    Firebug Cache tab:
    Last Modified: Tue Jun 01 2010 17:35:20 GMT+1000 (AUS Eastern Standard Time)
    Last Fetched: Tue Jun 01 2010 17:35:20 GMT+1000 (AUS Eastern Standard Time)
    Expires: Tue Jun 01 2010 18:35:17 GMT+1000 (AUS Eastern Standard Time)
    Data Size: 775
    Fetch Count: 105
    Device: disk

    Now, if I try it in Fiddler using the Request Builder (and no extra data) I also keep getting the same 200 OK reply.

    Request headers:
    GET http://localhost.foo.com.au/sitemap.xml HTTP/1.1
    User-Agent: Fiddler
    Host: foo.com.au

    Response headers:
    HTTP/1.1 200 OK
    Cache-Control: public
    Content-Type: text/xml; charset=utf-8
    Expires: Tue, 01 Jun 2010 07:58:00 GMT
    Last-Modified: Tue, 01 Jun 2010 06:58:00 GMT
    ETag: ""
    Server: Microsoft-IIS/7.5
    X-Powered-By: UrlRewriter.NET 2.0.0
    X-AspNet-Version: 4.0.30319
    Date: Tue, 01 Jun 2010 06:59:16 GMT
    Content-Length: 775

    It looks like it's asking to cache it, but it's not. The server is a local IIS 7.5 on Windows 7 (as listed in the response data). Can anyone see what I'm doing wrong?
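
    Judging from the captures, the browser sends If-Modified-Since and If-None-Match but the server never compares them against its own validators, so it always replies 200. One hedged workaround (shown as if the OutputCache directive were removed and caching handled by hand) is to emit a real Last-Modified/ETag from the code-behind and answer conditional requests there; GetSitemapLastModified() is a hypothetical helper, not part of the original page.

    // Hedged sketch, not the poster's code. Requires: using System; using System.Web;
    protected void Page_Load(object sender, EventArgs e)
    {
        DateTime lastModified = GetSitemapLastModified();           // e.g. file or DB timestamp (hypothetical helper)
        string etag = "\"" + lastModified.Ticks.ToString() + "\"";  // any stable, non-empty token works

        // Let caches keep the page and give the browser something real to validate against.
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetLastModified(lastModified);
        Response.Cache.SetETag(etag);

        // Short-circuit with 304 when the browser already has the current copy.
        string ifNoneMatch = Request.Headers["If-None-Match"];
        string ifModifiedSince = Request.Headers["If-Modified-Since"];
        DateTime since;
        bool notModified =
            (ifNoneMatch == etag) ||
            (DateTime.TryParse(ifModifiedSince, out since) &&
             since >= lastModified.AddSeconds(-1));   // allow for the one-second resolution of HTTP dates

        if (notModified)
        {
            Response.StatusCode = 304;
            Response.SuppressContent = true;   // send headers only, no body
            Response.End();
        }
    }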

    Read the article

  • Impact of the L3 cache on performance - worth a dual-processor system?

    - by Dan Nissenbaum
    I will be purchasing a new high-end system, and I would like to have a better sense of whether a dual-processor Xeon system (I am looking at the new, high-end Xeon E5-2687W) might, realistically, provide a noticeable performance improvement due to the doubling of the L3 cache (20 MB per CPU). (This is in addition to the occasional added advantage due to the doubling of cores and RAM.) My usage scenario is, roughly, that I have many background applications running at any time - 3 or 4 data compression/backup applications, a low-impact web server, one or two virtual machines at any given time (usually fairly idle), and perhaps 20 utility programs that utilize a noticeable (but small) portion of the CPU cores. In total, when I am not actively using the computer, about 25% of the total CPU power is utilized in my current i7-970 6-core (12 thread) system. When I am doing routine work, the CPU utilization often exceeds 50%, and occasionally hits 75%-80%. The Xeon E5-2687W is not only a second-generation i7 (so should improve performance for that reason), but also has 8 cores (16 threads), rather than 6 cores. For this reason, I expect to run into the 75% CPU range even less frequently. Nonetheless, the ability to double the cores and the RAM is a consideration. However, in the end, I believe this decision comes down to whether the doubling of the L3 cache will provide a noticeable improvement. There are many benchmarks, and a lot of discussion, regarding CPU power. However, I find very little discussion of L3 cache utilization, and how increases in the L3 cache (such as doubling it with dual processors) affect performance. For example: If there are only two processes running, but each benefits from a large L3 cache (such as might be the case for background processes that frequently scan the file system), perhaps the overall system performance might noticeably improve with dual CPU's - even if only a single core is active on each CPU - due to each process having double the effective L3 cache. I am hoping that someone has a sense of the benefits of increasing (or doubling) the L3 cache size. Note: the CPU I am considering (the Xeon E5-2687W) has 20 MB L3 cache, so a system with dual CPU's would have 40 MB L3 cache.

    Read the article

  • Why are some web clients requesting a page named "cache"?

    - by Toto
    We see errors like this in the Apache error log:

    [Thu May 17 14:32:35 2012] [error] [client 192.168.1.1] File does not exist: /home/www-data/mywebsite.com/r/cache, referer: http://www.mywebsite.com/r/1010

    It is strange because:
    - There is no reference in the code or URLs to a folder/file named "cache".
    - The folder/file "cache" does not exist.
    - The client randomly tries to access a "cache" folder all over the website.
    - It always tries to access the folder/file "cache" following this pattern: /level1/.../levelwhatever/filename (the referer) -> /level1/.../levelwhatever/cache

    We run LAMP (Debian stable: PHP 5.3.3-7+squeeze9; we also use APC 3.1.3p1). We use Google Analytics and AdSense. We do not know how to reproduce the problem. Note: I replaced the user's IP in the log for privacy.

    Read the article

  • asp mvc unit test HttpContext.Current.Cache?

    - by Paul Creasey
    Here is the first part of my controller code:

    public class ControlMController : Controller
    {
        IControlMService _controlMservice;

        public IList<User> Users
        {
            get
            {
                if (System.Web.HttpContext.Current.Cache["users"] == null)
                {
                    System.Web.HttpContext.Current.Cache["users"] = _controlMservice.GetUsers();
                }
                return (IList<User>)System.Web.HttpContext.Current.Cache["users"];
            }
        }

        public ControlMController(IControlMService controlMservice)
        {
            this._controlMservice = controlMservice;
            var users = Users;
            ViewData["Users"] = users;
            ViewData["jqSelectUsers"] = string.Join(";", users.Select(x => x.UserID + ":" + x.Name).ToArray());
        }

    I'm trying to test it, and because I'm caching using the HttpContext, I'm struggling with null reference exceptions. I've tried using MvcContrib.TestHelper; here is my sample test:

    [TestMethod]
    public void EventDetails_Returns_view_with_correct_event()
    {
        var builder = new TestControllerBuilder();
        var controller = builder.CreateController<ControlMController>(new ControlMService(new MockControlMRepository()));
        var view = (controller.EventDetails(1) as ViewResult);
        Assert.AreEqual(1, (view.ViewData.Model as Event).EventId);
    }

    (I haven't quite got round to using DI for my tests!) I'm still getting the same null reference exception when the code hits the HttpContext:

    Error 1 TestCase 'SupportTool.Tests.Services.ControlM.ControlMControllerTests.EventDetails_Returns_view_with_correct_event' failed: System.NullReferenceException: Object reference not set to an instance of an object. at SupportTool.web.Controllers.ControlMController.get_Users()

    Any ideas?
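
    A common way to make this testable (a sketch under the assumption that the controller's constructor can change; ICacheProvider and the class names below are invented for illustration) is to hide HttpContext.Current.Cache behind a small interface and hand the controller a dictionary-backed fake in unit tests.

    // Requires: using System.Collections.Generic; using System.Web;
    // Hypothetical abstraction so the controller never touches HttpContext directly.
    public interface ICacheProvider
    {
        object Get(string key);
        void Set(string key, object value);
    }

    // Production implementation backed by the ASP.NET cache.
    public class HttpContextCacheProvider : ICacheProvider
    {
        public object Get(string key) { return HttpContext.Current.Cache[key]; }
        public void Set(string key, object value) { HttpContext.Current.Cache[key] = value; }
    }

    // Test double: a plain dictionary, no web context needed.
    public class FakeCacheProvider : ICacheProvider
    {
        private readonly Dictionary<string, object> _items = new Dictionary<string, object>();
        public object Get(string key) { object v; _items.TryGetValue(key, out v); return v; }
        public void Set(string key, object value) { _items[key] = value; }
    }

    // The controller then takes ICacheProvider as a second constructor dependency:
    //   public ControlMController(IControlMService service, ICacheProvider cache) { ... }
    // and the Users property reads/writes through that provider instead of HttpContext.Current.Cache.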

    Read the article

  • How to cache an HTTP POST response?

    - by KARASZI István
    I would like to create a cacheable HTTP response for a POST request. My current implementation returns the following for the POST request:

    HTTP/1.1 201 Created
    Expires: Sat, 03 Oct 2020 15:33:00 GMT
    Cache-Control: private,max-age=315360000,no-transform
    Content-Type: application/x-www-form-urlencoded; charset=UTF-8
    Content-Length: 9
    ETag: 2120507660800737950
    Last-Modified: Wed, 06 Oct 2010 15:33:00 GMT
    .........

    But it looks like the browsers (Safari and Firefox tested) are not caching the response. The corresponding part of the HTTP RFC says: "Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource." So I think it should be cached. I know I could set a session variable, set a cookie, and do a 303 redirect, but I want to cache the response of the POST request itself. Is there any way to do this? P.S.: I started with a simple 200 OK; that does not work either.
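
    In practice, mainstream browsers are reluctant to reuse a cached POST response even when Cache-Control and Expires are set, which is why the usual fallback is the POST-redirect-GET pattern the question already mentions: the 303 points at a GET resource, and that GET response is what gets cached. A hedged ASP.NET sketch of that pattern (SubmitHandler, SaveSubmission and ResultPage.ashx are made-up names):

    // Requires: using System; using System.Collections.Specialized; using System.Web;
    public class SubmitHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string id = SaveSubmission(context.Request.Form);   // hypothetical persistence step

            // 303 tells the browser to fetch the result with GET; that GET response
            // can then be cached normally with Cache-Control/Expires/ETag.
            context.Response.StatusCode = 303;
            context.Response.RedirectLocation = "/ResultPage.ashx?id=" + id;
        }

        public bool IsReusable { get { return true; } }

        private string SaveSubmission(NameValueCollection form)
        {
            // ... store the submitted data and return an identifier ...
            return Guid.NewGuid().ToString("N");
        }
    }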

    Read the article

  • InstallExecuteSequence cache interferes with custom action operation

    - by Dima G
    I need to upgrade a product that could be installed in per-user context to a new version that is always in per-machine context. The requirements are: Whether the old version was installed in a Per-User (no matter who) or Per-Machine context should be completely seamless to an administrator user that performs the upgrade. The MSI upgrade should succeed without the need to know the password of the user that originally installed the previous version of the product in a Per-User context. The installation should be performed from a single .msi file (no setup.exe is allowed). The installation should be able to run in a silent (non-UI) mode. No reboots are allowed during installation. My strategy is to find in the beginning of the installation whether the product is already installed in per-user context, and if so, to transform the registry keys manually to Per-Machine context (I checked: no additional changes such as file system changes etc. are needed except this transform). I figured out how to move all appropriate keys in the registry from the user settings to the machine settings (pre-loaded appropriate user hive in case it didn't appear in HKEY_USERS) and created custom action that does it - and it does work when I run it as a stand-alone executable before running the MSI. The problem, however, is that when Windows Installer runs InstallExecuteSequence it first creates a 'cached product context' for all products. So when my custom action runs in the course of InstallExecuteSequence, this cache has already been created. Thus FindRelatedProducts action that determines if older product with same upgrade code exists looks on that cache and ignores the changes that my custom action applied. If before running the MSI the previous product was in per-user context, FindRelatedProducts will look at the cache and not apply the upgrade and remove the previous version, because the new product is in per-machine context, even though the previous product version is already configured to per-machine context in the registry by that time by my custom action. What can be done to solve this problem without violating the requirements stated above?

    Read the article

  • newbie hibernate first level cache confusion

    - by Bruce
    Hi all, I'm just getting to grips with Hibernate and I'm a little confused. I wanted to watch the operation of the first-level cache, which I understood to batch up queries until the end of the session. But if I create an object, Hibernate saves it immediately, so that when I later update it in the same transaction, it has to do an update too:

    Session session = factory.getCurrentSession();
    session.beginTransaction();
    Test1 test1 = new Test1();
    test1.setName("Test 1");
    test1.setValue(10);
    // Touch it
    session.save(test1);
    System.out.println("At checkpoint 1");
    test1.setValue(20);
    session.getTransaction().commit();

    I see the SQL for the save, then 'At checkpoint 1', then the SQL for the update. Do I have something set up wrong, or am I misunderstanding Hibernate's first-level cache? Is there a good document on the first-level cache? I didn't find anything in the Hibernate docs, but I could easily have missed it. Thanks!

    Read the article

  • Best approach to cache Counts from SQL tables ?

    - by pixel3cs
    I would like to develop a forum from scratch, with special needs and customization, and I would like to prepare it for intensive usage. I'm wondering how to cache things like the user post count and user reply count. Having only three tables - tblForum, tblForumTopics, tblForumReplies - what is the best approach to caching the per-user topic and reply counts?

    Consider a simple scenario: a user clicks a link, opens the Replies.aspx?id=x&page=y page, and starts reading replies. On that HTTP request, the server runs a SQL command which fetches all replies for the page, also inner joining with tblForumReplies to find the number of replies made by each user that replied:

    select tblForumReplies.*, tblFR.TotalReplies
    from tblForumReplies
    inner join (
        select IdRepliedBy, count(*) as TotalReplies
        from tblForumReplies
        group by IdRepliedBy
    ) as tblFR on tblFR.IdRepliedBy = tblForumReplies.IdRepliedBy

    Unfortunately this approach is very CPU-intensive, and I would like to hear your ideas on how to cache things like table counts. If I count replies for each user on insert/delete and store the count in a separate field, how do I keep it in sync with manual data changes? Suppose I manually delete replies in SQL.
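
    Since the pages are ASPX, one hedged option on top of (or instead of) a denormalized counter column is to cache the whole GROUP BY result in HttpRuntime.Cache for a few minutes and invalidate it whenever replies are inserted or deleted, including from admin tools; GetReplyCountsFromDatabase() below is a hypothetical data-access helper, and the 5-minute window is an assumption.

    // Requires: using System; using System.Collections.Generic; using System.Web; using System.Web.Caching;
    public static class ReplyCountCache
    {
        private const string CacheKey = "UserReplyCounts";

        public static IDictionary<int, int> GetCounts()
        {
            var counts = HttpRuntime.Cache[CacheKey] as IDictionary<int, int>;
            if (counts == null)
            {
                counts = GetReplyCountsFromDatabase();   // hypothetical: runs the GROUP BY query once
                HttpRuntime.Cache.Insert(
                    CacheKey,
                    counts,
                    null,                                 // no cache dependency
                    DateTime.UtcNow.AddMinutes(5),        // absolute expiration: stale for at most 5 minutes
                    Cache.NoSlidingExpiration);
            }
            return counts;
        }

        // Call this from the code paths that insert or delete replies (including manual admin changes),
        // so the data is refreshed on the next request.
        public static void Invalidate()
        {
            HttpRuntime.Cache.Remove(CacheKey);
        }

        private static IDictionary<int, int> GetReplyCountsFromDatabase()
        {
            // ... SELECT IdRepliedBy, COUNT(*) FROM tblForumReplies GROUP BY IdRepliedBy ...
            return new Dictionary<int, int>();
        }
    }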

    Read the article

  • solaris zpool SSD cache device "faulted"

    - by John-ZFS
    I am trying to get past these SATA SSD errors: the smartctl command failed to read the SATA SSD ("SATA is not supported"). What could be the reason for the errors? Does this mean the SSD has reached EOL and needs to be replaced?

    errors: No known data errors
      pool: zpool1216
     state: DEGRADED
    status: One or more devices are faulted in response to persistent errors. Sufficient replicas exist for the pool to continue functioning in a degraded state.
    action: Replace the faulted device, or use 'zpool clear' to mark the device repaired.
      scan: scrub repaired 0 in 0h24m with 0 errors on Fri May 18 14:31:08 2012
    config:

        NAME          STATE     READ WRITE CKSUM
        zpool1216     DEGRADED     0     0     0
          raidz1-0    ONLINE       0     0     0
            c11t10d0  ONLINE       0     0     0
            c11t11d0  ONLINE       0     0     0
            c11t12d0  ONLINE       0     0     0
            c11t13d0  ONLINE       0     0     0
            c11t14d0  ONLINE       0     0     0
            c11t15d0  ONLINE       0     0     0
            c11t16d0  ONLINE       0     0     0
            c11t1d0   ONLINE       0     0     0
            c11t2d0   ONLINE       0     0     0
            c11t3d0   ONLINE       0     0     0
            c11t4d0   ONLINE       0     0     0
            c11t5d0   ONLINE       0     0     0
            c11t6d0   ONLINE       0     0     0
            c11t7d0   ONLINE       0     0     0
            c11t8d0   ONLINE       0     0     0
            c11t9d0   ONLINE       0     0     0
        logs
          c9d0        FAULTED      0     0     0  too many errors
        cache
          c10d0       FAULTED      0    17     0  too many errors

    Read the article

  • Can't access my accelerated hard disk from msdos after installing linux on ssd cache

    - by Chibueze Opata
    I mistakenly installed Linux Mint on my ssd (forgot my PC actually came with one), when it detected a ~31GiB disk that it wanted to install to, I was a bit confused since I had brought out 30Gb in my primary disk for it, but I clicked continue. After installation, I tried to boot back into my Windows and it brought out some Intel Raid Disk Utility stuff saying I should disable acceleration on a disk something couldn't be found, I canceled it but whatever I tried, recovery tools, setups etc, I couldn't just access the drive which was apparently using the SSD as cache. Since then I've been stuck. I tried setting the 'raid' flag to the disk from 'gParted', still I couldn't. I tried the diskraid utility from windows recover disk, it said it couldn't detect any raid, diskpart sees the partition but doesn't see the volume, when I remove the raid flag, it sees the volume as one of raw type, and I can't access anything. I can however mount the drive from terminal in Mint and access my files, but I don't have any backup media at the moment so I can do a factory re-install. Please how do I go about solving the issue, precisely I would like to know how to boot into the drive again. Thanks!

    Read the article

  • Linux arp cache timeout values

    - by Jak
    I'm trying to configure sane values for the Linux kernel arp cache timeout, but I can't find a detailed explanation as to how they work anywhere. Even the kernel.org documentation doesn't give a good explanation, I can only find recommended values to alleviate overflow. Here is an example of the values I have: net.ipv4.neigh.default.gc_thresh1 = 128 net.ipv4.neigh.default.gc_thresh2 = 512 net.ipv4.neigh.default.gc_thresh3 = 1024 Now, from what I've gathered so far: gc_thresh1 is the number of arp entries allowed before the garbage collector starts removing any entries at all. gc_thresh2 is the soft-limit, which is the number of entries allowed before the garbage collector actively removes arp entries. gc_thresh3 is the hard limit, where entries above this number are aggressively removed. Now, if I understand correctly, if the number of arp entries goes beyond gc_thresh1 but remains below gc_thresh2, the excess will be removed periodically with an interval set by gc_interval. My question is, if the number of entries goes beyond gc_thresh2 but below gc_thresh3, or if the number goes beyond gc_thresh3, how are the entries removed? In other words, what does "actively" and "aggressively" removed mean exactly? I assume it means they are removed more frequently than what is defined in gc_interval, but I can't find by how much.

    Read the article

  • How to efficiently serve massive sitemaps in django

    - by mlissner
    I have a site with about 150K pages in its sitemap. I'm using the sitemap index generator to make the sitemaps, but really, I need a way of caching them, because building the 150 sitemaps of 1,000 links each is brutal on my server.[1] I COULD cache each of these sitemap pages with memcached, which is what I'm using elsewhere on the site... however, there are so many sitemaps that it would completely fill memcached, so that doesn't work. What I think I need is a way to use the database as the cache for these, and to only generate them when there are changes (which, as a result of the sitemap index, means only changing the last couple of sitemap pages, since the rest are always the same).[2] But, as near as I can tell, I can only use one cache backend with django. How can I have these sitemaps ready for when Google comes-a-crawlin' without killing my database or memcached? Any thoughts? [1] I've limited it to 1,000 links per sitemap page because generating the max, 50,000 links, just wasn't happening. [2] For example, if I have sitemap.xml?page=1, page=2 ... sitemap.xml?page=50, I only really need to change sitemap.xml?page=50 until it is full with 1,000 links; then I can cache it pretty much forever, focus on page 51 until it's full, cache it forever, etc.

    Read the article

  • VBScript: Disable caching of response from server to HTTP GET URL request

    - by Rob
    I want to turn off the cache used when a URL call to a server is made from VBScript running within an application on a Windows machine. What function/method/object do I use to do this?

    When the call is made for the first time, my Linux-based Apache server returns a response from the CGI Perl script it is running. However, subsequent runs of the script seem to use the same response as the first time, so the data is being cached somewhere. My server logs confirm that the server is not being called on those subsequent runs, only the first time.

    This is what I am doing. I am using the following code from within a commercial application (I don't wish to mention which application; it is probably not relevant to my problem):

    With CreateObject("MSXML2.XMLHTTP")
        .open "GET", "http://myserver/cgi-bin/nsr/nsr.cgi?aparam=1", False
        .send
        nsrresponse = .responseText
    End With

    Is there a function/method on the above object to turn off caching, or should I be calling a method/function to turn off caching on a response object before making the URL request? I looked here for a solution: http://msdn.microsoft.com/en-us/library/ms535874(VS.85).aspx - not quite helpful enough. And here: http://www.w3.org/TR/XMLHttpRequest/ - very unfriendly and hard to read.

    I am also trying to force not using the cache via HTTP header settings and HTML document head metadata. Here is a snippet of the server-side Perl CGI script that returns the response to the calling client, setting the expiry to 0:

    print $httpGetCGIRequest->header(
        -type    => 'text/html',
        -expires => '+0s',
    );

    Meta tag in the HTML response sent back to the client:

    <html><head><meta http-equiv="CACHE-CONTROL" content="NO-CACHE"></head>
    <body>
    response message generated from server
    </body>
    </html>

    The above HTTP header and HTML document head settings haven't worked, hence my question.

    Read the article

  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    I'm using kubuntu jaunty (i386 32bit), kernel 2.6.28-13-generic. I've 4Gb of RAM, of which only 3317Mb are seen by the system (I guess because of the 32bit system). I'm seeing that the pagecache utilization is continually growing, up to the point that the system is unusable (after a few days). This happens also when I don't do anything (all user applications closed and the bare minimum of services enabled). If enabled, the system starts to use swap space (using it all in the end). Even if swap is disabled, disk activity becomes continuous, with the system unresponsive. For example, right now the system is working (albeit a tad slow), with only Firefox and wing ide running, and I have 2Gb cached with only 45Mb mapped: $ free total used free shared buffers cached Mem: 3346388 3247328 99060 0 8416 2117980 -/+ buffers/cache: 1120932 2225456 Swap: 2144668 519448 1625220 $ cat /proc/meminfo MemTotal: 3346388 kB MemFree: 97128 kB Buffers: 7872 kB Cached: 2120224 kB SwapCached: 413860 kB Active: 2304596 kB Inactive: 865984 kB Active(anon): 2279168 kB Inactive(anon): 830236 kB Active(file): 25428 kB Inactive(file): 35748 kB Unevictable: 32 kB Mlocked: 32 kB HighTotal: 2492940 kB HighFree: 5456 kB LowTotal: 853448 kB LowFree: 91672 kB SwapTotal: 2144668 kB SwapFree: 1625244 kB Dirty: 84 kB Writeback: 0 kB AnonPages: 629304 kB Mapped: 45768 kB Slab: 45600 kB SReclaimable: 21756 kB SUnreclaim: 23844 kB PageTables: 4468 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 3817860 kB Committed_AS: 3735020 kB VmallocTotal: 122880 kB VmallocUsed: 9352 kB VmallocChunk: 66600 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 4096 kB DirectMap4k: 16376 kB DirectMap4M: 888832 kB If I try to drop the caches, little happens: # sync ; echo 3 > /proc/sys/vm/drop_caches ; free total used free shared buffers cached Mem: 3346388 3220580 125808 0 3020 2100600 -/+ buffers/cache: 1116960 2229428 Swap: 2144668 519356 1625312 Right now I've vm.swappiness = 5, but I've tried also with 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said the pagecache eats all memory even with swapping turned off. What is happening? How to avoid this?

    Read the article

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off:

    Header unset ETag
    FileETag None

    I know that I should use either Expires or Cache-Control. In addition, I know that I should use either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:

    <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
    Header set Cache-Control "max-age=172800, public, must-revalidate"
    </FilesMatch>

    With this server I am not going to be able to rely on staff to rename images, CSS and JS files in web applications, so I do not want to set the expiry far in the future without knowing (with good certainty) that most or all browsers will check whether the content has changed. What I do not want is for someone to call me and say the website is broken because they replaced an image and it's not showing up. But I do want to take as much advantage of caching and expires as I can, while still ensuring that browsers will check with the server to see whether components have changed.

    I have access to both .htaccess and the Apache .conf file, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration to achieve these goals for this client's server? Thanks for your help.

    Read the article

  • Conditionally changing MIME type in nginx

    - by Peter
    I'm using nginx as a frontend to Rails. All pages are cached as .html files on disk, and nginx serves these files if they exist. I want to send the correct MIME type for feeds (application/rss+xml), but the way I have so far is quite ugly, and I'm wondering if there is a cleaner way. Here is my config: location ~ /feed/$ { types {} default_type application/rss+xml; root /var/www/cache/; if (-f request_filename/index.html) { rewrite (.*) $1/index.html break; } if (-f request_filename.html) { rewrite (.*) $1.html break; } if (-f request_filename) { break; } if (!-f request_filename) { proxy_pass http://mongrel; break; } } location / { root /var/www/cache/; if (-f request_filename/index.html) { rewrite (.*) $1/index.html break; } if (-f request_filename.html) { rewrite (.*) $1.html break; } if (-f request_filename) { break; } if (!-f request_filename) { proxy_pass http://mongrel; break; } } My questions: Is there a better way to change the MIME type? All cached files have .html extensions and I cannot change this. Is there a way to factor out the if conditions in /feed/$ and /? I understand that I can use include, but I'm hoping for a better way. Putting part of the config in a different file is not that readable. Can you spot any bugs in the if conditions? I'm using nginx 0.6.32 (Debian Lenny). I prefer to use the version in APT. Thanks.

    Read the article

  • Unusual Caching Issue with IE 7/8 and IIS 7

    - by Daniel A. White
    We recently moved a site into production running Server 2008 x64 and IIS 7. The ASP.NET pages apparently load just fine, but when it comes to IE 7 and 8, a weird caching issue has cropped up with the CSS and JavaScript files on the page. On a very sporadic schedule, IE does not get all the files necessary to compose the page (i.e. CSS and JS files). When I manually go to the missing files from the address bar, they come back from local cache as empty. I F5 these source files and magically they come down properly. I refresh the site after loading a few files and the cache seems to hold. This problem has only been reproduced (again, sporadically) on IE 7 and 8 running XP. Chrome and Firefox appear to be immune. We have set IIS to use server-side kernel caching for CSS, JS and images. We also have set to expire content for the App_Themes and Scripts directories to expire immediately. One initial thought it was a SWF loading an FLV on page load. These fixes have not remedied the problem. We had no problems on our staging server which is using Server 2003 and IIS 6. Any ideas would be greatly appreciated. P.S. It sounds similar to this problem: but we do have the Static Content module installed. http://serverfault.com/questions/115099/iis-content-length-0-for-css-javascript-and-images

    Read the article

  • Output Caching with IIS7 - How To for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx page that takes some query string variables and returns an asset. Each URL corresponds to a unique asset. In RetrieveBlob.aspx a Cache Profile is set. In Web.Config the profile looks like this (under the system.web tag):

    <caching>
      <outputCache enableOutputCache="true" />
      <outputCacheSettings>
        <outputCacheProfiles>
          <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" />
        </outputCacheProfiles>
      </outputCacheSettings>
    </caching>

    OK, this works fine. When I put a breakpoint in the code-behind of RetrieveBlob.aspx, it gets hit the first time and not on subsequent requests.

    Now, I throw away the Cache Profile and instead put this in my Web.Config under system.webServer:

    <caching>
      <profiles>
        <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
        <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
        <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
        <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
        <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
        <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
        <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
      </profiles>
    </caching>

    Now the caching doesn't work anymore. What am I doing wrong? Is it possible to configure a caching profile for a dynamic aspx page under the caching tag of system.webServer?
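
    The system.webServer profiles above are keyed by file extension and none of them covers .aspx, so they never apply to RetrieveBlob.aspx. One hedged alternative for a single dynamic page is to set the output-cache policy programmatically in its code-behind, roughly mirroring the profile that worked (a sketch, not the documented IIS fix):

    // Requires: using System; using System.Web;
    protected void Page_Load(object sender, EventArgs e)
    {
        // Roughly equivalent in spirit to Duration="14800" VaryByParam="*":
        Response.Cache.SetCacheability(HttpCacheability.Server);    // keep the output in the server cache
        Response.Cache.SetExpires(DateTime.Now.AddSeconds(14800));
        Response.Cache.SetValidUntilExpires(true);                   // ignore client cache-invalidation headers
        Response.Cache.VaryByParams["*"] = true;                     // separate cache entry per query string

        // ... write the requested asset to the response ...
    }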

    Read the article

  • Curious: What makes some CPUs better than others? [closed]

    - by Zizma
    I have been wondering about this for a long while now and was hoping someone here could answer it easily. If I were looking for the most powerful CPU, what should I really be looking at? There are so many different parameters on a CPU, and I want to know what each one does and what really matters. Basically this:

    What is the deal with cores? Taking optimized applications out of the mix, would it theoretically be better to get a quad-core 1.0 GHz CPU or a single-core 4 GHz CPU?

    Also, what is the difference between, say, a Sandy Bridge CPU and an Ivy Bridge CPU? If they both had the same clock speed and number of cores, would the Ivy Bridge perform better? Does an older Xeon with a clock speed and core count equal to a new i7 really perform worse or slower?

    Does process size matter? Why would I go with a 22nm CPU over a 32nm one when the size difference seems so trivial?

    What about the cache? When does the cache come into play for performance?

    Read the article

  • Offline copies in Windows file sharing

    - by netvope
    I frequently access media files (music or video) on a remote Windows file share. My Internet connection is not very fast, and I find it a waste of bandwidth when I repeatedly access the same files. For example, I may listen to the same song 30 times in a month. So, I would like to cache files I've used. I know Windows has an "Always available offline" feature but I think it doesn't suit my needs. I don't want to make the whole share "available offline" as the remote Windows file share is huge (in terabytes). Making individual files "available offline" is tedious as the files are scattered in many different directories. It would be much more convenient if I can simply cache those I've used. Also The files on the share seldom change. Many of the files are rarely used. Some of the files are frequently used. I don't have a list of the most frequently used files. So I think the best way is to have a caching proxy for the Windows share. What do you think? I have a Linux box sitting around. Perhaps I should try to setup samba4?

    Read the article

  • Trying to get dynamic content hole-punched through Magento's Full Page Cache

    - by rlflow
    I am using Magento Enterprise 1.10.1.1 and need to get some dynamic content on our product pages. I am inserting the current time in a block to quickly see if it is working, but can't seem to get through full page cache. I have tried a variety of implementations found here: http://tweetorials.tumblr.com/post/10160075026/ee-full-page-cache-hole-punching http://oggettoweb.com/blog/customizations-compatible-magento-full-page-cache/ http://magentophp.blogspot.com/2011/02/magento-enterprise-full-page-caching.html (http://www.exploremagento.com/magento/simple-custom-module.php - custom module) Any solutions, thoughts, comments, advice is welcome. here is my code: app/code/local/Fido/Example/etc/config.xml <?xml version="1.0"?> <config> <modules> <Fido_Example> <version>0.1.0</version> </Fido_Example> </modules> <global> <blocks> <fido_example> <class>Fido_Example_Block</class> </fido_example> </blocks> </global> </config> app/code/local/Fido/Example/etc/cache.xml <?xml version="1.0" encoding="UTF-8"?> <config> <placeholders> <fido_example> <block>fido_example/view</block> <name>example</name> <placeholder>CACHE_TEST</placeholder> <container>Fido_Example_Model_Container_Cachetest</container> <cache_lifetime>86400</cache_lifetime> </fido_example> </placeholders> </config> app/code/local/Fido/Example/Block/View.php <?php /** * Example View block * * @codepool Local * @category Fido * @package Fido_Example * @module Example */ class Fido_Example_Block_View extends Mage_Core_Block_Template { private $message; private $att; protected function createMessage($msg) { $this->message = $msg; } public function receiveMessage() { if($this->message != '') { return $this->message; } else { $this->createMessage('Hello World'); return $this->message; } } protected function _toHtml() { $html = parent::_toHtml(); if($this->att = $this->getMyCustom() && $this->getMyCustom() != '') { $html .= '<br />'.$this->att; } else { $now = date('m-d-Y h:i:s A'); $html .= $now; $html .= '<br />' ; } return $html; } } app/code/local/Fido/Example/Model/Container/Cachetest.php <?php class Fido_Example_Model_Container_Cachetest extends Enterprise_PageCache_Model_Container_Abstract { protected function _getCacheId() { return 'HOMEPAGE_PRODUCTS' . md5($this->_placeholder->getAttribute('cache_id') . $this->_getIdentifier()); } protected function _renderBlock() { $blockClass = $this->_placeholder->getAttribute('block'); $template = $this->_placeholder->getAttribute('template'); $block = new $blockClass; $block->setTemplate($template); return $block->toHtml(); } protected function _saveCache($data, $id, $tags = array(), $lifetime = null) { return false; } } app/design/frontend/enterprise/[mytheme]/template/example/view.phtml <?php /** * Fido view template * * @see Fido_Example_Block_View * */ ?> <div> <?php echo $this->receiveMessage(); ?> </span> </div> snippet from app/design/frontend/enterprise/[mytheme]/layout/catalog.xml <reference name="content"> <block type="catalog/product_view" name="product.info" template="catalog/product/view.phtml"> <block type="fido_example/view" name="product.info.example" as="example" template="example/view.phtml" />

    Read the article

  • HTTP header 304 and caching?

    - by Royi Namir
    Our company uses these settings (don't ask me why): for every request, they want a new request to go to the server. This is an intranet system which uses only IE. They set this in the IE cache settings (the screenshots referenced below are not reproduced here). We also have Windows authentication (NTLM) in IIS 7. I have two questions, please.

    Question #1: When the browser makes a request for a CSS file (leave the 401 response aside for now - this is how NTLM works), it requests it with an If-Modified-Since header. Why does it add this header? How can I configure that? Why doesn't it honour the IE settings and try to download the file each time, as shown in the first picture?

    Question #2: The response (after NTLM negotiation) for that request was a Not Modified response, i.e. a 304, and I assume that's because we sent the request with the If-Modified-Since header. But there is a problem: it is actually telling the browser to load the file from my cache, even though I told it explicitly in the IE settings not to load from cache. What am I missing here? Thanks a lot.
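
    If the real requirement is "always hit the server and always re-download", a hedged server-side option is to strip the validators and mark those responses as non-cacheable, so IE has nothing to revalidate with. The sketch below assumes an IIS 7 integrated-pipeline site where managed modules run for static files (runAllManagedModulesForAllRequests="true"), which may not match this setup.

    // Requires: using System; using System.Web;
    // Hedged sketch: strip validators and mark CSS/JS responses as non-cacheable.
    public class NoCacheModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.PreSendRequestHeaders += (sender, e) =>
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                string path = ctx.Request.Path;
                if (path.EndsWith(".css", StringComparison.OrdinalIgnoreCase) ||
                    path.EndsWith(".js", StringComparison.OrdinalIgnoreCase))
                {
                    ctx.Response.Cache.SetCacheability(HttpCacheability.NoCache);
                    ctx.Response.Cache.SetNoStore();            // Cache-Control: no-store
                    ctx.Response.Headers.Remove("ETag");        // no validator -> no 304 revalidation
                    ctx.Response.Headers.Remove("Last-Modified");
                }
            };
        }

        public void Dispose() { }
    }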

    Read the article

  • Caching all files in varnish

    - by csgwro
    I want my Varnish servers to cache all files. The backend is lighttpd hosting only static files, and there is an md5 in the URL in case a file changes (e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf). However, my hit ratio is very poor (about 0.18). My config:

    sub vcl_recv {
        set req.backend=default;
        ### passing health to backend
        if (req.url ~ "^/health.html$") {
            return (pass);
        }
        remove req.http.If-None-Match;
        remove req.http.cookie;
        remove req.http.authenticate;
        if (req.request == "GET") {
            return (lookup);
        }
    }

    sub vcl_fetch {
        ### do not cache wrong codes
        if (beresp.status == 404 || beresp.status >= 500) {
            set beresp.ttl = 0s;
        }
        remove beresp.http.Etag;
        remove beresp.http.Last-Modified;
    }

    sub vcl_deliver {
        set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT";
    }

    I have done some performance tuning:

    DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100"

    The main URL which is missed is /health.html. Is the forward to the backend configured correctly? With health checking disabled, the hit ratio increases to 0.45. Now mostly /crossdomain.xml is missed (from many domains, as it is a wildcard). How can I avoid that? Should I hash on other headers like User-Agent or Accept-Encoding as well? I think the default hashing mechanism uses url + host/IP. Compression is used at the backend. What else can improve performance?

    Read the article

  • Faster caching method

    - by pataroulis
    I have a service that provides HTML code which, at some point, stops being updated. The code is always generated dynamically from a database with 10 million entries, so each HTML page render searches there for, say, 60 or 70 of those entries and then renders the page. So, for those expired pages, I want to use a caching system which will be VERY simple (for example, just insert a record with the rendered HTML and, if I need to, remove it). I tried doing it file-based, but checking for the existence of a file and then passing it through PHP to actually render it seems like too much for what I want to do. I was thinking of doing it in MySQL with a table of MEDIUMBLOBs (each page is around 100k). It would hold about 150,000 such records (for now, at least).

    My question is: would it be faster to let MySQL do the lookup and pass the page to PHP, or is the file-based approach faster?

    The lookup code for the file-based version looks like this:

    $page = @file_get_contents(getCacheFilename($pageId));
    if($page!=NULL) {
        echo $page;
    } else {
        renderAndCachePage($pageId);
    }

    which does a single lookup whether it finds the file or not. The MySQL table would just have an ID (the page id) and the blob entry. The disk of the system is a simple SATA RAID 1, and the MySQL daemon can grab up to 2.5GB of memory (I have a proxy running too, eating the rest of the 16GB of the machine). In general the disk is quite busy already.

    The reason I'm not using PEAR Cache is that I think (please feel free to correct me on this) it adds overhead I do not need, because the page rendering code is called about 2M times per day and I wouldn't want to go through the whole code each time (and yes, I have eAccelerator to cache the code too).

    Any pointer to which direction I should go would be greatly welcome. Thanks!

    Read the article

  • Meet the new Windows Azure

    - by Leniel Macaferi
    Today we are releasing a major set of improvements to Windows Azure. Below is a brief summary of just a few of them:

    New Admin Portal and Command-Line Tools

    Today's release comes with a new Windows Azure portal that lets you manage all of the resources and services offered on Windows Azure in a seamlessly integrated way. The portal is very fast and fluid, supports filtering and sorting of data (which makes it very easy to use for large deployments), works in all browsers, and offers a lot of great new features - including built-in support for Virtual Machines (VMs), Web Sites, Storage, and monitoring of Cloud Services. The new portal is built on top of a REST-based management API within Windows Azure - and everything you can do through the portal can also be done programmatically against this Web API. We are also releasing command-line tools today (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering a downloadable set of tools for PowerShell (Windows) and Bash (Mac and Linux). Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines (VMs)

    Windows Azure now supports the ability to deploy and run durable/persistent VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively you can upload and run your own custom VHD images. Virtual machines are durable (meaning anything you install inside them persists across reboots) and you can run any operating system on them. Our built-in image gallery includes Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS and SUSE distributions). Once you have created a VM instance, you can easily use Terminal Server or SSH to get into it and configure and customize the virtual machine however you want (and optionally capture a snapshot of the current image to use when creating new VM instances). This gives you the flexibility to run practically any workload within the Windows Azure platform.

    The new Windows Azure Portal provides a rich set of features for managing Virtual Machines - including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also makes it easy to attach multiple disks to VMs (which you can then mount and format as drives). Optionally, you can enable geo-replication support for these disks - which causes Windows Azure to continuously replicate your storage to a secondary data center (creating a backup), located at least 640 kilometers away from your primary data center.

    We use the same VHD format that is supported by Windows virtualization today (which we have released as an open specification), allowing you to easily migrate existing workloads you have already virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which gives you the flexibility to easily migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it locally - no import/export steps are required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that lets you start small (and for free) and then scale your application as your traffic grows. You can create a new web site in Azure and have it ready to deploy in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web Sites, including the ability to monitor and track resource utilization in real time.

    You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates for Visual Studio and WebMatrix that let developers easily deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of the site deployment, as well as the ability to incrementally update the database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so enables integration with our new online TFS service (which supports a full TFS workflow - including elastic build and test support), or you can create a Git repository and reference it as a remote to perform automatic deployments. Once you deploy using TFS or Git, the deployments tab tracks the deployments you make and lets you select an older (or newer) deployment, so you can quickly roll your site back to an earlier state of your code. This provides a very powerful workflow experience.

    Windows Azure now lets you deploy up to 10 web sites into a free, shared, multi-tenant hosting environment (where a site you deploy will be one of several sites running on a shared set of server resources). This gives you an easy way to start developing projects at no cost. You can optionally upgrade your sites to run in a "reserved mode" that isolates them, so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use - for example, increasing your reserved instance capacity as your traffic grows. Windows Azure automatically handles load balancing the traffic across VM instances, and you get the same super-fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis, which allows you to scale your resources up and down to meet only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that enable rich multi-tier architectures and automated application management, and that can scale to extremely large deployments. Previously we referred to this capability as "hosted services" - with this week's release we are now renaming it to "cloud services". We are also enabling a bunch of new features with them.

    Distributed Caching: one of the really cool new features being enabled with cloud services is a new distributed caching capability that lets you use and set up a low-latency, in-memory distributed cache within your applications. This cache is isolated for use only by your applications, and it has no throttling limits. The cache can dynamically and elastically grow and shrink (without you having to redeploy your application or make code changes), and it supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, this new cache capability can also now support the Memcached protocol, which lets you point code written against Memcached at the distributed cache with no code changes required.

    The new distributed cache can be configured to run in one of two ways:

    1) Using a co-located cache approach. With this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache then pools that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application, regardless of whether the cached data is stored on that role or another one. The big benefit of the co-located cache option is that it is free (you don't pay anything to enable it) and it lets you use what might otherwise be unused memory within your application's VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These are also joined into one large distributed cache ring that the other roles within your application can access. You can use these roles to cache tens or hundreds of GB of data in memory extremely effectively, and the cache can be elastically grown or shrunk at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs (software development kits) with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of their source code is published under an Apache 2 license and maintained in repositories on GitHub. The .NET SDK for Azure in particular has a lot of great improvements with today's release, and now includes tooling support for both VS 2010 and VS 2012 RC. We are also now shipping SDK downloads for Windows, Mac and Linux in the languages offered on all of those systems, so developers can build Windows Azure applications using any operating system during development.

    Much, Much More

    The summary above is only a short list of some of the improvements shipping in either preview or final form today - there is a lot more included in this release: new Virtual Private Networking capabilities, a new Service Bus runtime and its tooling support, the public preview of the new Azure Media Services, new data centers, significantly upgraded storage and networking hardware, SQL Reporting Services, new identity features, support for more than 40 new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live presentation I will be giving at 1pm PDT (5pm Brasília time) today, June 7, where I will walk through all of the new features. We will be opening up the new features mentioned above for public use a few hours after the presentation ends.

    We are really excited to see the great applications you will build with these new features. Hope this helps, - Scott (Text translated from the original post by Leniel Macaferi.)

    Read the article
