Search Results

Search found 7323 results on 293 pages for 'cache manifest'.


  • Invalidating Memcached Keys on save() in Django

    - by Zack
    I've got a view in Django that uses memcached to cache data for the more highly
    trafficked views that rely on a relatively static set of data. The key word is
    relatively: I need to invalidate the memcached key for that particular URL's data
    when it's changed in the database. To be as clear as possible, here's the meat an'
    potatoes of the view (Person is a model, cache is django.core.cache.cache):

        def person_detail(request, slug):
            if request.is_ajax():
                cache_key = "%s_ABOUT_%s" % (settings.SITE_PREFIX, slug)
                # Check the cache to see if we've already got this result made.
                json_dict = cache.get(cache_key)
                # Was it a cache hit?
                if json_dict is None:
                    # That's a negative Ghost Rider
                    person = get_object_or_404(Person, display=True, slug=slug)
                    json_dict = {
                        'name': person.name,
                        'bio': person.bio_html,
                        'image': person.image.extra_thumbnails['large'].absolute_url,
                    }
                    cache.set(cache_key, json_dict)
                # json_dict will now exist, whether it's from the cache or not
                response = HttpResponse()
                response['Content-Type'] = 'text/javascript'
                # Make sure it's all properly formatted for JS by using simplejson
                response.write(simplejson.dumps(json_dict))
                return response
            else:
                # This is where the fully templated response is generated

    What I want to do is get at that cache_key variable in its "unformatted" form, but
    I'm not sure how to do this--if it can be done at all. Just in case there's already
    something to do this, here's what I want to do with it (this is from the Person
    model's hypothetical save method):

        def save(self):
            # If this is an update, the key will be cached; otherwise it won't.
            # Let's see if we can't find me.
            try:
                old_self = Person.objects.get(pk=self.id)
                cache_key = ...  # Voodoo magic to get that variable
                # Generate the key currently cached
                old_key = cache_key.format(settings.SITE_PREFIX, old_self.slug)
                cache.delete(old_key)  # Hit it with both barrels of rock salt
            # Turns out this doesn't already exist; let's make that first request
            # even faster by making this cache right now
            except DoesNotExist:
                pass  # I haven't gotten to this yet.
            super(Person, self).save()

    I'm thinking about making a view class for this sorta stuff, and having functions
    in it like remove_cache or generate_cache, since I do this sorta stuff a lot. Would
    that be a better idea? If so, how would I call the views in the URLconf if they're
    in a class?
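
    One common way to sidestep the "voodoo" entirely (a sketch, not from the original
    question; the helper module and function names are hypothetical) is to define the
    key template exactly once, in a shared helper that both the view and save() call:

        # cache_keys.py -- hypothetical shared module: one source of truth for the key
        from django.conf import settings

        def person_about_key(slug):
            return "%s_ABOUT_%s" % (settings.SITE_PREFIX, slug)

        # models.py
        from django.db import models
        from django.core.cache import cache
        from cache_keys import person_about_key

        class Person(models.Model):
            # ... fields as before ...

            def save(self, *args, **kwargs):
                if self.pk:  # an update: the old slug may have a cached entry
                    try:
                        old_self = Person.objects.get(pk=self.pk)
                        cache.delete(person_about_key(old_self.slug))
                    except Person.DoesNotExist:
                        pass
                super(Person, self).save(*args, **kwargs)

    With the key built in one place, nothing has to be extracted from the view at all,
    and the view's own cache_key line reduces to person_about_key(slug).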


  • How Do I Cache Just the Homepage with Apache .htaccess?

    - by Volomike
    This config is close...

        <FilesMatch "\.(php)$">
            Header set Cache-Control "max-age=7200, must-revalidate"
        </FilesMatch>

    ...but it applies to all PHP pages, not just the home page like I want. Basically,
    the developer said he wants example.com to be cached, while
    http://example.com/electronics/ would not be cached. Note the developer is using
    pretty URLs with an MVC framework that runs everything through index.php.
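
    A sketch of one way to scope the header to the root URL only (untested; assumes
    mod_setenvif and mod_headers are enabled, and that the homepage is requested as /):

        # Flag requests for the bare site root, then set the header only for those
        SetEnvIf Request_URI "^/$" CACHE_HOMEPAGE
        Header set Cache-Control "max-age=7200, must-revalidate" env=CACHE_HOMEPAGE

    One caveat: if the framework's rewrite to index.php happens via an internal
    redirect, Apache renames the variable to REDIRECT_CACHE_HOMEPAGE, so that name may
    be the one to test in the env= clause.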


  • Is "Turn Off Windows write-cache buffer flushing" safe on a laptop?

    - by Earlz
    My laptop's internal hard drive is a bit slow. I looked at the drive properties
    and there are two options:

        [X] Enable write caching on the device
        [ ] Turn off Windows write-cache buffer flushing on the device

    As you can see, the first option is checked already, but the second option isn't.
    I've heard the second option can really speed things up, but it also sounds very
    risky. Is it safe to do on a laptop that is rarely off AC power (but still has a
    battery as well)?


  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the HTTP server (I don't know much about reverse
    proxies, I just installed it and didn't touch anything), without Apache or Varnish.
    I need nginx to understand and honor the HTTP headers I send. I tried with this
    config (taken from the docs) but it didn't work:

    /etc/nginx/nginx.conf:

        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

    /etc/nginx/sites-available/website:

        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS" and the cache dir is empty. If I uncomment the other
    directives, I get a HIT, but I don't want those "dumb" settings; I need to control
    them from my backend. For example, if my backend says "public, s-maxage=10", the
    cache should be considered stale after 10 secs. Instead, nginx will store it for
    1h because of these directives. I was also wondering whether I should try
    proxy_cache instead; I'm not sure what the difference is. The docs for both the
    fastcgi and proxy modules say this:

        The cache honors backend's Cache-Control, Expires, and etc. since version
        0.7.48, Cache-Control: private and no-store only since 0.7.66, though.
        Vary handling is not implemented.

    nginx version: nginx/1.1.19. Any thoughts?

    PS: I also have the reverse proxy that is offered by Symfony2 (which I turn off to
    use nginx's). The headers are interpreted correctly by it, so I think I'm doing it
    right.
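
    A quick sanity check (a sketch, not part of the original post; the file name is
    made up) is to confirm that an explicitly cacheable response actually gets stored:

        <?php
        // cachetest.php -- trivial probe page served through php-fpm
        header('Cache-Control: public, s-maxage=10');
        header('Content-Type: text/plain');
        echo "generated at " . date('H:i:s');

    Fetching it a few times with curl -s -o /dev/null -D - http://localhost/cachetest.php
    should show X-Cache go MISS, then HIT, then back to MISS once the 10-second
    s-maxage window has passed. If it never leaves MISS, the backend's headers are
    being lost or overridden before nginx's cache sees them.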


  • Why is apt-cache so slow?

    - by Damn Terminal
    After upgrading to Trusty (14.04) from Saucy (13.10), all apt operations are very
    slow, even those that do not involve downloading anything or connecting to any
    servers. For example, displaying the apt policy:

        # time apt-cache policy
        [...]
        real    0m8.951s
        user    0m5.069s
        sys     0m3.861s

    takes almost ten seconds! Mostly a weird lag right after issuing the command, and
    it's the same even if I issue the same command again. On another system it doesn't
    take a tenth of a second:

        real    0m0.096s
        user    0m0.070s
        sys     0m0.023s

    The other system is a little beefier, but there was no noticeable difference
    before the upgrade. It's the same with apt-get, and anything apt-related. How do I
    find out the source of this lag and fix it? Additional info:

        # cat /etc/nsswitch.conf
        # /etc/nsswitch.conf
        #
        # Example configuration of GNU Name Service Switch functionality.
        # If you have the `glibc-doc-reference' and `info' packages installed, try:
        # `info libc "Name Service Switch"' for information about this file.

        passwd:         compat
        group:          compat
        shadow:         compat

        hosts:          files dns
        networks:       files

        protocols:      db files
        services:       db files
        ethers:         db files
        rpc:            db files

        netgroup:       nis

    BTW, is my understanding of how apt-cache works correct? It doesn't make any
    network connections when I run apt-cache policy, right? In case I'm wrong and it
    matters, here are my sources: https://gist.github.com/anonymous/02920270ff68e23fc3ec
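
    One generic way to chase a lag like this (a sketch, not from the original
    question) is to profile the syscalls and see where the time actually goes:

        # Summarize syscall counts/times; a package-cache rebuild shows up as heavy
        # read/mmap activity, a name-service stall as slow network-related calls
        strace -c apt-cache policy > /dev/null

        # Or watch live with timestamps to spot one long pause
        strace -tt apt-cache policy > /dev/null 2> /tmp/apt-trace.log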


  • Including hibernate jar dependencies in ant build

    - by Patrick
    Hi, I'm trying to compile a runnable jar-file for a project that makes use of
    Hibernate. I'm trying to construct an Ant build.xml file to streamline my build
    process, but I'm having trouble with the inclusion of hibernate3.jar inside the
    final jar-file. If I run the Ant script, I manage to include all my library jars,
    and they are put in the final jar-file's root. When I run the jar-file I get a
    java.lang.NoClassDefFoundError: org/hibernate/Session error. If I use the built-in
    export-to-jar in Eclipse, it works only if I choose "extract required libraries
    into jar". But that bloats the jar and includes too much of my project (i.e. unit
    tests). Below is my generated manifest:

        Manifest-Version: 1.0
        Main-Class: main.ServerImpl
        Class-Path: ./ antlr-2.7.6.jar commons-collections-3.1.jar dom4j-1.6.1.jar
         hibernate3.jar javassist-3.9.0.GA.jar jta-1.1.jar slf4j-api-1.5.11.jar
         slf4j-simple-1.5.11.jar mysql-connector-java-5.1.12-bin.jar rmiio-2.0.2.jar
         commons-logging-1.1.1.jar

    And the relevant part of the build.xml looks like this:

        <target name="dist" depends="compile" description="Generates the Distribution Jar(s)">
            <mkdir dir="${dist.dir}" />
            <jar destfile="${dist.dir}/${dist.file.name}.jar"
                 basedir="${build.prod.dir}"
                 filesetmanifest="mergewithoutmain">
                <manifest>
                    <attribute name="Main-Class" value="${main.class}" />
                    <attribute name="Class-Path" value="./ ${manifest.classpath} " />
                    <attribute name="Implementation-Title" value="${app.name}" />
                    <attribute name="Implementation-Version" value="${app.version}" />
                    <attribute name="Implementation-Vendor" value="${app.vendor}" />
                </manifest>
                <zipfileset refid="hibernatefiles" />
                <zipfileset refid="slf4jfiles" />
                <zipfileset refid="mysqlfiles" />
                <zipfileset refid="commonsloggingfiles" />
                <zipfileset refid="rmiiofiles" />
            </jar>
        </target>

    The refids point to the directories in a library directory lib in the root of the
    project. The manifest.classpath variable takes the classpath of all those library
    jar-files and flattens them with pathconvert and a mapper. I've also tried setting
    the manifest classpath to ".", "./", and only the library jar, but it makes no
    difference at all. I'm hoping there's a simple remedy to my problems...
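
    For what it's worth, the stock java launcher cannot load jars nested inside
    another jar: Class-Path manifest entries resolve against the filesystem, relative
    to the outer jar, never against its contents. A sketch of the usual workaround is
    to ship the libraries beside the jar (the lib/ layout below is an assumption
    layered on the build file above):

        <!-- Copy the libraries next to the distribution jar... -->
        <copy todir="${dist.dir}/lib">
            <fileset dir="lib" includes="**/*.jar" />
        </copy>

        <!-- ...and, inside <manifest>, reference them with a lib/ prefix -->
        <attribute name="Class-Path" value="lib/hibernate3.jar lib/dom4j-1.6.1.jar lib/antlr-2.7.6.jar" />

    If everything truly has to live in one file, tools like One-JAR (which adds a
    custom classloader) or an "uber jar" that unpacks the dependencies into the outer
    jar are the usual alternatives.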


  • .NET Embedded Manifest Crashes XP

    - by Alan Spark
    Hi, I am embedding a manifest in a .NET exe so that it can request elevated
    permissions in Vista and Windows 7. The manifest that I am using is as follows:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
            <assemblyIdentity version="1.0.0.0" name="ElevationTest" type="win32"/>
            <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
                <security>
                    <requestedPrivileges>
                        <requestedExecutionLevel level="requireAdministrator"/>
                    </requestedPrivileges>
                </security>
            </trustInfo>
        </assembly>

    It works as expected in Vista and Windows 7, but crashes XP with the standard
    "... has encountered a problem and needs to close..." error. If I don't embed any
    manifest then it works as expected, but will obviously not have the required
    permissions in Vista and Windows 7. What is a standard way of producing an exe
    that will function with the correct permissions in XP and Vista / Windows 7?
    Thanks, Alan


  • Page cache flushing behavior under heavy append load

    - by Bryce
    I'm trying to understand the behavior of the Linux pdflush daemon when:

    - The page cache is initially pretty much empty
    - There is a large amount of free memory
    - The system starts undergoing heavy write load

    My understanding right now is that the vm.dirty_ratio and vm.dirty_background_ratio
    settings that control page-cache flushing behavior are with respect to the present
    size of the page cache, which means that my writes will flush earlier than they
    would if the page cache were pre-populated (even with dummy data from some random
    file), and thus throughput will be lower. Is this accurate?
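
    For reference (not from the original question), the two knobs and the current
    writeback state can be inspected directly:

        # The thresholds being discussed, as percentages
        sysctl vm.dirty_ratio vm.dirty_background_ratio

        # How much data is dirty or under writeback right now
        grep -E 'Dirty|Writeback' /proc/meminfo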


  • Why does Tomcat try to use the cache when compilation failed?

    - by etheros
    For some reason, it appears Tomcat is trying to hit its compilation cache when
    compilation failed. For example, if I create a JSP containing nothing but
    Hello, <%=world%>!, predictably, I get an error:

        org.apache.jasper.JasperException: Unable to compile class for JSP.

    Subsequent requests, however, alternate between this and:

        org.apache.jasper.JasperException: org.apache.jasper.JasperException:
        Unable to load class for JSP.

    Further, if I create a JSP containing Hello!, it of course works just fine. If I
    modify it to contain Hello, <%=name%>!, the response alternates between the
    previously mentioned compilation error and the cached Hello!. What's going on?


  • Does ZFS cache Compressed or Uncompressed data in a ZFS file-system with compression turned on?

    - by George Bailey
    ZFS supports file-system compression, and it also caches frequently or recently
    accessed data. If a system has lots of CPU but the underlying data storage is
    slow, ZFS might perform better with compression turned on. This can easily be
    tested when writing files by measuring CPU and disk usage and throughput (of
    course latency may exist, but this would not be an issue for large files). But
    what about the cache? If data has to be decompressed every time it is read, then
    this is probably less of a good idea. Is the cached data compressed? Does anybody
    have some information on this?


  • How can I diagnose cache misses when using Apache as a reverse proxy?

    - by johnstok
    I have set up Apache 2.2 as a reverse proxy with the following configuration:

        # jBoss proxying
        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass        /foo http://localhost:9080/foo
        ProxyPassReverse /foo http://localhost:9080/foo
        ProxyPassReverseCookiePath /foo /foo

        # Reverse proxy caching
        CacheEnable disk /foo

        # Compression
        SetOutputFilter DEFLATE
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE\s(7|8) !no-gzip !gzip-only-text/html
        DeflateCompressionLevel 9
        Header append Vary User-Agent env=!dont-vary

    However, in a number of cases where I expect a cached response to be returned, the
    request is sent through to the origin server at localhost:9080. Responses have an
    HTTP Vary header of 'Accept-Encoding,User-Agent', which is to be expected given
    the mod_deflate configuration. How can I determine why Apache is unable to serve a
    response from the cache?
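
    One low-tech way to see mod_cache's reasoning (a sketch; in Apache 2.2 the cache
    modules log their decisions to the main error log at debug level):

        # Temporarily raise verbosity, then watch why entries are declined or missed
        LogLevel debug

        # on the shell:
        #   tail -f /var/log/apache2/error.log | grep -i cache

    The usual suspects with a configuration like this are the Vary header (each
    Accept-Encoding/User-Agent combination is cached as a separate variant) and
    responses arriving without freshness information, which mod_cache may refuse to
    store by default.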


  • Any way to get back Chrome's Dialog box for cache clearing instead of the new tab?

    - by Stuart P.
    As of today's release of Chrome (Tuesday, March 8, 2011) on both Mac & PC, the
    settings now live in a tab (chrome://settings/advanced). Needless to say, when
    you're clearing your cache very frequently (Cmd-Shift-Delete on Mac,
    Ctrl+Shift+Delete on PC), it's quite tedious going back and forth between tabs.
    The Click&Clean Chrome extension doesn't have a Mac counterpart (plus I like the
    keyboard much more than the mouse). I've searched and have yet to find a way to
    get a dialog box instead of the new tab.


  • Tell the linux kernel to put a file in the disk cache?

    - by Rory
    Is there any command to force a file to be read in and loaded into the Linux disk
    cache? This is on an up-to-date Debian system. I know that in the general case,
    it's better to let the Linux kernel figure this out. But I have an edge case: I
    have a laptop that has an NFS directory mounted, and I want to play a long video
    file, but I don't want a network problem to interrupt the playing. I know that
    (largeish) file will be read in its entirety later on. I know that nothing else
    (really) will be running while playing this video. There is enough free memory to
    store this file. (I know I could just copy the file into a new tmpfs filesystem,
    but I'm curious if there's an even shorter way to do it.)
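
    A couple of common tricks (a sketch; the path is made up, and vmtouch is a
    third-party tool that may not be installed):

        # Simply reading the file once pulls it into the page cache
        cat /mnt/nfs/movies/film.mkv > /dev/null

        # vmtouch can load it explicitly and verify residency afterwards
        vmtouch -t /mnt/nfs/movies/film.mkv   # -t: touch (load) every page
        vmtouch /mnt/nfs/movies/film.mkv      # report how much is resident

    Either way, the kernel remains free to evict the pages under memory pressure, but
    with plenty of free RAM and nothing else running, that is unlikely.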


  • Ubuntu is very slow

    - by johnny smithens
    Hello all. I am new to Ubuntu, and it is very slow (even in Ubuntu 2D). The
    performance is degraded for almost any task. I just reinstalled with amd64 and
    tried updating the Nvidia drivers with Nvidia Xserver, but it made no difference.
    This is the output of free -m:

                     total       used       free     shared    buffers     cached
        Mem:          3006       1318       1688          0         61        699
        -/+ buffers/cache:        556       2449
        Swap:         3064          0       3064

    tl;dr: total 3006, used 1318. When I look at the virtual console with Ctrl+Alt+F2,
    I constantly see:

        Assuming drive cache: write through
        asking for cache data failed

    It is very frustrating. Thanks in advance!


  • When and How is an image cached for an ASPX with ContentType = image/jpeg?

    - by Aamir Hasan
    In ASP.NET you can cache your page. You can vary the output cache by the following:

    - The query string in an initial request (HTTP GET).
    - Control values passed on postback (HTTP POST values).
    - The HTTP headers passed with a request.
    - The major version number of the browser making the request.
    - A custom string in the page. In that case, you create custom code in the
      Global.asax file to specify the page's caching behavior.

    Link: http://msdn2.microsoft.com/en-us/library/xadzbzd6(VS.80).aspx

    You can set output caching for your GetImage.aspx so that you don't have to
    re-query the database on every image request, but you must use VaryByParam so that
    you have a cached version for every arrangement of parameters. Set the output
    cache for your page like this, at the top of the ASPX page:

        <%@ OutputCache Duration="600" VaryByParam="ID,Height,Width" %>

    The VaryByParam attribute allows you to vary the cached output depending on the
    query string. Adding this will make your images cached for 600 seconds, so that if
    an image is requested within this period, the cached version will be returned.
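
    For context, a minimal code-behind for such an image handler might look like this
    (a sketch; LoadImageBytes is a hypothetical data-access helper):

        // GetImage.aspx.cs -- the OutputCache directive above caches this output
        // per unique (ID, Height, Width) combination for 600 seconds
        using System;

        public partial class GetImage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string id = Request.QueryString["ID"];
                byte[] jpeg = LoadImageBytes(id);   // hypothetical DB lookup

                Response.ContentType = "image/jpeg";
                Response.BinaryWrite(jpeg);
            }
        }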


  • I'm confused: how do I control caching so my clients can see website edits?

    - by Jared Christensen
    I host about 10 websites for clients. Every so often a client will ask for an
    update to their website. It may be a simple image change, a new PDF, or a simple
    text change. I make the change and then send them a link to the web page with the
    update. About an hour later I will get an email back from the client telling me
    they still see the old page, and I will then explain to them how to empty their
    browser's cache. What I'm trying to figure out is whether there is a way I can
    tell their browser that I made an update to the website and that it should reload
    the page and update the cache. I thought about trying a meta tag, but I read that
    they are not very reliable. Also, I would still like the page to cache; I just
    want to be able to clear it when I make an update. Is this possible? I'm an
    advanced front-end web developer (HTML, CSS, JavaScript) and know some PHP.
    Caching is just one of those things I don't really understand that well.
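
    The usual pattern here (a sketch, not from the original question) is
    cache-busting: keep long cache lifetimes, but change the URL whenever the file
    changes, because a brand-new URL can never be served stale from a cache:

        <!-- bump the version token whenever the underlying file is replaced -->
        <link rel="stylesheet" href="/css/site.css?v=5">
        <img src="/images/storefront.jpg?v=20110301" alt="Storefront">
        <a href="/files/menu.pdf?v=20110301">Download the menu (PDF)</a>

    Only the HTML pages that reference the assets then need short cache lifetimes; the
    heavy assets themselves can be cached for months.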


  • How do I choose what and when to cache data with ob_start rather than query the database?

    - by Tim Santeford
    I have a home page that has several independent dynamic parts. The parts consist
    of a list of recent news from the company, a site-statistics panel, and the online
    status of certain employees. The recent news changes monthly, site statistics
    change daily, and online statuses change on a per-minute basis. I would like to
    cache these panels so that the db is not hit on every page load. Is using
    ob_start() then ob_get_contents() to cache these parts to a file the correct way
    to do this, or is there a better method in PHP5 for doing this? In asking this
    question I'm trying to answer these additional questions: How can I determine the
    correct approach for caching this data without doing extensive benchmarking? Does
    it make sense to cache these parts in different files and then join them together
    per request, or should I re-query the data and cache once per minute? I'm looking
    for a rule of thumb for planning pages and for situations where doing testing is
    not cost effective (the client is not paying enough for it, I mean). Thanks!
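
    For reference, the output-buffering idea from the question could be wrapped up
    like this (a sketch; the helper name and temp-dir layout are made up, and the
    render_* functions are assumed to echo their panel's HTML):

        <?php
        // Return a fragment's HTML, re-rendering it only when the copy on disk
        // is older than $ttl seconds
        function cached_fragment($key, $ttl, $render) {
            $file = sys_get_temp_dir() . '/frag_' . md5($key) . '.html';
            if (is_file($file) && (time() - filemtime($file)) < $ttl) {
                return file_get_contents($file);   // still fresh: serve from disk
            }
            ob_start();
            $render();                             // fragment echoes its own markup
            $html = ob_get_clean();
            file_put_contents($file, $html, LOCK_EX);
            return $html;
        }

        // One file per panel, each with its own lifetime
        echo cached_fragment('news',   2592000, 'render_news');           // monthly
        echo cached_fragment('stats',    86400, 'render_stats');          // daily
        echo cached_fragment('online',      60, 'render_online_status');  // per minute

    This also illustrates one answer to the "join or re-query" question: caching each
    part in its own file lets every panel expire on its own schedule.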


  • I made an .htaccess template; is there anything else that should be added or changed?

    - by purpler
        # DEFAULTS
        ServerSignature Off
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        SetEnv TZ Europe/Belgrade
        SetEnv SERVER_ADMIN [email protected]

        # Rewrites
        RewriteEngine On
        RewriteBase /

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^serpentineseo.com
        RewriteRule (.*) http://www.serpentineseo.com/$1 [R=301,L]

        # Cache media files
        <FilesMatch "\.(gif|jpg|jpeg|png|ico|swf|js)$">
            Header set Cache-Control "max-age=2592000, public"
        </FilesMatch>
        <FilesMatch "\.(js|css|pdf|swf)$">
            Header set Cache-Control "max-age=604800"
        </FilesMatch>
        <FilesMatch "\.(html|htm|txt)$">
            Header set Cache-Control "max-age=600"
        </FilesMatch>

        # DONT CACHE
        <FilesMatch "\.(pl|php|cgi|spl|scgi|fcgi)$">
            Header unset Cache-Control
        </FilesMatch>

        # Deny access to .htaccess
        <Files .htaccess>
            order allow,deny
            deny from all
        </Files>


  • How do you handle browser cache with login/logout?

    - by Julien
    To improve performance, I'd like to add a fairly long Cache-Control (up to 30
    minutes) to each page, since they do not change often. However, each page also
    displays the name of the user logged in (like this website). The problem is when
    the user logs in or logs out: the user name must change. How can I change the user
    name after each login/logout action while keeping a long Cache-Control? Here are
    the solutions I can think of:

    - An Ajax request (not cached) to retrieve and display the user name. If I have 2
      requests (/user?registered and /user?new), they could be cached as well. But I
      am afraid this extra request would nullify my caching gains, performance-wise.
    - Add a unique URL variable (?time=) to make the URL different and cancel the
      cache. However, I would have to add this variable to all links on my web page,
      which is not very convenient code-wise.

    This problem becomes greater if I actually have more content that is not the same
    for registered users and new users.


  • Google Chrome does not honor cache-policy in page header if the page is displayed in a FRAME

    - by Tim
    No matter what I do:

        <meta http-equiv="Cache-Control" content="no-cache" />
        <meta http-equiv="Expires" content="Fri, 30 Apr 2010 11:12:01 GMT" />
        <meta http-equiv="Expires" content="0" />
        <meta HTTP-EQUIV="PRAGMA" CONTENT="NO-STORE" />

    Google Chrome does not reload any page according to the page's internal cache
    policy if the page is displayed in a frame. It is as though the meta tags are not
    even there; Google Chrome seems to be ignoring them. Since I've gotten answers to
    this question on other forums where the person responding has ignored the
    operative condition, I will repeat it: this behavior occurs when the page is
    displayed in a frame. I was using the latest released version and have since
    upgraded to 5.0.375.29 beta, but the behavior is the same in both versions. Would
    someone please confirm, one way or another, the behavior you are seeing with
    framesets and the caching/expiration policies given in meta tags? Thanks
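
    For comparison (not from the original post), the same policy expressed as real
    HTTP response headers, which do not depend on the browser parsing the document:

        Cache-Control: no-cache, no-store
        Pragma: no-cache
        Expires: 0

    Meta http-equiv tags are generally a weaker signal than actual headers, which is
    one plausible reason framed pages behave differently.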


  • Optimal ASP.Net cache duration for a large site?

    - by HeroicLife
    I've read lots of material on how to do ASP.NET caching but little on the optimal
    duration that pages should be cached for. Let's say that I have a popular site
    with 50,000 pages. The content does not change frequently, so I could cache pages
    for up to an hour if I wanted. The server has 16 GB of RAM, but database
    connections are limited. How long should pages be cached for? My thinking is that
    if I set the cache duration too high (let's say 60 minutes), I will fill up memory
    with a fraction of the total content, which will continually be shuffled in and
    out of memory. Furthermore, let's say that 10% of the pages are responsible for
    90% of the traffic. If the popular pages are hit every second and the unpopular
    ones every hour, then a 60-second cache would keep only the load-intensive content
    cached, without sacrificing freshness. Should numerous but rarely accessed content
    be cached at all?


  • How can I cache a Subversion password on a server, without storing it in unencrypted form?

    - by Zilk
    My Subversion server only provides access via HTTPS; support for svn+ssh has been dropped because we wanted to avoid creating system users on that machine just for SVN access. Now I'm trying to provide a way for users to cache their passwords for a while, without leaving them stored on the filesystem in unencrypted form. This is no problem for Gnome or KDE users, because they can use gnome-keyring and kwallet, respectively. IIRC, TortoiseSVN has a similar caching mechanism, too. But what about users on a non-GUI system? Some context: in this case, we have a development/testing server where one project has been checked out into the Apache htdocs directory. Development for this project is almost complete, and only minor text/layout changes are performed directly on this server. Nevertheless, the changes should be checked into the repository. There's no kwallet and no gnome-keyring on this system, and the ssh-agent can't help because the repository is accessed via https instead of svn+ssh. As far as I know, that leaves them the choice of entering the password every time they talk to the SVN server, or storing it in an insecure way. Is there any way to get something like what gnome-keyring and kwallet provide in a non-GUI environment?
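
    One avenue sometimes suggested (an assumption on my part, not something from the
    post; details vary by distribution and Subversion version) is that gnome-keyring
    can run headless over D-Bus, so Subversion's keyring backend still works without a
    desktop session:

        # ~/.subversion/config -- pin svn to the keyring backend
        [auth]
        password-stores = gnome-keyring

        # at login, on the console (e.g. from a shell profile):
        #   dbus-run-session -- bash
        #   echo -n "$KEYRING_PASSWORD" | gnome-keyring-daemon --unlock --components=secrets

    The keyring file itself stays encrypted on disk; only the unlock passphrase is
    typed once per session.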


  • Entity Framework Code First: Get Entities From Local Cache or the Database

    - by Ricardo Peres
    Entity Framework Code First makes it very easy to access the local (first-level)
    cache: you just access the DbSet<T>.Local property. This way, no query is sent to
    the database; it is only performed on already-loaded entities. If you want to
    search the local cache first, and then the database if no entries are found, you
    can use this extension method:

        public static class DbContextExtensions
        {
            public static IQueryable<T> LocalOrDatabase<T>(this DbContext context,
                Expression<Func<T, Boolean>> expression) where T : class
            {
                IEnumerable<T> localResults = context.Set<T>().Local
                    .Where(expression.Compile());

                if (localResults.Any() == true)
                {
                    return (localResults.AsQueryable());
                }

                IQueryable<T> databaseResults = context.Set<T>().Where(expression);

                return (databaseResults);
            }
        }
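
    A usage sketch (the context and entity types are hypothetical):

        // The first call goes to the database; the entities it materializes are
        // tracked in .Local, so the identical second query is answered from memory.
        using (var db = new ShopContext())
        {
            var cheap      = db.LocalOrDatabase<Product>(p => p.Price < 10).ToList();
            var cheapAgain = db.LocalOrDatabase<Product>(p => p.Price < 10).ToList();
        }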


  • Researchers discover the first malware that overwrites files, disguised as an Adobe Updater

    Researchers discover the first malware that overwrites files, disguised as an
    Adobe Updater. A malicious piece of code has just been spotted for the first time
    by computer-security specialists: researchers have discovered a malware that
    substitutes itself for the updates of certain applications. Usually, this type of
    program does not overwrite anything. Only computers running Windows are affected.
    The malware disguises itself as an updater for Adobe or Java products. One variant
    imitates Adobe Reader v.9 and overwrites AdobeUpdater.exe, whose job is to connect
    regularly to Adobe's servers to check whether a new version ...

