Search Results

Search found 6431 results on 258 pages for 'cache invalidation'.

Page 59/258 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                if use_cache:
                    hash_id = hash((arg1, arg2))
                    if hash_id in self._registry:
                        return self._registry[hash_id]
                    obj = x.X(arg1, arg2)
                    self._registry[hash_id] = obj
                    return obj
                return x.X(arg1, arg2)

        # module x.py
        class X:
            # ...

    Is it a good pattern? (I know it's not the actual Factory pattern.) Is there anything I should change?

    Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving the pickle files filenames that contain hash_id).

    Except now the validity of a cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created it. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage.

    What can I do? I suppose I could make hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last-modified date of x.py and of every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation.

    Thus, I figured I might as well give up some safety and invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry (see the sketch after this item).

    Since developers may forget to update the validation identifier, or may not realize that they invalidated the existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation would include the traits as well.

    I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.

    Read the article
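
    The version-stamped key idea described in the question can be sketched generically. The following is an illustrative Java sketch only (the question itself is Python); names such as CACHE_VERSION and X.traits() are hypothetical and not from the original code.

        import java.util.Map;
        import java.util.Objects;
        import java.util.concurrent.ConcurrentHashMap;

        class XFactory {
            // Bump this string by hand whenever X's construction logic changes in a way
            // that makes previously cached instances invalid.
            private static final String CACHE_VERSION = "2010-05-01-a";

            private final Map<Integer, X> registry = new ConcurrentHashMap<>();

            X getX(String arg1, int arg2, boolean useCache) {
                if (!useCache) {
                    return new X(arg1, arg2);
                }
                // The key mixes the arguments with the version stamp and the object's
                // "traits" (e.g. column names), as the question proposes.
                int key = Objects.hash(arg1, arg2, CACHE_VERSION, X.traits());
                return registry.computeIfAbsent(key, k -> new X(arg1, arg2));
            }
        }

        class X {
            X(String arg1, int arg2) { /* expensive construction elided */ }

            static String traits() {
                // e.g. a description of the object's schema; hypothetical placeholder
                return "col_a,col_b,col_c";
            }
        }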

  • How long do Firefox, Chrome, Safari, and Opera cache SSL/TLS session keys?

    - by MJ
    To choose a reasonable SSL/TLS session key timeout on the server side, I'd like to know how long popular browsers cache session keys on the client. Microsoft describes this information for Windows/IE here: http://technet.microsoft.com/en-us/library/cc776467(WS.10).aspx But I haven't been able to find similar information for other popular browsers. Does anyone know? Thanks!

    Read the article

  • My iPhone Mobile Safari does not cache, but I have a manifest file...

    - by Ploetzeneder
    Hello, I have now put the site at http://www.ploetzeneder.eu/Dateien/test/index4.html and the manifest is at http://www.ploetzeneder.eu/Dateien/test/app-cache-demo.manifest. Why does it not work? The web server with the relevant problem has this URL: http://www.pharao.mobi/WebAppproblem/ (the username is "Username" and the password is "Password"). The problem is on index4.html, where all images should be cached but are not.

    Read the article

  • Deploying multiple Grails instances with shared cache and sessions?

    - by Kedare
    Hello, I am looking for a solution that allows me to deploy multiple load-balanced Grails instances that share a cache (EhCache Server?) and sessions; is this possible? I can't find any documentation on this (connecting to a common EhCache server, or using distributed EhCache, and sharing sessions, perhaps also via EhCache). I'm looking for something that works like multiple Rails instances with a common memcached, where the sessions and caches are stored in memcached. Can you help me? Thank you!

    Read the article

  • Should I cache the XRDS file returned in OpenID?

    - by Joseph
    In the initial part of the OpenID sequence, I request the OP (e.g. Yahoo.com) and get back the XRDS file, which tells me the actual URL I need to use for the rest of the OpenID process. So, can I cache this initial file? For example, if I have hundreds of users with Yahoo OpenIDs, would I only have to do the initial fetch once every hour? (A minimal sketch of such a time-based cache follows this item.)

    Read the article
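
    Purely as an illustration, and not tied to any particular OpenID library, a discovery result can be memoized with a time-to-live so the XRDS document is fetched at most once per hour per OP. All names below are hypothetical, and fetchXrdsEndpoint() is a placeholder for the real discovery call.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Minimal sketch of a time-based cache for discovered endpoint URLs.
        class DiscoveryCache {
            private static final long TTL_MILLIS = 60L * 60L * 1000L; // one hour

            private record Entry(String endpoint, long fetchedAt) {}

            private final Map<String, Entry> cache = new ConcurrentHashMap<>();

            String endpointFor(String opUrl) {
                long now = System.currentTimeMillis();
                Entry e = cache.get(opUrl);
                if (e == null || now - e.fetchedAt() > TTL_MILLIS) {
                    // refetch at most roughly once per hour per OP
                    e = new Entry(fetchXrdsEndpoint(opUrl), now);
                    cache.put(opUrl, e);
                }
                return e.endpoint();
            }

            private String fetchXrdsEndpoint(String opUrl) {
                // placeholder: perform the real XRDS fetch and parse here
                return "https://example.invalid/openid-endpoint";
            }
        }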

  • My mobile does not cache, but I have a manifest file...

    - by Ploetzeneder
    Hello, I have now put the site at http://www.ploetzeneder.eu/Dateien/test/index4.html and the manifest is at http://www.ploetzeneder.eu/Dateien/test/app-cache-demo.manifest. Why does it not work? The web server with the relevant problem has this URL: http://www.pharao.mobi/WebAppproblem/ (the username is "Username" and the password is "Password"). The problem is on index4.html, where all images should be cached but are not.

    Read the article

  • Use HTTP PUT to create a new cache (ehCache) running on the same Tomcat?

    - by socal_javaguy
    I am trying to send an HTTP PUT (in order to create a new cache and populate it with my generated JSON) to ehCache, using my web service which is on the same local Tomcat instance. I am new to RESTful web services and am using JDK 1.6, Tomcat 7, ehCache, and JSON.

    I have my POJOs defined like this. Person POJO:

        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement
        public class Person {
            private String firstName;
            private String lastName;
            private List<House> houses;
            // Getters & Setters
        }

    House POJO:

        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement
        public class House {
            private String address;
            private String city;
            private String state;
            // Getters & Setters
        }

    Using a PersonUtil class, I hardcoded the POJOs as follows:

        public class PersonUtil {
            public static Person getPerson() {
                Person person = new Person();
                person.setFirstName("John");
                person.setLastName("Doe");

                List<House> houses = new ArrayList<House>();
                House house = new House();
                house.setAddress("1234 Elm Street");
                house.setCity("Anytown");
                house.setState("Maine");
                houses.add(house);
                person.setHouses(houses);

                return person;
            }
        }

    I am able to create a JSON response per a GET request:

        @Path("")
        public class MyWebService {
            @GET
            @Produces(MediaType.APPLICATION_JSON)
            public Person getPerson() {
                return PersonUtil.getPerson();
            }
        }

    When deploying the WAR to Tomcat and pointing the browser to http://localhost:8080/personservice/ the generated JSON is:

        {
            "firstName" : "John",
            "lastName" : "Doe",
            "houses" : [
                {
                    "address" : "1234 Elmstreet",
                    "city" : "Anytown",
                    "state" : "Maine"
                }
            ]
        }

    So far, so good. However, I have a different app which is running on the same Tomcat instance (and has support for REST): http://localhost:8080/ehcache/rest/

    While Tomcat is running, I can issue a PUT like this:

        echo "Hello World" | curl -S -T - http://localhost:8080/ehcache/rest/hello/1

    and a GET like this:

        curl http://localhost:8080/ehcache/rest/hello/1

    will yield:

        Hello World

    What I need to do is create a PUT which will push my entire generated Person JSON and create a new cache at:

        http://localhost:8080/ehcache/rest/person

    And when I do a GET on this previous URL, it should look like this:

        {
            "firstName" : "John",
            "lastName" : "Doe",
            "houses" : [
                {
                    "address" : "1234 Elmstreet",
                    "city" : "Anytown",
                    "state" : "Maine"
                }
            ]
        }

    So far, this is what my PUT looks like:

        @PUT
        @Path("/ehcache/rest/person")
        @Produces(MediaType.APPLICATION_JSON)
        @Consumes(MediaType.APPLICATION_JSON)
        public Response createCache() {
            ResponseBuilder response = Response.ok(PersonUtil.getPerson(), MediaType.APPLICATION_JSON);
            return response.build();
        }

    Questions: (1) Is this the correct way to write the PUT? (2) What should I write inside the createCache() method to have it PUT my generated JSON into http://localhost:8080/ehcache/rest/person? (3) What would the command-line curl command look like to use the PUT? (One possible client and curl invocation are sketched after this item.)

    Thanks for taking the time to read this...

    Read the article
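
    A minimal sketch of one way to push a JSON string to the ehcache REST interface with an HTTP PUT, assuming the same endpoint layout as the "hello" example in the question. The element id segment ("1") and the use of a hand-written JSON string are assumptions; in the real service the JSON would come from whatever marshals PersonUtil.getPerson().

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        // Sketch only: PUT a JSON payload into the ehcache REST server.
        public class CacheLoader {
            public static void main(String[] args) throws Exception {
                String json = "{\"firstName\":\"John\",\"lastName\":\"Doe\"}";
                URL url = new URL("http://localhost:8080/ehcache/rest/person/1");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setDoOutput(true);
                conn.setRequestMethod("PUT");
                conn.setRequestProperty("Content-Type", "application/json");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(json.getBytes(StandardCharsets.UTF_8));
                }
                System.out.println("HTTP status: " + conn.getResponseCode());
                conn.disconnect();
            }
        }

    Following the pattern of the hello example, a command-line equivalent might look roughly like curl -X PUT -H "Content-Type: application/json" --data-binary @person.json http://localhost:8080/ehcache/rest/person/1 (again, the element id segment is an assumption).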

  • Can I cache a ManyToOne hibernate object without it being lazy loaded?

    - by Andrew
        @ManyToOne
        @JoinColumn(name = "play_template_id", table = "team_play_mapping")
        public Play getPlay() {
            return play;
        }

        public void setPlay(Play play) {
            this.play = play;
        }

    By default, this association is eagerly loaded. Can I get it to read the Play object from a cache without making it lazily loaded? Am I correct that eager loading will force a join query and hence no caching? (One possible arrangement is sketched after this item.)

    Read the article
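
    One commonly used arrangement, sketched below under the assumption that a second-level cache provider (e.g. EhCache) is already configured in the Hibernate settings, is to mark the target entity as cacheable so that loads by id can be served from the second-level cache. Only the Play name comes from the question; everything else here is an assumption about the surrounding mapping.

        import javax.persistence.Entity;
        import javax.persistence.Id;

        import org.hibernate.annotations.Cache;
        import org.hibernate.annotations.CacheConcurrencyStrategy;

        // Sketch only: make Play eligible for the second-level cache.
        @Entity
        @Cache(usage = CacheConcurrencyStrategy.READ_ONLY) // Play treated as effectively read-only
        public class Play {
            @Id
            private Long id;

            // ... other mapped properties elided
        }

    On the owning side one can additionally request a select fetch instead of a join, e.g. @Fetch(FetchMode.SELECT) from org.hibernate.annotations on getPlay(), since an eager join fetch reads the Play columns directly from the joined result and typically will not consult the entity cache, whereas a load by id can.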

  • Multithreaded Java cache for objects that are heavy to create?

    - by krosenvold
    I need to cache some objects with fairly heavy creation times, and I need exactly-once creation semantics. It should be possible to create objects for different CacheKeys concurrently. I think I need something that (under the hood) does something like this:

        ConcurrentHashMap<CacheKey, Future<HeavyObject>>

    Are there any existing open-source implementations of this that I can re-use? (A minimal sketch of the pattern follows this item.)

    Read the article
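
    A minimal sketch of the ConcurrentHashMap<CacheKey, Future<HeavyObject>> idea: the FutureTask is installed atomically with putIfAbsent, so only the thread that wins the race runs the expensive creation, and every other caller blocks on the same Future. CacheKey and HeavyObject are type parameters standing in for the real types.

        import java.util.concurrent.Callable;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.Future;
        import java.util.concurrent.FutureTask;

        class HeavyObjectCache<CacheKey, HeavyObject> {
            private final ConcurrentMap<CacheKey, Future<HeavyObject>> cache = new ConcurrentHashMap<>();

            HeavyObject get(CacheKey key, Callable<HeavyObject> factory) throws Exception {
                Future<HeavyObject> f = cache.get(key);
                if (f == null) {
                    FutureTask<HeavyObject> task = new FutureTask<>(factory);
                    f = cache.putIfAbsent(key, task); // atomic: only one task per key wins
                    if (f == null) {
                        f = task;
                        task.run();                   // compute on the calling thread
                    }
                }
                try {
                    return f.get();                   // exactly-once result, shared by all callers
                } catch (ExecutionException e) {
                    cache.remove(key, f);             // don't cache failures
                    throw e;
                }
            }
        }

    As a ready-made alternative, Google Guava's CacheBuilder/LoadingCache (and the older MapMaker computing map) provide essentially this exactly-once loading behavior out of the box.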

  • Firefox addon to remove cache and cookies of one domain?

    - by flybywire
    I use Firefox to develop a web site and, at the same time, to browse the web, read my Gmail, etc. The problem is that every now and then I need to delete the cache and/or remove the cookies of the web app, but I want to stay logged in on the other web pages I am visiting. Do you know a Firefox plugin (or Firefox trick) that can help with this issue?

    Read the article

  • Is it possible to cache an asp page on the server side?

    - by SLC
    Let's assume you have a big, complex index page that shows news articles and stuff. It's not going to change very often. Can you cache it somehow on the server side, so requests don't force the server to dynamically generate the entire page every time someone visits it? Or does ASP.NET do this automatically? If so, how does it know if something has changed?

    Read the article

  • New To StructureMap - Getting No Default Instance Error 202

    - by Code Sherpa
    Hi. I am very new to StructureMap and am getting the following error:

        StructureMap Exception Code: 202
        No Default Instance defined for PluginFamily Company.ProjectCore.Core.IUserSession, Company.ProjectCore, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

    It seems to hit the first interface instance on compile and then throws the above error:

        private readonly IUserSession _userSession;

        public SiteMaster()
        {
            _userSession = ObjectFactory.GetInstance<IUserSession>(); // ERROR THROWN HERE
            ...
        }

    For what it is worth, I have the PluginFamily attribute above all of my interfaces:

        [PluginFamily("Default")]
        public interface IUserSession

    Below is my entire StructureMap.config:

        <StructureMap>
          <Assembly Name="Company.ProjectWeb" />
          <Assembly Name="Company.ProjectCore" />

          <!-- Use DefaultKey="Default" for standard cache or DefaultKey="MemCached" for memcached cache. -->
          <PluginFamily Assembly="Company.ProjectCore" Type="Company.ProjectCore.Core.ICache" DefaultKey="MemCached" />

          <!-- Use DefaultKey="Default" for sending the email in real time through the configured mail server
               or use DefaultKey="MailQueue" to send the mail in batches through another process -->
          <PluginFamily Assembly="Company.ProjectCore" Type="Company.ProjectCore.Core.IEmailService" DefaultKey="MailQueue" />

          <!-- Use DefaultKey="Default" for standard cache or DefaultKey="UserSession" for memcached cache. -->
          <PluginFamily Assembly="Company.ProjectCore" Type="Company.ProjectCore.Core.IUserSession" DefaultKey="UserSession" />

          <!-- Use DefaultKey="Default" for standard cache or DefaultKey="Redirector" for memcached cache. -->
          <PluginFamily Assembly="Company.ProjectCore" Type="Company.ProjectCore.Core.IRedirector" DefaultKey="Redirector" />

          <!-- Use DefaultKey="Default" for standard cache or DefaultKey="Navigation" for memcached cache. -->
          <PluginFamily Assembly="Company.ProjectCore" Type="Company.ProjectCore.Core.INavigation" DefaultKey="Navigation" />
        </StructureMap>

    Any suggestions? Thanks.

    Read the article

  • ajax call to servlet puzzler

    - by vector
    Greetings! I'm having a problem getting the text value of a captcha from a servlet through an AJAX call. When my captcha gets created, its text value is written to the session, but after refreshing the image itself through an AJAX call, I only get the one old value of the text. Refreshing the image itself works OK, but I'm stuck getting the correct values from the session on subsequent calls. On a page reload I get both the new image and its new text value; no joy with AJAX, though.

    This works great for the image refresh:

        $("#asos").attr("src", "/ImageServlet?=" + ((new Date()).getTime()));

    This call to another method to get the text value gives me old stuff:

        $.ajax({
            url: "checkCaptcha",
            type: "GET",
            cache: false,
            success: function(data) {
                alert(data);
            }
        });

    Any feedback will be appreciated. (One possible arrangement of the captcha servlet is sketched after this item.)

    PS: here's the meat of the method getting the call:

        PrintWriter out = response.getWriter();
        response.setContentType("text/html");
        response.setDateHeader("Expires", 0);
        // Set standard HTTP/1.1 no-cache headers.
        response.setHeader("Cache-Control", "no-store, no-cache, must-revalidate");
        // Set IE extended HTTP/1.1 no-cache headers (use addHeader).
        response.addHeader("Cache-Control", "post-check=0, pre-check=0");
        // Set standard HTTP/1.0 no-cache header.
        response.setHeader("Pragma", "no-cache");
        out.print(request.getSession().getAttribute("randomPixValue"));
        out.close();

    Read the article
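
    For what it's worth, one arrangement that avoids stale values is to make the image servlet the single writer of the session attribute, storing the new text in the session before streaming the image, so the value the text endpoint reads always belongs to the most recently served image. The sketch below is illustrative only; the rendering details are assumptions and this is not the original ImageServlet.

        import java.awt.Color;
        import java.awt.Font;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import java.io.IOException;
        import java.security.SecureRandom;

        import javax.imageio.ImageIO;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Sketch: generate the captcha value, store it in the session, then render it.
        public class ImageServlet extends HttpServlet {
            private static final SecureRandom RANDOM = new SecureRandom();

            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                String text = Integer.toString(100000 + RANDOM.nextInt(900000)); // new value per request
                request.getSession().setAttribute("randomPixValue", text);        // store BEFORE streaming

                BufferedImage image = new BufferedImage(120, 40, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = image.createGraphics();
                g.setColor(Color.WHITE);
                g.fillRect(0, 0, 120, 40);
                g.setColor(Color.BLACK);
                g.setFont(new Font("SansSerif", Font.BOLD, 24));
                g.drawString(text, 10, 28);
                g.dispose();

                response.setContentType("image/png");
                response.setHeader("Cache-Control", "no-store, no-cache, must-revalidate");
                ImageIO.write(image, "png", response.getOutputStream());
            }
        }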

  • ASP.NET output caching - dynamically update dependencies

    - by ColinE
    Hi all, I have an ASP.NET application which requires output caching. I need to invalidate the cached items when the data returned from a web service changes, so a simple duration is not good enough. I have been doing a bit of reading about cache dependencies and think I have the right idea. It looks like I will need to create a cache dependency on my web service. To associate the page output with this dependency, I think I should use the following method:

        Response.AddCacheItemDependency(cacheKey);

    The thing I am struggling with is what I should add to the cache. The dependency my page has is on a single value returned by the web service. My current thinking is that I should create a custom cache dependency by subclassing CacheDependency and store the current value in the cache. I can then use Response.AddCacheItemDependency to form the dependency, periodically check the value, and call NotifyDependencyChanged in order to invalidate my cached HTTP response.

    The problem is that I need to ensure the cache is flushed immediately, so a periodic check is not good enough. How can I ensure that my dependent object in the cache, which represents the value returned by the web service, is re-evaluated before the HTTP response is fetched from the cache?

    Regards, Colin E.

    Read the article

  • Static file serving only works if root is a subfolder under public

    - by lulalala
    I am trying to serve static cache files using nginx. There are index.html files under the rails_root/public/cache directory. I tried the following configuration first, which doesn't work:

        root <%= current_path %>/public;
        # $uri always contains one slash (the first slash but not the last)
        try_files /cache$uri/index.html /cache$uri.html @rails;

    This gives the error:

        [error] 4056#0: *13503414 directory index of "(...)current/public/" is forbidden, request: "GET / HTTP/1.1"

    I then tried:

        root <%= current_path %>/public/cache;
        # $uri always contains one slash (the first slash but not the last)
        try_files $uri/index.html $uri.html @rails;

    and to my surprise this works. Why can I do the latter but not the former, since they point to the same location?

    The permissions of the folders are:

        775 public
        755 cache
        644 index.html

    The thing is that my favicon, sitting under public/, is served correctly:

        # asset server
        server {
            listen 80;
            server_name assets.<%= server_name %>;
            expires max;
            add_header Cache-Control public;
            charset utf-8;
            root <%= current_path %>/public;
        }

    Read the article

  • LSI MegaRAID on Linux went Optimal after degradation, but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed back to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name                :
        RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size                : 2.727 TB
        Mirror Data         : 2.727 TB
        State               : Optimal
        Strip Size          : 256 KB
        Number Of Drives per span: 2
        Span Depth          : 3
        Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy   : Disk's Default
        Encryption Type     : None
        Is VD Cached: No

    But now I'm getting this RAID BIOS POST message (image: http://cl.ly/image/1h1O093b1i2d):

        Your battery is either charging, bad or missing, and you have VDs configured for write-back mode. Because the battery is not currently usable, these VDs will actually run in write-through mode until the battery is fully charged or replaced if it is bad or missing.

    So could a battery issue have caused the problem? Here is the battery information:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status : None
          Voltage : OK
          Temperature : OK
          Learn Cycle Requested : No
          Learn Cycle Active : No
          Learn Cycle Status : OK
          Learn Cycle Timeout : No
          I2c Errors Detected : No
          Battery Pack Missing : No
          Battery Replacement required : No
          Remaining Capacity Low : No
          Periodic Learn Required : No
          Transparent Learn : No
          No space to cache offload : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required : No
          Module microcode update required : No

    Where could the problem be? I have disabled the alarms, but I get them if they are enabled, and I don't know how to find the root of the problem.

    Read the article

  • How can I disable DNSSEC for Google Apps (Gmail) MX records on my authoritative domains?

    - by meinemitternacht
    I'm running a BIND master/slave setup with DNSSEC, but some of my domains use Google Apps for e-mail services. Google doesn't support DNSSEC and BIND doesn't like it at all. Log output:

        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/A/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/AAAA/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ALT2.ASPMX.L.GOOGLE.COM AAAA: bad cache hit (ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/AAAA/IN': 69.147.224.178#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755ca52c30: ALT2.ASPMX.L.GOOGLE.COM A: bad cache hit (ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/A/IN': 69.147.224.178#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755ca52c30: ASPMX2.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/AAAA/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ASPMX2.GOOGLEMAIL.COM A: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/A/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f754c1b0bd0: ASPMX2.GOOGLEMAIL.COM A: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/A/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f754c1a6a30: ASPMX2.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/AAAA/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ASPMX3.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX3.GOOGLEMAIL.COM.dlv.isc.org/DLV)

    I'm not absolutely sure this is what is stopping Google Apps from working, because I just enabled all of the DNSSEC features. Does anyone here have experience with this?

    Read the article

  • Apache caching with mod_headers and mod_expires

    - by Aaron Moodie
    Hi, I'm working on homework for uni and was hoping someone could clarify something for me. I need to set up the following:

        1. Configure the response header "Cache-Control" to have a "max-age" value of 7 days since access for all image files.
        2. Configure the response header "Cache-Control" to have a "max-age" value of 5 days since modification for all static HTML files.
        3. Configure the response header "Cache-Control" to have a value of "public" for all static HTML and image files.
        4. Configure the response header "Cache-Control" to have a value of "private" for all PHP files.

    My question is whether it is better to use FilesMatch or the mod_expires ExpiresByType directive to achieve this. So far I've used the following:

        <FilesMatch "\.(gif|jpe?g|png)$">
            ExpiresDefault "access plus 7 days"
            Header set Cache-Control "public"
        </FilesMatch>

        <FilesMatch "\.(html)$">
            ExpiresDefault "modification plus 5 days"
            Header set Cache-Control "public"
        </FilesMatch>

        <FilesMatch "\.(php)$">
            Header set Cache-Control "private"
        </FilesMatch>

    Thanks.

    Read the article

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine; however, if for whatever reason I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then seven times out of ten it will run, but occasionally it will just shut down 'silently'.

    My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
            -T 127.0.0.1:6082 \
            -s malloc,1GB \
            -f /home/deploy/mysite.vcl \
            -u deploy \
            -g deploy \
            -p obj_workspace=4096 \
            -p sess_workspace=262144 \
            -p listen_depth=2048 \
            -p overflow_max=2000 \
            -p ping_interval=2 \
            -p log_hashstring=off \
            -h classic,5000009 \
            -p thread_pool_max=1000 \
            -p lru_interval=60 \
            -p esi_syntax=0x00000003 \
            -p sess_timeout=10 \
            -p thread_pools=1 \
            -p thread_pool_min=100 \
            -p shm_workspace=32768 \
            -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
            .host = "127.0.0.1";
            .port = "81";
        }

        sub vcl_recv {
            # Don't cache the /useradmin or /admin path
            if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
                pipe;
            }

            # If cache is 'regenerating' then allow for old cache to be served
            set req.grace = 2m;

            # Forward to cache lookup
            lookup;
        }

        # This should be obvious
        sub vcl_hit {
            deliver;
        }

        sub vcl_fetch {
            # See link #16, allow for old cache serving
            set obj.grace = 2m;

            if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
                deliver;
            }

            remove obj.http.Set-Cookie;
            remove obj.http.Etag;
            set obj.http.Cache-Control = "no-cache";
            set obj.ttl = 7d;
            deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such inconsistent behaviour.

    Read the article
