Search Results

Search found 3912 results on 157 pages for 'distributed caching'.


  • Django Per-site caching using memcached

    - by Paul
    Hi, So I'm using per-site caching on a project and I've observed the following, which is kind of confusing. When I load a flat page in my browser then change it through admin and then do a refresh (within the cache timeout) there is no change in the page--as expected. However when I stat a new session in a different browser and load the page (still within the timeout) the app is hit instead of the cache, with the Isn't the cache key generated from the URL? it seems that the session state is getting in there somewhere, which is causing a cache miss. Any ideas? thanks MIDDLEWARE_CLASSES = ( 'django.middleware.cache.UpdateCacheMiddleware', 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.middleware.gzip.GZipMiddleware', 'django.middleware.http.ConditionalGetMiddleware', 'django.middleware.doc.XViewMiddleware', 'ittybitty.middleware.IttyBittyURLMiddleware', 'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware', 'maintenancemode.middleware.MaintenanceModeMiddleware', 'djangodblog.middleware.DBLogMiddleware', 'SSL.middleware.SSLRedirect', #SSL middleware to handle SSL 'django.middleware.cache.FetchFromCacheMiddleware', )

    Read the article

  • HTML 5 offline caching

    - by kRON
    I've read the following Mozilla Developer article that explains how to implement HTML 5 offline resource caching in web apps. I've tried testing this locally: added the mime type to the list, created the manifest file, changed my doctype to the HTML 5 doctype, specified the manifest attribute and the correct path on the HTML element--but still I don't see the manifest file being consumed by Firefox at all. I've also checked the access logs on Apache and didn't see any requests for the manifest file being made. Has anyone given it a jab and had any success? I just don't know how to further troubleshoot the issue and would welcome any suggestions.

    Read the article

  • XAMPP is caching .html files running as PHP

    - by Lee
    I have XAMPP (latest version) installed on my Mac OS 10.6.3 I've added the following to .htaccess because I want .html to be interpreted as PHP. AddType application/x-httpd-php .php .html The problem is that the default XAMPP config seems to be caching .html files as static... so even though the PHP statements inside are being called (for example, 'echo time()' in index.html displays the dynamic output)... the actual file is being cached. When I make changes to a .html file, I've having to restart Apache for it to load the newest changes. Looking at httpd.conf, it looks like it's loading the following cache mods.. LoadModule file_cache_module modules/mod_file_cache.so LoadModule cache_module modules/mod_cache.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule mem_cache_module modules/mod_mem_cache.so Any idea how to implement a system whereby it checks the timestamp of the file, before loading it from cache? Thanks!

    Read the article

  • Browser caching issue on a https site pressing f5

    - by sushil bharwani
    i am working on a website where i have content entry form. This form contains a tiny mce control. The control is composed of some 40-50 files. The testing reported that the entry form loads slow and evertime shows up 50 files loading to completely load the page. Is there a way i can decrease this time. I have taken help of browser caching by setting the expires header of static content to very far date. When i access the form through its link second or later times it loads fast without saying 40 files remaining. but when i do f5 it reloads the entire page. I m confused as to how is f5 different from clicking on the link. Just to add my url is https.Any suggestion to increase the performance of this form will be great.

    Read the article

  • Value Not Updating? Check for Caching!

    - by Ken Cox [MVP]
    Here’s today’s dumb mistake: A value that was supposedly updated by a routine on one page, wasn’t changing on another ASP.NET screen. I carefully traced the progress of the update and everything looked right – all the way to the database. After puzzling over why the value wouldn’t show correctly on the ASP.NET grid, it finally dawned on me: <%@ OutputCache Duration="30" VaryByParam="none" %> Ouch! To improve efficiency, I had told the page to cache the output for 30 seconds...(read more)

    Read the article

  • IIS7 + ASP.NET MVC Client Caching Headers Not Working

    - by Tobin Harris
    Hey folks I've deployed an ASP.NET MVC app on IIS7 and Windows Server 2008. I've read posts on here, and around the web, but can't get the darn client-side caching to work. I'm trying to cache everything in the /Content folder. So far I've select that folder in IIS manager, and set the appropriate HTTP Response Headers (under Common Headers). I've also checked the web.config file in the /Content folder and the values there are being set. All resources in /Content come back with this (from FireBug): Cache-Control no-cache, no-store, must-revalidate Pragma no-cache Content-Type image/png Expires -1 Last-Modified Sun, 11 Oct 2009 19:01:40 GMT Accept-Ranges bytes Etag "f318d643a54aca1:0" Server Microsoft-IIS/7.0 X-Powered-By ASP.NET Date Sun, 11 Oct 2009 20:40:01 GMT Content-Length 620 Note the Cache-Control and Expires values for this static image being requested. The site is currently compiled in Debug (this will change), but surely that wouldn't make a difference? Obviously I'm overlooking something, any ideas would be appreciated. Thanks

    Read the article

  • Caching linked pages in ASP.NET

    - by n0e
    I'm thinking of a way of creating a local backup for the pages I will be linking to from my site. This would be text-only, similar to Google's 'Copy' feature on the search pages. The idea is to be sure that the pages I would reference to, or cite from, do not dissapear from the Web in the near future. I know I could just keep local copies, but I will have A LOT of citations. What would be the best way of achieving this in ASP.NET? Some custom caching in database?

    Read the article

  • Caching the response of an ASP.NET HTTP Handler server and client side

    - by Bert Vandamme
    Is it possible to cache the response of a http handler on the server and on the client? This doesn't seem to be doing the trick: _context.Response.Cache.SetCacheability(HttpCacheability.Public); _context.Response.Cache.SetExpires(DateTime.Now.AddDays(7)); The _context is the HTTPContext passed as an argument to the ProcessRequest method on the IHttpHandler implementation. Any ideas? Update: The client does cache images that are loaded through the httphandler, but if another client does the same call, the server hasn't got it cached. So for each client that asks for the image, the server goes to the database (and filestream). If we use a aspx page instead of a httphandler together with a caching profile, then the images are cached both on the client and the server.

    Read the article

  • Oracle Fusion Distributed Order Orchestration

    Designed from the ground up using the latest technology advances and incorporating the best practices gathered from Oracle's thousands of customers, Fusion Applications are 100 percent open standards-based business applications that set a new standard for the way we innovate, work and adopt technology. Delivered as a complete suite of modular applications, Fusion Applications work with your existing portfolio to evolve your business to a new level of performance. In this AppCast, part of a special series on Fusion Applications, you'll learn how Oracle Fusion Distributed Order Orchestration can help companies improve customer service, reduce fulfillment costs, and optimize fulfillment decision making. Supporting a strategy for improving operational efficiency and boosting customer satisfaction, Fusion Distributed Order Orchestration alleviates or tempers critical production challenges many organizations face today by consolidating order information into a central location. You'll also discover how Fusion Distributed Order Orchestration works with your existing order management solutions.

    Read the article

  • Symfony caching question (caching a partial)

    - by morpheous
    I am using Symfony 1.3.2 and I have a page that uses a partial from another module. I have two modules: 'foo' and 'foobar'. In module 'foo', I have an 'index' action, which uses a partial from the 'foobar' module. so foo/indexSuccess.php looks something like this: Some data here ? I want to cache 'part2' of my foo/indexSuccess.php page, because it is very expensive (slow). I want the cache to have a lifetime of about 10 minutes. In apps/frontend/modules/foo/config/cache.yml I need to know how to cache 'part2' of the page (i.e. the [very expensive] partial part of the page. can anyone tell me what entries are required in the cache.yml file?

    Read the article

  • Apache server-side files caching via .htaccess?

    - by purpler
    Hi, I'm starting new website and gonna include several JS libs and would like to know how .htaccess file template should look like with caching of media and JS files on? Whats better for compression, GZip or Deflate? Is it better/faster solution to serve those JS libs of the Google CDN perhaps then locally? I'm asking CDN question since some of scripts served off GoogleCDN are potentially going to update and eventually break the website layout so i thought it would be better for me to host them locally and cache via webserver if its going to work with same/near-same speed.

    Read the article

  • Caching by in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example : when a piece of data for a customer is requested from a service, we fetch all the data for that customer(relevant part to the service) and save it in a in-memory dictionary then serve it from there on following requests(we run singleton services). Any update goes to DB, then updates the in memory dictionary. It seems all simple and harmless but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard to find bugs. Sometimes we defer writing to database, keeping new data in cache till then. There are cases when we store millions of rows in memory because the table has many relations to other tables and we need to show aggregate data quickly. All this cache handling is a big part of our codebase and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and it makes it hard to understand the actual business logic. However I don't think we can serve data in a reasonable amount of time if we have to hit the database every time. I am unhappy about the current situation but I don't have a better alternative. My only solution would be to use NHibernate 2nd level cache but I have nearly no experience with it. I know many campanies use Redis or MemCached heavily to gain performance but I have no idea how I would integrate them into our system. I also don't know if they can perform better than in-memory data structures and queries. Are there any alternative approaches that I should look into?

    Read the article

  • "No caching mode page present" when USB flash disk attached

    - by evgeny9
    When attaching a USB flash disk (NTFS formatted) to a laptop with Ubuntu Server 12.04 on board, I get following messages: [ 3572.355603] sd 2:0:0:0: [sdb] No Caching mode page present [ 3572.355640] sd 2:0:0:0: [sdb] Assuming drive cache: write through [ 3572.361599] sd 2:0:0:0: [sdb] No Caching mode page present [ 3572.361636] sd 2:0:0:0: [sdb] Assuming drive cache: write through I get them right in the terminal, so that I should press Ctrl+C to proceed with working (entering commands). Is it normal or do I have to setup Caching mode somehow? Thank you.

    Read the article

  • Experience with MooseFS?

    - by brown.2179
    Anyone have any experience using MooseFS? I want an easy distributed storage platform to store static data archive of about 10 TB and serve it to 20-40 nodes. Also I want to be able to add storage as the archive grows without having to rebuild the filesystem. I don't care if it's a bit slow. I just want it to be simple and stable. Basically from what I can see for OS X it's between MooseFS and Gluster. Any other suggestions?

    Read the article

  • Problem with caching images on server- jQuery

    - by klon
    Hi I have the weirdest problem. I am implementing a simple gallery with a use of nivo slider jQuery plugin. Everything works perfectly when I test it on my local machine, however I am having an issue on an online hosted server. The images do not tend to appear when you first open the website. There seems to be an issue with caching the images. when you reload the page (simple f5) everything works fine. Rather than showing you the code, I think it would be better to show the site so you can see what firebug shows you: http://teslacreations.com/orangery/test.php Does anyone have any ideas how to solve it?

    Read the article

  • Apache file caching

    - by danp
    How does apache handle caching of certain files, and is it possible to explicitly say that certain files should be aggressively cached more than others, through the standard config files for a given host or virtualhost? To put it in context, I keep a lot of site content in various XML files, and I'd like to be able to say that this file will be used a lot, and therefore cache it as much as possible..? Does apache do this kind of thing intelligently and on the fly..? Will it observe which files are more popular than others and try to match cache hits appropriately..? Lots of questions in one, but the basic idea should be clear enough. edit: to be clear, these are resource files which are loaded and interpreted by PHP - but php as a process is spawned inside apache.. right? Please help!

    Read the article

  • Tuning Distributed Applications to Access Big Data

    Distributed applications are just that: distributed across one or more hardware platforms across the enterprise. The database administrator (DBA) has the unenviable task of monitoring these environments and configuring and tuning the database server to meet multiple needs. As multiple distributed applications now require access to a very large data store, what tuning options are available to help?

    Read the article

  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data sources

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. Requirements are: each person publishes his own flashcards in a file on a web server, e.g. http://:www.mywebserver.com/flashcards.txt other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of flashcard files they want to subscribe to, URLs and imported flashcards being saved in IsolatedStorage the flashcards.txt file has the following simple format: title, then blocks of question/answers: Jim Smith's flashcards from English class 53-222, winter semester 2009 ==fla Das kann nicht sein. That can't be. ==fla Es sei denn, er kommt nicht. Unless he doesn't come. The user then makes public the URL to his flashcard file and other readers begin reading in his flashcards. In order to lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, which they publish and distribute the URL. The flashcard readers will then recognize it is a google document and perform the necessary screen scraping to get at the raw text. I have two technical questions about this approach: What is a best way to plan now for scalability issues: e.g. if your reader is subscribed to 10 flashcard files that are each 200K, it will have to download 2MB of text just to find out if any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and published google docs? Each reader will have the ability to allow the person to test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the important flashcards themselves. What is a good pattern to allow these readers to share and synchronize this meta data, e.g. so when you are looking at a flashcard you can see that 5 other people have made corrections to it. The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard, the best approach seems to be URL + position-in-file, or even better URL + original text of both question and answer fields, but both of these have their obvious drawbacks. The main requirement is that the bar for participation is kept as low as possible, i.e. type text in a google document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions in order to compensate for the lack of database, lack of unique ids, etc. For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience or best-practice tips you can share on the above two points?

    Read the article

  • Release management with a distributed version control system

    - by See Sharp Cheddar
    We're considering a switch from SVN to a distributed VCS at my workplace. I'm familiar with all the reasons for wanting to using a DVCS for day-to-day development: local version control, easier branching and merging, etc., but I haven't seen that much that's compelling in terms of managing software releases. Here's our release process: Discover what changes are available for merging. Run a query to find the defects/tickets associated with these changes. Filter out changes associated with "open" tickets. In our environment, tickets must be in a closed state in order to merged with a release branch. Filter out changes we don't want in the release branch. We are very conservative when it comes to merging changes. If a change isn't absolutely necessary, it doesn't get merged. Merge available changes, preferably in chronological order. We group changes together if they're associated with the same ticket. Block unwanted changes from the release branch (svnmerge block) so we don't have to deal with them again. Sometimes we can be juggling 3-5 different milestones at a time. Some milestones have very different constraints, and the block list can get quite long. I've been messing around with git, mercurial and plastic, and as far as I can tell none of them address this model very well. It seems like they would work very well when you have only one product you're releasing, but I can't imagine using them for juggling multiple, very different products from the same codebase. For example, cherry-picking seems to be an afterthought in mercurial. (You have to use the 'transplant' command). After you cherry-pick a change into a branch it still shows up as an available integration. Cherry-picking breaks the mercurial way of working. DVCS seems to be better suited for feature branches. There's no need for cherry-picking if you merge directly from a feature branch to trunk and the release branch. But who wants to do all that merging all the time? And how do you query for what's available to merge? And how do you make sure all the changes in a feature branch belong together? It sounds like total chaos. I'm torn because the coder in me wants DVCS for day-to-day work. I really want it. But I fear the day when I have to put the release manager hat and sort out what needs to be merged and what doesn't. I want to write code, I don't want to be a merge monkey.

    Read the article

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB: Clients <= HTTP = [ RESTlet <= HTTP = CouchDB ] I'm using CouchDB also to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + "real" request). In order to keep the service as efficent as possible, I want to reduce the number of requests, in this case redundant requests for login data. My idea now is to provide a cache (i.e.LRU-Cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probabily not be enough. But how do I invalidate the cache data, once a user changes the password, for instance. Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to auth new requests by using them. If a authentication fails or there is now entry available, I'll dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in a worst case, users that have changed their data will perhaps still be able to login with their old credentials. How can I deal with that? Or what is a good strategy to keep the cache(s) up-to-date in general? Thanks in advance.

    Read the article

  • File based caching under PHP

    - by azatoth
    I've been using http://code.google.com/p/phpbrowscap/ for a project, and it usually works nice. But a few times it's cache, which is plain php-files (see http://code.google.com/p/phpbrowscap/source/browse/trunk/browscap/Browscap.php#372 et. al.), has been "zeroed", i.e. the whole cache file has become large blob of NULLs. Instead of trying to find out why the files become NULL, I though perhaps it might be better to change the caching strategy to something more resilient. So I do wonder if you has any good ideas what would be a good solution; I've been looking at http://www.jongales.com/blog/2009/02/18/simple-file-based-php-cache-class/ and http://www.phpclasses.org/package/313-PHP-Cache-arbitrary-data-in-files-.html and I also though of just saving an serialized array to the file instead of pure php as it's been doing now; But I'm uncertain what approach I should target here. I'm grateful for any insight into this area of technology, as I know it's complex from a performance point of view.

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks. What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached. Finally my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!

    Read the article

  • asp.net, wcf authentication and caching

    - by andrew
    I need to place my app business logic into a WCF service. The service shouldn't be dependent on ASP.NET and there is a lot of data regarding the authenticated user which is frequently used in the business logic hence it's supposed to be cached (probably using a distributed cache). As for authentication - I'm going to use two level authentication: Front-End - forms authentication back-end (WCF Service) - message username authentication. For both authentications the same custom membership provider is supposed to be used. To cache the authenticated user data, I'm going to implement two service methods: 1) Authenticate - will retrieve the needed data and place it into the cache(where username will be used as a key) 2) SignOut - will remove the data from the cache Question 1. Is correct to perform authentication that way (in two places) ? Question 2. Is this caching strategy worth using or should I look at using aspnet compatible service and asp.net session ? Maybe, these questions are too general. But, anyway I'd like to get any suggestions or recommendations. Any Idea

    Read the article

  • Is Safari on iOS 6 caching $.ajax results?

    - by user1684978
    Since the upgrade to iOS 6, we are seeing Safari's web view take the liberty of caching $.ajax calls. This is in the context of a PhoneGap application so it is using the Safari WebView. Our $.ajax calls are POST methods and we have cache set to false {cache:false}, but still this is happening. We tried manually adding a timestamp to the headers but it did not help. We did more research and found that Safari is only returning cached results for web services that have a function signature that is static and does not change from call to call. For instance, imagine a function called something like: getNewRecordID(intRecordType) This function receives the same input parameters over and over again, but the data it returns should be different every time. Must be in Apple's haste to make iOS 6 zip along impressively they got too happy with the cache settings. Has anyone else seen this behavior on iOS 6? If so, what exactly is causing it? The workaround that we found was to modify the function signature to be something like this: getNewRecordID(intRecordType, strTimestamp) and then always pass in a timestamp parameter as well, and just discard that value on the server side. This works around the issue. I hope this helps some other poor soul who spends 15 hours on this issue like I did!

    Read the article
