Search Results

Search found 32994 results on 1320 pages for 'second level cache'.


  • Programmatic cache creation

    - by Pablo Fernandez
    I switched from XML to programmatic cache creation, and now I can't retrieve my cache by name. Here's a code snippet that shows what I'm doing; maybe you can spot an obvious error? http://gist.github.com/405546 (I'm only showing the relevant lines here).
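
    The gist isn't reproduced here, but a classic cause of this symptom (assuming Ehcache, whose API the description matches) is constructing the Cache object without ever registering it with the CacheManager; getCache() only finds registered caches. A minimal sketch:

      import net.sf.ehcache.Cache;
      import net.sf.ehcache.CacheManager;

      public class CacheSetup {
          public static void main(String[] args) {
              CacheManager manager = CacheManager.create();
              // name, maxElementsInMemory, overflowToDisk, eternal,
              // timeToLiveSeconds, timeToIdleSeconds
              Cache myCache = new Cache("myCache", 10000, false, false, 3600, 1800);
              manager.addCache(myCache); // without this, getCache() returns null
              Cache found = manager.getCache("myCache");
              System.out.println(found != null ? "cache registered" : "cache missing");
          }
      }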

    Read the article

  • How to clear APC cache entries?

    - by lo_fye
    I need to clear all APC cache entries when I deploy a new version of the site. apc.php has a button for clearing all opcode caches, but I don't see buttons for clearing all user entries, all system entries, or all per-directory entries. Is it possible to clear all cache entries via the command line, or some other way?
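
    For what it's worth, a sketch of one common approach: APC's cache lives in the web server process, so clearing it from the CLI only clears the CLI SAPI's separate store; a small script requested over HTTP from the deploy step works instead (the file name here is made up for illustration):

      <?php
      // clear_apc.php -- request via HTTP (e.g. curl) from the deploy script;
      // running it with `php -r` would only clear the CLI's own APC instance.
      apc_clear_cache();        // system (opcode) cache
      apc_clear_cache('user');  // user entries stored with apc_store()
      echo "APC caches cleared\n";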

    Read the article

  • How do I cache query results using LINQ?

    - by Vince
    Hi, is there any way to cache LINQ to SQL queries by looking at the parameters that were previously passed, bypassing the database altogether? I know L2S caches some database calls, but I'm looking for a permanent solution: even if the application restarts, the cache reloads and the database is never asked again. Are there any frameworks for C#?
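
    Not an existing framework, but a minimal sketch of the in-memory half of this: memoising materialised query results by a parameter key. Persisting across restarts would additionally mean serialising the dictionary to disk on shutdown and reloading it on start. All names below are illustrative:

      using System;
      using System.Collections.Generic;

      static class QueryCache
      {
          static readonly Dictionary<string, object> Entries =
              new Dictionary<string, object>();

          // Runs the query once per distinct key; later calls hit memory only.
          public static T GetOrAdd<T>(string key, Func<T> query)
          {
              object hit;
              if (Entries.TryGetValue(key, out hit))
                  return (T)hit;
              T result = query();
              Entries[key] = result;
              return result;
          }
      }

      // usage (hypothetical data context):
      // var users = QueryCache.GetOrAdd("users:" + id,
      //     () => db.Users.Where(u => u.GroupId == id).ToList());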

    Read the article

  • Action Cache for root URL not working

    - by askegg
    Here's the setup. I have a web site which is essentially a simple CMS. Here is the routes file:

      map.connect ':url', :controller => :pages, :action => :show
      map.root :controller => :pages, :action => :show, :url => "/"

    The pages controller is thus:

      class PagesController < ApplicationController
        before_filter :verify_access, :except => [:show]

        # Cache show action if we are not logged in.
        caches_action :show, :layout => false,
                      :unless => Proc.new { |controller| controller.logged_in? }

        def update
          @page = Page.find(params[:id])
          respond_to do |format|
            expire_action :action => :show, :url => @page.url

    So when a visitor hits "/" it maps to :controller => "pages", :action => "show", :url => "/". This generates a cached version on the first try, then returns the appropriate result thereafter. The log files show:

      Processing PagesController#show (for 127.0.0.1 at 2009-08-02 14:15:01) [GET]
        Parameters: {"action"=>"show", "url"=>"/", "controller"=>"pages"}
      Cached fragment hit: views/out.local// (0.1ms)
      Rendering template within layouts/application
      Filter chain halted as [#<ActionController::Filters::AroundFilter:0x23eb03c @identifier=nil, @method=#<Proc:0x01904858@/Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/caching/actions.rb:64>, @kind=:filter, @options={:only=>#<Set: {"show"}>, :if=>nil, :unless=>#<Proc:0x025137ac@/Users/askegg/Sites/out/app/controllers/pages_controller.rb:6>}>] did_not_yield.
      Completed in 2ms (View: 1, DB: 0) | 200 OK [http://out.local/]

    OK, all good so far. When I update the page, it should expire the cache (see above). The logs show:

      Page Load (0.2ms)  SELECT * FROM "pages" WHERE ("pages"."id" = 3)
      Page Load (0.1ms)  SELECT "pages".id FROM "pages" WHERE ("pages"."url" = '/' AND "pages".domain_id = 1 AND "pages".id <> 3) LIMIT 1
      Expired fragment: views/out.local/index (0.1ms)
      Redirected to http://out.local/pages/3
      Completed in 9ms (DB: 0) | 302 Found [http://out.local/pages/3]

    See the problem? The fragment was stored under "/" (views/out.local//), but Rails expires the one named "index" (views/out.local/index). Naturally this results in the cache NOT being cleared, so visitors are now seeing the old version.

    Read the article

  • ASP.NET JavaScript cache clear

    - by Florim Maxhuni
    I have a website that I built some time ago. Now they have requested some new features, and I made changes to some JavaScript files. But when I publish, clients that use IE have problems with caching, so their browsers keep the old version of the JavaScript. How can I clear the client cache so that when they visit the website they get the latest JavaScript files that I modified?
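
    A common fix (one sketch among several options) is to append a version token to each script URL whenever the file changes, so IE treats it as a brand-new resource; the file name and version value here are illustrative:

      <!-- before -->
      <script src="scripts/site.js" type="text/javascript"></script>

      <!-- after: bump the v parameter on every deploy -->
      <script src="scripts/site.js?v=20100510" type="text/javascript"></script>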

    Read the article

  • Passing data from page to page using System.Web.Caching.Cache

    - by Dan
    I'd like to pass data from one ASP.NET page to another, and I've seen that System.Web.Caching.Cache can be used to accomplish this. I'm wondering whether it's a good way to do it, and also whether there is any cleanup or anything else I need to keep in mind when using the Cache. I'm not passing very much, at most two integers. Thanks.
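
    A sketch of the mechanics, for what it's worth (the key name is illustrative). One caveat: the Cache is application-wide, so two users hitting these pages at once would overwrite each other's entries; for per-user values like these, Session state is the more usual home.

      using System;

      public partial class PageA : System.Web.UI.Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              // Page A: store the values with an absolute expiry so stale entries age out.
              Cache.Insert("wizard:values", new int[] { 4, 2 }, null,
                  DateTime.UtcNow.AddMinutes(5),
                  System.Web.Caching.Cache.NoSlidingExpiration);
          }
      }

      public partial class PageB : System.Web.UI.Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              // Page B: read defensively; the entry may have been evicted or expired.
              int[] values = Cache["wizard:values"] as int[];
              if (values == null)
              {
                  // fall back: recompute, or redirect back to Page A
              }
          }
      }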

    Read the article

  • ASP.NET MVC, Cache individual User Control

    - by Alex
    How do I cache an individual user control with ASP.NET MVC? I also need the VaryByParam etc. support that usually comes with ASPX output caching. I don't want to cache the entire action though, only one of my user controls in the view. An example would be nice :) Thank you!
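
    For reference, later MVC versions (MVC 3 and up) added exactly this in the form of output-cached child actions, which honour Duration and VaryByParam; a sketch with invented controller, action and view names:

      using System.Web.Mvc;

      public class WidgetController : Controller
      {
          // Cached independently of whichever page embeds it.
          [ChildActionOnly]
          [OutputCache(Duration = 60, VaryByParam = "id")]
          public ActionResult Sidebar(int id)
          {
              return PartialView("_Sidebar", LoadSidebarModel(id));
          }

          // hypothetical data loader, stubbed for the sketch
          private object LoadSidebarModel(int id)
          {
              return new { WidgetId = id };
          }
      }

      // In the parent view:
      //   @Html.Action("Sidebar", "Widget", new { id = Model.Id })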

    Read the article

  • How to set cache for CSS/JS files

    - by coderex
    Hi all, I have to set up caching for the CSS and JS files used on my site. The site runs on a shared hosting server, and nothing can be changed on the server itself. So what could be the solution for enabling caching and compression for the JS and CSS files?
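
    On shared Apache hosting, a per-directory .htaccess file is usually the only lever you get; a sketch, assuming the host has mod_expires and mod_deflate enabled:

      <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType text/css "access plus 1 week"
        ExpiresByType application/x-javascript "access plus 1 week"
      </IfModule>

      <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/css application/x-javascript
      </IfModule>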

    Read the article

  • MonoRail 2.0 CombineJS doesn't cache

    - by olemarius
    We just upgraded from MonoRail 1 to MonoRail 2.0 and want to use CombineJS, as seen here: http://erichauser.net/2009/01/27/javascript-compression-for-monorail/

    In Firebug's Net panel the combined script loads as

      http://www.domain.com/MonoRail/Files/BuiltJS.rails?name=deflayout&version=8204059377542922030

    but the response carries must-revalidate in its Cache-Control header:

      Cache-Control: public, must-revalidate, max-age=259200

    How can I get rid of that? Thanks in advance! :)

    Read the article

  • Local network cache of PHP and Apache2 on Win Server 2008 R2

    - by Ahmed Benlahsen
    Software configuration: I have a new server with Windows Server 2008 R2 installed via VMware. I have installed Apache 2.2, PHP 5.2 and MySQL 5.5 as separate packages.

    Issue: On the first installation of my application, everything worked great. Then I updated some JS and CSS files, and when I accessed the application again from a PC on the local network I got the old JS and CSS versions, while accessing the same application on the local server itself gave me the latest versions of those files. The link to my application on the local server is http://localhost/BADIL; from the local network it is http://LOCAL_SERVER_IP/BADIL.

    There must be a cache somewhere, but I don't know where. Maybe in Windows Server 2008 R2, or in VMware? The question is: why does everything work fine when I access the application on the server itself, while from the local network I do not see the updated versions of the JS and CSS files?

    Read the article

  • Squid parent cache for text/html only

    - by Salvador
    How do I configure Squid to request only text/html from the parent cache? Right now I am using:

      cache_peer 127.0.0.1 parent 8080 0 no-query no-digest

    On the other hand, I see a lot of direct requests that do not use the parent proxy: some queries go FIRST_UP_PARENT and some DIRECT. How do I tell Squid to always use the parent for text/html? By the way, it is a transparent proxy. I have tried:

      cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
      acl elhtml req_mime_type -i ^text/html$
      acl elhtml req_mime_type -i text/html
      cache_peer_access 127.0.0.1 allow elhtml
      cache_peer_access 127.0.0.1 deny all

    and it does not work. Thanks in advance for the help.

    Read the article

  • Tomcat repeated 401 and the client nonce cache

    - by PaulNBN
    I've got a Tomcat 6.0.35 service with a SOAP-based webapp protected by digest authentication. We have been seeing various users get repeated 401 responses since we upgraded to 6.0.35. Additionally, we are getting the following entries in the Catalina log:

      WARNING: A valid entry has been removed from client nonce cache to make room
      for new entries. A replay attack is now possible. To prevent the possibility
      of replay attacks, reduce nonceValidity or increase cnonceCacheSize. Further
      warnings of this type will be suppressed for 5 minutes.

    Any idea what is going on?
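
    The warning itself names the two knobs. One way to turn them (a sketch; the values are illustrative and the attribute names are from the Tomcat 6 valve documentation) is to declare the authenticator valve explicitly in the webapp's context.xml:

      <Valve className="org.apache.catalina.authenticator.DigestAuthenticator"
             cnonceCacheSize="5000"
             nonceValidity="120000" />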

    Read the article

  • Memory cache Ubuntu 9.10 server x86 doesn't work as expected

    - by Matthijs
    We're using an Ubuntu 9.10 server to transfer Ghost image files. We configured it only with Samba, and the DOS clients connect to Samba. The latest updates are installed, and so far the server is running fine. When we image 10 PCs with the same image of 2 files of 2 GB each, there's no disk activity; everything is served from RAM (the server has 4 GB). But when we use 2 PCs with 2 different images of 8 files of 500 MB each, there's a lot of continuous disk activity and the speed is lower. So it seems that Ubuntu doesn't cache more than one big image. Are there settings to change this behaviour?

    Read the article

  • How to enable GeoIP on Magento with Varnish page cache

    - by molleman
    I currently have 3 stores online with 3 different domains, running Magento with Apache and Varnish (using the Phoenix page cache extension) on CentOS. One store is for the UK, another for Ireland and another for the USA. The trouble is, if (for example) a US user hits the UK store, I would like the user to be notified on the page to go to the correct store (I do not want them automatically redirected). I was able to get this working with php-pecl-geoip and the MaxMind database, but as users on my website have increased I had to start using Varnish. How could I implement this functionality with Varnish, so that I know which country the user is from and can display a message pointing them at the relevant website?

    Read the article

  • Flushing disk cache for performance benchmarks?

    - by Ido Hadanny
    I'm doing some performance benchmarking of a heavy SQL script running on Postgres 8.4 on an Ubuntu box (Natty). I'm experiencing pretty unstable performance, even though I'm supposed to be the only one using the machine (the same script on the exact same data might run in 20m and then in 40m for no specific reason). So, remembering my distant DBA training, I decided I should flush the Postgres cache, using sudo /etc/init.d/postgresql restart, but it's still shaky! My question: maybe I'm missing some caches in my disk/OS? I'm using a NetApp appliance as my storage. Am I on the right track? Do I even want to make sure I get repeatable performance before I start tuning?
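
    Restarting Postgres only empties its own shared buffers; the Linux page cache survives it (and the NetApp keeps its own cache on top, which this does not touch). A sketch for dropping the OS-side caches between runs:

      sync                                        # flush dirty pages to disk first
      echo 3 | sudo tee /proc/sys/vm/drop_caches  # drop page cache plus dentries/inodes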

    Read the article

  • CentOS Insufficient space in download directory /var/cache/yum/base/packages

    - by Joao Heleno
    Hello! I was trying to yum install libpcap when I got:

      Error Downloading Packages:
        14:libpcap-0.9.4-15.el5.i386: Insufficient space in download directory /var/cache/yum/base/packages
          * free 0
          * needed 108 k

    Here's the output from df -h:

      Filesystem   Size  Used  Avail  Use%  Mounted on
      /dev/sda1     20G   19G      0  100%  /
      /dev/sda3    202G   38G   154G   20%  /home
      tmpfs        1.5G     0   1.5G    0%  /dev/shm

    And fdisk -l:

      Disk /dev/sda: 250.0 GB, 250000000000 bytes
      255 heads, 63 sectors/track, 30394 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes

         Device Boot  Start    End      Blocks  Id  System
      /dev/sda1   *       1   2611    20972826  83  Linux
      /dev/sda2        2612   3251     5140800  82  Linux swap / Solaris
      /dev/sda3        3252  30394  218026147+  83  Linux

    I have run yum clean all with no success in clearing up space. Please advise. Thanks.

    Read the article

  • Strange inode/RAM cache drops happening in CentOS

    - by FunkyChicken
    I run a CentOS 5.7 machine (64-bit) with 24 GB RAM and 4 SAS drives in a RAID 10 setup. The machine runs nginx/1.0.10, php-fpm and xcache. About a month back the RAM usage of this machine changed: about every few hours the cache is flushed from RAM, and this happens exactly when the inode table usage drops. I'm pretty sure these drops are related (see the two attached graphs). This server hosts quite a lot of small files (20M of them, all a few KB big). Not many files are deleted (maybe 100 per hour, a few MB in total at most), not nearly enough to account for the huge inode table drops. I also have no crons running which could cause these drops. sar -r output: http://pastebin.com/C4D0B79i My question: why are these huge RAM/inode usage drops happening, and how can I get nginx/PHP to use all of my server's RAM?
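
    One knob commonly tuned for exactly this symptom (a suggestion, not a confirmed fix for this box) is vm.vfs_cache_pressure, which sets how aggressively the kernel reclaims dentry/inode cache relative to the page cache:

      # default is 100; lower values make the kernel hold on to
      # inode/dentry entries longer instead of reclaiming them
      sysctl -w vm.vfs_cache_pressure=50

      # persist across reboots:
      echo 'vm.vfs_cache_pressure = 50' >> /etc/sysctl.conf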

    Read the article

  • How to cache streaming video and Silverlight with a Squid Windows reverse proxy

    - by V. Romanov
    We have an intranet web server running a Silverlight application (ACTUS media monitor, if anyone cares to know). The server is used to record video and stream it to clients through a CDN solution. We want to put a reverse proxy between the server and the CDN provider in order to remove the office network bottleneck that's currently strangling us. I've set up Squid for Windows on a separate machine outside the network, using Squid's BasicAccelerator configuration setting. It seems to work as far as the reverse proxy is concerned: requests are forwarded and the application is working, but it doesn't seem to cache anything (no space is used on the drive where Squid is installed). I found no explicit setting to turn caching on in Squid, so I assume it's on by default. Perhaps I need some other trick to make the video and/or Silverlight content cacheable? Any help will be appreciated, and any info you need to help me will be provided at once. Thanks in advance!

    Read the article

  • Software to cache a web application for use offline

    - by littlecharva
    My boss quite regularly has to demo our web application to clients in a situation with no wifi available and sketchy 3G access, quite often the 3G lets him down. I have considered setting a copy of our server up in a virtual machine on his laptop so he could demo it offline, but I fear this will just introduce more headaches when he forgets how to boot the VM up. What I'd ideally like is an app that records you logging into a web app, saves copies of all the pages and ties the links and buttons you click up to offline copies of the pages it saves. So you could run through the demonstration you're going to give and have it cache the pages. When you then click the same buttons and links in offline mode it will present the relevant offline pages. Does such a thing exist? Can anyone recommend any alternative solutions to this problem? Thanks, Anthony

    Read the article

  • Linux: don't use file system cache under a directory

    - by GetFree
    For a PHP website I'm monitoring, I need to see which files are read each time the browser makes a request. I thought of using find . -type f -amin 1; with that I get all files which were read in the last minute (it's a development server, so only I am using the website). I took care of removing the noatime attribute from the mount point. However, there must be something else preventing the kernel from touching the actual files on disk, because the access time is not being updated when I read a file. I guess it must be the file-system cache serving the files from memory. Is there a way to disable file caching under a specific directory (public_html in my case)? Also, I read somewhere that there is a nobh mount attribute which apparently disables file caching under that mount point, but I'm not sure.
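
    One thing worth ruling out before blaming the cache (a guess, since the mount options aren't shown): kernels from 2.6.30 on default to relatime, which suppresses most atime updates even after noatime is removed. Remounting with strictatime restores the classic per-read atime behaviour; the path here is illustrative:

      sudo mount -o remount,strictatime /var/www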

    Read the article

  • DreamPress WordPress site Varnish Cache Error

    - by rhand
    Every now and then, often when I write a post on my DreamHost DreamPress WordPress blog, I get this Varnish-related error:

      Error 503 Service Unavailable
      Service Unavailable
      Guru Meditation:
      XID: 180706672
      Varnish cache server

    I found a related post, "Varnish & ISPConfig under Debian give error 503", but it only suggests that it could be an Apache virtual hosts issue and that the defined hosts should be checked. That thread was about a different XID, and the advice was just a comment, not an accepted solution. So perhaps this situation is different. Any ideas?

    Read the article

  • yum update with shared cache

    - by Sammitch
    We've got a big batch of RHEL 6 machines that are due for patching, and for some reason the process here does not involve a local repo. I'm new here; I've asked why ("it just didn't work") and I don't have enough time to make one work before the window that's already scheduled. The usual method is to install yum-downloadonly and run yum update --downloadonly --downloaddir=/mnt/cifs_share, and then yum update /mnt/cifs_share/*.rpm, which just does not look right to me since not all of these machines have the same set of installed packages. The method I tried today was mounting the share at /var/cache/yum/x86_64/6Server/rhel-x86_64-server-6/packages/, which worked, but then yum automatically deleted everything once it finished. I've looked over the yum man page, but I don't see any flag I can feed it to stop it from deleting everything, nor a flag like up2date's --tmpdir=/mnt/cifs_share. Can anyone out there help me kludge this together until I can get a local repository working?
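
    It appears to be a yum.conf option rather than a command-line flag; a sketch (the share path comes from the question, the option names are standard yum configuration):

      # /etc/yum.conf
      [main]
      keepcache=1
      cachedir=/mnt/cifs_share/yum-cache

    keepcache=1 alone stops yum from deleting downloaded RPMs out of its cache directory; pointing cachedir at the share as well is optional and worth testing first, since every machine would then write into the same tree.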

    Read the article

  • /manual/cache folder on my server?

    - by MrZombie
    Hi all, on our site's server, once managed by someone who is no longer with us, there's a folder named /manual/cache which contains txt files named+like+this, mostly using pornography-related keywords. The content is mainly spam-like gibberish. My assumption is that it's somehow used to spam search engines, but I might be wrong, which is the reason for my question here. Any idea what it might mean/contain? As an additional note, the person's hiring period oddly corresponds to the dates of the files, which seem to have automagically stopped being generated after the date we parted ways.

    Read the article
