Search Results

Search found 7128 results on 286 pages for 'httpcontext cache'.

Page 6/286 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • windows cache not working as it should?

    - by piotrektt
    I run Windows Server 2012 Datacenter with 60 GB of RAM. I have one file shared from a VHD, and when I copy this file locally the RAM cache fills up as expected, but when multiple computers connect to the share the cache is not used. The network is 8 Gb. Around 200 computers on the network need to read that one file, yet on this setup as few as 10 connections bring the server down. Is there any way to check what is going on? What other solution can I use to manage the cache in Windows?

    Read the article

  • Make nginx avoid cache if response contains Vary Accept-Language

    - by gioele
    The cache module of nginx version 1.1.19 does not take the Vary header into account. This means that nginx will serve the same cached response even if the content of one of the fields listed in the Vary header has changed. In my case I only care about the Accept-Language header; all the others have been taken care of. How can I make nginx cache everything except responses whose Vary header contains Accept-Language? I suppose I should have something like

        location / {
            proxy_cache cache;
            proxy_cache_valid 10m;
            proxy_cache_valid 404 1m;

            if ($some_header ~ "Accept-Language") { # WHAT IS THE HEADER TO USE?
                set $contains_accept_language # HOW SHOULD THIS VARIABLE BE SET?
            }
            proxy_no_cache $contains_accept_language;

            proxy_http_version 1.1;
            proxy_pass http://localhost:8001;
        }

    but I do not know what is the variable name for "the Vary header received from the backend".
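    For reference, one commonly suggested shape for this, as a sketch only (the variable names are made up, and the behaviour of map with $upstream_http_vary is assumed from the nginx documentation rather than tested on 1.1.19): nginx exposes the backend's response headers as $upstream_http_*, and proxy_no_cache is evaluated when the response is about to be stored, so a map on $upstream_http_vary can provide the flag.

        # http-level block: flag responses whose upstream Vary header mentions Accept-Language
        map $upstream_http_vary $skip_cache {
            default              0;
            ~*Accept-Language    1;
        }

        location / {
            proxy_cache        cache;
            proxy_cache_valid  10m;
            proxy_cache_valid  404 1m;

            proxy_no_cache     $skip_cache;   # do not store flagged responses

            proxy_http_version 1.1;
            proxy_pass         http://localhost:8001;
        }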

    Read the article

  • rake task can't access rails.cache

    - by mark
    Hi. I want to call a rake task from a cron job that stores remote weather data in the Rails cache. However, I must be doing something pretty wrong here, because I cannot find any solution through countless fruitless searches. Say I define and call this task:

        namespace :weather do
          desc "Store weather from remote source to cache"
          task :cache do
            Rails.cache.write('weather_data', Date.today)
          end
        end

    I get the error

        Anonymous modules have no name to be referenced by

    which leads me to believe the Rails cache isn't available. Outputting Rails.class from the rake file gives me Module, but Rails.cache.class again returns the above error. Do I need to include something here? Am I just hopeless at the internet? :) Thanks in advance.
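    A common cause of this symptom (not confirmed by the excerpt, but the usual fix) is that the task never loads the Rails environment, so Rails.cache is not configured when the block runs. A minimal sketch with the :environment prerequisite added:

        namespace :weather do
          desc "Store weather from remote source to cache"
          # Depending on :environment boots the full Rails app before the task
          # body runs, so Rails.cache points at the configured cache store.
          task :cache => :environment do
            Rails.cache.write('weather_data', Date.today)
          end
        end

    Invoked from cron with something like "cd /path/to/app && rake weather:cache".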

    Read the article

  • Ram cache on Windows Server 2008

    - by Jonas Lincoln
    Scenario: we have a file cluster on a UNC share. A couple of IIS web servers serve files from this UNC share. This is done through an IIS module, and this module does not use the built-in IIS caching feature. We'd like to cache the files from the UNC share in a RAM disk. So far, we've found this product: http://www.superspeed.com/servers/supercache.php Are there other products that can help us cache the files from the UNC share in RAM?

    Read the article

  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v 3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64 bit. I have figured out many of the quirks with APC, but something is still not quite right. No matter what settings I change, APC never actually caches more than 32MB. I'm trying to bump it up to 256 MB. 32MB is the default for apc.shm_size, so I am wondering if it's stuck there somehow. I have run the following

        echo '2147483648' > /proc/sys/kernel/shmmax

    to increase my system's shared memory to 2G (half of my 4G box). Then ran ipcs -lm, which returns

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 2097152
        max total shared memory (kbytes) = 8388608
        min seg size (bytes) = 1

    Also made the change in /etc/sysctl.conf and then ran sysctl -p to make the settings stick on the server. Rebooted, too, for good measure. In my APC settings, I have mmap enabled (which happens by default in recent versions of APC). php.ini looks like:

        apc.stat=0
        apc.shm_size="256M"
        apc.max_file_size="10M"
        apc.mmap_file_mask="/tmp/apc.XXXXXX"
        apc.ttl="7200"

    I am aware that mmap mode will ignore references to apc.shm_segments, so I have left it at the default of 1. phpinfo() indicates the following about APC:

        Version 3.1.9
        APC Debugging Disabled
        MMAP Support Enabled
        MMAP File Mask /tmp/apc.bPS7rB
        Locking type pthread mutex Locks
        Serialization Support php
        Revision $Revision: 308812 $
        Build Date Oct 11 2011 22:55:02

        Directive                    Local Value
        apc.cache_by_default         On
        apc.canonicalize             On
        apc.coredump_unmap           Off
        apc.enable_cli               Off
        apc.enabled                  On
        apc.file_md5                 Off
        apc.file_update_protection   2
        apc.filters                  no value
        apc.gc_ttl                   3600
        apc.include_once_override    Off
        apc.lazy_classes             Off
        apc.lazy_functions           Off
        apc.max_file_size            10M
        apc.mmap_file_mask           /tmp/apc.bPS7rB
        apc.num_files_hint           1000
        apc.preload_path             no value
        apc.report_autofilter        Off
        apc.rfc1867                  Off
        apc.rfc1867_freq             0
        apc.rfc1867_name             APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix           upload_
        apc.rfc1867_ttl              3600
        apc.serializer               default
        apc.shm_segments             1
        apc.shm_size                 256M
        apc.slam_defense             On
        apc.stat                     Off
        apc.stat_ctime               Off
        apc.ttl                      7200
        apc.use_request_time         On
        apc.user_entries_hint        4096
        apc.user_ttl                 0
        apc.write_lock               On

    apc.php reveals the following graph, no matter how long the server runs (the cache size fluctuates and hovers at just under 32MB). See image http://i.stack.imgur.com/2bwMa.png You can see that the cache is trying to allocate 256MB, but the brown piece of the pie keeps getting recycled at 32MB. This is confirmed as refreshing the apc.php page shows cached file counts that move up and down (implying that the cache is not holding onto all of its files). Does anyone have an idea of how to get APC to use more than 32 MB for its cache size? Note that the identical behavior occurs for eaccelerator, xcache, and APC. I read here: http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.

    Read the article

  • How to invalidate a single data item in the .net cache in VB

    - by Craig
    I have the following .NET VB code to set and read objects in the cache on a per-user basis (i.e. a bit like session):

        Public Shared Sub CacheSet(ByVal Key As String, ByVal Value As Object)
            Dim userID As String = HttpContext.Current.User.Identity.Name
            HttpContext.Current.Cache(Key & "_" & userID) = Value
        End Sub

        Public Shared Function CacheGet(ByVal Key As Object)
            Dim returnData As Object = Nothing
            Dim userID As String = HttpContext.Current.User.Identity.Name
            returnData = HttpContext.Current.Cache(Key & "_" & userID)
            Return returnData
        End Function

    I use these functions to hold user data that I don't want to fetch from the DB all the time. However, when the data is updated, I want the cached item to be removed so it gets created again. How do I make an item I set disappear, or set it to NOTHING or NULL? Craig
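    A minimal companion sketch for the invalidation side, assuming the same per-user key convention as above (Cache.Remove is the supported way to evict an entry; the next CacheGet then returns Nothing and the data can be rebuilt from the DB):

        Public Shared Sub CacheRemove(ByVal Key As String)
            ' Hypothetical helper, mirroring the key scheme used by CacheSet/CacheGet.
            Dim userID As String = HttpContext.Current.User.Identity.Name
            HttpContext.Current.Cache.Remove(Key & "_" & userID)
        End Sub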

    Read the article

  • cache and web-farm

    - by user285336
    I need to deploy my web application on a web farm. The application has the following code:

        public static X509Certificate2 GetIdCertificate()
        {
            string cacheKey = "Neogov.Insight.IdentityProvider.PrivateKey";
            if (HttpContext.Current.Cache[cacheKey] == null)
            {
                //Load new.
                HttpContext.Current.Cache[cacheKey] = new X509Certificate2(
                    System.Web.HttpContext.Current.Server.MapPath("~/") + "\\ID\\" + Neogov.Insight.IdentityProvider.BLL.IdConfig.Instance.IdPKeyFile,
                    Neogov.Insight.IdentityProvider.BLL.IdConfig.Instance.IdPKeyPassword,
                    X509KeyStorageFlags.MachineKeySet);
            }
            return (X509Certificate2)HttpContext.Current.Cache[cacheKey];
        }

    Will it work or not? If not, how do I solve it, and what is the solution? Thanks
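    For what it's worth, HttpContext.Current.Cache is per application domain, so on a web farm each server simply loads and caches its own copy of the certificate; nothing is shared between servers, but nothing breaks either. A hedged sketch of the same idea with a lock so concurrent first requests on one server don't both hit the file (the path and password are placeholders, not the real config values):

        using System.Security.Cryptography.X509Certificates;
        using System.Web;

        public static class IdCertificateCache
        {
            private static readonly object SyncRoot = new object();
            private const string CacheKey = "Neogov.Insight.IdentityProvider.PrivateKey";

            public static X509Certificate2 GetIdCertificate()
            {
                var cache = HttpRuntime.Cache; // the same store HttpContext.Current.Cache exposes
                var cert = cache[CacheKey] as X509Certificate2;
                if (cert != null) return cert;

                lock (SyncRoot) // one load per server, even under concurrent first hits
                {
                    cert = cache[CacheKey] as X509Certificate2;
                    if (cert == null)
                    {
                        var path = HttpContext.Current.Server.MapPath("~/ID/IdPKey.pfx"); // placeholder
                        cert = new X509Certificate2(path, "placeholder-password", X509KeyStorageFlags.MachineKeySet);
                        cache[CacheKey] = cert;
                    }
                    return cert;
                }
            }
        }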

    Read the article

  • video and file caching with squid lusca?

    - by moon
    hello all i have configured squid lusca on ubuntu 11.04 version and also configured the video caching but the problem is the squid cannot configure the video more than 2 min long and the file of size upto 5.xx mbs only. here is my config please guide me how can i cache the long videos and files with squid: > # PORT and Transparent Option http_port 8080 transparent server_http11 on icp_port 0 > > # Cache Directory , modify it according to your system. > # but first create directory in root by mkdir /cache1 > # and then issue this command chown proxy:proxy /cache1 > # [for ubuntu user is proxy, in Fedora user is SQUID] > # I have set 500 MB for caching reserved just for caching , > # adjust it according to your need. > # My recommendation is to have one cache_dir per drive. zzz > > #store_dir_select_algorithm round-robin cache_dir aufs /cache1 500 16 256 cache_replacement_policy heap LFUDA memory_replacement_policy heap > LFUDA > > # If you want to enable DATE time n SQUID Logs,use following emulate_httpd_log on logformat squid %tl %6tr %>a %Ss/%03Hs %<st %rm > %ru %un %Sh/%<A %mt log_fqdn off > > # How much days to keep users access web logs > # You need to rotate your log files with a cron job. For example: > # 0 0 * * * /usr/local/squid/bin/squid -k rotate logfile_rotate 14 debug_options ALL,1 cache_access_log /var/log/squid/access.log > cache_log /var/log/squid/cache.log cache_store_log > /var/log/squid/store.log > > #I used DNSAMSQ service for fast dns resolving > #so install by using "apt-get install dnsmasq" first dns_nameservers 127.0.0.1 101.11.11.5 ftp_user anonymous@ ftp_list_width 32 ftp_passive on ftp_sanitycheck on > > #ACL Section acl all src 0.0.0.0/0.0.0.0 acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl > to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 563 # https, snews > acl SSL_ports port 873 # rsync acl Safe_ports port 80 # http acl > Safe_ports port 21 # ftp acl Safe_ports port 443 563 # https, snews > acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl > Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port > 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port > 591 # filemaker acl Safe_ports port 777 # multiling http acl > Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl > Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method > CONNECT http_access allow manager localhost http_access deny manager > http_access allow purge localhost http_access deny purge http_access > deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow > localhost http_access allow all http_reply_access allow all icp_access > allow all > > #========================== > # Administrative Parameters > #========================== > > # I used UBUNTU so user is proxy, in FEDORA you may use use squid cache_effective_user proxy cache_effective_group proxy cache_mgr > [email protected] visible_hostname proxy.aacable.net unique_hostname > [email protected] > > #============= > # ACCELERATOR > #============= half_closed_clients off quick_abort_min 0 KB quick_abort_max 0 KB vary_ignore_expire on reload_into_ims on log_fqdn > off memory_pools off > > # If you want to hide your proxy machine from being detected at various site use following via off > > #============================================ > # OPTIONS WHICH AFFECT THE CACHE SIZE / zaib > #============================================ > # If you have 4GB memory in Squid box, we will use formula of 1/3 > # You can adjust it 
according to your need. IF squid is taking too much of RAM > # Then decrease it to 128 MB or even less. > > cache_mem 256 MB minimum_object_size 512 bytes maximum_object_size 500 > MB maximum_object_size_in_memory 128 KB > > #============================================================$ > # SNMP , if you want to generate graphs for SQUID via MRTG > #============================================================$ > #acl snmppublic snmp_community gl > #snmp_port 3401 > #snmp_access allow snmppublic all > #snmp_access allow all > > #============================================================ > # ZPH , To enable cache content to be delivered at full lan speed, > # To bypass the queue at MT. > #============================================================ tcp_outgoing_tos 0x30 all zph_mode tos zph_local 0x30 zph_parent 0 > zph_option 136 > > # Caching Youtube acl videocache_allow_url url_regex -i \.youtube\.com\/get_video\? acl videocache_allow_url url_regex -i > \.youtube\.com\/videoplayback \.youtube\.com\/videoplay > \.youtube\.com\/get_video\? acl videocache_allow_url url_regex -i > \.youtube\.[a-z][a-z]\/videoplayback \.youtube\.[a-z][a-z]\/videoplay > \.youtube\.[a-z][a-z]\/get_video\? acl videocache_allow_url url_regex > -i \.googlevideo\.com\/videoplayback \.googlevideo\.com\/videoplay \.googlevideo\.com\/get_video\? acl videocache_allow_url url_regex -i > \.google\.com\/videoplayback \.google\.com\/videoplay > \.google\.com\/get_video\? acl videocache_allow_url url_regex -i > \.google\.[a-z][a-z]\/videoplayback \.google\.[a-z][a-z]\/videoplay > \.google\.[a-z][a-z]\/get_video\? acl videocache_allow_url url_regex > -i proxy[a-z0-9\-][a-z0-9][a-z0-9][a-z0-9]?\.dailymotion\.com\/ acl videocache_allow_url url_regex -i vid\.akm\.dailymotion\.com\/ acl > videocache_allow_url url_regex -i > [a-z0-9][0-9a-z][0-9a-z]?[0-9a-z]?[0-9a-z]?\.xtube\.com\/(.*)flv acl > videocache_allow_url url_regex -i \.vimeo\.com\/(.*)\.(flv|mp4) acl > videocache_allow_url url_regex -i > va\.wrzuta\.pl\/wa[0-9][0-9][0-9][0-9]? 
acl videocache_allow_url > url_regex -i \.youporn\.com\/(.*)\.flv acl videocache_allow_url > url_regex -i \.msn\.com\.edgesuite\.net\/(.*)\.flv acl > videocache_allow_url url_regex -i \.tube8\.com\/(.*)\.(flv|3gp) acl > videocache_allow_url url_regex -i \.mais\.uol\.com\.br\/(.*)\.flv acl > videocache_allow_url url_regex -i > \.blip\.tv\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v) acl > videocache_allow_url url_regex -i > \.apniisp\.com\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v) acl > videocache_allow_url url_regex -i \.break\.com\/(.*)\.(flv|mp4) acl > videocache_allow_url url_regex -i redtube\.com\/(.*)\.flv acl > videocache_allow_dom dstdomain .mccont.com .metacafe.com > .cdn.dailymotion.com acl videocache_deny_dom dstdomain > .download.youporn.com .static.blip.tv acl dontrewrite url_regex > redbot\.org \.php acl getmethod method GET > > storeurl_access deny dontrewrite storeurl_access deny !getmethod > storeurl_access deny videocache_deny_dom storeurl_access allow > videocache_allow_url storeurl_access allow videocache_allow_dom > storeurl_access deny all > > storeurl_rewrite_program /etc/squid/storeurl.pl > storeurl_rewrite_children 7 storeurl_rewrite_concurrency 10 > > acl store_rewrite_list urlpath_regex -i > \/(get_video\?|videodownload\?|videoplayback.*id) acl > store_rewrite_list urlpath_regex -i \.flv$ \.mp3$ \.mp4$ \.swf$ \ > storeurl_access allow store_rewrite_list storeurl_access deny all > > refresh_pattern -i \.flv$ 10080 80% 10080 override-expire > override-lastmod reload-into-ims ignore-reload ignore-no-cache > ignore-private ignore-auth refresh_pattern -i \.mp3$ 10080 80% 10080 > override-expire override-lastmod reload-into-ims ignore-reload > ignore-no-cache ignore-private ignore-auth refresh_pattern -i \.mp4$ > 10080 80% 10080 override-expire override-lastmod reload-into-ims > ignore-reload ignore-no-cache ignore-private ignore-auth > refresh_pattern -i \.swf$ 10080 80% 10080 override-expire > override-lastmod reload-into-ims ignore-reload ignore-no-cache > ignore-private ignore-auth refresh_pattern -i \.gif$ 10080 80% 10080 > override-expire override-lastmod reload-into-ims ignore-reload > ignore-no-cache ignore-private ignore-auth refresh_pattern -i \.jpg$ > 10080 80% 10080 override-expire override-lastmod reload-into-ims > ignore-reload ignore-no-cache ignore-private ignore-auth > refresh_pattern -i \.jpeg$ 10080 80% 10080 override-expire > override-lastmod reload-into-ims ignore-reload ignore-no-cache > ignore-private ignore-auth refresh_pattern -i \.exe$ 10080 80% 10080 > override-expire override-lastmod reload-into-ims ignore-reload > ignore-no-cache ignore-private ignore-auth > > # 1 year = 525600 mins, 1 month = 10080 mins, 1 day = 1440 refresh_pattern (get_video\?|videoplayback\?|videodownload\?|\.flv?) > 10080 80% 10080 ignore-no-cache ignore-private override-expire > override-lastmod reload-into-ims refresh_pattern > (get_video\?|videoplayback\?id|videoplayback.*id|videodownload\?|\.flv?) > 10080 80% 10080 ignore-no-cache ignore-private override-expire > override-lastmod reload-into-ims refresh_pattern \.(ico|video-stats) > 10080 80% 10080 override-expire ignore-reload ignore-no-cache > ignore-private ignore-auth override-lastmod negative-ttl=10080 > refresh_pattern \.etology\? 10080 > 80% 10080 override-expire ignore-reload ignore-no-cache > refresh_pattern galleries\.video(\?|sz) 10080 > 80% 10080 override-expire ignore-reload ignore-no-cache > refresh_pattern brazzers\? 
10080 > 80% 10080 override-expire ignore-reload ignore-no-cache > refresh_pattern \.adtology\? 10080 > 80% 10080 override-expire ignore-reload ignore-no-cache > refresh_pattern > ^.*(utm\.gif|ads\?|rmxads\.com|ad\.z5x\.net|bh\.contextweb\.com|bstats\.adbrite\.com|a1\.interclick\.com|ad\.trafficmp\.com|ads\.cubics\.com|ad\.xtendmedia\.com|\.googlesyndication\.com|advertising\.com|yieldmanager|game-advertising\.com|pixel\.quantserve\.com|adperium\.com|doubleclick\.net|adserving\.cpxinteractive\.com|syndication\.com|media.fastclick.net).* > 10080 20% 10080 ignore-no-cache ignore-private override-expire > ignore-reload ignore-auth negative-ttl=40320 max-stale=10 > refresh_pattern ^.*safebrowsing.*google 10080 80% 10080 > override-expire ignore-reload ignore-no-cache ignore-private > ignore-auth negative-ttl=10080 refresh_pattern > ^http://((cbk|mt|khm|mlt)[0-9]?)\.google\.co(m|\.uk) 10080 80% > 10080 override-expire ignore-reload ignore-private negative-ttl=10080 > refresh_pattern ytimg\.com.*\.jpg > 10080 80% 10080 override-expire ignore-reload refresh_pattern > images\.friendster\.com.*\.(png|gif) 10080 80% > 10080 override-expire ignore-reload refresh_pattern garena\.com > 10080 80% 10080 override-expire reload-into-ims refresh_pattern > photobucket.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% > 10080 override-expire ignore-reload refresh_pattern > vid\.akm\.dailymotion\.com.*\.on2\? 10080 80% > 10080 ignore-no-cache override-expire override-lastmod refresh_pattern > mediafire.com\/images.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% > 10080 reload-into-ims override-expire ignore-private refresh_pattern > ^http:\/\/images|pics|thumbs[0-9]\. 10080 80% > 10080 reload-into-ims ignore-no-cache ignore-reload override-expire > refresh_pattern ^http:\/\/www.onemanga.com.*\/ > 10080 80% 10080 reload-into-ims ignore-no-cache ignore-reload > override-expire refresh_pattern > ^http://v\.okezone\.com/get_video\/([a-zA-Z0-9]) 10080 80% 10080 > override-expire ignore-reload ignore-no-cache ignore-private > ignore-auth override-lastmod negative-ttl=10080 > > #images facebook refresh_pattern -i \.facebook.com.*\.(jpg|png|gif) 10080 80% 10080 ignore-reload override-expire ignore-no-cache > refresh_pattern -i \.fbcdn.net.*\.(jpg|gif|png|swf|mp3) > 10080 80% 10080 ignore-reload override-expire ignore-no-cache > refresh_pattern static\.ak\.fbcdn\.net*\.(jpg|gif|png) > 10080 80% 10080 ignore-reload override-expire ignore-no-cache > refresh_pattern ^http:\/\/profile\.ak\.fbcdn.net*\.(jpg|gif|png) > 10080 80% 10080 ignore-reload override-expire ignore-no-cache > > #All File refresh_pattern -i \.(3gp|7z|ace|asx|bin|deb|divx|dvr-ms|ram|rpm|exe|inc|cab|qt) > 10080 80% 10080 ignore-no-cache override-expire override-lastmod > reload-into-ims refresh_pattern -i > \.(rar|jar|gz|tgz|bz2|iso|m1v|m2(v|p)|mo(d|v)|arj|lha|lzh|zip|tar) > 10080 80% 10080 ignore-no-cache override-expire override-lastmod > reload-into-ims refresh_pattern -i > \.(jp(e?g|e|2)|gif|pn[pg]|bm?|tiff?|ico|swf|dat|ad|txt|dll) > 10080 80% 10080 ignore-no-cache override-expire override-lastmod > reload-into-ims refresh_pattern -i > \.(avi|ac4|mp(e?g|a|e|1|2|3|4)|mk(a|v)|ms(i|u|p)|og(x|v|a|g)|rm|r(a|p)m|snd|vob) > 10080 80% 10080 ignore-no-cache override-expire override-lastmod > reload-into-ims refresh_pattern -i > \.(pp(t?x)|s|t)|pdf|rtf|wax|wm(a|v)|wmx|wpl|cb(r|z|t)|xl(s?x)|do(c?x)|flv|x-flv) > 10080 80% 10080 ignore-no-cache override-expire override-lastmod > reload-into-ims > > refresh_pattern -i (/cgi-bin/|\?) 
0 0% 0 refresh_pattern ^gopher: > 1440 0% 1440 refresh_pattern ^ftp: 10080 95% 10080 > override-lastmod reload-into-ims refresh_pattern . 1440 > 95% 10080 override-lastmod reload-into-ims

    Read the article

  • [Java] Cluster Shared Cache

    - by GuiSim
    Hi everyone. I am searching for a Java framework that would allow me to share a cache between multiple JVMs. What I would need is something like Hazelcast but without the "distributed" part. I want to be able to add an item to the cache and have it automatically synced to the other "group member" caches. If possible, I'd like the cache to be synced via reliable multicast (or something similar). I've looked at Shoal, but sadly the "Distributed State Cache" seems like an insufficient implementation for my needs. I've looked at JBoss Cache, but it seems a little overkill for what I need to do. I've looked at JGroups, which seems to be the most promising tool for what I need to do. Does anyone have experience with JGroups, preferably used as a shared cache? Any other suggestions? Thanks! EDIT: We're starting tests to help us decide between Hazelcast and Infinispan; I'll accept an answer soon. EDIT: Due to a sudden change in requirements, we don't need a distributed map anymore. We'll be using JGroups as a low-level signaling framework. Thanks everyone for your help.
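    If JGroups ends up being the pick, its ReplicatedHashMap building block is roughly the "map synced to every group member over the cluster transport" described above. A rough sketch (the connect/start ordering and exact signatures vary a little between JGroups releases, so treat this as the shape rather than a drop-in):

        import org.jgroups.JChannel;
        import org.jgroups.blocks.ReplicatedHashMap;

        public class SharedCacheDemo {
            public static void main(String[] args) throws Exception {
                JChannel channel = new JChannel();        // default stack, UDP multicast
                ReplicatedHashMap<String, String> cache =
                        new ReplicatedHashMap<>(channel);
                channel.connect("shared-cache");          // same cluster name on every member
                cache.start(10000);                       // state-transfer timeout (ms)

                cache.put("greeting", "hello");           // replicated to all members
                System.out.println(cache.get("greeting"));

                channel.close();
            }
        }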

    Read the article

  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app right now. That's my major concern, and that's what I want to implement. So, let's say I take a machine and start a new (third) reading process: it will be able to connect to the cache and listen to it, and it will have a full copy of the cache, triplicated (as of now it's duplicated). Now that's a waste from a common-sense standpoint too. The size of the cache is 2 GB, and not going distributed is limiting us. That brings me to Coherence. But we don't have a database as a persistent store either; the archival processes are our persistent store (90 days' worth of data). OK, now multiply that by somewhere around 2 GB * 90 (that's the bare minimum we want to keep). Preliminary/intermediate analysis of Coherence as a solution, and a (supposedly) brilliant thought crossed my mind: why not have this as persistent storage with my distributed cache? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reason, I don't want to go to the DB to replace those flat files. What say? Can Coherence be my savior? Any other stable alternatives? (Coherence is imposed on me by the big guys, FYI)
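    On the persistence part of the question: Coherence can sit a distributed cache in front of any store you write yourself by plugging a CacheStore implementation into a read-write backing map in the cache configuration, so the flat-file archive could in principle play the role a database usually would. A very rough sketch of the shape of such a class (the file handling is a placeholder, not working code):

        import com.tangosol.net.cache.CacheStore;
        import java.util.Collection;
        import java.util.Collections;
        import java.util.Map;

        public class FlatFileCacheStore implements CacheStore {
            public Object load(Object key) {
                // read the serialized entry for 'key' from the archive file (placeholder)
                return null;
            }
            public Map loadAll(Collection keys) {
                return Collections.emptyMap();            // placeholder
            }
            public void store(Object key, Object value) {
                // write or overwrite the entry in the archive file (placeholder)
            }
            public void storeAll(Map entries) { }
            public void erase(Object key) {
                // drop the entry from the archive file (placeholder)
            }
            public void eraseAll(Collection keys) { }
        }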

    Read the article

  • Offline cache copies in Windows file sharing

    - by netvope
    I frequently access media files (music or video) on a remote Windows file share. My Internet connection is not very fast, and I find it a waste of bandwidth when I repeatedly access the same files. For example, I may listen to the same song 30 times in a month. So, I would like to cache the files I've used. I know Windows has an "Always available offline" feature, but I don't think it suits my needs. I don't want to make the whole share "available offline", as the remote Windows file share is huge (in terabytes). Making individual files "available offline" is tedious, as the files are scattered across many different directories. It would be much more convenient if the system could simply cache those I've used. I could also manually make a local copy each time I use a file... but this is even more troublesome than making each file "available offline". Also: the files on the share seldom change; many of the files are rarely used; some of the files are frequently used; and I don't have a list of the most frequently used files. It would be best if I could tell Windows to cache the last accessed 10 GB, but apparently it doesn't have this feature. So I think the best way is to have an SMB/CIFS caching proxy. What do you think? I have a Linux box sitting around. Perhaps I should try to set up Samba 4?

    Read the article

  • how to set cache control to public in iis 7.5

    - by ivymike
    I'm trying to set cache control header to max age using the following snippet in my web.config: <system.webServer> <staticContent> <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" /> </staticContent> </system.webServer> Some how this isn't being reflected in the response. Instead I see a Cache-Control: private header on the responses. I'm using NancyFx framework (which is a layer on top of Asp.net). Is there any thing else I need to do ? Below are the reponse headers I receive: HTTP/1.1 200 OK\r\n Cache-Control: private\r\n Content-Type: application/x-javascript\r\n Content-Encoding: gzip\r\n Last-Modified: Mon, 19 Mar 2012 16:42:03 GMT\r\n ETag: 8ced406593e38e7\r\n Vary: Accept-Encoding\r\n Server: Microsoft-IIS/7.5\r\n Nancy-Version: 0.9.0.0\r\n Set-Cookie: NCSRF=AAEAAAD%2f%2f%2f%2f%2fAQAAAAAAAAAMAgAAADxOYW5jeSwgVmVyc2lvbj0wLjkuMC4wLCBDdWx0dXJlPW5ldXRyYWwsIFB1YmxpY0tleVRva2VuPW51bGwFAQAAABhOYW5jeS5TZWN1cml0eS5Dc3JmVG9rZW4DAAAAHDxSYW5kb21CeXRlcz5rX19CYWNraW5nRmllbGQcPENyZWF0ZWREYXRlPmtfX0JhY2tpbmdGaWVsZBU8SG1hYz5rX19CYWNraW5nRmllbGQHAAcCDQICAAAACQMAAADTubwoldTOiAkEAAAADwMAAAAKAAAAAkpT5d9aTSzL3BAPBAAAACAAAAACPUCyrmSXQhkp%2bfrDz7lZa7O7ja%2fIg7HV9AW6RbPPRLYLAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA%3d; path=/; HttpOnly\r\n X-AspNet-Version: 4.0.30319\r\n Date: Tue, 20 Mar 2012 09:44:20 GMT\r\n Content-Length: 1624\r\n
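    One observation (an educated guess from the Nancy-Version header, not something the excerpt proves): the response appears to be generated by Nancy rather than by the IIS static-file handler, and system.webServer/staticContent/clientCache only applies to files served by the static handler, which would explain the unchanged Cache-Control: private. A hedged sketch of setting the header from Nancy itself instead (bootstrapper override signatures differ slightly across Nancy versions):

        using Nancy;
        using Nancy.Bootstrapper;
        using Nancy.TinyIoc;

        public class Bootstrapper : DefaultNancyBootstrapper
        {
            protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
            {
                base.ApplicationStartup(container, pipelines);

                // Stamp a public max-age on every response; narrow the condition
                // (for example by ctx.Request.Path extension) as needed.
                pipelines.AfterRequest += ctx =>
                {
                    ctx.Response.Headers["Cache-Control"] = "public, max-age=86400";
                };
            }
        }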

    Read the article

  • Can varnish cache files without specific extension or residing in specific directory

    - by pataroulis
    I have a varnish installation to cache (MANY) images that my service serves. It is about 200 images of around 4k per second and varnish happily serves them according to the following rule: if (req.request == "GET" && req.url ~ "\.(css|gif|jpg|jpeg|bmp|png|ico|img|tga|wmf)$") { remove req.http.cookie; return(lookup); } Now, the thing is that I recently added another service on the same server that creates thumbnails to serve but it does not add a specific extension. The files are of the following filename pattern: http://www.example.com/thumbnails/date-of-thumbnail/xxxxxxxxx.xx where xx are numbers, so xxxxxxxxx.xx could be 6482364283.73 (two numbers at the end) (actually this is the timestamp so I can keep extra info in the filename) That has the side effect that varnish does not cache them and I see them constantly being served by apache itself. Even though I can change the format from now on to create thumbs ending in .jpg, is there a way to change the vcl file of my varnish daemon to either cache everything under a directory (the thumbnails directory) or everything with two numbers at its extension? Let me know if I can provide any additional info ! Thanks!
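    A hedged VCL sketch in the same style as the rule quoted above, caching by path prefix (the directory name is taken from the URL pattern in the question) or, alternatively, by the "two digits after the final dot" pattern:

        if (req.request == "GET" && req.url ~ "^/thumbnails/") {
            remove req.http.cookie;
            return(lookup);
        }

        # or: match URLs whose "extension" is two digits (e.g. 6482364283.73)
        if (req.request == "GET" && req.url ~ "\.[0-9][0-9]$") {
            remove req.http.cookie;
            return(lookup);
        }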

    Read the article

  • disk write cache buffer and separate power supply

    - by HugoRune
    Windows has a setting to turn off the write-cache buffer (see image) Turn off Windows write-cache buffer flushing on the device To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure. Is it feasible and economical to get such a "separate power supply" for the internal sata drives of a non-server PC? Under what name is such a power supply sold? I know that there are UPS devices that can be connected to external drives,but what is required to be able to switch this setting safely on for an internal disk? The setting has different descriptions in different version of windows Windows XP: Enable write caching on the disk This setting enables write caching in Windows to improve disk performance, but a power outage or equipment failure might result in data loss or corruption. Windows Server 2003: Enable write caching on the disk Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power. Windows Vista: Enable advanced performance Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power. Windows 7 and 8: Turn off Windows write-cache buffer flushing on the device To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure. This article by Raymond Chen has some more detailed information about what the setting does.

    Read the article

  • Gain Quick Access to the Cache in Firefox

    - by Asian Angel
    Are you looking for a quick and simple way to view the contents of the cache in Firefox? Then you will definitely want to see how easy it can be using the CacheViewer extension. Note: CacheViewer is a front-end app for easily accessing and searching the memory cache. Before Viewing the cache in Firefox using “about:cache” provides some information about the contents but may not be the most efficient method available for some people. CacheViewer in Action Once you have installed the extension there are three easy ways to access your new cache viewer. The first is using the “CacheViewer Command” available in the “Tools Menu” and the second is using the keyboard shortcut “Ctrl + Shift + C”. The third way is by adding a “Toolbar Button” to your browser’s UI. All three work equally well…choose the method that best suits your personal needs. When you access the “CacheViewer Window” this is what it will look like. You may decide to resize it and move (or hide) some of the columns for the best viewing. You can easily scroll through the cache contents and preview images if desired as shown here. If you keep the “CacheViewer Window” open you can refresh it as you browse using the “Refresh Button” in the lower right corner. This is a nice, quick, and very simple way to access the cache on demand and save items to your hard-drive if desired. Note: The “CacheViewer” can also be set to open in a new tab instead (see “Options”). Options Choose whether “CacheViewer” opens in a separate window (default) or in a new tab. Conclusion If you want a quick and simple way to view the cache in Firefox then the CacheViewer extension is just what you have been looking for. Link Download the CacheViewer extension (Mozilla Add-ons)

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    In last week after doing more research on subject matter, I have been wondering about what I have been neglecting all those years to understand write-caching policy, always leaving it on default setting. Write-caching policy improves writing performance and consists of write-back caching and write-cache buffer flushing. This is how I understand all the above, but correct me if I erred somewhere: Write-through cache / Write-through caching itself is not a part of write caching policy per se and it's when data is written to both cache and storage device so if Windows will need that data later again, it is retrieved from cache and not from storage device which means only improved read performance as there is no need for waiting for storage device to read required data again. Since data is still written to storage device, write performance isn't improved and represents no risk of data loss or corruption in case of power failure or system crash while only data in cache gets lost. This option seems to be enabled by default and is recommended for removable devices with no need to use function of "Safely Remove Hardware" on user's part. Write-back caching is similar to above but without writing data to storage device, periodically releasing data from cache and writing to storage device when it is idle. In my opinion this option improves both read and write performance but represents risk if power failure or system crash occurs with the outcome of not only losing data eventually to be written to storage device, but causing file inconsistencies or corrupted file system. Write-back caching cannot be enabled together with write-through caching and it is not recommended to be enabled if no backup power supply is availabe. Write-cache buffer flushing I reckon is similar to write-back caching but enables immediate release and writing of data from cache to storage device right before power outage occurs but I don't know if it applies also to occasional system crash. This option seem to be complementary to write-back cache reducing or potentially eliminating risk of data loss or corruption of file system. I have questions about relevance of last 2 options to today's modern SSDs in order to get best performance and with less wear on SSDs: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM and worth taking the risk of utilizing it by enabling write-back cache? I read somewhere that generally storage device's cache is faster than RAM, but I want to be sure. Additionally I read that write-caching should be enabled since current data that is to be written later to NAND flash is kept for a while in cache and provided there is data that gets modified a lot before finally being written, holding of this data and its periodic release reduces its write times to SSD thereby reducing its wearing. Now regarding to write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. However, once again, I don't know if SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM because if it is, keeping this option enabled would make sense. 
Recently I have posted question about issue with my Intel 330 SSD 120GB which was main reason to do deeper research having suspicion of write-caching policy being the culprit of SSD's freezing issue assuming data being released is what causes freezes. Currently I have write-cache enabled and write-cache buffer flushing disabled because I believe SSD controller's management of write-cache flushing and Windows write-cache buffer flushing are conflicting with each other: Since I want to troubleshoot in small steps to finally determine the source of issue, I have decided to start with write-caching policy and the move to drivers, switching to AHCI later on and finally disabling DIPM (device initiated power management) through registry modification thanks to @TomWijsman

    Read the article

  • What's So Smart About Oracle Exadata Smart Flash Cache?

    - by kimberly.billings
    Want to know what's so "smart" about Oracle Exadata Smart Flash Cache? This three minute video explains how Oracle Exadata Smart Flash Cache helps solve the random I/O bottleneck challenge and delivers extreme performance for consolidated database applications. Exadata Smart Flash Cache is a feature of the Sun Oracle Database Machine. With it, you get ten times faster I/O response time and use ten times fewer disks for business applications from Oracle and third-party providers. Read the whitepaper for more information.

    Read the article

  • Can I copy large files faster without using the file cache?

    - by Veazer
    After adding the preload package, my applications seem to speed up but if I copy a large file, the file cache grows by more than double the size of the file. By transferring a single 3-4 GB virtualbox image or video file to an external drive, this huge cache seems to remove all the preloaded applications from memory, leading to increased load times and general performance drops. Is there a way to copy large, multi-gigabyte files without caching them (i.e. bypassing the file cache)? Or a way to whitelist or blacklist specific folders from being cached?
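    Two hedged options, assuming reasonably recent GNU coreutils and the separately packaged nocache utility (both use posix_fadvise hints so the copy does not displace the page cache; neither is guaranteed to be available on every Ubuntu release):

        # Option 1: the nocache wrapper (sudo apt-get install nocache)
        nocache cp big-image.vdi /media/external/

        # Option 2: GNU dd with the nocache flag, dropping cached pages as it goes
        dd if=big-image.vdi of=/media/external/big-image.vdi bs=1M iflag=nocache oflag=nocache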

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever. When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire. You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure. The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism. So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal. 
Mine looks like this: public interface ICacheProvider {     void Add(string key, object item, int duration);     T Get<T>(string key) where T : class;     void Remove(string key); } Now we’ll create two implementations of this interface… one for Azure cache, one for HttpRuntime: public class AzureCacheProvider : ICacheProvider {     public AzureCacheProvider()     {         _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to      }         private readonly DataCache _cache;     public void Add(string key, object item, int duration)     {         _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));     }     public T Get<T>(string key) where T : class     {         return _cache.Get(key) as T;     }     public void Remove(string key)     {         _cache.Remove(key);     } } public class LocalCacheProvider : ICacheProvider {     public LocalCacheProvider()     {         _cache = HttpRuntime.Cache;     }     private readonly System.Web.Caching.Cache _cache;     public void Add(string key, object item, int duration)     {         _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration), System.Web.Caching.Cache.NoSlidingExpiration);     }     public T Get<T>(string key) where T : class     {         return _cache[key] as T;     }     public void Remove(string key)     {         _cache.Remove(key);     } } Feel free to expand these to use whatever cache features you want. I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject call is a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this: ObjectFactory.Initialize(x =>             {                 x.Scan(scan =>                         {                             scan.AssembliesFromApplicationBaseDirectory();                             scan.WithDefaultConventions();                         });                 if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)                     x.For<ICacheProvider>().Use<AzureCacheProvider>();                 else                     x.For<ICacheProvider>().Use<LocalCacheProvider>();             }); If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider, otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository. 
Let’s say your repository class looks like this: public class MyRepo : IMyRepo {     public MyRepo(ICacheProvider cacheProvider)     {         _context = new MyDataContext();         _cache = cacheProvider;     }     private readonly MyDataContext _context;     private readonly ICacheProvider _cache;     public SomeType Get(int someTypeID)     {         var key = "somename-" + someTypeID;         var cachedObject = _cache.Get<SomeType>(key);         if (cachedObject != null)         {             _context.SomeTypes.Attach(cachedObject);             return cachedObject;         }         var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);         _cache.Add(key, someType, 60000);         return someType;     } ... // more stuff to update, delete or whatever, being sure to remove // from cache when you do so  When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is, Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-“ plus the ID of the object, and then remove it from cache, in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database. So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. That’s something a lot of people debate about using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.

    Read the article

  • Cached ObjectDataSource not firing Select Event even Cache Dependecy Removed

    - by John Polvora
    I have the following scenario. A Page with a DetailsView binded to an ObjectDatasource with cache-enabled. The SelectMethod is assigned at Page_Load event, depending on my User Level Logic. After assigned the selectMethod and Parameters for the ODS, if Cache not exists, then ODS will be cached the first time. The next time, the cache will be applied to the ODS and the select event don't need to be fired since the dataresult is cached. The problem is, the ODS Cache works fine, but I have a Refresh button to clear the cache and rebind the DetailsView. Am I doing correctly ? Below is my code. <asp:DetailsView ID="DetailsView1" runat="server" DataSourceID="ObjectDataSource_Summary" EnableModelValidation="True" EnableViewState="False" ForeColor="#333333" GridLines="None"> </asp:DetailsView> <asp:ObjectDataSource ID="ObjectDataSource_Summary" runat="server" SelectMethod="" TypeName="BL.BusinessLogic" EnableCaching="true"> <SelectParameters> <asp:Parameter Name="idCompany" Type="String" /> <SelectParameters> </asp:ObjectDataSource> <asp:ImageButton ID="ImageButton_Refresh" runat="server" OnClick="RefreshClick" ImageUrl="~/img/refresh.png" /> And here is the code behind public partial class Index : Page { protected void Page_Load(object sender, EventArgs e) { ObjectDataSource_Summary.SelectMethod = ""; ObjectDataSource_Summary.SelectParameters[0].DefaultValue = ""; switch (this._loginData.UserLevel) //this is a struct I use for control permissions e pages behaviour { case OperNivel.SysAdmin: case OperNivel.SysOperator: { ObjectDataSource_Summary.SelectMethod = "SystemSummary"; ObjectDataSource_Summary.SelectParameters[0].DefaultValue = "0"; break; } case OperNivel.CompanyAdmin: case OperNivel.CompanyOperator: { ObjectDataSource_Summary.SelectMethod = "CompanySummary"; ObjectDataSource_Summary.SelectParameters[0].DefaultValue = this._loginData.UserLevel.ToString(); break; } default: break; } } protected void Page_LoadComplete(object sender, EventArgs e) { if (Cache[ObjectDataSource_Summary.CacheKeyDependency] == null) { this._loginData.LoginDatetime = DateTime.Now; Session["loginData"] = _loginData; Cache[ObjectDataSource_Summary.CacheKeyDependency] = _loginData; DetailsView1.DataBind(); } } protected void RefreshClick(object sender, ImageClickEventArgs e) { Cache.Remove(ObjectDataSource_Summary.CacheKeyDependency); } } Can anyone help me? The Select() Event of the ObjectDasource is not firing even I Remove the CacheKey Dependency
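    One detail that may be relevant (inferred from the ObjectDataSource documentation rather than from the excerpt): CacheKeyDependency is empty unless it is assigned, and the cache entry it names has to exist before the ObjectDataSource stores its result, otherwise removing it later invalidates nothing. A rough sketch of that wiring, with a hypothetical key name:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Hypothetical key; assign it before the select/caching happens.
            ObjectDataSource_Summary.CacheKeyDependency = "SummaryCacheKey";
            if (Cache["SummaryCacheKey"] == null)
            {
                Cache["SummaryCacheKey"] = DateTime.Now;   // seed the dependency entry
            }
            // ... existing SelectMethod / parameter wiring ...
        }

        protected void RefreshClick(object sender, ImageClickEventArgs e)
        {
            Cache.Remove("SummaryCacheKey");               // expires the cached select result
            Cache["SummaryCacheKey"] = DateTime.Now;       // re-seed for the next round
            DetailsView1.DataBind();                       // force a fresh select
        }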

    Read the article

  • RAID 0 Volatile Volume Cache Mode configuration

    - by SnippetSpace
    I discovered that in IRST there is an option to set a cache mode for my 3-SSD RAID 0 array. I've read the documentation by Intel and have some questions:

    1. Are there any overall benefits/risks from enabling cache mode?
    2. As I'm on a laptop, would write-back be recommended? I read it increases the chance of data loss on power interruption.
    3. What is the difference between how Windows handles data integrity and how the Intel driver does?
    4. Read-only mode seems to have the benefit of faster reads; does it have any downsides?

    Thanks for your help guys!

    Read the article

  • tweak windows 7 virtual memory and cache / caching settings

    - by bortao
    I'm on Windows 7 64-bit with 4 GB of memory. Whenever I copy or deal with a big amount of data, Windows swaps everything out of memory to the virtual-memory swap file to make room for the data cache. The problem is: I don't really need caching of the data I'm copying; it is being copied only once, so caching it won't help me. On the other hand, swapping out the programs gives me a big lag whenever I want to use those open programs again. What I want: restrict the data cache to a certain amount, let's say 1 GB, or reserve a certain amount of memory, let's say 2 GB, exclusively for running programs' memory. My swap file is on a separate partition, but I still have problems with swapping time.

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >