Search Results

Search found 7128 results on 286 pages for 'httpcontext cache'.

Page 3 of 286

  • Varnish configuration to only cache for non-logged in users

    - by davidsmalley
    I have a Ruby on Rails application fronted by varnish+nginx. As most of the sites content is static unless you are a logged in user, I want to cache the site heavily with varnish when a user is logged out but only to cache static assets when they are logged in. When a user is logged in they will have the cookie 'user_credentials' present in their Cookie: header, in addition I need to skip caching on /login and /sessions in order that a user can get their 'user_credentials' cookie in the first place. Rails by default does not set a cache friendly Cache-control header, but my application sets a "public,s-max-age=60" header when a user is not logged in. Nginx is set to return 'far future' expires headers for all static assets. The configuration I have at the moment is totally bypassing the cache for everything when logged in, including static assets — and is returning cache MISS for everything when logged out. I've spent hours going around in circles and here is my current default.vcl director rails_director round-robin { { .backend = { .host = "xxx.xxx.xxx.xxx"; .port = "http"; .probe = { .url = "/lbcheck/lbuptest"; .timeout = 0.3 s; .window = 8; .threshold = 3; } } } } sub vcl_recv { if (req.url ~ "^/login") { pipe; } if (req.url ~ "^/sessions") { pipe; } # The regex used here matches the standard rails cache buster urls # e.g. /images/an-image.png?1234567 if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") { unset req.http.cookie; lookup; } else { if (req.http.cookie ~ "user_credentials") { pipe; } } # Only cache GET and HEAD requests if (req.request != "GET" && req.request != "HEAD") { pipe; } } sub vcl_fetch { if (req.url ~ "^/login") { pass; } if (req.url ~ "^/sessions") { pass; } if (req.http.cookie ~ "user_credentials") { pass; } else { unset req.http.Set-Cookie; } # cache CSS and JS files if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") { unset req.http.Set-Cookie; } if (obj.status >=400 && obj.status <500) { error 404 "File not found"; } if (obj.status >=500 && obj.status <600) { error 503 "File is Temporarily Unavailable"; } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } }

    Read the article

  • How do Expires headers and cache manifest rules work together?

    - by Robert K
    I find the W3C's official Offline Web Applications specification to be rather vague about how the cache manifest interacts with headers such as ETag, Expires, or Pragma on cached assets. I know that the manifest should be checked with each request so that the browser knows when to check the other assets for updates. But because the specification doesn't define how the cache manifest interacts with normal cache instructions, I can't predict precisely how the browser will react. Will assets with a future expiration date be refreshed (no matter the cache headers) when the cache manifest is updated? Or, will those assets obey the normal caching rules? Which caching mechanism, HTTP cache versus cache manifest, will take precedence, and when?

    Read the article

  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we are considering right now: Using memcache on all major queries and leaving it alone to do what it does best. Using memcache for the most commonly retrieved data and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though there is a theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on memcache alone and upgrade the memory as the load increases with the number of users? Thanks a lot!
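
    The combined approach described above is essentially a read-through lookup across two tiers, with the database as the authoritative source. A rough sketch of that flow (the question's stack is PHP, but the shape is language-independent; shown here in C#, with delegate parameters standing in for whichever memcache and disk-cache clients end up being used, and all names purely illustrative):

        using System;

        public static class TwoTierCache
        {
            // Read-through lookup: fast in-memory tier first, then the disk tier,
            // then the database; each miss populates the tier above it.
            public static T Get<T>(string key,
                                   Func<string, T> memcacheGet, Action<string, T> memcachePut,
                                   Func<string, T> diskGet, Action<string, T> diskPut,
                                   Func<T> loadFromDatabase) where T : class
            {
                var value = memcacheGet(key);           // 1. fastest tier, bounded by RAM
                if (value != null) return value;

                value = diskGet(key);                   // 2. slower, but survives restarts
                if (value == null)
                {
                    value = loadFromDatabase();         // 3. authoritative source
                    diskPut(key, value);
                }

                memcachePut(key, value);                // promote into the fast tier
                return value;
            }
        }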

    Read the article

  • HttpContext.Current.Request.UserHostName is empty when called from a class

    - by John Galt
    I have various web pages that need to build up a URL to display or place it in an emitted email message. The code I inherited had this value for the name of the webserver in a Public Const in a Public Class called FixedConstants. For example: Public Const cdServerName As String = "WEBSERVERNAME" Trying to improve on this, I wrote this: Public Class UIFunction Public Shared myhttpcontext As HttpContext Public Shared Function cdWebServer() As String Dim s As New StringBuilder("http://") Dim h As String h = String.Empty Try h = Current.Request.ServerVariables("REMOTE_HOST").ToString() Catch ex As Exception Dim m As String m = ex.Message.ToString() 'Ignore this should-not-occur thingy End Try If h = String.Empty Then h = "SomeWebServer" End If s.Append(h) s.Append("/") Return s.ToString() End Function I've tried different things while debugging such as HttpContext.Current.Request.UserHostName and I always get an empty string which pumps out my default string "SomeWebServer". I know Request.UserHostName or Request.ServerVariables("REMOTE_HOST") works when invoked from a page but why does this return empty when invoked from a called method of a class file (i.e. UIFunction.vb)?
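
    REMOTE_HOST is usually empty because IIS only fills it in when reverse DNS lookups are enabled, and it names the client rather than the web server, so it is the wrong variable for composing site URLs. The host the client actually requested is a safer source. A minimal sketch (in C#, although the question's code is VB.NET; the same members exist in both, and the helper name is illustrative):

        using System.Web;

        public static class UrlHelperSketch
        {
            // Builds "http://host[:port]/" from the request the client actually made.
            // Request.Url reflects the Host header, which is what links in pages and
            // e-mail bodies usually need; REMOTE_HOST describes the client, not the server.
            public static string BuildBaseUrl(HttpContext context)
            {
                var url = context.Request.Url;
                return url.Scheme + "://" + url.Authority + "/";
            }
        }

        // Callers pass the context in explicitly instead of relying on a shared static:
        // string baseUrl = UrlHelperSketch.BuildBaseUrl(HttpContext.Current);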

    Read the article

  • cache-coherence MOESI protocol

    - by Yaron
    Processor A owns a cache line which is shared with processor B. What happens when B tries to write to that line? Also, if the line were 'invalid' instead of 'shared', would it make any difference? Thank you.

    Read the article

  • Cache Class compilation error using parent-child relationships and cache sql storage

    - by Fred Altman
    I have the global listed below that I'm trying to create a couple of cache classes using sql stoarage for: ^WHEAIPP(1,26,1)=2 ^WHEAIPP(1,26,1,1)="58074^^SMSNARE^58311" 2)="58074^59128^MPHILLIPS^59135" ^WHEAIPP(1,29,1)=2 ^WHEAIPP(1,29,1,1)="58074^^SMSNARE^58311" 2)="58074^59128^MPHILLIPS^59135" ^WHEAIPP(1,93,1)=2 ^WHEAIPP(1,93,1,1)="58884^^SSNARE^58948" 2)="58884^59128^MPHILLIPS^59135" ^WHEAIPP(1,166,1)=2 ^WHEAIPP(1,166,1,1)="58407^^SMSNARE^58420" 2)="58407^59128^MPHILLIPS^59135" ^WHEAIPP(1,324,1)=2 ^WHEAIPP(1,324,1,1)="58884^^SSNARE^58948" 2)="58884^59128^MPHILLIPS^59135" ^WHEAIPP(1,419,1)=3 ^WHEAIPP(1,419,1,1)="59707^^SSNARE^59708" 2)="59707^^MPHILLIPS^59910,58000^^^^" 3)="59707^59981^SSNARE^60117,53241^^^^" The first two subscripts of the global (Hmo and Keen) make a unique entry. The third subscript (Seq) has a property (IppLineCount) which is the number of IppLines in the fourth subscript level (Seq2). I create the class WIppProv below which is the parent class: /// <PRE> /// ============================ /// Generated Class Definition /// Table: WMCA_B_IPP_PROV /// Generated by: FXALTMAN /// Generated on: 05/21/2012 13:46:41 /// Generator: XWESTblClsGenV2 /// ---------------------------- /// </PRE> Class XFXA.MCA.WIppProv Extends (%Persistent, %XML.Adaptor) [ ClassType = persistent, Inheritance = right, ProcedureBlock, StorageStrategy = SQLMapping ] { /// .HMO Property Hmo As %Integer; /// .KEEN Property Keen As %Integer; /// .SEQ Property Seq As %String; Property IppLineCount As %Integer; Index iMaster On (Hmo, Keen, Seq) [ IdKey, Unique ]; Relationship IppLines As XFXA.MCA.WIppProvLine [ Cardinality = many, Inverse = relWIppProv ]; <Storage name="SQLMapping"> <DataLocation>^WHEAIPP</DataLocation> <ExtentSize>1000000</ExtentSize> <SQLMap name="DBMS"> <Data name="IppLineCount"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>1</Piece> </Data> <Global>^WHEAIPP</Global> <PopulationType>full</PopulationType> <Subscript name="1"> <AccessType>Sub</AccessType> <Expression>{Hmo}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript> <Subscript name="2"> <AccessType>Sub</AccessType> <Expression>{Keen}</Expression> </Subscript> <Subscript name="3"> <AccessType>Sub</AccessType> <LoopInitValue>1</LoopInitValue> <Expression>{Seq}</Expression> </Subscript> <Type>data</Type> </SQLMap> <StreamLocation>^XFXA.MCA.WIppProvS</StreamLocation> <Type>%Library.CacheSQLStorage</Type> </Storage> } This class compiles fine. Next I created the WIppProvLine class listed below and made a parent-child relationship between the two: /// Used to represent a single line of IPP data Class XFXA.MCA.WIppProvLine Extends (%Persistent, %XML.Adaptor) [ ClassType = persistent, Inheritance = right, ProcedureBlock, StorageStrategy = SQLMapping ] { /// .CLM_AMT_ALLOWED node: 0 piece: 6<BR> /// This field should be used in conjunction with the Claim Operator field to /// define a whole claim dollar amount at which a particular claim should be /// flagged with a Pend status. Property ClmAmtAllowed As %String; /// .CLM_LINE_AMT_ALLOWED node: 0 piece: 8<BR> /// This field should be used in conjunction with the Clm Line Operator field to /// define a claim line dollar amount at which a particular claim should be flagged /// with a Pend status. Property ClmLineAmtAllowed As %String; /// .CLM_LINE_OP node: 0 piece: 7<BR> /// A new Table/Column Reference that gives the SIU (Special Investigative Unit) /// the ability to look for claim line dollars above, below, or equal to a set /// amount. 
Property ClmLineOp As %String; /// .CLM_OP node: 0 piece: 5<BR> /// A new Table/Column Reference that gives the SIU (Special Investigative Unit) /// the ability to look for claim dollars above, below, or equal to a set amount. Property ClmOp As %String; Property EffDt As %Date; Property Hmo As %Integer; /// .IPP_REASON node: 0 piece: 10<BR> /// IPP Reason Code Property IppCode As %Integer; Property Keen As %Integer; /// .LAST_CHG_DT node: 0 piece: 4<BR> /// Last Changed Date Property LastChgDt As %Date; /// .PX_DX_CDE_FLAG node: 0 piece: 9<BR> /// A Flag to indicate whether or not Procedure Codes or Diagnosis Codes are to be /// associated with this SIU Flag Type Entry. If the Flag = Y, then control would /// jump to a new screen where the user can enter the necessary codes. Property PxDxCdeFlag As %String; Property Seq As %String; Property Seq2 As %String; Index iMaster On (Hmo, Keen, Seq, Seq2) [ IdKey, PrimaryKey, Unique ]; /// .TERM_DT node: 0 piece: 2<BR> /// Term Date Property TermDt As %Date; /// .USER_INI node: 0 piece: 3 Property UserIni As %String; Relationship relWIppProv As XFXA.MCA.WIppProv [ Cardinality = one, Inverse = IppLines ]; Index relWIppProvIndex On relWIppProv; //Index NewIndex1 On (RelWIppProv, Seq2) [ IdKey, PrimaryKey, Unique ]; <Storage name="SQLMapping"> <ExtentSize>1000000</ExtentSize> <SQLMap name="DBMS"> <ConditionalWithHostVars></ConditionalWithHostVars> <Data name="ClmAmtAllowed"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>6</Piece> </Data> <Data name="ClmLineAmtAllowed"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>8</Piece> </Data> <Data name="ClmLineOp"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>7</Piece> </Data> <Data name="ClmOp"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>5</Piece> </Data> <Data name="EffDt"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>1</Piece> </Data> <Data name="Hmo"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>11</Piece> </Data> <Data name="IppCode"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>10</Piece> </Data> <Data name="LastChgDt"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>4</Piece> </Data> <Data name="PxDxCdeFlag"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>9</Piece> </Data> <Data name="TermDt"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>2</Piece> </Data> <Data name="UserIni"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>3</Piece> </Data> <Global>^WHEAIPP</Global> <Subscript name="1"> <AccessType>Sub</AccessType> <Expression>{Hmo}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript> <Subscript name="2"> <AccessType>Sub</AccessType> <Expression>{Keen}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript> <Subscript name="3"> <AccessType>Sub</AccessType> <Expression>{Seq}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript> <Subscript name="4"> <AccessType>Sub</AccessType> <Expression>{Seq2}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript> <Type>data</Type> </SQLMap> <StreamLocation>^XFXA.MCA.WIppProvLineS</StreamLocation> <Type>%Library.CacheSQLStorage</Type> </Storage> } When I try to compile this one I get the following error: ERROR #5502: Error compiling SQL Table 'XFXA_MCA.WIppProvLine %msg: Table XFXA_MCA.WIppProvLine has the following unmapped (not defined on the data map) fields: relWIppProv' ERROR #5030: An error occurred while compiling class XFXA.MCA.WIppProvLine Detected 1 errors during compilation in 2.745s. What am I doing wrong? Thanks in Advance, Fred

    Read the article

  • C# - Inserting and Removing from Cache

    - by Nir
    1 - If I insert into the Cache by assigning the value: Cache["key"] = value; what's the expiration time? 2 - Removing the same value from the Cache: I want to check whether the value is in the Cache with if (Cache["key"] != null); is it better to remove it from the Cache with Cache.Remove("key") or with Cache["key"] = null?
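
    For reference, assigning through the indexer stores the item with no expiration at all; it stays until memory pressure or an explicit removal evicts it. A minimal C# sketch of the two operations under those assumptions (the 10-minute lifetime is just an example value):

        using System;
        using System.Web;
        using System.Web.Caching;

        public static class CacheSketch
        {
            public static void Store(string key, object value)
            {
                // Cache["key"] = value stores the item with no expiration. To control
                // lifetime, insert with an explicit absolute expiration instead:
                HttpContext.Current.Cache.Insert(
                    key, value, null,
                    DateTime.UtcNow.AddMinutes(10),      // absolute expiration
                    Cache.NoSlidingExpiration);
            }

            public static void Evict(string key)
            {
                // Remove is the supported way to evict. It simply returns null when the
                // key is absent, so the if (Cache["key"] != null) check is unnecessary.
                HttpContext.Current.Cache.Remove(key);
            }
        }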

    Read the article

  • Safe HttpContext.Current.Cache Usage

    - by Burak SARICA
    Hello there, I use the Cache in a web service method like this : var pblDataList = (List<blabla>)HttpContext.Current.Cache.Get("pblDataList"); if (pblDataList == null) { var PBLData = dc.ExecuteQuery<blabla>( @"SELECT blabla"); pblDataList = PBLData.ToList(); HttpContext.Current.Cache.Add("pblDataList", pblDataList, null, DateTime.Now.Add(new TimeSpan(0, 0, 15)), Cache.NoSlidingExpiration, CacheItemPriority.Normal, null); } I wonder, is it thread safe? I mean, the method is called by multiple requesters, and more than one requester may hit the second line at the same time while the cache is empty, so all of these requesters will retrieve the data and add it to the cache. The query takes 5-8 seconds. Would a surrounding lock statement around this code prevent that? (I know multiple queries will not cause an error, but I want to be sure only one query runs.)
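
    A surrounding lock does prevent the duplicate queries, provided the cache check is repeated inside the lock (double-checked locking). A minimal sketch of the pattern, with Func<List<T>> standing in for the LINQ query from the question:

        using System;
        using System.Collections.Generic;
        using System.Web;
        using System.Web.Caching;

        public static class SafeCache
        {
            private static readonly object SyncRoot = new object();

            public static List<T> GetOrLoad<T>(string key, Func<List<T>> load, TimeSpan ttl)
            {
                var cached = (List<T>)HttpContext.Current.Cache.Get(key);
                if (cached != null) return cached;          // fast path, no lock taken

                lock (SyncRoot)
                {
                    // Re-check inside the lock: another request may have filled the
                    // cache while this one was waiting, so only one query ever runs.
                    cached = (List<T>)HttpContext.Current.Cache.Get(key);
                    if (cached == null)
                    {
                        cached = load();                     // the 5-8 second query
                        HttpContext.Current.Cache.Insert(key, cached, null,
                            DateTime.UtcNow.Add(ttl), Cache.NoSlidingExpiration);
                    }
                    return cached;
                }
            }
        }

    The trade-off is that all requests for this key queue behind one lock while the query runs; for a single expensive query that is usually acceptable.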

    Read the article

  • Enabling OUD Entry Cache for large static groups

    - by Sylvain Duloutre
    Oracle Unified Directory can take advantage of several caches to improve performance, especially the so-called database cache and the file system cache. In addition to that, it is possible to use an entry cache to cache LDAP entries. By default, the entry cache is not used. In specific deployments involving large static groups, it may be worth loading the group entries into the entry cache to speed up group membership and group-based ACI evaluation. To do so, run the following commands: First, specify which entries should reside in the entry cache. In the command below, only entries matching the LDAP filter "(|(objctclass=groupOfNames)(objectclass=groupOfUniqueNames))" will be stored in the entry cache. dsconfig set-entry-cache-prop \ --cache-name FIFO \ --add include-filter:\(\|\(objctclass=groupOfNames\)\(objectclass=groupOfUniqueNames\)\) \ --port <ADMIN_PORT> \ --bindDN cn=Directory\ Manager \ --bindPassword ****** \ --no-prompt Then enable the entry cache: dsconfig set-entry-cache-prop \ --cache-name FIFO \ --set enabled:true \ --port <ADMIN_PORT> \ --bindDN cn=Directory\ Manager \ --bindPassword ****** \ --no-prompt In addition to that, you can control how much memory the entry cache can use: oud@s96sec1d0-v3:/application/oud : dsconfig -X -n -p <ADMIN PORT> -D "cn=Directory Manager" -w <password> get-entry-cache-prop --cache-name FIFO Property : Value(s) -------------------:----------------------------------------------------------- cache-level : 1 enabled : true exclude-filter : - include-filter : (|(objctclass=groupOfNames)(objectclass=groupOfUniqueNames)) max-entries : 2147483647 max-memory-percent : 90 You can change the max-entries and max-memory-percent properties to control the entry cache size using the dsconfig set-entry-cache-prop command.

    Read the article

  • Changing Flash player of Firefox cache

    - by Prasenjit Chatterjee
    I want to relocate the Firefox Flash player plugin cache so that it doesn't eat my C: drive space when I watch YouTube videos. I successfully changed the Firefox cache location by opening about:config and creating a new string key, "browser.cache.disk.parent_directory", with the new cache location as its value. But this doesn't affect online streaming content such as YouTube videos. Please tell me where that content gets stored and how to move its cache to another drive.

    Read the article

  • Elmah for non-HTTP protocol applications OR Elmah without HttpContext

    - by Josh
    We are working on a 3-tier application, and we've been allowed to use the latest and greatest (MVC2, IIS7.5, WCF, SQL2k8, etc). The application tier is exposed to the various web applications by WCF services. Since we control both the service and client side, we've decided to use net.tcp bindings for their performance advantage over HTTP. We would like to use ELMAH for the error logging, both on the web apps and services. Here's my question. There's lots of information about using ELMAH with WCF, but it is all for HTTP bindings. Does anyone know if/how you can use ELMAH with WCF services exposing non-HTTP endpoints? My guess is no, because ELMAH wants the HttpContext, which requires the AspNetCompatibilityEnabled flag to be true in the web.config. From MSDN: IIS 7.0 and WAS allows WCF services to communicate over protocols other than HTTP. However, WCF services running in applications that have enabled ASP.NET compatibility mode are not permitted to expose non-HTTP endpoints. Such a configuration generates an activation exception when the service receives its first message. If it is true that you cannot use ELMAH with WCF services having non-HTTP endpoints, then the follow-up question is: Can we use ELMAH in such a way that doesn't need HttpContext? Or more generally (so as not to commit the thin metal ruler error), is there ANY way to use ELMAH with WCF services having non-HTTP endpoints?
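
    One pattern that is often suggested for logging where no HttpContext exists is to resolve the configured error log directly instead of going through ErrorSignal.FromCurrentContext(). A minimal sketch, assuming ELMAH's <errorLog> section is configured in the service's config file and that the ELMAH build in use exposes these members:

        using System;
        using Elmah;

        public static class ServiceErrorLogger
        {
            public static void LogError(Exception ex)
            {
                // ErrorLog.GetDefault(null) reads the <elmah>/<errorLog> configuration
                // without needing a per-request HttpContext, so it can be called from
                // an error handler on a net.tcp endpoint.
                ErrorLog.GetDefault(null).Log(new Error(ex));
            }
        }

    This covers only the logging side of ELMAH; the error-mailing and filtering modules still assume the ASP.NET pipeline.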

    Read the article

  • How to invalidate cache when benchmarking?

    - by Michael Buen
    I have this code, that when swapping the order of UsingAs and UsingCast, their performance also swaps. using System; using System.Diagnostics; using System.Linq; using System.IO; class Test { const int Size = 30000000; static void Main() { object[] values = new MemoryStream[Size]; UsingAs(values); UsingCast(values); Console.ReadLine(); } static void UsingCast(object[] values) { Stopwatch sw = Stopwatch.StartNew(); int sum = 0; foreach (object o in values) { if (o is MemoryStream) { var m = (MemoryStream)o; sum += (int)m.Length; } } sw.Stop(); Console.WriteLine("Cast: {0} : {1}", sum, (long)sw.ElapsedMilliseconds); } static void UsingAs(object[] values) { Stopwatch sw = Stopwatch.StartNew(); int sum = 0; foreach (object o in values) { if (o is MemoryStream) { var m = o as MemoryStream; sum += (int)m.Length; } } sw.Stop(); Console.WriteLine("As: {0} : {1}", sum, (long)sw.ElapsedMilliseconds); } } Outputs: As: 0 : 322 Cast: 0 : 281 When doing this... UsingCast(values); UsingAs(values); ...Results to this: Cast: 0 : 322 As: 0 : 281 When doing just this... UsingAs(values); ...Results to this: As: 0 : 322 When doing just this: UsingCast(values); ...Results to this: Cast: 0 : 322 Aside from running them independently, how to invalidate the cache so the second code being benchmarked won't receive the cached memory of first code? Benchmarking aside, just loved the fact that modern processors do this caching magic :-)
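
    One common heuristic is to sweep a buffer larger than the last-level cache between the two measurements, so the first run's data is no longer resident when the second run starts. A sketch of that idea (the 64 MB size is an assumption about the CPU, not something the question specifies):

        using System;

        static class CacheEvictor
        {
            // Larger than a typical last-level cache, so the sweep displaces older lines.
            private static readonly byte[] Scratch = new byte[64 * 1024 * 1024];

            public static void Evict()
            {
                long sum = 0;
                for (int i = 0; i < Scratch.Length; i += 64)   // one read per 64-byte cache line
                    sum += Scratch[i];
                GC.KeepAlive(sum);   // keep the loop from being optimized away
            }
        }

        // Usage between the two benchmark calls:
        // UsingAs(values);  CacheEvictor.Evict();  UsingCast(values);

    A warm-up call to each method before timing also helps, since part of the first run's cost is JIT compilation rather than the data cache.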

    Read the article

  • Throttling Cache Events

    - by dxfelcey
    The real-time eventing feature in Coherence is great for relaying state changes to other systems or to users. However, sometimes not all changes need to or can be sent to consumers. For instance; If rapid changes cannot be consumed or interpreted as fast as they are being sent. A user looking at changing Stock prices may only be able to interpret and react to 1 change per second. A client may be using low bandwidth connection, so rapidly sending events will only result in them being queued and delayed A large number of clients may need to be notified of state changes and sending 100 events p/s to 1000 clients cannot be supported with the available hardware, but 10 events p/s to 1000 clients can. Note this example assumes that many of the state changes are to the same value. One simple approach to throttling Coherence cache events is to use a cache store to capture changes to one cache (data cache) and insert those changes periodically in another cache (events cache). Consumers interested in state changes to entires in the first cache register an interest (event listener) against the second event cache. By using the cache store write-behind feature rapid updates to the same cache entry are coalesced so that updates are merged and written at the interval configured to the event cache. The time interval at which changes are written to the events cache can easily be configured using the write-behind delay time in the cache configuration, as shown below.   <caching-schemes>     <distributed-scheme>       <scheme-name>CustomDistributedCacheScheme</scheme-name>       <service-name>CustomDistributedCacheService</service-name>       <thread-count>1</thread-count>       <backing-map-scheme>         <read-write-backing-map-scheme>           <scheme-name>CustomRWBackingMapScheme</scheme-name>           <internal-cache-scheme>             <local-scheme />           </internal-cache-scheme>           <cachestore-scheme>             <class-scheme>               <scheme-name>CustomCacheStoreScheme</scheme-name>               <class-name>com.oracle.coherence.test.CustomCacheStore</class-name>               <init-params>                 <init-param>                   <param-type>java.lang.String</param-type>                   <param-value>{cache-name}</param-value>                 </init-param>                 <init-param>                   <param-type>java.lang.String</param-type>                   <!-- The name of the cache to write events to -->                   <param-value>cqc-test</param-value>                 </init-param>               </init-params>             </class-scheme>           </cachestore-scheme>           <write-delay>1s</write-delay>           <write-batch-factor>0</write-batch-factor>         </read-write-backing-map-scheme>       </backing-map-scheme>       <autostart>true</autostart>     </distributed-scheme>   </caching-schemes> The cache store implementation to perform this throttling is trivial and only involves overriding the basic cache store functions. 
public class CustomCacheStore implements CacheStore { private String publishingCacheName; private String sourceCacheName; public CustomCacheStore(String sourceCacheStore, String publishingCacheName) { this.publishingCacheName = publishingCacheName; this.sourceCacheName = sourceCacheName; } @Override public Object load(Object key) { return null; } @Override public Map loadAll(Collection keyCollection) { return null; } @Override public void erase(Object key) { if (sourceCacheName != publishingCacheName) { CacheFactory.getCache(publishingCacheName).remove(key); CacheFactory.log("Erasing entry: " + key, CacheFactory.LOG_DEBUG); } } @Override public void eraseAll(Collection keyCollection) { if (sourceCacheName != publishingCacheName) { for (Object key : keyCollection) { CacheFactory.getCache(publishingCacheName).remove(key); CacheFactory.log("Erasing collection entry: " + key, CacheFactory.LOG_DEBUG); } } } @Override public void store(Object key, Object value) { if (sourceCacheName != publishingCacheName) { CacheFactory.getCache(publishingCacheName).put(key, value); CacheFactory.log("Storing entry (key=value): " + key + "=" + value, CacheFactory.LOG_DEBUG); } } @Override public void storeAll(Map entryMap) { if (sourceCacheName != publishingCacheName) { CacheFactory.getCache(publishingCacheName).putAll(entryMap); CacheFactory.log("Storing entries: " + entryMap, CacheFactory.LOG_DEBUG); } } }  As you can see each cache store operation on the data cache results in a similar operation on event cache. This is a very simple pattern which has a lot of additional possibilities, but it also has a few drawbacks you should be aware of: This event throttling implementation will use additional memory as a duplicate copy of entries held in the data cache need to be held in the events cache too - 2 if the event cache has backups A data cache may already use a cache store, so a "multiplexing cache store pattern" must also be used to send changes to the existing and throttling cache store.  If you would like to try out this throttling example you can download it here. I hope its useful and let me know if you spot any further optimizations.

    Read the article

  • How to use HTTPContext for SPList

    - by Azra
    I have the following requirement. public void GetSubSite(SPWeb site) { site.AllowUnsafeUpdates = true; SPList destinationList = site.Lists[TASKS]; SPWebCollection subSitesCollection = site.Webs; foreach (SPWeb subSite in subSitesCollection) { //..... Now I want to display the destinationList as a web part, and every time the user loads the page it should be populated freshly. How can I achieve that with HttpContext?

    Read the article

  • How do I handle 3rd party search result data (via cache)

    - by reikyoushin
    I have a search function on my site and it is taking data from 6 different 3rd party resources. The problem is, it takes too long to request the data over and over again on the results page. I've read answers to questions like this on SO saying session is not a good choice, but for me 'memcache' is not an option, because the server doesn't have memcached installed and I have no way to install it now. Is there any other approach to this? Storing in the database seems inappropriate because the data depends on the search terms requested. What I've been thinking of is writing a file on the server that would act as a cache for these results, but I don't know how I would decide when to delete it afterwards.
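
    A file cache keyed on the search terms works, and the "when do I delete it" question is usually answered by comparing the file's age against a TTL at read time. A sketch of that shape (the question is about PHP, but the approach is language-independent; shown here in C# to match the other examples, with illustrative names):

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        public static class SearchResultFileCache
        {
            public static string GetOrFetch(string searchTerms, TimeSpan ttl, Func<string> fetchFromProviders)
            {
                // One cache file per distinct query: hash the terms to get a safe file name.
                string hash;
                using (var sha = SHA1.Create())
                    hash = BitConverter.ToString(sha.ComputeHash(Encoding.UTF8.GetBytes(searchTerms)))
                                       .Replace("-", "");
                var path = Path.Combine(Path.GetTempPath(), "search_" + hash + ".cache");

                // Treat the file as stale once it is older than the TTL.
                if (File.Exists(path) && DateTime.UtcNow - File.GetLastWriteTimeUtc(path) < ttl)
                    return File.ReadAllText(path);

                var fresh = fetchFromProviders();     // the slow calls to the six providers
                File.WriteAllText(path, fresh);
                return fresh;
            }
        }

    A periodic cleanup job that prunes files older than the TTL keeps the cache directory from growing without bound.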

    Read the article

  • add packages to squid-deb-proxy cache

    - by zpletan
    To save bandwidth and data on my Internet plan, I have installed squid-deb-proxy on a desktop, and the client on it and a few other machines I've got. However, based on the post that put me onto this, it sounds like if I take my laptop to a different network and update it there, the downloaded updates will NOT be automatically copied back to the squid-deb-proxy server when I get back on my network. Assuming that this is correct (I will test it later), is there a way I can stick these packages into the cache so I don't have to download them again for the other machines on the network?

    Read the article

  • joomla sometimes messes up urls, probably cache involved

    - by Bakaburg
    For a bit now I've been having this problem and I really cannot get the hang of it... Every once in a while my Joomla site messes up link URLs; for example, something like this: http://www.sism.org/index.php?option=com_comprofiler&task=userslist&listid=4&Itemid=123 becomes something like this: http://www.sism.org/index.php/component/k2/administrator/components/com_dump/assets/css/images/stories/inrilievo/sism/htm/index.php?option=com_comprofiler&task=userslist&listid=4&Itemid=123 The new page has the right content, but the CSS and other linked resources are missing. Usually I solve the problem by deleting all the caches and turning caching off and on again. Of course this is pretty annoying, especially for my association. Does anyone have any clue about this? Judging from the URLs, the components involved seem to be K2 and Jdump. Thanks

    Read the article

  • Tell the kernel to strongly cache a particular directory

    - by silviot
    This question is a rephrasing of Optimizing EXT4 performance. I have a directory that contains build files, most of them very small, but totaling 5.6 GB. I usually access the same subset of files (some thousands, for some tens of megabytes) over and over again. The subset changes daily (different projects, different versions of libraries). What takes longest when I use it seems to be disk seeks. For example, if I do a du twice, the second run takes as much time as the first, and disk activity is similar. Ideally I'd like to tell the kernel to allocate X MB to the metadata and Y MB to the data in that folder, like the options for the NFS cache. Is this possible in some way, other than mounting NFS from localhost and caching it on a ramdisk?

    Read the article

  • Open table cache in MySQL

    - by vvanscherpenseel
    I have my open table cache set to 1800 and I have a total of 1112 tables. MySQL Tuning Primer reports that 100% of my table cache is used, yet my table cache hit rate is 5%. I understand that this happens because concurrent connections all open tables. I think I should raise the cache limit. I understand that the cache size is limited by the file descriptor limit of my operating system, but are there any other practical limitations I should be aware of? Searching Google or this very website mostly yields posts explaining the connection factor, or turns up indecisive answers. My question: can I safely increase the open table cache limit? Is there a maximum?

    Read the article

  • Help create a unit test for the response headers, specifically Cache-Control, to determine if caching is disabled

    - by VajNyiaj
    Scenario: I have a base controller which disables caching within the OnActionExecuting override. protected override void OnActionExecuting(ActionExecutingContext filterContext) { filterContext.HttpContext.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1)); filterContext.HttpContext.Response.Cache.SetValidUntilExpires(false); filterContext.HttpContext.Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches); filterContext.HttpContext.Response.Cache.SetCacheability(HttpCacheability.NoCache); //IE filterContext.HttpContext.Response.Cache.SetNoStore(); //FireFox } How can I create a Unit Test to test this behavior?
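
    One way is to run the controller's action-executing pipeline against a mocked HttpContextBase and verify the calls made on the mocked HttpCachePolicyBase. A sketch using Moq and xUnit (both are assumptions; any test and mocking framework works, and MyBaseController stands in for the base controller from the question, via a small concrete subclass if it is abstract). The protected override is reached here through the controller's IActionFilter implementation:

        using System.Web;
        using System.Web.Mvc;
        using Moq;
        using Xunit;

        public class NoCacheBaseControllerTests
        {
            [Fact]
            public void OnActionExecuting_SetsNoCacheHeaders()
            {
                // Arrange: mock Response.Cache so the Set* calls can be verified.
                var cachePolicy = new Mock<HttpCachePolicyBase>();
                var response = new Mock<HttpResponseBase>();
                response.SetupGet(r => r.Cache).Returns(cachePolicy.Object);
                var httpContext = new Mock<HttpContextBase>();
                httpContext.SetupGet(c => c.Response).Returns(response.Object);

                var controller = new MyBaseController();   // hypothetical concrete subclass
                var filterContext = new ActionExecutingContext { HttpContext = httpContext.Object };

                // Act: drive the protected override via the IActionFilter implementation.
                ((IActionFilter)controller).OnActionExecuting(filterContext);

                // Assert: the cache policy was told not to cache.
                cachePolicy.Verify(c => c.SetCacheability(HttpCacheability.NoCache), Times.Once());
                cachePolicy.Verify(c => c.SetNoStore(), Times.Once());
            }
        }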

    Read the article

  • Elastic versus Distributed in caching.

    - by Mike Reys
    Until now, I hadn't heard about Elastic Caching. Today I read Mike Gualtieri's blog entry. I immediately thought about Oracle Coherence and got a bit of a scare while reading it. Elastic Caching is the next step after Distributed Caching. As we've always positioned Coherence as a Distributed Cache, I thought for a brief instant that Oracle had missed a new trend/technology. But then I started reading the characteristics of an Elastic Cache. Forrester's definition: software infrastructure that provides application developers with data caching services that are distributed across two or more server nodes, that consistently perform as volumes grow, that can be scaled without downtime, and that provide a range of fault-tolerance levels. Hey, wait a minute, doesn't Coherence fulfill all these requirements? Oh yes, I think it does! The next definition in the article is about Elastic Application Platforms. This is mainly more of the same, with the addition of code execution. Now, there is analytics functionality in Oracle Coherence. The analytics capability provides data-centric functions like distributed aggregation, searching and sorting. Coherence also provides continuous querying and event handling. I think that when it comes to providing an Elastic Application Platform (as in the Forrester definition), Oracle is close, nearly there. And what's more, as the Elastic Platform is the next big step towards the big C word, Oracle Coherence makes you cloud-ready ;-) There you go! Find more info on Oracle Coherence here.

    Read the article

  • HttpContext User value changing on its own?

    - by larryq
    Hi everyone, I'm working on an ASP.NET 2.0 application and am having a strange issue involving the HttpContext User. It appears to be changing on its own when I go to a particular page/directory. All of our pages inherit from a base page. In that base page's Page_Load() method we run an authorization check to see if the user can see the page they're going to. We retrieve the user to check against with this code: GenericPrincipal objPrincipal = (GenericPrincipal)Context.User; When I go to this unusual directory, the User value isn't me; it's some other username I've never heard of. This username isn't authorized to see these pages, so authorization fails. This mysterious directory isn't a virtual web, just a regular directory in our website; however, I've noticed it has its own Web.config file. I'm guessing that's a cause of the trouble here. My question is: how can I investigate further to determine what's changing the User value when I go to this directory?

    Read the article

  • How do I empty Drupal Cache (without Devel)

    - by alexanderpas
    Okay... Seems I can't find it with Google... so here you go, SO ;) How do I empty the Drupal caches: without the Devel module, without running some PHP statement in a new node, etc., and without going into the database itself? Effectively, how do you instruct a luser to clear their caches?

    Read the article

  • Buffer cache spillover: only buffers

    - by Liu Maclean(???)
    (The original post is written in Chinese and arrives badly garbled in this copy; only fragments are recoverable.) It discusses the message "Buffer cache spillover: only 32768 of 117227 buffers available", which can be reported while redo is applied during recovery at database open. Crash or instance recovery applies redo changes through the buffer cache, using available buffers as recovery buffers; when too few buffers are available, recovery spills over (spillover recovery). The post relates this behavior to the size of the buffer cache (db_cache_size), mentions the corresponding alert.log output, and points readers to the OTN "Ask Maclean" forum for the full discussion.

    Read the article
