Search Results

Search found 3752 results on 151 pages for 'offline caching'.

Page 40/151

  • How to cache code in PHP?

    - by Janis Peisenieks
    I am creating a custom form building system, which includes various tokens. These tokens are found using regular expressions and, depending on the type of token, parsed. Some require simple replacement, some require cycles, and so forth. Now, I know that RegExp is quite resource and time consuming, so I would like to be able to parse the code for the form once, creating PHP code, and then save that PHP code for subsequent uses. How would I go about doing this? So far I have only seen output caching. Is there a way to cache commands like echo and cycles like foreach()? Because of misunderstandings, I'll create an example.

    Unparsed template data:

        Thank You for Your interest, [*Title*] [*Firstname*] [*Lastname*]. Here are the details of Your order! [*KeyValuePairs*] Here is the link to Your request: [*LinkToRequest*].

    Parsed template:

        Thank You for Your interest, <?php echo $data->title;?> <?php echo $data->firstname;?> <?php echo $data->lastname;?>. Here are the details of Your order! <?php foreach($data->values as $key=>$value){ echo $key."-".$value; }?> Here is the link to Your request: <?php echo $data->linkToRequest;?>.

    I would then save the parsed template and, instead of parsing the template every time, just pass the $data variable to the already parsed one, which would generate the output.
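    The usual pattern for this is to compile the template to a .php file once and include that file on later requests. Here is a minimal sketch of the idea, assuming a writable cache directory and trusted template sources; the function name, paths, and the htmlspecialchars() escaping are illustrative additions, and loop tokens like [*KeyValuePairs*] would be compiled into foreach blocks the same way:

        function renderTemplate($templateFile, $data, $cacheDir = '/tmp/tpl_cache')
        {
            // Cache key from the template path plus its mtime, so editing
            // the template invalidates the compiled version automatically.
            $cacheFile = $cacheDir . '/' . md5($templateFile . filemtime($templateFile)) . '.php';

            if (!file_exists($cacheFile)) {
                $source = file_get_contents($templateFile);

                // Compile simple tokens such as [*Firstname*] into echo statements.
                $compiled = preg_replace_callback(
                    '/\[\*(\w+)\*\]/',
                    function ($m) {
                        return '<?php echo htmlspecialchars($data->' . strtolower($m[1]) . '); ?>';
                    },
                    $source
                );

                file_put_contents($cacheFile, $compiled, LOCK_EX);
            }

            // $data is in scope for the included, already-compiled template.
            ob_start();
            include $cacheFile;
            return ob_get_clean();
        }

    Because the compiled file is plain PHP, later requests skip the regular expressions entirely, and an opcode cache such as APC can then cache the compiled template like any other script.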

    Read the article

  • Cache problem running two consecutive HTTP GET requests from APP1 to APP2

    - by user502052
    I use Ruby on Rails 3 and I have two applications (APP1 and APP2) working on two subdomains:

        app1.domain.local
        app2.domain.local

    and I am trying to run two consecutive HTTP GET requests from APP1 to APP2, like this:

    Code in APP1 (request):

        response1 = Net::HTTP.get( URI.parse("http://app2.domain.local?test=first&id=1") )
        response2 = Net::HTTP.get( URI.parse("http://app2.domain.local/test=second&id=1") )

    Code in APP2 (response):

        respond_to do |format|
          if <model_name>.find(params[:id]).<field_name> == "first"
            <model_name>.find(params[:id]).update_attribute( <field_name>, <field_value> )
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          elsif <model_name>.find(params[:id]).<field_name> == "second"
            format.xml { render :xml => <model_name>.find(params[:id]).<field_name> }
          end
        end

    After the first request I get the correct XML (response1 is what I expect), but on the second I don't (response2 isn't what I expect). Doing some tests I found that the second time <model_name>.find(params[:id]).<field_name> runs (for the elsif statement) it always returns a blank value, so the code in the elsif branch is never run. Is it possible that the problem is related to caching of <model_name>.find(params[:id]).<field_name>? P.S.: I read about ETag and Conditional GET, but I am not sure that I must use that approach. I would like to keep it simple.

    Read the article

  • iOS - is it possible to cache CGContextDrawImage?

    - by woot586
    I used the timing profile tool to identify that 95% of the time is spent calling the function CGContextDrawImage. In my app there are a lot of duplicate images repeatedly being chopped from a sprite map and drawn to the screen. I was wondering if it is possible to cache the output of CGContextDrawImage in an NSMutableDictionary; then, if the same sprite is requested again, it can just be pulled from the cache rather than doing all the work of clipping and rendering it again. This is what I've got, but I have not been too successful:

    Definitions:

        if(cache == NULL)
            cache = [[NSMutableDictionary alloc] init];
        //Identifier based on the name of the sprite and location within the sprite.
        NSString* identifier = [NSString stringWithFormat:@"%@-%d", filename, frame];

    Adding to cache:

        CGRect clippedRect = CGRectMake(0, 0, clipRect.size.width, clipRect.size.height);
        CGContextClipToRect(context, clippedRect);
        //create a rect equivalent to the full size of the image
        //offset the rect by the X and Y we want to start the crop
        //from in order to cut off anything before them
        CGRect drawRect = CGRectMake(clipRect.origin.x * -1, clipRect.origin.y * -1, atlas.size.width, atlas.size.height);
        //draw the image to our clipped context using our offset rect
        CGContextDrawImage(context, drawRect, atlas.CGImage);
        [cache setValue:UIGraphicsGetImageFromCurrentImageContext() forKey:identifier];
        UIGraphicsEndImageContext();

    Rendering a cached sprite:

    There is probably a better way to render CGImage, which is my ultimate caching goal, but at the moment I'm just looking to successfully render the cached image out; however, this has not been successful.

        UIImage* cachedImage = [cache objectForKey:identifier];
        if(cachedImage){
            NSLog(@"Cached %@", identifier);
            CGRect imageRect = CGRectMake(0, 0, cachedImage.size.width, cachedImage.size.height);
            if (NULL != UIGraphicsBeginImageContextWithOptions)
                UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0);
            else
                UIGraphicsBeginImageContext(imageRect.size);
            //Use draw for now just to see if the image renders out ok
            CGContextDrawImage(context, imageRect, cachedImage.CGImage);
            UIGraphicsEndImageContext();
        }

    Read the article

  • ASP.NET or PHP: Is Memcached useful for storing user-state information?

    - by hamlin11
    This question may expose my ignorance as a web developer, but that wouldn't exactly be a bad thing for me, now would it? I need to store user-state information. Examples of information that I need to store per user (define user: unauthenticated visitor):

        User arrived at the site from Google/Bing/Yahoo
        User utilized the search feature (true/false)
        List of previously visited product pages on the current visit

    It is my understanding that I could store this in the view state, but that causes a problem with page load from the end users' perspective, because a significant amount of non-viewable information is being transferred to and from the end users even though the server is the only side that needs the info. On a similar note, it is my understanding that the session state can be used to store such information, but doesn't this also result in the same information being transferred to the user and stored in their cookie? (Not quite as bad as viewstate, but it does not feel ideal.) This leaves me with either a server-only session storage system or a mem-caching solution. Is memcached the only good option here?
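    For the PHP half of the question, server-only per-visitor state in memcached typically looks like the following minimal sketch, which assumes the pecl memcached extension and a local node; the key name and TTL are illustrative:

        $m = new Memcached();
        $m->addServer('127.0.0.1', 11211);

        session_start();  // only the opaque session id travels in the cookie
        $key = 'visitor-state-' . session_id();

        $state = $m->get($key);
        if ($state === false) {
            // First sight of this visitor: seed the per-user state server-side.
            $state = array(
                'referrer'     => isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '',
                'used_search'  => false,
                'viewed_pages' => array(),
            );
        }
        $state['viewed_pages'][] = $_SERVER['REQUEST_URI'];
        $m->set($key, $state, 1800);  // 30-minute expiry; evictable, so keep nothing critical here

    The same shape applies on the ASP.NET side with an out-of-process or memcached-backed session provider; either way the browser carries only the session identifier, not the state itself.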

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory-mapped files to achieve reading of such huge data. The files are mapped into the process memory space only when needed, and with this the process memory is well under control. But what is observed is that, with memory mapping, the system cache keeps on increasing until it occupies all the available physical memory. This leads to the slowing down of the entire system. My question is: how do I prevent the system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with this the read operations take a considerable amount of time and slow down the application performance. How can I achieve scalability without sacrificing much on performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior. Any good links explaining the same would also be helpful.

    Read the article

  • App_offline.htm, CSS, images, and aspnet_isapi.dll

    - by LookitsPuck
    Hey all! The site I'm working on is using URL rewriting in coordination with aspnet_isapi.dll (everything is mapped to it). I put up my app_offline.htm file, and all the text shows; however, the CSS and images aren't being served. I'm guessing they're being processed by ASP.NET due to the wildcard mapping instead of by IIS. Is this correct? If so, how can I allow IIS to serve these files? Furthermore, here is an issue I can see arising in the web.config rewriter settings:

        <rewrite url="^~/images/network/(.*)/(.*).jpg$" to="~/services/ImageHandler.ashx?type=$1&amp;id=$2"/>
        <rewrite url="^~/image/view/(.*).jpg$" to="~/ServePRView.aspx?id=$1"/>
        <rewrite url="^~/asset/view/(.*).jpg$" to="~/services/ImageHandler.ashx?id=$1&amp;type=asset"/>

    Thanks for the help all,
    -Steve

    Read the article

  • iPhone web apps running as native apps

    - by Henrikb4
    The browser on the iPhone is capable of using advanced web technologies introduced in HTML5. One of these is the app cache, which allows web pages to run on the client, from the cache, without a connection to the internet. Together with Local Storage, you can also save data permanently "in" the page. My question is: would it then be possible to make a website that, when visited and set as a web clip (bookmark on the home screen), could be accessed again at any moment? Using HTML5, JavaScript and CSS, you can make some very good apps and at the same time avoid the pricey developer fee, the harsh app approval process, and the single-platform development. Or am I just dreaming?
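    For reference, the moving parts involved are small. A minimal sketch with hypothetical file names follows; note the manifest must be served with the text/cache-manifest MIME type:

        <!DOCTYPE html>
        <html manifest="app.manifest">
          <!-- page markup; every file it needs is listed in the manifest -->
        </html>

    app.manifest:

        CACHE MANIFEST
        # v1 - change this comment to push a new version to clients
        CACHE:
        index.html
        app.js
        style.css
        NETWORK:
        *

    Once a visitor has loaded the page, a byte-level change to the manifest is what triggers clients to re-download the listed files; data kept in localStorage persists independently of the cache.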

    Read the article

  • visual c# 2008 database application examples

    - by Omar
    Hi, I just have a few weeks of programming with VC# (2008) and I'm trying to build an application (WinForms), and I have the following problem: I need my application to work with and without a connection to the MSSQL database. This sounds like a piece of cake for our friend DataSet, right? I can persist the data as XML or binary until I can reach the database, and the DataSet will magically sync; all without bothering the user. The problem is, the few books I have read just mention that logic like a fairy tale but don't give any practical example of how to do it. Can you point me to one example/demo/whatever I can read or download of an application with (equal or) similar logic?

    Read the article

  • Once an HTML document has a manifest (cache.manifest), how can you remove it?

    - by Michael F
    It seems that once you have a manifest entry, a la:

        <html manifest="cache.manifest">

    then that page (the master entry in the cache) will always be cached (at least by Safari) until the user does something to remove the cache, even if you later remove the manifest attribute from the html tag and update the manifest (by changing something within it), forcing the master entry to be reloaded along with everything else. In other words, if you have:

        index.html (with manifest defined)
        file1.js (referenced in manifest)
        file2.js (referenced in manifest)
        cache.manifest (lists the two js files)

    removing the manifest entry from index.html and modifying the manifest (so it gets expired by the browser and all content reloaded) will not stop this page from behaving as if it's still fully cached. If you view source on index.html you won't see the manifest listed anymore, but the browser will still request only the cache.manifest file, and unless that file's content is changed, no other changes to any files will be shown to the user. It seems like a pretty glaring bug, and it's present on iOS as well as Mac versions of Safari. Has anyone found a way of resetting the page and getting rid of the cache without requiring user intervention?
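    One avenue that works without user intervention comes from the application cache spec itself: if a later fetch of the manifest URL returns a 404 or 410, the browser marks the whole cache obsolete and deletes it. A minimal sketch, assuming cache.manifest can be routed to a PHP script (the routing itself is up to your server configuration):

        <?php
        // Served in place of cache.manifest once you want the app cache gone.
        // A 404/410 response marks the application cache obsolete and the
        // browser purges it; the page is then loaded normally from the network.
        header('HTTP/1.0 410 Gone');
        exit;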

    Read the article

  • Memcached - how to deal with adding/deploying servers

    - by Industrial
    Hi everybody, how do you handle replacing/adding/removing memcached nodes in your production applications? I will have a number of applications that are cloned and customized to each customer's needs, running on one and the same webserver, so I guess there will be a day when some of the nodes will be changed. Here's how memcached is normally populated:

        $m = new Memcached();
        $servers = array(
            array('mem1.domain.com', 11211, 33),
            array('mem2.domain.com', 11211, 67)
        );
        $m->addServers($servers);

    My initial idea is to have the $servers array populated from the database, also cached, but file-based, done once a day or something, with the option to force an update on the next run of the function that holds the addServers() call. However, I am guessing that this might add some additional overhead since disks are quite slow storage... What do you think?
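    That file-based approach is cheap in practice: one stat() per request plus an include that the opcode cache can hold. A minimal sketch of it, with an illustrative table schema and refresh policy:

        $serversFile = __DIR__ . '/memcached-servers.php';

        if (!file_exists($serversFile) || time() - filemtime($serversFile) > 86400) {
            // Rebuild the node list from the database once a day (query is illustrative).
            $pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
            $rows = $pdo->query('SELECT host, port, weight FROM memcached_nodes')
                        ->fetchAll(PDO::FETCH_NUM);

            // Write to a temp file and rename, so readers never see a half-written list.
            $tmp = $serversFile . '.tmp';
            file_put_contents($tmp, '<?php return ' . var_export($rows, true) . ';', LOCK_EX);
            rename($tmp, $serversFile);
        }

        $servers = include $serversFile;
        $m = new Memcached();
        $m->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true); // consistent hashing
        $m->addServers($servers);

    The consistent-hashing option arguably matters more than the lookup cost here: with it, adding or removing a node remaps only a fraction of the keys instead of nearly all of them.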

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use a stored procedure – the most common performance mistake SQL Server developers make

    - by sqlworkshops
    SQL Server estimates the memory requirement at compile time. When a stored procedure or another plan caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spilling over to tempdb and hence poor performance. Common memory-allocating queries are those that perform Sorts and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article; it covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spilling over to tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation / overestimation of memory for Hash Match operations. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spilling over to tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance.

    To read additional articles I wrote, click here. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

        --Example provided by www.sqlworkshops.com
        drop table CustomersState
        go
        create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
        go
        insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
        update CustomersState set State = 'NY' where CustomerID % 100 != 1
        update CustomersState set State = 'WA' where CustomerID % 100 = 1
        go
        update statistics CustomersState with fullscan
        go

    Let's create a stored procedure that joins Customers with the CustomersState table with a predicate on State.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByState @State char(2) as
        begin
        declare @CustomerID int
        select @CustomerID = e.CustomerID from Customers e
        inner join CustomersState es on (e.CustomerID = es.CustomerID)
        where es.State = @State
        option (maxdop 1)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA' – which will select 1% of the data.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 294 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, and hence the memory estimation should be OK. There was no Hash Warning in SQL Profiler. (To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.)

    Now let's execute the stored procedure with parameter value 'NY' – which will select 99% of the data.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    The stored procedure took 2922 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is way different from the actual number of rows, 792000, because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 'WA' in our case. This underestimation leads to spilling over to tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

        exec sp_recompile CustomersByState
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory. The estimated number of rows, 792000, is similar to the actual number of rows, 792000. The better performance of this execution is due to better estimation of memory, avoiding the spill over to tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. This execution was granted more memory (146752 KB) than necessary (6704 KB), because parameter value 'NY' was used for estimation (792000 rows) instead of parameter value 'WA' (8000 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 'NY' in this case. This overestimation leads to poor performance of this Hash Match operation; it might also affect the performance of other concurrently executing queries requiring memory, and hence overestimation is not recommended. The estimated number of rows, 792000, is much more than the actual number of rows, 8000.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. The other possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
        declare @CustomerID int
        select @CustomerID = e.CustomerID from Customers e
        inner join CustomersState es on (e.CustomerID = es.CustomerID)
        where es.State = @State
        option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has good estimation like before: the estimated number of rows, 8000, is similar to the actual number of rows, 8000. The execution with parameter value 'NY' also has good estimation and memory grant like before, because the stored procedure was recompiled with the current set of parameter values: the estimated number of rows, 792000, is similar to the actual number of rows, 792000. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY'.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
        declare @CustomerID int
        select @CustomerID = e.CustomerID from Customers e
        inner join CustomersState es on (e.CustomerID = es.CustomerID)
        where es.State = @State
        option (maxdop 1, optimize for (@State = 'NY'))
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 353 ms with parameter value 'WA'; this is much slower than the optimal execution time of 294 ms we observed previously. This is because of overestimation of memory. The execution with parameter value 'NY' has an optimal execution time like before. The execution with parameter value 'WA' has overestimation of rows because of the optimize for hint value of 'NY'; unlike before, more memory was estimated for this execution based on that hint. The execution with parameter value 'NY' has good estimation because of the optimize for hint value of 'NY': the estimated number of rows, 792000, is similar to the actual number of rows, 792000, and an optimal amount of memory was estimated based on the hint. There was no Hash Warning in SQL Profiler.

    This article covers underestimation / overestimation of memory for Hash Match operations. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spilling over to tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance.

    Summary: a cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead. However, in most cases it might be OK to pay for compilation rather than spilling a sort over to tempdb, which could be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but in case one sorts more data than hinted by the optimize for hint, this will still lead to a spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in the optimize for hint are archived from the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts. Register for the upcoming 3-day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • How to best design a date/geographic proximity query on GAE?

    - by Dane
    Hi all, I'm building a directory for finding athletic tournaments on GAE with web2py and a Flex front end. The user selects a location, a radius, and a maximum date from a set of choices. I have a basic version of this query implemented, but it's inefficient and slow. One way I know I can improve it is by condensing the many individual queries I'm using to assemble the objects into bulk queries; I just learned that was possible. But I'm also thinking about a more extensive redesign that utilizes memcache. The main problem is that I can't query the datastore by location, because GAE won't allow multiple numerical comparison statements (<, <=, >=, >) on different properties in one query. I'm already using one for date, and I'd need TWO to check both latitude and longitude, so it's a no go. Currently, my algorithm looks like this:

    1.) Query by date and select
    2.) Use the destination function from geopy's distance module to find the max and min latitudes and longitudes for the supplied distance
    3.) Loop through results and remove all with lat/lng outside max/min
    4.) Loop through again and use the distance function to check the exact distance, because step 2 will include some areas outside the radius; remove results outside the supplied distance (is this 2/3/4 combination inefficient?)
    5.) Assemble many-to-many lists and attach to objects (this is where I need to switch to bulk operations)
    6.) Return to client

    Here's my plan for using memcache... let me know if I'm way out in left field on this, as I have no prior experience with memcache or server caching in general.

    - Keep a list in the cache filled with "geo objects" that represent all my data. These have five properties: latitude, longitude, event_id, event_type (in anticipation of expanding beyond tournaments), and start_date. This list will be sorted by date.
    - Also keep a dict of pointers in the cache which represent the start and end indices in the cache for all the date ranges my app uses (next week, 2 weeks, month, 3 months, 6 months, year, 2 years).
    - Have a scheduled task that updates the pointers daily at 12am.
    - Add new inserts to the cache as well as the datastore; update pointers.

    Using this design, the algorithm would now look like:

    1.) Use pointers to slice off the appropriate chunk of the list based on the supplied date.
    2-4.) Same as the above algorithm, except with geo objects
    5.) Use a bulk operation to select full tournaments using the remaining geo objects' event_ids
    6.) Assemble many-to-manys
    7.) Return to client

    Thoughts on this approach? Many thanks for reading and any advice you can give. -Dane

    Read the article

  • Output Caching with IIS7 - How To for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx that gets some query string variables and returns an asset. Each url corresponds to a unique asset. In RetrieveBlob.aspx a Cache Profile is set. In Web.Config the profile looks like this (under the system.web tag):

        <caching>
          <outputCache enableOutputCache="true" />
          <outputCacheSettings>
            <outputCacheProfiles>
              <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" />
            </outputCacheProfiles>
          </outputCacheSettings>
        </caching>

    OK, this works fine. When I put a breakpoint in the code-behind of RetrieveBlob.aspx, it gets triggered the first time and not on any of the other times. Now, I throw away the Cache Profile and instead I have this in my Web.Config under system.webServer:

        <caching>
          <profiles>
            <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
          </profiles>
        </caching>

    Now the caching doesn't work anymore. What am I doing wrong? Is it possible to configure, under the caching tag of system.webServer, a caching profile for a dynamic aspx page? I already tried adding something like this:

        <add extension="RetrieveBlob.aspx" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:00:30" varyByQueryString="assetId, assetFileId" />

    But it doesn't work. An example of an url is:

        http://{server}/{application}/trunk/RetrieveBlob.aspx?assetId=31809&assetFileId=11829

    Read the article

  • WordPress Write Cache Problem with Multiple Sessions

    - by Volomike
    I'm working on a content dripper custom plugin in WordPress that my client asked me to build. He says he wants it to catch a page view event, and if it's the right time of day (24 hours since last post), to pull from a resource file and output another post. He needed it to also raise a flag and prevent other sessions from firing that same snippet of code. So, raise some kind of flag saying, "I'm posting that post, go away other process," and then it makes that post and releases the flag again. However, the strangest thing is occurring when placed under load with multiple sessions hitting the site with page views. It's firing instead of one post -- it's randomly doing like 1, 2, or 3 extra posts, with each one thinking that it was the right time to post because it was 24 hours past the time of the last post. Because it's somewhat random, I'm guessing that the problem is some kind of write caching where the other sessions don't see the raised flag just yet until a couple microseconds pass. The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal. But what it's doing is letting those other sessions in. To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that function though is that it's called more than once on a page, and in fact some plugins may call it. I don't know if there's a better event to hook. Even still, even if I find an event to hook that only runs once on a page view, I still have multiple sessions to contend with (different users who may view the page at the same time) and I want only one given session to trigger the content post when the post is due on the schedule. I'm wondering if there are any WordPress plugin devs out there who could suggest another event hook to latch on to, and to figure out another way to raise a flag that all sessions would see. I mean, I could use the shared memory API in PHP, but many hosting plans have that disabled. Can't use a cookie or session var because that's only one single session. About the only thing that might work across hosting plans would be to drop a file as a flag, instead. If the file is present, then one session has the flag. If the file is not present, then other sessions can attempt to get the flag. Sure, I could use the file route, but it's kind of immature in my opinion and I was wondering if there's something in WordPress I could do.
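    The file route can be made race-safe if the file is used as a lock rather than a flag: flock() gives an atomic check-and-acquire, where file_exists() followed by a write still leaves a window for another session to slip through. A minimal sketch with illustrative option and file names; re-checking the schedule after winning the lock is the important part:

        $fh = fopen(WP_CONTENT_DIR . '/content-drip.lock', 'c');
        if ($fh && flock($fh, LOCK_EX | LOCK_NB)) {
            // We hold the lock; re-read the timestamp in case another request
            // already posted while we were acquiring it.
            $last = (int) get_option('content_drip_last_post', 0);
            if (time() - $last >= 86400) {
                // ...pull the next item from the resource file and wp_insert_post() it...
                update_option('content_drip_last_post', time());
            }
            flock($fh, LOCK_UN);
            fclose($fh);
        }
        // flock() with LOCK_NB returning false just means another request got
        // there first; skip quietly and serve the page as normal.

    This works on typical shared hosts because it needs nothing beyond the filesystem, though it does assume all PHP processes share one filesystem, which is not true on multi-server setups.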

    Read the article

  • Action Cache for root URL not working

    - by askegg
    Here's the setup. I have a web site which is essentially a simple CMS. Here is the routes file:

        map.connect ':url', :controller => :pages, :action => :show
        map.root :controller => :pages, :action => :show, :url => "/"

    The pages controller is thus:

        class PagesController < ApplicationController
          before_filter :verify_access, :except => [:show]
          # Cache show action if we are not logged in.
          caches_action :show, :layout => false, :unless => Proc.new { |controller| controller.logged_in? }

          def update
            @page = Page.find(params[:id])
            respond_to do |format|
              expire_action :action => :show, :url => @page.url

    So when a visitor hits "/" it maps to :controller => "pages", :action => "show", :url => "/". This generates a cached version on the first try, then returns the appropriate result thereafter. The log files show:

        Processing PagesController#show (for 127.0.0.1 at 2009-08-02 14:15:01) [GET]
          Parameters: {"action"=>"show", "url"=>"/", "controller"=>"pages"}
        Cached fragment hit: views/out.local// (0.1ms)
        Rendering template within layouts/application
        Filter chain halted as [#<ActionController::Filters::AroundFilter:0x23eb03c @identifier=nil, @method=#<Proc:0x01904858@/Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/caching/actions.rb:64>, @kind=:filter, @options={:only=>#<Set: {"show"}>, :if=>nil, :unless=>#<Proc:0x025137ac@/Users/askegg/Sites/out/app/controllers/pages_controller.rb:6>}>] did_not_yield.
        Completed in 2ms (View: 1, DB: 0) | 200 OK [http://out.local/]

    OK - all good so far. When I update the page, it should expire the cache (see above). The logs show:

        Page Load (0.2ms)   SELECT * FROM "pages" WHERE ("pages"."id" = 3)
        Page Load (0.1ms)   SELECT "pages".id FROM "pages" WHERE ("pages"."url" = '/' AND "pages".domain_id = 1 AND "pages".id <> 3) LIMIT 1
        Expired fragment: views/out.local/index (0.1ms)
        Redirected to http://out.local/pages/3
        Completed in 9ms (DB: 0) | 302 Found [http://out.local/pages/3]

    See the problem? Rails is clearing the cache named "index", but it was set as "/". Naturally this results in the cache NOT being cleared, so visitors are still seeing the old version.

    Read the article

  • Local Variables take 7x longer to access than global variables?

    - by ItzWarty
    I was trying to benchmark the gain/loss of "caching" math.floor, in hopes that I could make calls faster. Here was the test: <html> <head> <script> window.onload = function() { var startTime = new Date().getTime(); var k = 0; for(var i = 0; i < 1000000; i++) k += Math.floor(9.99); var mathFloorTime = new Date().getTime() - startTime; startTime = new Date().getTime(); window.mfloor = Math.floor; k = 0; for(var i = 0; i < 1000000; i++) k += window.mfloor(9.99); var globalFloorTime = new Date().getTime() - startTime; startTime = new Date().getTime(); var mfloor = Math.floor; k = 0; for(var i = 0; i < 1000000; i++) k += mfloor(9.99); var localFloorTime = new Date().getTime() - startTime; document.getElementById("MathResult").innerHTML = mathFloorTime; document.getElementById("globalResult").innerHTML = globalFloorTime; document.getElementById("localResult").innerHTML = localFloorTime; }; </script> </head> <body> Math.floor: <span id="MathResult"></span>ms <br /> var mathfloor: <span id="globalResult"></span>ms <br /> window.mathfloor: <span id="localResult"></span>ms <br /> </body> </html> My results from the test: [Chromium 5.0.308.0]: Math.floor: 49ms var mathfloor: 271ms window.mathfloor: 40ms [IE 8.0.6001.18702] Math.floor: 703ms var mathfloor: 9890ms [LOL!] window.mathfloor: 375ms [Firefox [Minefield] 3.7a4pre] Math.floor: 42ms var mathfloor: 2257ms window.mathfloor: 60ms [Safari 4.0.4[531.21.10] ] Math.floor: 92ms var mathfloor: 289ms window.mathfloor: 90ms [Opera 10.10 build 1893] Math.floor: 500ms var mathfloor: 843ms window.mathfloor: 360ms [Konqueror 4.3.90 [KDE 4.3.90 [KDE 4.4 RC1]]] Math.floor: 453ms var mathfloor: 563ms window.mathfloor: 312ms The variance is random, of course, but for the most part In all cases [this shows time taken]: [takes longer] mathfloor Math.floor window.mathfloor [is faster] Why is this? In my projects i've been using var mfloor = Math.floor, and according to my not-so-amazing benchmarks, my efforts to "optimize" actually slowed down the script by ALOT... Is there any other way to make my code more "efficient"...? I'm at the stage where i basically need to optimize, so no, this isn't "premature optimization"...

    Read the article


  • Trying to get dynamic content hole-punched through Magento's Full Page Cache

    - by rlflow
    I am using Magento Enterprise 1.10.1.1 and need to get some dynamic content on our product pages. I am inserting the current time in a block to quickly see if it is working, but I can't seem to get it through the full page cache. I have tried a variety of implementations found here:

        http://tweetorials.tumblr.com/post/10160075026/ee-full-page-cache-hole-punching
        http://oggettoweb.com/blog/customizations-compatible-magento-full-page-cache/
        http://magentophp.blogspot.com/2011/02/magento-enterprise-full-page-caching.html
        (http://www.exploremagento.com/magento/simple-custom-module.php - custom module)

    Any solutions, thoughts, comments, or advice are welcome. Here is my code:

    app/code/local/Fido/Example/etc/config.xml

        <?xml version="1.0"?>
        <config>
            <modules>
                <Fido_Example>
                    <version>0.1.0</version>
                </Fido_Example>
            </modules>
            <global>
                <blocks>
                    <fido_example>
                        <class>Fido_Example_Block</class>
                    </fido_example>
                </blocks>
            </global>
        </config>

    app/code/local/Fido/Example/etc/cache.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <config>
            <placeholders>
                <fido_example>
                    <block>fido_example/view</block>
                    <name>example</name>
                    <placeholder>CACHE_TEST</placeholder>
                    <container>Fido_Example_Model_Container_Cachetest</container>
                    <cache_lifetime>86400</cache_lifetime>
                </fido_example>
            </placeholders>
        </config>

    app/code/local/Fido/Example/Block/View.php

        <?php
        /**
         * Example View block
         *
         * @codepool Local
         * @category Fido
         * @package  Fido_Example
         * @module   Example
         */
        class Fido_Example_Block_View extends Mage_Core_Block_Template
        {
            private $message;
            private $att;

            protected function createMessage($msg) {
                $this->message = $msg;
            }

            public function receiveMessage() {
                if($this->message != '') {
                    return $this->message;
                }
                else {
                    $this->createMessage('Hello World');
                    return $this->message;
                }
            }

            protected function _toHtml() {
                $html = parent::_toHtml();
                if($this->att = $this->getMyCustom() && $this->getMyCustom() != '') {
                    $html .= '<br />'.$this->att;
                }
                else {
                    $now = date('m-d-Y h:i:s A');
                    $html .= $now;
                    $html .= '<br />' ;
                }
                return $html;
            }
        }

    app/code/local/Fido/Example/Model/Container/Cachetest.php

        <?php
        class Fido_Example_Model_Container_Cachetest extends Enterprise_PageCache_Model_Container_Abstract
        {
            protected function _getCacheId() {
                return 'HOMEPAGE_PRODUCTS' . md5($this->_placeholder->getAttribute('cache_id') . $this->_getIdentifier());
            }

            protected function _renderBlock() {
                $blockClass = $this->_placeholder->getAttribute('block');
                $template = $this->_placeholder->getAttribute('template');
                $block = new $blockClass;
                $block->setTemplate($template);
                return $block->toHtml();
            }

            protected function _saveCache($data, $id, $tags = array(), $lifetime = null) {
                return false;
            }
        }

    app/design/frontend/enterprise/[mytheme]/template/example/view.phtml

        <?php
        /**
         * Fido view template
         *
         * @see Fido_Example_Block_View
         */
        ?>
        <div>
            <?php echo $this->receiveMessage(); ?>
        </div>

    snippet from app/design/frontend/enterprise/[mytheme]/layout/catalog.xml

        <reference name="content">
            <block type="catalog/product_view" name="product.info" template="catalog/product/view.phtml">
                <block type="fido_example/view" name="product.info.example" as="example" template="example/view.phtml" />

    Read the article

  • How do I change the Google Chrome offline icon?

    - by user1105047
    I am using some offline apps for Google Chrome, and because of this an application indicator pops up in the gnome panel. My problem is that this indicator uses the Google Chrome default system tray icon, and I think it looks ugly with my current theme, so I would basically like to change the icon. I can't find the icons that Google Chrome is using for this purpose, but I have no problem finding and changing the other icons used by Google Chrome, like the 24x24 or 22x22 icons that you see in the gnome application menu (which by default look like the application indicator for Google Chrome). It doesn't seem like Google Chrome is taking the icon from e.g. /usr/share/icons/, and I can't change the icon by changing the icon theme in, say, gconf-editor. Is there any way to see the preferences (like you can do with launchers) of the application indicators in the gnome-panel and then locate the indicator icon, or change it in another way?

    Read the article

  • Can I use WUBI as an offline installer for 12.10?

    - by ning
    Can I use Wubi as an offline installer? I already have the .iso file of Ubuntu 12.10 (i386), and inside the iso is wubi.exe. For example: I will create a partition, E: with 30 GB, then I will target E: in Wubi so that it will hold Ubuntu, and use the .iso file to install it on E:.

    My computer: Acer Aspire 5750-6683, i5-2450M processor.

    Is it fine to do a dual boot? I run Windows 7 Home Premium with a custom theme and boot screen; can that affect dual booting? I just don't want to damage any files or either of the OSes. And if I can't boot either of the operating systems, what should I do? Can I avoid doing a clean install if that ever happens?

    Read the article

  • Setting MSN or Yahoo! Messenger status to Invisible or Offline when idle for an hour

    - by Jian Lin
    Where, or how, do I set it up in MSN Messenger or Yahoo! Messenger to automatically switch my status to either "Invisible" or "Offline" when idle for half an hour, or an hour? I know how to set my status to "Away" or "Busy" after 10 minutes, but I can't seem to find a way to set the offline status options without manual intervention.

    Back story: As a software developer, I am very used to turning the computer on for the whole day and not turning it back off (for example, checking email for urgent fixes, fixing the issue, and pushing to the web server). It's not even turned off when I head to sleep, in case I find it hard to fall asleep and come back to check on the computer, or so that it's there ready in the morning to check that everything is okay. If I'm seen as being online for 24 hours of a day, some people see me as weird. Their perception of my value decreases as I'm always there (hard to get = high value; always there = low value). Leaving it on makes everyone in my contacts list think I have nothing better to do all day than sit in front of the computer, even though it's my job and I do admittedly spend more time online than other people. That's why I'd like to find a way to set my status as Invisible or Offline.

    Read the article

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which provides detail about how you influence other people and who influenced you. We'll be fetching data from a few social networking sites (i.e. LinkedIn, Facebook, Twitter, etc.) to analyze how users interact with one another. For that we need to parse the data, store it in a db, and analyze it so that the strength of the relationship between two users can be determined. We'll be accessing the data offline as well, to provide users with accurate results. If we consider Facebook activities, we need access to Facebook users' news feed and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps to take for good performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is parsing of the data and its storage and retrieval with the least overhead. For that I've summarized a few points below:

        Should we switch over to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though none of the team members has experience with them?
        Should we use SQL Server Analysis Services?
        Is there any library other than Json.NET for parsing the data?
        Is it advisable to use any C# library over FQL + GET requests?

    I've tried to provide as much info as possible. Please share your views on the same.

    Read the article

  • Prevent your Silverlight XAP file from caching in your browser.

    - by mbcrump
    If you work with Silverlight daily then you have run into this problem: your XAP file has been cached in your browser and you have to empty your browser cache to resolve it. If you're using Google Chrome then you typically do the following: go to Options –> Clear Browsing History –> Empty the Cache, and finally click Clear Browsing data. As you can see, this is a lot of unnecessary steps. It is even worse when you have a customer that says, "I can't see the new features you just implemented!" and you realize it's a cached XAP problem. I have been struggling with a way to prevent my XAP file from caching inside of a browser for a while now and decided to implement the following solution:

        If the Visual Studio debugger is attached, add a unique query string to the source param to force the XAP file to be refreshed.
        If the Visual Studio debugger is not attached, add the source param as Visual Studio generates it. This is also in case I forget to remove the above code in my production environment.
        I want the ASP.NET code to be inline with my .aspx page. (I do not want a separate code-behind .cs or .vb page attached to the .aspx page.)

    Below is an example of the hosting code generated when you create a new Silverlight project. As a quick refresher, the hard-coded param name="source" specifies the location of your XAP file.

        <form id="form1" runat="server" style="height:100%">
        <div id="silverlightControlHost">
            <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
                <param name="source" value="ClientBin/SilverlightApplication2.xap"/>
                <param name="onError" value="onSilverlightError" />
                <param name="background" value="white" />
                <param name="minRuntimeVersion" value="4.0.50826.0" />
                <param name="autoUpgrade" value="true" />
                <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50826.0" style="text-decoration:none">
                    <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
                </a>
            </object>
            <iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe>
        </div>
        </form>

    We are going to use a little bit of inline ASP.NET to generate the param name="source" dynamically and prevent the XAP file from caching. Let's look at the completed solution:

        <form id="form1" runat="server" style="height:100%">
        <div id="silverlightControlHost">
            <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
                <%
                    string strSourceFile = @"ClientBin/SilverlightApplication2.xap";
                    string param;
                    if (System.Diagnostics.Debugger.IsAttached)
                        //Debugger attached - refresh the XAP file.
                        param = "<param name=\"source\" value=\"" + strSourceFile + "?" + DateTime.Now.Ticks + "\" />";
                    else
                    {
                        //Production mode
                        param = "<param name=\"source\" value=\"" + strSourceFile + "\" />";
                    }
                    Response.Write(param);
                %>
                <param name="onError" value="onSilverlightError" />
                <param name="background" value="white" />
                <param name="minRuntimeVersion" value="4.0.50826.0" />
                <param name="autoUpgrade" value="true" />
                <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50826.0" style="text-decoration:none">
                    <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
                </a>
            </object>
            <iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe>
        </div>
        </form>

    We add the location of our XAP file to strSourceFile, and if the debugger is attached we append DateTime.Now.Ticks to the XAP file source, forcing the browser to download the .XAP. If you view the page source of your Silverlight application, you can verify it worked properly by looking at the param name="source" tag, as shown below:

        <param name="source" value="ClientBin/SilverlightApplication2.xap?634299001187160148" />

    If the debugger is not attached, it will use the standard source tag, as shown below:

        <param name="source" value="ClientBin/SilverlightApplication2.xap"/>

    At this point you may be asking: how do I prevent my XAP file from being cached in my production app? Well, you have two easy options:

    1) I really don't recommend this approach, but you can force the XAP to be refreshed every time with the following code snippet:

        <param name="source" value="ClientBin/SilverlightApplication2.xap?<%=Guid.NewGuid().ToString() %>"/>

    NOTE: You could also substitute Guid.NewGuid().ToString() with anything that creates a random value (I used DateTime.Now.Ticks earlier).

    2) Another solution that I like even better involves checking the XAP creation date and appending it to the param name="source". This method was described by Lars Holm Jenson:

        <%
            string strSourceFile = @"ClientBin/SilverlightApplication2.xap";
            string param;
            if (System.Diagnostics.Debugger.IsAttached)
                param = "<param name=\"source\" value=\"" + strSourceFile + "\" />";
            else
            {
                string xappath = HttpContext.Current.Server.MapPath(@"") + @"\" + strSourceFile;
                DateTime xapCreationDate = System.IO.File.GetLastWriteTime(xappath);
                param = "<param name=\"source\" value=\"" + strSourceFile + "?ignore=" + xapCreationDate.ToString() + "\" />";
            }
            Response.Write(param);
        %>

    As you can see, this problem has been solved. It will work with all web browsers and stubborn proxy servers that are caching your .XAP. If you enjoyed this article then check out my blog for others like this. You may also want to subscribe to my blog or follow me on Twitter.

    Subscribe to my feed

    Read the article

  • ASP.NET MVC 2 disable cache for browser back button in partial views

    - by brainnovative
    I am using Html.RenderAction<CartController>(c => c.Show()); on my master page to display the cart for all pages. The problem is that when I add an item to the cart and then hit the browser back button, it shows the old cart (from cache) until I hit the refresh button or navigate to another page. I've tried this and it works perfectly, but it disables the cache globally for the whole page and for all pages in my site (since this action method is used on the master page). I need to keep caching enabled for several other partial views (action methods) for performance reasons. I wouldn't like to use client-side script with AJAX to refresh the cart (and login view) on page load - but that's the only solution I can think of right now. Does anyone know better?

    Read the article

  • Expire Output Cache ASP.Net MVC

    - by Max Fraser
    I am using the standard OutputCache directive in my MVC app, which works great, but I need to force it to be dumped at certain times. How do I achieve this? The page that gets cached is built from a very simple route {Controller}/{PageName} - so most pages are something like this: /Pages/About-Us. Here is the output cache directive at the top of my .aspx view page, just to be clear:

        <%@ OutputCache Duration="100" VaryByParam="None" %>

    So, in another action on the same controller where content is updated, I need to dump this cache, or even all of it - it's a very small app, so it's not a big deal to dump all cached items.

    Read the article
