Search Results

Search found 38689 results on 1548 pages for 'page caching'.


  • Make two page navigations on top and bottom of a list

    - by sees
    I'm creating a simple PHP page that reads a CSV file and displays selected columns to users in pages. Currently I read each line and display it immediately, so I only know the total number of lines after reading (and searching) the entire file. What I want is a page navigation at both the top and the bottom of the list, like this:

    Page 1|2|3|4
    Field 1|Field 2|Field 3|Field 4|Field 5 ... Field n
    Row 1
    Row 2
    ...
    Row n
    Page 1|2|3|4

    After displaying all rows and the bottom page nav, I used jQuery's insertBefore to insert another page nav at the top. The problems are: 1) the top page nav is not displayed in IE8, although it appears after pressing F5 (this works in FF and Chrome); 2) with insertBefore, the top page nav suddenly pops up after the bottom one has been displayed, which doesn't look natural. Any suggestions?

    Read the article

  • Word 2007 COM - Can't directly access a page when Word is set to invisible

    - by Robbie
    I'm using Word 2007 via COM from PHP 5.2 / Apache 2.0 on a Windows machine. The goal is to programmatically render JPEG thumbnails of each page in a Word document. The following code works correctly if you set $word->Visible to 1:

    try {
        $word = new COM('word.application');
        $word->Visible = 0;
        $word->Documents->Open("C:\\test.doc");
        echo "Number of pages: " . $word->ActiveDocument->ActiveWindow->ActivePane->Pages->Count() . "</br>";
        $i = 1;
        foreach ($word->ActiveDocument->ActiveWindow->ActivePane->Pages as $page) {
            echo "Page number: $i </br>";
            $i++;
        }
        // get the EMF image of the page
        $data = $word->ActiveDocument->ActiveWindow->ActivePane->Pages->Item(3)->EnhMetaFileBits;
        $word->ActiveDocument->Close();
        $word->Quit();
    } catch (Exception $e) {
        echo "Exception: " . $e->getMessage();
    }

    The test document I'm using contains 35 pages. The code displays the correct number of pages, but the foreach loop only iterates over one page, and I can only directly access pages 1 and 2 in the Pages->Item() collection. If I try to access any other page I get the exception "The requested member of the collection does not exist." If I set the $word->Visible property to 1, I do get all the pages in the foreach loop and I can access any page directly; everything works as expected when Word is visible. Even stranger, if Word is invisible and I leave out the foreach loop, I can only access page 1 instead of pages 1 and 2. Any pointers on how I can access all the pages in the document while keeping Word invisible?

    Read the article

  • LaTeX hyperref link goes to wrong page

    - by ecto
    I am trying to create a reference to a float that doesn't use a caption. If I include \label{foo} within the float and reference it using \pageref{foo}, the correct page number is displayed in my PDF document, but the hyperlink created by the hyperref package links to a different page (the first page of the section). If I include a caption before the label in the float, the hyperref link goes to the correct page. Is there a way to get the hyperref link to work correctly without including a caption in the float? Or is there a way to suppress the display of a caption so I can include one without it being shown? Below is a minimal example. If I process it with pdflatex, I get three pages. The "figure" is shown on the second page, and the third page says "See figure on page 2." But the hyperlink on the '2' says "Go to page 1", and clicking it takes me to page 1. If I put an empty \caption{} before the \label{foo}, the hyperlink works correctly, but I don't want to show a caption for my float.

    \documentclass[11pt]{memoir}
    \usepackage{hyperref}
    \begin{document}
    some text
    \clearpage
    \begin{figure}
    a figure
    \label{foo}
    \end{figure}
    more text
    \clearpage
    See figure on page \pageref{foo}.
    \end{document}

    Read the article

  • Custom ASP.NET MVC cache controllers in a shared hosting environment?

    - by Daniel Crenna
    I'm using custom controllers that cache static resources (CSS, JS, etc.) and images. I'm currently working with a hosting provider that has set me up under a full trust profile. Despite being in full trust, my controllers fail because the caching strategy relies on the File class to open a resource file directly before processing it and storing it in memory. Is this something that would likely occur in all full trust shared hosting environments, or is it specific to my host? The static files live within my application's structure and not in an arbitrary server path. It seems to me that custom caching would require code to access the file directly, and I'm hoping someone else has dealt with this issue.

    Read the article

  • Dispelling the UIImage imageNamed: FUD

    - by Roger Nolan
    I see a lot of people saying imageNamed: is bad, but an equal number saying its performance is good, especially when rendering UITableViews. See this SO question, for example, or this article on iPhoneDeveloperTips.com. UIImage's imageNamed: method used to leak, so it was best avoided, but that has been fixed in recent releases. I'd like to understand the caching algorithm better in order to make a reasoned decision about where I can trust the system to cache my images and where I need to go the extra mile and do it myself. My current basic understanding is that it's a simple NSMutableDictionary of UIImages referenced by filename: it gets bigger, and when memory runs out it gets a lot smaller. For example, does anyone know for sure that the image cache behind imageNamed: does not respond to didReceiveMemoryWarning? It seems unlikely that Apple would not do this. If you have any insight into the caching algorithm, please post it here.

    Read the article

  • PHP website Optimization

    - by ana
    I have a high-traffic website and I need to make sure my pages are served to everyone quickly. I searched Google for articles about speed and optimization, and here's what I found: cache the page in memory, or save it to disk. Caching the page in memory is very fast, but if I need to change the content of the page I have to remove it from the cache and then re-save the file on disk. Saving it to disk is very easy to maintain, but every time the page is accessed I have to read it from the disk. Which method should I go with? Thanks

    Read the article

  • Cache for everybody except staff members.

    - by Oli
    I have a Django site where I want to stick an "admin bar" along the top of every non-admin page for staff members. It would contain useful things like page editing tools, etc. The problem comes from me using the @cache_page decorator on lots of pages. If a normal user hits a page first, the cached version comes up without the admin bar (even for admin users), and if an admin hits the page first, normal users see the admin bar. I could tediously step through the templates, adding regional cache blocks, but there are a lot of templates and life is altogether too short. Ideally there would be a way of telling the caching to ignore cache get/set requests from admin users, but I don't know how best to implement that. How would you tackle this problem? (One possible decorator approach is sketched after this entry.)

    Read the article
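
    One way to tackle this, sketched below in Python/Django, is a thin wrapper around cache_page that bypasses the cache entirely for staff members, so staff always get a fresh render (with the admin bar) and never populate the shared cache. The decorator name is illustrative; this assumes Django's auth middleware is installed so request.user exists, and it relies on AnonymousUser reporting is_staff as False.

    from functools import wraps
    from django.views.decorators.cache import cache_page

    def staff_aware_cache_page(timeout):
        """Like cache_page, but staff members bypass the cache entirely."""
        def decorator(view_func):
            cached_view = cache_page(timeout)(view_func)

            @wraps(view_func)
            def wrapper(request, *args, **kwargs):
                if request.user.is_staff:
                    # Fresh render with the admin bar; nothing is read from
                    # or written to the shared cache for staff requests.
                    return view_func(request, *args, **kwargs)
                # Everyone else gets the normal cached behaviour.
                return cached_view(request, *args, **kwargs)

            return wrapper
        return decorator

    A view would then use @staff_aware_cache_page(60 * 15) instead of @cache_page(60 * 15); the only cost is that staff requests are always rendered from scratch.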

  • Personal Cache vs Memcache?

    - by Kerry
    I have a personal caching class (based off WordPress'), which can be seen here: http://pastie.org/988427 I recently learned about memcache, and the advice there was to memcache EVERYTHING: http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html My first thought was just to keep my class with its current functions and make it use memcache instead; is there any downside to doing this? The main difference I see is that memcache persists on the server from page to page, while mine lasts for one page load. The problem I see arising, and this is with any system, is that the data is dynamic. It changes all the time, whether it's search results, visible products, etc. If it's all cached, won't that create a problem? Is there a way to handle this? Obviously if something brings back the same results every time it should be cached, but that's why I was doing it on a per-page-load basis. I'm sure there is a way to handle this, or is the cache time usually just set to somewhere between 5 minutes and an hour?

    Read the article

  • What is a pagecache page?

    - by kumar
    /*
     * Each physical page in the system has a struct page associated with
     * it to keep track of whatever it is we are using the page for at the
     * moment.  Note that we have no way to track which tasks are using
     * a page, though if it is a pagecache page, rmap structures can tell us
     * who is mapping it.
     */

    This is from include/linux/mm_types.h. Please let me know what "pagecache page" means here. Thanks!

    Read the article

  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSUSE. Each one is 2 TB. When I do this, the system runs slowly. My guess: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. KDE4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with stuff from the disks being copied, which will be read only once, then written and never used again. So any time I use these apps (or KDE4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze/hiccup. Because the cache is gone and these bloated applications need lots of cache, the system becomes horribly slow. Since it is USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard going too slow, because if I stop everything copying, it still runs choppily for a while until it recaches everything; and if I restart the copying, it takes a minute before it is choppy again. Also, I can limit the copy to around 40 MB/s and it runs faster again (not because it has the right things cached, but because the motherboard buses have lots of extra bandwidth for the system disks). I can fully accept a performance loss from my motherboard's IO capability being completely consumed (it is 100% used, meaning 0% wasted power, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case.

    # free
                 total       used       free     shared    buffers     cached
    Mem:      24731556   24531876     199680          0    8834056   12998916
    -/+ buffers/cache:    2698904   22032652
    Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;) And to clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching of specific things. Question: is there some way to tell these copy operations to limit memory usage so some important things remain cached, so that any slowdowns are a result of normal disk usage and not of rereading the same commonly used files? For example, is there a setting for the maximum memory per process/user/file system allowed to be used as cache/buffers? (A posix_fadvise-based copy sketch follows this entry.)

    Read the article
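
    For the copy itself, one approach is to tell the kernel not to keep the copied data in the page cache at all, so the existing cache (KDE, VM images, application binaries) stays warm. The sketch below is a hedged illustration in Python 3.3+ on Linux using os.posix_fadvise with POSIX_FADV_DONTNEED; the chunk size and function name are arbitrary choices, and command-line tools such as nocache or dd with oflag=direct aim at a similar effect.

    import os

    CHUNK = 32 * 1024 * 1024  # copy in 32 MiB chunks (arbitrary choice)

    def copy_without_caching(src_path, dst_path):
        """Copy src to dst while asking the kernel to drop the pages we touch."""
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            offset = 0
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                dst.write(chunk)
                # Dirty pages must reach the disk before DONTNEED can drop them.
                dst.flush()
                os.fsync(dst.fileno())
                os.posix_fadvise(src.fileno(), offset, len(chunk),
                                 os.POSIX_FADV_DONTNEED)
                os.posix_fadvise(dst.fileno(), offset, len(chunk),
                                 os.POSIX_FADV_DONTNEED)
                offset += len(chunk)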

  • Can someone explain this block of ASP.NET MVC code to me, please?

    - by Pure.Krome
    Hi folks, this is the current code of the System.Web.Mvc.AuthorizeAttribute class in ASP.NET MVC 2 (RTM):

    public virtual void OnAuthorization(AuthorizationContext filterContext)
    {
        if (filterContext == null)
        {
            throw new ArgumentNullException("filterContext");
        }
        if (this.AuthorizeCore(filterContext.HttpContext))
        {
            HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache;
            cache.SetProxyMaxAge(new TimeSpan(0L));
            cache.AddValidationCallback(
                new HttpCacheValidateHandler(this.CacheValidateHandler), null);
        }
        else
        {
            filterContext.Result = new HttpUnauthorizedResult();
        }
    }

    So if I'm "authorized", it does some caching stuff; otherwise it returns a 401 Unauthorized response. Question: what do those three caching lines do? cheers :)

    Read the article

  • How do I use a master page container in a partial view?

    - by user200295
    I have several partial views with JavaScript that I am trying to move to the bottom of the page. To do this I am trying to use a ContentPlaceHolder in the master page.

    Master page:
    <asp:ContentPlaceHolder ID="Foot" runat="server"></asp:ContentPlaceHolder>

    Partial view (.ascx):
    <asp:Content ID="header" ContentPlaceHolderID="head" runat="server"> ... </asp:Content>

    But I get this error:

    Parser Error Message: Content controls have to be top-level controls in a content page or a nested master page that references a master page.

    So how do I ensure that the JavaScript for the partial view ends up at the bottom of the page, especially in cases where the partial view's HTML layout needs to be at the top of the page?

    Read the article

  • Caching for a Custom Repository Adapter for WebSphere Portal Virtual Member Manager

    - by Spike Williams
    I'm looking at writing a custom repository adapter to interact with Virtual Member Manager on WebSphere Portal 6.1. Basically, it's a layer that takes a request in the form of a commonj.sco.DataObject and passes it on to an external web service, to get various information about our logged-in users that is not otherwise available in LDAP. I'm concerned about the performance hit of going to a service every time we want to pull a permission from the back end. My question is: can Virtual Member Manager handle caching of the data going in and out of custom repository adapters, or is that something I'm going to have to build into the adapter myself?

    Read the article

  • Redirecting a page from an IFrame to the parent when the session expires

    - by Venkatesh
    I have a page with a tree structure on the left side, menu items at the top, and an IFrame in the center that is loaded when a menu item is clicked. Whenever the session is about to expire, I show an alert ("session will expire soon"). But this alert appears for the IFrame window (a few seconds later) as well as for the parent page, so two alerts appear whenever the session is about to expire. The alert should only appear on the parent page, not in the IFrame page. How do I avoid this?

    Read the article

  • Cached/offline maps for iPhone?

    - by Konstantin
    I'd like to use maps in my application in a way that generates as little traffic as possible. The perfect solution would be to cache map slices (tiles). I know that isn't possible with Google Maps (because of the license). I took a look at OpenStreetMap and it seems like a good solution. Next is the SDK: the only one I've found is from CloudMade, but the problem is that I found no related API methods for caching/offline calls. Are there any alternative solutions?

    Read the article

  • Thread Local Memory for Scratch Memory.

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate the session cookies (similar to Kerberos tokens). Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit that caching mechanism for successive calls in the same thread by placing it in thread-local memory. Additionally, the OpenSSL HMAC and EVP contexts are placed in the same thread-local memory structure (see this question for some detail on why I use thread-local memory and the massive amount of speedup it enables even with a single thread). The generation and deserialization of these cookie strings ("my algorithms") use intermediary void*s and std::strings, and since Protocol Buffers has an internal memory-retention mechanism I want the same characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf of the std::string object. I would presumably need to grow it to the lowest common size ever encountered during the execution of "my algorithms". Thoughts?

    Read the article

  • NoSQL or Ehcache caching?

    - by paddydub
    I'm building a route planner web app using Spring/Hibernate/Tomcat and a MySQL database. The database contains read-only data, such as bus stop coordinates and bus times, which is never updated. I'm trying to make the app run faster: each time a route is calculated, the application performs roughly 1000 reads against the database. I have set up Ehcache, which greatly improves the database read times. I'm now setting up Terracotta + Ehcache distributed caching to share the cache between multiple Tomcat JVMs, but this seems a bit complicated. I've tried memcached, but it was not performing as fast as Ehcache. I'm wondering if MongoDB or Redis would be better suited. I have no experience with NoSQL, but I would appreciate any ideas. What I need is quick access to the read-only data. (A read-through cache sketch follows this entry.)

    Read the article
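
    For read-only lookups like stop coordinates and timetables, a simple read-through cache in front of the database is usually enough, whichever store backs it. The sketch below is purely illustrative and is written in Python with the redis-py client; in a Spring stack the same pattern would normally be expressed through Spring's cache abstraction or Ehcache itself. The key format, TTL, and load_stop_from_db helper are hypothetical.

    import json
    import redis

    r = redis.StrictRedis(host="localhost", port=6379, db=0)
    ONE_DAY = 24 * 60 * 60  # the data never changes, so a long TTL is fine

    def get_bus_stop(stop_id):
        """Read-through cache: try Redis first, fall back to the database."""
        key = "bus_stop:%s" % stop_id
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        stop = load_stop_from_db(stop_id)   # hypothetical expensive DB read
        r.setex(key, ONE_DAY, json.dumps(stop))
        return stop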

  • How do you set page-level, page-specific JavaScript events using a ContentPlaceHolder?

    - by donde
    I previously asked how to include JavaScript in my page when I split the page into a master page and a ContentPlaceHolder (.NET 2.0 app). The issue was that I only wanted the JavaScript functions on THAT page, so I couldn't just put them on the master page. Based on the answers, I will include common functions through the master page and put the page-specific functions right on the content page. However, one question remains: events. I have two JavaScript functions that I want to run when the page loads, as in the HTML below. How do you wire up JavaScript page events on a specific content page? Or, in the case below, the onkeypress event?

    <body onkeypress="javascript:keypressed();" onload="javascript:setDivVisibility();">

    Read the article

  • SharePoint publishing cache counter missing on WFE (object caching)

    - by Ryan
    I want to tune object caching in my SharePoint environment. The way to do this is to check the SharePoint Publishing Cache counters through perfmon on the farm. I have one application server and two WFEs, but when I try to create a counter for a WFE, the SharePoint Publishing Cache category shows up and yet I am not able to add any instance of it; when I select my application server I can see all the instances. If I want to check the publishing cache hit ratio I need to run this on the WFEs as well (correct me if I'm wrong). How do I resolve this issue? Also, how do I check the hit ratio given that our site has not gone live yet, so we don't get enough users hitting the site to measure it? Does that mean I can only tune it once the site goes live and real load hits it? Thanks, Amit

    Read the article

  • Caching result of setUp() using Python unittest

    - by dbr
    I currently have a unittest.TestCase that looks like this:

    class test_appletrailer(unittest.TestCase):
        def setUp(self):
            self.all_trailers = Trailers(res = "720", verbose = True)

        def test_has_trailers(self):
            self.failUnless(len(self.all_trailers) > 1)

        # ..more tests..

    This works fine, but the Trailers() call takes about 2 seconds to run. Given that setUp() is called before each test is run, the tests now take almost 10 seconds to run (with only 3 test functions). What is the correct way of caching the self.all_trailers variable between tests? Removing the setUp function and doing:

    class test_appletrailer(unittest.TestCase):
        all_trailers = Trailers(res = "720", verbose = True)

    ..works, but then it claims "Ran 3 tests in 0.000s", which is incorrect. The only other way I could think of is to have a cache_trailers global variable (which works correctly, but is rather horrible):

    cache_trailers = None

    class test_appletrailer(unittest.TestCase):
        def setUp(self):
            global cache_trailers
            if cache_trailers is None:
                cache_trailers = self.all_trailers = Trailers(res = "720", verbose = True)
            else:
                self.all_trailers = cache_trailers

    (A setUpClass-based sketch follows this entry.)

    Read the article
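
    On Python 2.7+ (or with the unittest2 backport), setUpClass gives exactly this "run the expensive setup once per TestCase" behaviour without a module-level global. A minimal sketch, assuming Trailers is importable from the asker's own code (the module name below is hypothetical):

    import unittest

    from appletrailer import Trailers  # hypothetical module; Trailers is the asker's class

    class test_appletrailer(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            # Runs once for the whole class, so the ~2 second Trailers()
            # call no longer happens before every single test.
            cls.all_trailers = Trailers(res="720", verbose=True)

        def test_has_trailers(self):
            self.assertTrue(len(self.all_trailers) > 1)

        # ..more tests..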

  • Caching in memory on the server

    - by zaharpopov
    I want to write a web app with a JavaScript client and a Python back end. The client frequently needs data from the server via AJAX. The data lives in a DB and is expensive to load on each request. In a desktop app I would just load the data from the DB into memory once and then access it. In a web app the server code runs anew for each request, so I can't do that (each run would have to load from the DB into memory again). How can this work? Can a single long-running process run on the server, or do I have to use something different here? An example is the tag auto-complete here on Stack Overflow: how is that implemented on the server for fast caching/loading? (A module-level cache sketch follows this entry.)

    Read the article
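
    If the Python back end runs as a long-lived process (for example under a WSGI application server rather than plain CGI), module-level state survives across requests, so the data can be loaded once per process. A minimal sketch, where load_from_db is a hypothetical stand-in for the expensive query; with multiple server processes or machines, a shared cache such as memcached plays the same role.

    _cache = None

    def get_data():
        """Return the expensive dataset, loading it at most once per process."""
        global _cache
        if _cache is None:
            _cache = load_from_db()   # hypothetical expensive DB load
        return _cache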

  • Can a proxy server cache SSL GETs? If not, would response body encryption suffice?

    - by Damian Hickey
    Can a (or any) proxy server cache content that is requested by a client over HTTPS? As the proxy server can't see the query string or the HTTP headers, I reckon it can't. I'm considering a desktop application, run by a number of people behind their companies' proxies. This application may access services across the internet, and I'd like to take advantage of the built-in internet caching infrastructure for "reads". If caching proxy servers can't cache SSL-delivered content, would simply encrypting the content of a response be a viable option? I am considering having all GET requests that we wish to be cacheable made over HTTP, with the body encrypted using asymmetric encryption, where each client has the decryption key. Any time we wish to perform a GET that is not cacheable, or a POST operation, it will be performed over SSL.

    Read the article

  • MS Access negative page numbers

    - by FrustratedWithFormsDesigner
    I have an Access report that generates 36505 pages (unfiltered; about half of each page is taken up by group headers and page headers), but the footer at the bottom of the report page says "36505 of -29031". This looks like it may be an overflow problem (36505 is above the 32767 maximum of a signed 16-bit integer), though I'm confused about how it got the current page number of the last page right but failed to get the total page count. Has anyone dealt with this before?

    Read the article

  • config.cache_classes = true in production mode has problems in IE

    - by techno_log
    Hi, in my Rails app I am using link_to_function to bring up AJAX tabs on one page. Everything works fine in Mozilla and other browsers, but in IE the tabs do not load, and only when the server is started in production mode (it doesn't matter whether it's WEBrick or Mongrel). In development mode everything is fine. I tracked the issue down to one line, config.cache_classes = true, in app/config/environments/production.rb; when I changed it to config.cache_classes = false, everything worked fine. So I assume caching causes a problem in Rails. When I googled this I found that many people have issues with caching. So my questions are: 1) is there any other fix for this? 2) Does this fix (config.cache_classes = false) cause any performance issues? If so, how do I overcome that? Any comments and suggestions are welcome. Techno_log

    Read the article

  • Does Google App Engine cache external requests?

    - by Andy Hume
    I have a very simple application running on App Engine that requests a web page every five minutes and parses it for a specific piece of data. Everything works fine except that the response I get back from the external request (using urllib2) doesn't reflect the latest changes to the page. Sometimes it takes a few minutes to get the latest version, sometimes over an hour. Is there a transparent layer of caching that App Engine puts in place? Or is there something else I am missing here? I've looked at the caching headers of the requested page and there is no Expires or Last-Modified header sent. Update: sometimes it will get the new version of the page for a number of requests and then randomly, later, get an old, out-of-date version. (A cache-busting fetch sketch follows this entry.)

    Read the article
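
    Whether or not App Engine's fetch layer itself caches, asking intermediaries for a fresh copy and adding a cache-busting query parameter rules out transparent HTTP caches. A minimal urllib2 sketch (Python 2, matching the question); the function name and the "_" parameter name are arbitrary choices.

    import time
    import urllib2

    def fetch_fresh(url):
        # Cache-busting query parameter plus explicit no-cache request headers.
        sep = "&" if "?" in url else "?"
        busted = "%s%s_=%d" % (url, sep, int(time.time()))
        request = urllib2.Request(busted, headers={
            "Cache-Control": "no-cache",
            "Pragma": "no-cache",
        })
        return urllib2.urlopen(request).read()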
