Search Results

Search found 6412 results on 257 pages for 'cache'.


  • disk write cache buffer and separate power supply

    - by HugoRune
    Windows has a setting to turn off the write-cache buffer (see image): "Turn off Windows write-cache buffer flushing on the device. To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure." Is it feasible and economical to get such a "separate power supply" for the internal SATA drives of a non-server PC? Under what name is such a power supply sold? I know that there are UPS devices that can be connected to external drives, but what is required to be able to switch this setting on safely for an internal disk? The setting has different descriptions in different versions of Windows:
    Windows XP: "Enable write caching on the disk. This setting enables write caching in Windows to improve disk performance, but a power outage or equipment failure might result in data loss or corruption."
    Windows Server 2003: "Enable write caching on the disk. Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power."
    Windows Vista: "Enable advanced performance. Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power."
    Windows 7 and 8: "Turn off Windows write-cache buffer flushing on the device. To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure."
    This article by Raymond Chen has some more detailed information about what the setting does.

    Read the article

  • Gain Quick Access to the Cache in Firefox

    - by Asian Angel
    Are you looking for a quick and simple way to view the contents of the cache in Firefox? Then you will definitely want to see how easy it can be using the CacheViewer extension. Note: CacheViewer is a front-end app for easily accessing and searching the memory cache.
    Before: Viewing the cache in Firefox using “about:cache” provides some information about the contents but may not be the most efficient method available for some people.
    CacheViewer in Action: Once you have installed the extension there are three easy ways to access your new cache viewer. The first is using the “CacheViewer Command” available in the “Tools Menu”, the second is using the keyboard shortcut “Ctrl + Shift + C”, and the third is adding a “Toolbar Button” to your browser’s UI. All three work equally well; choose the method that best suits your personal needs. When you access the “CacheViewer Window” this is what it will look like. You may decide to resize it and move (or hide) some of the columns for the best viewing. You can easily scroll through the cache contents and preview images if desired as shown here. If you keep the “CacheViewer Window” open you can refresh it as you browse using the “Refresh Button” in the lower right corner. This is a nice, quick, and very simple way to access the cache on demand and save items to your hard drive if desired. Note: The “CacheViewer” can also be set to open in a new tab instead (see “Options”).
    Options: Choose whether “CacheViewer” opens in a separate window (default) or in a new tab.
    Conclusion: If you want a quick and simple way to view the cache in Firefox then the CacheViewer extension is just what you have been looking for.
    Link: Download the CacheViewer extension (Mozilla Add-ons)

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    Last week, after doing more research on the subject, I started wondering about the write-caching policy that I had neglected to understand all those years, always leaving it on the default setting. The write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand it all, but correct me if I erred somewhere:
    Write-through caching itself is not part of the write-caching policy per se. It is when data is written to both the cache and the storage device, so if Windows needs that data again later, it is retrieved from the cache and not from the storage device. This means only read performance is improved, as there is no need to wait for the storage device to read the required data again. Since data is still written to the storage device, write performance isn't improved, and there is no risk of data loss or corruption in case of power failure or system crash, as only the data in the cache gets lost. This option seems to be enabled by default and is recommended for removable devices, with no need to use the "Safely Remove Hardware" function on the user's part.
    Write-back caching is similar to the above but without immediately writing data to the storage device: data is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but it carries risk if a power failure or system crash occurs, with the outcome of not only losing the data that was eventually to be written to the storage device, but also causing file inconsistencies or a corrupted file system. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.
    Write-cache buffer flushing, I reckon, is similar to write-back caching but enables immediate release and writing of data from the cache to the storage device right before a power outage occurs, though I don't know if it also applies to an occasional system crash. This option seems to be complementary to the write-back cache, reducing or potentially eliminating the risk of data loss or corruption of the file system.
    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with less wear on the SSD: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and is it worth taking the risk of utilizing it by enabling the write-back cache? I read somewhere that generally a storage device's cache is faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written to NAND flash is kept for a while in the cache; if that data gets modified a lot before finally being written, holding it and releasing it periodically reduces the number of writes to the SSD and thereby reduces wear. Now, regarding write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. However, once again, I don't know whether SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.
    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason for doing this deeper research, as I suspect the write-caching policy is the culprit behind the SSD's freezing issue, assuming that data being released from the cache is what causes the freezes. Currently I have the write cache enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switch to AHCI later on, and finally disable DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.

    Read the article

  • What's So Smart About Oracle Exadata Smart Flash Cache?

    - by kimberly.billings
    Want to know what's so "smart" about Oracle Exadata Smart Flash Cache? This three-minute video explains how Oracle Exadata Smart Flash Cache helps solve the random I/O bottleneck challenge and delivers extreme performance for consolidated database applications. Exadata Smart Flash Cache is a feature of the Sun Oracle Database Machine. With it, you get ten times faster I/O response time and use ten times fewer disks for business applications from Oracle and third-party providers. Read the whitepaper for more information.

    Read the article

  • Can I copy large files faster without using the file cache?

    - by Veazer
    After adding the preload package, my applications seem to speed up, but if I copy a large file, the file cache grows by more than double the size of the file. When I transfer a single 3-4 GB VirtualBox image or video file to an external drive, this huge cache seems to push all the preloaded applications out of memory, leading to increased load times and general performance drops. Is there a way to copy large, multi-gigabyte files without caching them (i.e. bypassing the file cache)? Or a way to whitelist or blacklist specific folders from being cached?

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever. When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire. You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure. The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism. So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal. 
    Mine looks like this:
    public interface ICacheProvider
    {
        void Add(string key, object item, int duration);
        T Get<T>(string key) where T : class;
        void Remove(string key);
    }
    Now we’ll create two implementations of this interface… one for Azure cache, one for HttpRuntime:
    public class AzureCacheProvider : ICacheProvider
    {
        public AzureCacheProvider()
        {
            _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
        }
        private readonly DataCache _cache;
        public void Add(string key, object item, int duration)
        {
            _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
        }
        public T Get<T>(string key) where T : class
        {
            return _cache.Get(key) as T;
        }
        public void Remove(string key)
        {
            _cache.Remove(key);
        }
    }
    public class LocalCacheProvider : ICacheProvider
    {
        public LocalCacheProvider()
        {
            _cache = HttpRuntime.Cache;
        }
        private readonly System.Web.Caching.Cache _cache;
        public void Add(string key, object item, int duration)
        {
            _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration), System.Web.Caching.Cache.NoSlidingExpiration);
        }
        public T Get<T>(string key) where T : class
        {
            return _cache[key] as T;
        }
        public void Remove(string key)
        {
            _cache.Remove(key);
        }
    }
    Feel free to expand these to use whatever cache features you want. I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls it a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention-based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:
    ObjectFactory.Initialize(x =>
    {
        x.Scan(scan =>
        {
            scan.AssembliesFromApplicationBaseDirectory();
            scan.WithDefaultConventions();
        });
        if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
            x.For<ICacheProvider>().Use<AzureCacheProvider>();
        else
            x.For<ICacheProvider>().Use<LocalCacheProvider>();
    });
    If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider, otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository.
    Let’s say your repository class looks like this:
    public class MyRepo : IMyRepo
    {
        public MyRepo(ICacheProvider cacheProvider)
        {
            _context = new MyDataContext();
            _cache = cacheProvider;
        }
        private readonly MyDataContext _context;
        private readonly ICacheProvider _cache;
        public SomeType Get(int someTypeID)
        {
            var key = "somename-" + someTypeID;
            var cachedObject = _cache.Get<SomeType>(key);
            if (cachedObject != null)
            {
                _context.SomeTypes.Attach(cachedObject);
                return cachedObject;
            }
            var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
            _cache.Add(key, someType, 60000);
            return someType;
        }
        ... // more stuff to update, delete or whatever, being sure to remove
            // from cache when you do so
    }
    When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is, Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-” plus the ID of the object, and then remove it from cache, in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database.
    So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet-packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. That’s something a lot of people debate when using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.

    Read the article

  • Cached ObjectDataSource not firing Select Event even when Cache Dependency is Removed

    - by John Polvora
    I have the following scenario: a page with a DetailsView bound to an ObjectDataSource with caching enabled. The SelectMethod is assigned in the Page_Load event, depending on my user-level logic. After the SelectMethod and parameters are assigned for the ODS, if the cache does not exist yet, the ODS result will be cached the first time. The next time, the cache is applied to the ODS and the Select event does not need to be fired, since the data result is cached. The ODS cache works fine, but I have a Refresh button that should clear the cache and rebind the DetailsView, and that is where the problem is. Am I doing this correctly? Below is my markup:
    <asp:DetailsView ID="DetailsView1" runat="server" DataSourceID="ObjectDataSource_Summary"
        EnableModelValidation="True" EnableViewState="False" ForeColor="#333333" GridLines="None">
    </asp:DetailsView>
    <asp:ObjectDataSource ID="ObjectDataSource_Summary" runat="server" SelectMethod=""
        TypeName="BL.BusinessLogic" EnableCaching="true">
        <SelectParameters>
            <asp:Parameter Name="idCompany" Type="String" />
        </SelectParameters>
    </asp:ObjectDataSource>
    <asp:ImageButton ID="ImageButton_Refresh" runat="server" OnClick="RefreshClick" ImageUrl="~/img/refresh.png" />
    And here is the code-behind:
    public partial class Index : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            ObjectDataSource_Summary.SelectMethod = "";
            ObjectDataSource_Summary.SelectParameters[0].DefaultValue = "";
            switch (this._loginData.UserLevel) // a struct I use to control permissions and page behaviour
            {
                case OperNivel.SysAdmin:
                case OperNivel.SysOperator:
                {
                    ObjectDataSource_Summary.SelectMethod = "SystemSummary";
                    ObjectDataSource_Summary.SelectParameters[0].DefaultValue = "0";
                    break;
                }
                case OperNivel.CompanyAdmin:
                case OperNivel.CompanyOperator:
                {
                    ObjectDataSource_Summary.SelectMethod = "CompanySummary";
                    ObjectDataSource_Summary.SelectParameters[0].DefaultValue = this._loginData.UserLevel.ToString();
                    break;
                }
                default:
                    break;
            }
        }
        protected void Page_LoadComplete(object sender, EventArgs e)
        {
            if (Cache[ObjectDataSource_Summary.CacheKeyDependency] == null)
            {
                this._loginData.LoginDatetime = DateTime.Now;
                Session["loginData"] = _loginData;
                Cache[ObjectDataSource_Summary.CacheKeyDependency] = _loginData;
                DetailsView1.DataBind();
            }
        }
        protected void RefreshClick(object sender, ImageClickEventArgs e)
        {
            Cache.Remove(ObjectDataSource_Summary.CacheKeyDependency);
        }
    }
    Can anyone help me? The Select() event of the ObjectDataSource is not firing even when I remove the CacheKeyDependency.

    Read the article

  • RAID 0 Volatile Volume Cache Mode configuration

    - by SnippetSpace
    I discovered that in IRST there is an option to set a cache mode for my 3-SSD RAID 0 array. I've read the documentation by Intel and have some questions: Are there any overall benefits or risks to enabling a cache mode? As I'm on a laptop, would write-back be recommended? I read it increases the chance of data loss on power interruption. What is the difference between how Windows handles data integrity and how the Intel driver does? Read-only mode seems to have the benefit of faster reads; does it have any downsides? Thanks for your help guys!

    Read the article

  • tweak windows 7 virtual memory and cache / caching settings

    - by bortao
    I'm on Windows 7 64-bit with 4 GB of memory. Whenever I copy or deal with a big amount of data, Windows swaps everything out of memory to the virtual memory swap file to make room for the data cache. The problem is: I don't really need caching of the data I'm copying, since it's being copied only once; caching this data won't help me. On the other hand, swapping out the programs gives me a big lag whenever I want to use those open programs again. What I want: restrict the data cache to a certain amount, let's say 1 GB, or reserve a certain amount of memory, let's say 2 GB, exclusively for running programs' memory. My swap file is on a separate partition, but I still have problems with swapping time.

    Read the article

  • How to place SuperFetch cache on an SSD?

    - by Ian Boyd
    I'm thinking of adding a solid state drive (SSD) to my existing Windows 7 installation. I know I can (and should) move my paging file to the SSD:
    "Should the pagefile be placed on SSDs? Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well. In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1, Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB. Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size. In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD."
    What I don't know is if I even can put a SuperFetch cache (i.e. ReadyBoost cache) on the solid state drive. I want to get the benefit of Windows being able to cache gigabytes of frequently accessed data on a relatively small (e.g. 30 GB) solid state drive. This is exactly what SuperFetch+ReadyBoost (or SuperFetch+ReadyDrive) was designed for. Will Windows offer (or let) me place a ReadyBoost cache on a solid state flash drive connected via SATA? A problem with the ReadyBoost cache over the ReadyDrive cache is that the ReadyBoost cache does not survive between reboots. The cache is encrypted with a per-session key, making its existing contents unusable during boot and SuperFetch pre-fetching during login.
    Update One: I know that Windows Vista limited you to only one ReadyBoost.sfcache file (I do not know if Windows 7 removed that limitation):
    "Q: Can you use multiple devices for EMDs? A: Nope. We've limited Vista to one ReadyBoost per machine. Q: Why just one device? A: Time and quality. Since this is the first revision of the feature, we decided to focus on making the single device exceptional, without the difficulties of managing multiple caches. We like the idea, though, and it's under consideration for future versions."
    I also know that the 4 GB limit on the cache file was a limitation of the FAT filesystem used on most USB sticks; an SSD drive would be formatted with NTFS:
    "Q: What's the largest amount of flash that I can use for ReadyBoost? A: You can use up to 4GB of flash for ReadyBoost (which turns out to be 8GB of cache w/ the compression). Q: Why can't I use more than 4GB of flash? A: The FAT32 filesystem limits our ReadyBoost.sfcache file to 4GB."
    Can a ReadyBoost cache on an NTFS volume be larger than 4 GB?
    Update Two: The ReadyBoost cache is encrypted with a per-boot session key. This means that the cache has to be rebuilt after each boot, and cannot be used to help speed boot times, or reduce the latency from login to usable. Windows ReadyDrive technology takes advantage of non-volatile (NV) memory (i.e. flash) that is incorporated with some hybrid hard drives. This flash cache can be used to help Windows boot, or resume from hibernate, faster. Will Windows 7 use an internal SSD drive as a ReadyBoost/ReadyDrive/SuperFetch cache? Is it possible to make Windows store a SuperFetch cache (i.e. ReadyBoost) on a non-removable SSD? Is it possible to not encrypt the ReadyBoost cache, and if so will Windows 7 use the cache at boot time?
    See also:
    SuperUser.com: ReadyBoost + SSD = ?
    Windows 7 - ReadyBoost & SSD drives?
    Support and Q&A for Solid-State Drives
    Using SDD as a cache for HDD, is there a solution?
    Performance increase using SSD for paging/fetch/cache or ReadyBoost? (Win7)
    Windows 7 To Boost SSD Performance
    How to Disable Nonvolatile Caching

    Read the article

  • Strange Unclearable Cache Issue with Gmail and Google Apps

    - by Brian
    I am having a strange issue with Gmail and Google Apps. Have a look at this screenshot: http://cld.ly/f51ume and notice the missing images for the rounded corners. The missing corner images themselves are not such a problem, but something similar to a cache issue is causing this, as well as a missing background image, and most importantly chat and other "clickable" features aren't working. I've already cleared my cache multiple times and quit and restarted Firefox with no change. Everything is OK in other browsers. Any other debug suggestions?

    Read the article

  • 503 error Varnish cache when eAccelerator is started

    - by Netismine
    I have a Magento installation running on an x-large Amazon server. I have Varnish, memcached and eAccelerator installed on the server. At first everything was working fine, but at some point it stopped working, throwing a 503 error with the Varnish cache stamp below it. When I disable eAccelerator, the error is gone and the site is working. This is my eAccelerator config:
    extension="eaccelerator.so"
    eaccelerator.shm_size = "512"
    eaccelerator.cache_dir = "/var/cache/php-eaccelerator"
    eaccelerator.enable = "1"
    eaccelerator.optimizer = "1"
    eaccelerator.debug = 0
    eaccelerator.log_file = "/var/log/httpd/eaccelerator_log"
    eaccelerator.name_space = ""
    eaccelerator.check_mtime = "1"
    eaccelerator.filter = ""
    eaccelerator.shm_ttl = "0"
    eaccelerator.shm_prune_period = "0"
    eaccelerator.shm_only = "0"
    eaccelerator.allowed_admin_path = ""
    Any hints?

    Read the article

  • Force Windows to cache executables without running them?

    - by Josh Einstein
    Is there a way to force Windows to pre-load certain EXE/DLL binaries into its prefetch/SuperFetch cache as if they had been executed? I have a particular application that loads pretty slowly on first run, but if it's "warm" (recently executed) it starts pretty quickly. I'd like to prime the cache early in the background, before the application is needed. But since it shows a UI, I'm looking for a way to do this silently, so simply launching the application isn't ideal. Thank you in advance. Prompted by David's suggestion in the comments, I wrote a PowerShell script to memory-map the files, seek to the end, and close them. I haven't done any controlled tests yet and it could just be my imagination, but Sublime Text (the application in question) appeared to load much more quickly this time around, and I haven't used it for several hours.
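    The memory-mapping approach described above is not PowerShell-specific. Below is a hedged Java illustration only (not the poster's script; the class name and the Sublime Text path are invented for the example): FileChannel.map plus MappedByteBuffer.load() asks the OS to fault the file's pages into the file cache without ever executing the binary.
    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    // Warm the OS file cache for a binary by memory-mapping it and touching
    // its pages, without running it (hypothetical utility, not a real tool).
    public class CacheWarmer {
        public static void warm(Path file) throws IOException {
            try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
                // A single MappedByteBuffer cannot exceed 2 GB; executables fit easily.
                long size = Math.min(channel.size(), Integer.MAX_VALUE);
                MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, size);
                buffer.load(); // best effort: faults the mapped pages into physical memory
            }
            // The mapping goes away when the buffer is collected; the pages remain
            // in the OS page cache, which is the whole point of the warm-up.
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical path; substitute the binaries you want pre-cached.
            warm(Paths.get("C:\\Program Files\\Sublime Text 2\\sublime_text.exe"));
        }
    }
    Whether the warmed pages stay resident still depends on memory pressure and on how SuperFetch prioritizes them, so treat this as a best-effort hint rather than a guarantee.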

    Read the article

  • using one disk as cache for others

    - by HugoRune
    Given a PC with several hard drives: is it possible to use one fast disk as a giant file cache? I.e. automatically copying frequently accessed data to that one disk, and transparently redirecting reads and writes to that disk, so that the other drives would only have to be accessed occasionally (writes would have to be forwarded to the other disks after a while, of course). Advantages: the other drives could be powered down most of the time, reducing power, heat and noise; the speed of the other drives would not matter much; the cache disk could be a solid-state drive. How can I set such a system up? What OS supports these options? Is this possible at all using Windows or Linux?

    Read the article

  • Google Chrome not using local cache

    - by Steve
    I've been using Google Chrome as a substitute for Firefox, which can't handle having lots of tabs open at the same time. Unfortunately, it looks like Chrome is having the same problem. Freakin' useless. I had to end Chrome because my whole system had slowed to a crawl. When I restarted it, I opted to restore the tabs that were last open. At this stage, every one of the 20+ tabs started downloading the pages they had previously had open. My question is: why can't they open a locally stored/saved copy of the web page from cache? Does Google Chrome store pages in a cache? Also: after most of the pages had completed their downloading, I clicked on each tab to view the page. Half of them only display a white page, and I have to reload the page manually. What is causing this? Thanks for your help.

    Read the article

  • HTTP Cache Control max-age, must-revalidate

    - by nyb
    I have a couple of queries related to Cache-Control. If I specify Cache-Control "max-age=3600, must-revalidate" for a static html/js/images/css file, with a Last-Modified header defined in the HTTP response:
    a. Does a browser/proxy cache (like Squid/Akamai) go all the way to the origin server to validate before max-age expires, or will it serve content from the cache until max-age expires?
    b. After max-age expiry (that is, expiry from the cache), is there an IMS check, or is the content re-downloaded from the origin server without an IMS check?
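    For context on what "revalidate" means here: once max-age expires, a cache normally issues a conditional request (If-Modified-Since, i.e. the IMS check, or If-None-Match) and the origin can answer 304 Not Modified instead of resending the body. A small hedged Java sketch of that exchange, with a placeholder URL, looks like this:
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ConditionalGet {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/static/app.css"); // placeholder URL

            // First request: fetch the resource and remember its Last-Modified stamp.
            HttpURLConnection first = (HttpURLConnection) url.openConnection();
            long lastModified = first.getLastModified(); // parsed from the Last-Modified header
            first.getInputStream().close();              // drain and close the body

            // Later, after max-age has expired: revalidate instead of re-downloading.
            HttpURLConnection revalidate = (HttpURLConnection) url.openConnection();
            revalidate.setIfModifiedSince(lastModified); // sends an If-Modified-Since header
            int status = revalidate.getResponseCode();

            if (status == HttpURLConnection.HTTP_NOT_MODIFIED) {
                System.out.println("304: cached copy is still valid, no body transferred");
            } else {
                System.out.println("200: resource changed, new body downloaded");
                revalidate.getInputStream().close();
            }
        }
    }
    With must-revalidate, a cache that cannot reach the origin for this check is required to return an error rather than serve the stale copy.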

    Read the article

  • MySQL query cache and PHP variables

    - by Saif Bechan
    I have seen the following statement made about the query cache: // query cache does NOT work $r = mysql_query("SELECT username FROM user WHERE signup_date >= CURDATE()"); // query cache works! $today = date("Y-m-d"); $r = mysql_query("SELECT username FROM user WHERE signup_date >= '$today'"); So the query cache only works on the second query. I was wondering if the query cache will also work on this: define('__TODAY',date("Y-m-d")); $r = mysql_query("SELECT username FROM user WHERE signup_date >= '".__TODAY."'");
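    What the query cache keys on is the literal SQL text the server receives, not how the client builds it, so a define()d constant and a plain variable should behave the same: both produce a fixed date literal, whereas CURDATE() inside the SQL keeps the statement uncacheable. The same idea, sketched in Java/JDBC purely for illustration (connection settings are made up, and the MySQL Connector/J driver is assumed on the classpath):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.time.LocalDate;

    public class QueryCacheFriendly {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection settings.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/test", "user", "password");

            // Not cacheable by the MySQL query cache: CURDATE() is non-deterministic.
            String uncached = "SELECT username FROM user WHERE signup_date >= CURDATE()";

            // Cacheable: the date is resolved in the application, so the SQL text is a
            // plain literal and stays identical for the whole day (app-generated value,
            // so concatenation is fine; use a PreparedStatement for user-supplied input).
            String today = LocalDate.now().toString(); // e.g. "2010-05-07", same as date("Y-m-d")
            String cached = "SELECT username FROM user WHERE signup_date >= '" + today + "'";

            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(cached)) {
                while (rs.next()) {
                    System.out.println(rs.getString("username"));
                }
            }
            conn.close();
        }
    }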

    Read the article

  • How do I take advantage of Android's "Clear Cache" button

    - by Jay Askren
    In Android's settings, in the "Manage Applications" activity when clicking on an app, the data is broken down into Application, Data, and cache. There is also a button to clear the cache. My app caches audio files and I would like the user to be able to clear the cache using this button. How do I store them so they get lumped in with the cache and the user can clear them? I've tried storing files using both of the following techniques: newFile = File.createTempFile("mcb", ".mp3", context.getCacheDir()); newFile = new File(context.getCacheDir(), "mcb.mp3"); newFile.createNewFile(); In both cases, these files are listed as Data and not Cache.

    Read the article

  • how to disable web page cache throughout the servlets

    - by Kurt
    To prevent caching of a web page, in the Java controller servlet I did something like this in a method:
    public ModelAndView home(HttpServletRequest request, HttpServletResponse response) throws Exception {
        ModelAndView mav = new ModelAndView(ViewConstants.MV_MAIN_HOME);
        mav.addObject("testing", "Test this string");
        mav.addObject(request);
        response.setHeader("Cache-Control", "no-cache, no-store");
        response.setHeader("Pragma", "no-cache");
        response.setDateHeader("Expires", 0);
        return mav;
    }
    But this only works for that particular response object. I have many similar methods in a servlet, and I have many servlets too. If I want to disable caching throughout the application, what should I do? (I do not want to add the above code for every single response object.) Thanks in advance.
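    One common way to apply those headers application-wide, instead of repeating them in every handler, is a servlet filter mapped to /*. The sketch below is a hedged illustration only (the class name is invented; it simply reuses the headers from the question). Since the code above looks like Spring MVC, a HandlerInterceptor would be the framework-level equivalent.
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // A servlet filter that stamps no-cache headers on every response,
    // so the individual controller methods no longer have to.
    public class NoCacheFilter implements Filter {

        @Override
        public void init(FilterConfig filterConfig) { }

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse httpResponse = (HttpServletResponse) response;
            httpResponse.setHeader("Cache-Control", "no-cache, no-store");
            httpResponse.setHeader("Pragma", "no-cache");
            httpResponse.setDateHeader("Expires", 0);
            chain.doFilter(request, response);
        }

        @Override
        public void destroy() { }
    }
    Register it in web.xml with a <filter> and a <filter-mapping> whose <url-pattern> is /*, and every response that goes through the filter chain gets the no-cache headers.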

    Read the article

  • SQL Server Manageability Series: how to change the default path of .cache files of a data collector? #sql #mdw #dba

    - by ssqa.net
    How do you change the default path of the .cache files of a data collector after the Management Data Warehouse (MDW) has been set up? This was the question asked by one of the DBAs at a client's site. I immediately enquired whether any folder had been specified while setting up the MDW, and the obvious answer was no, as the settings were left at their defaults. This means all the .CACHE files are stored under the %C\TEMP directory, which may pose an out-of-disk-space problem on the server where the MDW is set up to collect. Going back...(read more)

    Read the article

  • Would there be any negative side-effects of sharing /var/cache/apt/ between two systems?

    - by ændrük
    In the interest of conserving bandwidth, I'm considering mounting a VirtualBox host's /var/cache/apt as /var/cache/apt in the guest. Both host and guest are Ubuntu 10.10 32-bit. Would there be any negative consequences to doing this? I'm aware of the more robust solutions like apt-proxy, but I'd prefer this simpler solution if it's possible in order to spare the host the overhead of running extra services.

    Read the article

  • jboss cache as hibernate 2nd level - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build an architecture basically described in the user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (replicated caches with each cache having its own store), but with JBoss Cache configured as the Hibernate second-level cache. I've been reading the manual for several days and playing with the settings, but could not achieve the result: the data in memory (JBoss Cache) gets replicated across the hosts, but it's not persisted in the datasource/database of the target (not the original) cluster host. I had hoped that a node might be persisted at eviction, so I wrote a cache listener and attached it to the @NodeEvicted event. I found that although I could adjust the eviction policy to fully control it, no persistence takes place. Then I thought I could try to modify the CacheLoader to set "passivate" to true, but I found that in my case (Hibernate 2nd-level cache) I don't have a way to access the loader. I wonder if persistence of replicated data is possible at all through configuration tuning? If not, will it work for me to create some manual persistence in the CacheListener (I could check whether the eviction event is local, and if not, persist it to the Hibernate datasource somehow)? I've used the mvcc-entity configuration with one modification: cacheMode is set to REPL_ASYNC. I've also played with the eviction policy configuration. The last thing to mention is that I've tested entity persistence and replication in a project that was generated with Seam. I guess that's not important though.

    Read the article

  • Zend Metadata Cache in file

    - by Matthieu
    I set up a metadata cache in Zend Framework because a lot of DESCRIBE queries were being executed and it affected performance.
    $frontendOptions = array('automatic_serialization' => true);
    $backendOptions = array('cache_dir' => CACHE_PATH . '/db-tables-metadata');
    $cache = Zend_Cache::factory('Core', 'File', $frontendOptions, $backendOptions);
    Zend_Db_Table::setDefaultMetadataCache($cache);
    I can indeed see the cache files created, and the website works great. However, when I launch unit tests, or a script of the same application that performs DB queries, I end up with an error because Zend couldn't read the cache files. This is because on the website the cache files are created by the www user, and when I run phpunit or a script, it tries to read them with my user and fails. Do you see any solution to this? I have some quick-fix ideas but I'm looking for a good, stable solution. And I'd rather avoid running phpunit or the scripts as www if possible (for practical reasons).

    Read the article

  • ????ASMM

    - by Liu Maclean(???)
    ???Oracle??????????????SGA/PGA???,????10g????????????ASMM????,????????ASMM?????????Oracle??????????,?ASMM??????DBA????????????;????????ASMM???????????????DBA???:????????????DB,?????????????DBA?????????????????????????????????,ASMM??????????,???????????,??????????,??????????????????;?10g release 1?10.2??????ASMM?????????????,???????ASMM????????ASMM?????startup???????????ASMM??AMM??,????????DBA????SGA/PGA?????????”??”??”???”???,???????????DBA????chemist(???????1??2??????????????)? ?????????????????ASMM?????,?????????????…… Oracle?SGA???????9i???????????,????: Buffer Cache ????????????,??????????????? Default Pool                  ??????,???DB_CACHE_SIZE?? Keep Pool                     ??????,???DB_KEEP_CACHE_SIZE?? Non standard pool         ???????,???DB_nK_cache_size?? Recycle pool                 ???,???db_recycle_cache_size?? Shared Pool ???,???shared_pool_size?? Library cache   ?????? Row cache      ???,?????? Java Pool         java?,???Java_pool_size?? Large Pool       ??,???Large_pool_size?? Fixed SGA       ???SGA??,???Oracle???????,?????????granule? ?9i?????ASMM,???????????SGA,??????MSMM??9i???buffer cache??????????,?????????????????????????,???9i?????????????,?????????????????????????? ????SGA?????: ?????shared pool?default buffer pool????????,??????????? ?9i???????????(advisor),?????????? ??????????????? ?????????,?????? ?????,?????ORA-04031?????????? ASMM?????: ?????????? ???????????????? ???????sga_target?? ???????????,??????????? ??MSMM???????: ???? ???? ?????? ???? ??????????,??????????? ??????????????????,??????????ORA-04031??? ASMM???????????:1.??????sga_target???????2.???????,???:????(memory component),????(memory broker)???????(memory mechanism)3.????(memory advisor) ASMM????????????(Automatically set),??????:shared_pool_size?db_cache_size?java_pool_size?large_pool _size?streams_pool_size;?????????????????,???:db_keep_cache_size?db_recycle_cache_size?db_nk_cache_size?log_buffer????SGA?????,????????????????,??log_buffer?fixed sga??????????????? ??ASMM?????????sga_target??,???????ASMM??????????????????db_cache_size?java_pool_size???,?????????????????????,????????????????????(???)????????,Oracle?????????(granule,?SGA<1GB?granule???4M,?SGA>1GB?granule???16M)???????,??????????????buffer cache,??????????????????(granule)??????????????????????sga_target??,???????????????????(dism,???????)???ASMM?????????????statistics_level?????typical?ALL,?????BASIC??MMON????(Memory Monitor is a background process that gathers memory statistics (snapshots) stores this information in the AWR (automatic workload repository). 
MMON is also responsible for issuing alerts for metrics that exceed their thresholds)?????????????????????ASMM?????,???????????sga_target?????statistics_level?BASIC: SQL> show parameter sga NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ lock_sga boolean FALSE pre_page_sga boolean FALSE sga_max_size big integer 2000M sga_target big integer 2000M SQL> show parameter sga_target NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ sga_target big integer 2000M SQL> alter system set statistics_level=BASIC; alter system set statistics_level=BASIC * ERROR at line 1: ORA-02097: parameter cannot be modified because specified value is invalid ORA-00830: cannot set statistics_level to BASIC with auto-tune SGA enabled ?????server parameter file?spfile??,ASMM????shutdown??????????????(Oracle???????,????????)???spfile?,?????strings?????spfile????????????????????,?: G10R2.__db_cache_size=973078528 G10R2.__java_pool_size=16777216 G10R2.__large_pool_size=16777216 G10R2.__shared_pool_size=1006632960 G10R2.__streams_pool_size=67108864 ???spfile?????????????????,???????????”???”?????,??????????”??”?? ?ASMM?????????????? ?????(tunable):????????????????????????????buffer cache?????????,cache????????????????,?????????? IO????????????????????????????Library cache????? subheap????,?????????????????????????????????(open cursors)?????????client??????????????buffer cache???????,???????????pin??buffer???(???????) ?????(Un-tunable):???????????????????,?????????????????,?????????????????????????large pool?????? ??????(Fixed Size):???????????,??????????????????????????????????????? ????????????????(memory resize request)?????????,?????: ??????(Immediate Request):???????????ASMM????????????????????????(chunk)?,??????OUT-OF-MEMORY(ORA-04031)???,????????????????????(granule)????????????????????granule,????????????,?????????????????????????????,????granule??????????????? ??????(Deferred Request):???????????????????????????,??????????????granule???????????????MMON??????????delta. ??????(Manual Request):????????????alter system?????????????????????????????????????????????????granule,??????grow?????ORA-4033??,?????shrink?????ORA-4034??? ?ASMM????,????(Memory Broker)????????????????????????????(Deferred)??????????????????????(auto-tunable component)???????????????,???????????????MMON??????????????????????????????????,????????????????;MMON????Memory Broker?????????????????????????MMON????????????????????????????????????????(resize request system queue)?MMAN????(Memory Manager is a background process that manages the dynamic resizing of SGA memory areas as the workload increases or decreases)??????????????????? ?10gR1?Shared Pool?shrink??????????,?????????????Buffer Cache???????????granule,????Buffer Cache?granule????granule header?Metadata(???buffer header??RAC??Lock Elements)????,?????????????????????shared pool????????duration(?????)?chunk??????granule?,????????????granule??10gR2????Buffer Cache Granule????????granule header?buffer?Metadata(buffer header?LE)????,??shared pool???duration?chunk????????granule,??????buffer cache?shared pool??????????????10gr2?streams pool?????????(???????streams pool duration????) ??????????(Donor,???trace????)???,?????????granule???buffer cache,????granule????????????: ????granule???????granule header ?????chunk????granule?????????buffer header ???,???chunk??????????????????????metadata? ???2-4??,???granule???? 
??????????????????,??buffer cache??granule???shared pool?,???????: MMAN??????????buffer cache???granule MMAN????granule??quiesce???(Moving 1 granule from inuse to quiesce list of DEFAULT buffer cache for an immediate req) DBWR???????quiesced???granule????buffer(dirty buffer) MMAN??shared pool????????(consume callback),granule?free?chunk???shared pool??(consume)?,????????????????????granule????shared granule??????,???????????granule???????????,??????pin??buffer??Metadata(???buffer header?LE)?????buffer cache??? ???granule???????shared pool,???granule?????shared??? ?????ASMM???????????,??????????: _enabled_shared_pool_duration:?????????10g????shared pool duration??,?????sga_target?0?????false;???10.2.0.5??cursor_space_for_time???true??????false,???10.2.0.5??cursor_space_for_time????? _memory_broker_shrink_heaps:???????0??Oracle?????shared pool?java pool,??????0,??shrink request??????????????????? _memory_management_tracing: ???????MMON?MMAN??????????(advisor)?????(Memory Broker)?????trace???;??ORA-04031????????36,???8?????????????trace,???23????Memory Broker decision???,???32???cache resize???;??????????: Level Contents 0×01 Enables statistics tracing 0×02 Enables policy tracing 0×04 Enables transfer of granules tracing 0×08 Enables startup tracing 0×10 Enables tuning tracing 0×20 Enables cache tracing ?????????_memory_management_tracing?????DUMP_TRANSFER_OPS????????????????,?????????????????trace?????????mman_trace?transfer_ops_dump? SQL> alter system set "_memory_management_tracing"=63; System altered Operation make shared pool grow and buffer cache shrink!!!.............. ???????granule?????,????default buffer pool?resize??: AUTO SGA: Request 0xdc9c2628 after pre-processing, ret=0 /* ???0xdc9c2628??????addr */ AUTO SGA: IMMEDIATE, FG request 0xdc9c2628 /* ???????????Immediate???? */ AUTO SGA: Receiver of memory is shared pool, size=16, state=3, flg=0 /* ?????????shared pool,???,????16?granule,??grow?? */ AUTO SGA: Donor of memory is DEFAULT buffer cache, size=106, state=4, flg=0 /* ???????Default buffer cache,????,????106?granule,??shrink?? */ AUTO SGA: Memory requested=3896, remaining=3896 /* ??immeidate request???????3896 bytes */ AUTO SGA: Memory received=0, minreq=3896, gransz=16777216 /* ????free?granule,??received?0,gransz?granule??? */ AUTO SGA: Request 0xdc9c2628 status is INACTIVE /* ??????????,??????inactive?? */ AUTO SGA: Init bef rsz for request 0xdc9c2628 /* ????????before-process???? */ AUTO SGA: Set rq dc9c2628 status to PENDING /* ?request??pending?? 
*/ AUTO SGA: 0xca000000 rem=3896, rcvd=16777216, 105, 16777216, 17 /* ???????0xca000000?16M??granule */ AUTO SGA: Returning 4 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 4, 1, a AUTO SGA: Resize done for pool DEFAULT, 8192 /* ???default pool?resize */ AUTO SGA: Init aft rsz for request 0xdc9c2628 AUTO SGA: Request 0xdc9c2628 after processing AUTO SGA: IMMEDIATE, FG request 0x7fff917964a0 AUTO SGA: Receiver of memory is shared pool, size=17, state=0, flg=0 AUTO SGA: Donor of memory is DEFAULT buffer cache, size=105, state=0, flg=0 AUTO SGA: Memory requested=3896, remaining=0 AUTO SGA: Memory received=16777216, minreq=3896, gransz=16777216 AUTO SGA: Request 0x7fff917964a0 status is COMPLETE /* shared pool????16M?granule */ AUTO SGA: activated granule 0xca000000 of shared pool ?????partial granule????????????trace: AUTO SGA: Request 0xdc9c2628 after pre-processing, ret=0 AUTO SGA: IMMEDIATE, FG request 0xdc9c2628 AUTO SGA: Receiver of memory is shared pool, size=82, state=3, flg=1 AUTO SGA: Donor of memory is DEFAULT buffer cache, size=36, state=4, flg=1 /* ????????shared pool,?????default buffer cache */ AUTO SGA: Memory requested=4120, remaining=4120 AUTO SGA: Memory received=0, minreq=4120, gransz=16777216 AUTO SGA: Request 0xdc9c2628 status is INACTIVE AUTO SGA: Init bef rsz for request 0xdc9c2628 AUTO SGA: Set rq dc9c2628 status to PENDING AUTO SGA: Moving granule 0x93000000 of DEFAULT buffer cache to activate list AUTO SGA: Moving 1 granule 0x8c000000 from inuse to quiesce list of DEFAULT buffer cache for an immediate req /* ???buffer cache??????0x8c000000?granule??????inuse list, ???????quiesce list? */ AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: activated granule 0x93000000 of DEFAULT buffer cache AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 / * ??dbwr??0x8c000000 granule????dirty buffer */ AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 ......................................... 
AUTO SGA: Rcv shared pool consuming 8192 from 0x8c000000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 90112 from 0x8c002000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 24576 from 0x8c01a000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 65536 from 0x8c022000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 131072 from 0x8c034000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 286720 from 0x8c056000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 98304 from 0x8c09e000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 106496 from 0x8c0b8000 in granule 0x8c000000; owner is DEFAULT buffer cache ..................... /* ??shared pool????0x8c000000 granule??chunk, ??granule?owner????default buffer cache */ AUTO SGA: Imm xfer 0x8c000000 from quiesce list of DEFAULT buffer cache to partial inuse list of shared pool /* ???0x8c000000 granule?default buffer cache????????shared pool????inuse list */ AUTO SGA: Returning 4 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 4, 1, 20a AUTO SGA: Init aft rsz for request 0xdc9c2628 AUTO SGA: Request 0xdc9c2628 after processing AUTO SGA: IMMEDIATE, FG request 0x7fffe9bcd0e0 AUTO SGA: Receiver of memory is shared pool, size=83, state=0, flg=1 AUTO SGA: Donor of memory is DEFAULT buffer cache, size=35, state=0, flg=1 AUTO SGA: Memory requested=4120, remaining=0 AUTO SGA: Memory received=14934016, minreq=4120, gransz=16777216 AUTO SGA: Request 0x7fffe9bcd0e0 status is COMPLETE /* ????partial transfer?? */ ?????partial transfer??????DUMP_TRANSFER_OPS????0x8c000000 partial granule???????,?: SQL> oradebug setmypid; Statement processed. SQL> oradebug dump DUMP_TRANSFER_OPS 1; Statement processed. SQL> oradebug tracefile_name; /s01/admin/G10R2/udump/g10r2_ora_21482.trc =======================trace content============================== GRANULE SIZE is 16777216 COMPONENT NAME : shared pool Number of granules in partially inuse list (listid 4) is 23 Granule addr is 0x8c000000 Granule owner is DEFAULT buffer cache /* ?0x8c000000 granule?shared pool?partially inuse list, ?????owner??default buffer cache */ Granule 0x8c000000 dump from owner perspective gptr = 0x8c000000, num buf hdrs = 1989, num buffers = 156, ghdr = 0x8cffe000 / * ?????granule?granule header????0x8cffe000, ????156?buffer block,1989?buffer header */ /* ??granule??????,??????buffer cache??shared pool chunk */ BH:0x8cf76018 BA:(nil) st:11 flg:20000 BH:0x8cf76128 BA:(nil) st:11 flg:20000 BH:0x8cf76238 BA:(nil) st:11 flg:20000 BH:0x8cf76348 BA:(nil) st:11 flg:20000 BH:0x8cf76458 BA:(nil) st:11 flg:20000 BH:0x8cf76568 BA:(nil) st:11 flg:20000 BH:0x8cf76678 BA:(nil) st:11 flg:20000 BH:0x8cf76788 BA:(nil) st:11 flg:20000 BH:0x8cf76898 BA:(nil) st:11 flg:20000 BH:0x8cf769a8 BA:(nil) st:11 flg:20000 BH:0x8cf76ab8 BA:(nil) st:11 flg:20000 BH:0x8cf76bc8 BA:(nil) st:11 flg:20000 BH:0x8cf76cd8 BA:0x8c018000 st:1 flg:622202 ............... 
Address 0x8cf30000 to 0x8cf74000 not in cache Address 0x8cf74000 to 0x8d000000 in cache Granule 0x8c000000 dump from receivers perspective Dumping layout Address 0x8c000000 to 0x8c018000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c018000 to 0x8c01a000 not in this pool Address 0x8c01a000 to 0x8c020000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c020000 to 0x8c022000 not in this pool Address 0x8c022000 to 0x8c032000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c032000 to 0x8c034000 not in this pool Address 0x8c034000 to 0x8c054000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c054000 to 0x8c056000 not in this pool Address 0x8c056000 to 0x8c09c000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c09c000 to 0x8c09e000 not in this pool Address 0x8c09e000 to 0x8c0b6000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c0b6000 to 0x8c0b8000 not in this pool Address 0x8c0b8000 to 0x8c0d2000 in sga heap(1,3) (idx=1, dur=4) ???????granule?????shared granule??????,?????????buffer block,????1?shared subpool??????durtaion?4?chunk,duration=4?execution duration;??duration?chunk???????????,??extent???quiesce list??????????????free?execution duration?????????????,??????duration???extent(??????extent????granule)??????? ?????????????ASMM?????????,????: V$SGAINFODisplays summary information about the system global area (SGA). V$SGADisplays size information about the SGA, including the sizes of different SGA components, the granule size, and free memory. V$SGASTATDisplays detailed information about the SGA. V$SGA_DYNAMIC_COMPONENTSDisplays information about the dynamic SGA components. This view summarizes information based on all completed SGA resize operations since instance startup. V$SGA_DYNAMIC_FREE_MEMORYDisplays information about the amount of SGA memory available for future dynamic SGA resize operations. V$SGA_RESIZE_OPSDisplays information about the last 400 completed SGA resize operations. V$SGA_CURRENT_RESIZE_OPSDisplays information about SGA resize operations that are currently in progress. A resize operation is an enlargement or reduction of a dynamic SGA component. V$SGA_TARGET_ADVICEDisplays information that helps you tune SGA_TARGET. ?????????shared pool duration???,?????????

    Read the article
