Search Results

Search found 32994 results on 1320 pages for 'second level cache'.

  • Chrome browser caching

    - by Kyle B.
    I do a lot of development on my local machine and would like to start using Chrome. However, I cannot seem to do a hard refresh (Ctrl+F5) or find any other key combination that forces the browser to re-fetch all content at http://localhost. I change projects frequently in IIS, and this presents a problem: I see stylesheet and image data from my previous project, with no way to reload the page short of dumping all cache data from the settings menu. Is there a key combination I am missing, or is there a place where I can turn off caching on a site-by-site basis? I would prefer not to clear out the browser's temporary files every time I switch projects. Thanks, Kyle
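
    A possible stopgap, sketched under the assumption that a dev-only browser instance is acceptable (the install path below is a guess): Chrome can be launched with its disk cache effectively disabled via command-line flags, so every request is re-fetched.

        REM hypothetical dev-only shortcut; adjust the path to your install
        REM a 1-byte cache forces Chrome to re-fetch everything
        "C:\Program Files\Google\Chrome\Application\chrome.exe" --disk-cache-size=1 --media-cache-size=1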

  • Selectively disable APC caching

    - by Victor
    I installed APC on my VPS and it works great with the W3 Total Cache WordPress plugin. My problem is that one MySQL database is polled by the client end every few seconds to check for new updates. This database contains time-sensitive information, so it can't be part of the cached data. How can I disable APC for this database and its files? Or can I set a very short expiry for certain types of data? Any help is highly appreciated.
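
    If the aim is to keep APC's opcode cache away from the specific PHP files that serve the polling requests (note that APC never caches MySQL data itself, only compiled PHP files and explicitly stored variables), apc.filters can exclude them. A hedged sketch, with a made-up path:

        ; exclude the time-sensitive endpoint from the opcode cache
        ; (the regex below is a placeholder; point it at the real file)
        apc.filters = "-/var/www/site/check_updates\.php$"

    For data cached explicitly via the user cache, a short TTL works the same way: apc_store('poll_result', $data, 2) expires after roughly two seconds.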

  • PHP APC often exceeds the apc.shm_size limit

    - by user142741
    I am implementing APC on a shared server that currently hosts 1000 sites (running WordPress, Moodle, etc.). Looking at the admin page, I see that "Cache full count" is growing rapidly. I've tried increasing apc.shm_size, reducing apc.ttl, and increasing apc.shm_segments, but I cannot resolve this issue. What am I doing wrong? Here is some information:

    apc.ini:

        extension = apc.so
        apc.shm_size = 256
        apc.enabled = 1
        apc.ttl = 300
        apc.user_ttl = 300

    Ubuntu: 12.04
    PHP: 5.3.10
    APC: 3.1.7
    Server memory: 16 GB
    Shared memory limit: 256 MB
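
    A hedged guess at a starting point, not a confirmed fix: with 1000 sites, the combined opcode footprint can easily exceed 256 MB, so the cache fills and expunges no matter what the TTLs are. It is also worth confirming at the top of apc.php that the segment really is 256 MB, since some APC builds want an explicit unit suffix on apc.shm_size. A sketch, with sizes that are guesses for a 16 GB box:

        ; sketch only - sizes are guesses, not measured values
        apc.shm_size = 1024M     ; real headroom for 1000 sites
        apc.ttl = 7200           ; let entries survive between requests
        apc.max_file_size = 2M   ; skip oversized files instead of churning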

  • Disable caching on a specific Classic ASP page

    - by David Brunelle
    Hi, I'm not sure if I am on the right forum; if not, just tell me. I have a page coded in Classic ASP which is used to send email. We are currently having a problem in which the page sometimes seems to be submitted twice. Upon checking, we found that those who have this problem are coming from big organisations, so it was suggested that their proxy server might cache the file for some reason. Is there a way, in HTML or Classic ASP, to prevent that from happening? Or is this something we must set up in IIS? Thanks,
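
    One standard approach, sketched here as the usual cache-defeating response headers rather than a confirmed fix for the double-send: tell browsers and intermediate proxies not to store the page at all, at the top of the ASP file.

        <%
        ' ask browsers and proxies not to store this response
        Response.CacheControl = "no-cache, no-store"
        Response.AddHeader "Pragma", "no-cache"
        Response.Expires = -1
        %>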

  • How to configure nginx to serve static contents from RAM?

    - by Vijayendra Tripathi
    I want to set up nginx as my web server. I want image files cached in memory (RAM) rather than on disk. I am serving a small page and want a few images always served from RAM. I don't wish to use Varnish (or any other such tool) for this, as I believe nginx has the capability to cache contents in RAM, but I am not sure how to configure it. I tried a few combinations, but they didn't work; nginx uses the disk all the time to fetch images. For example, when I ran Apache Benchmark with the following command:

        ab -c 500 -n 1000 http://localhost/banner.jpg

    I got the following error:

        socket: Too many open files (24)

    I guess this means nginx is trying to open too many files simultaneously from the disk, and the OS is not allowing it. Can anyone please suggest a correct configuration? Thanks for considering this message.
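
    For what it's worth, nginx has no built-in whole-file RAM cache; file bodies come off disk and the OS page cache keeps hot ones in memory. Two hedged sketches follow: open_file_cache (which caches descriptors and metadata, not contents) together with a higher descriptor limit for the ab error, and a tmpfs mount if the images truly must live in RAM. Paths and sizes are placeholders.

        # nginx.conf (sketch): cache open file handles, raise the fd limit
        worker_rlimit_nofile 10240;
        http {
            open_file_cache          max=1000 inactive=5m;
            open_file_cache_valid    2m;
            open_file_cache_min_uses 1;
            open_file_cache_errors   on;
        }

        # shell (sketch): serve the images from a RAM-backed tmpfs instead
        mount -t tmpfs -o size=64m tmpfs /var/www/ram
        cp /var/www/static/banner.jpg /var/www/ram/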

  • Chrome developer tools - network panel gaps

    - by Chris Nicholson
    In the Chrome developer tools, under the Network tab, I'm curious to know what is happening during the gaps. If you look at my image below, I have highlighted in orange the areas where these gaps exist. Since I'm able to load a lot of my page from cache, it's a shame these large gaps occur, as they make up most of my page load time. What exactly is happening in this time? EDIT: Okay, I found this answer, which essentially sums up my question. So, a different question: does anyone know a good method to reduce the length of these gaps? Presumably (albeit rather extreme), if I inlined all my CSS on the page, there wouldn't be a delay after loading the CSS file before the images were loaded.
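
    To make the edit's idea concrete, a minimal sketch of inlining critical CSS (file names are made up): the rules ship inside the HTML itself, so image requests are not queued behind a stylesheet download.

        <head>
          <!-- instead of: <link rel="stylesheet" href="main.css"> -->
          <style>
            /* critical rules pasted inline so rendering and image
               fetches can start without waiting on main.css */
            .gallery img { width: 300px; height: 200px; }
          </style>
        </head>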

  • How can I cache one more web site on the same backend (web) server with Varnish?

    - by Kerberos
    I have one IIS web server sitting behind Varnish. There are several web sites on that IIS box, all distinguished by host headers and all published on port 80. Can I cache all of the web sites with Varnish using VCL like the code below? I can't test this code; it is just a scenario. Thank you for your help.

        backend cachewebsites {
            .host = "192.168.0.1";
            .port = "80";
        }

        sub vcl_recv {
            if (req.http.host == "www.example1.com" ||
                req.http.host == "www.example2.com" ||
                req.http.host == "www.example3.com") {
                set req.backend = cachewebsites;
            }
        }

  • Request Coalescing in Nginx

    - by Marcel Jackwerth
    I have an image resize server sitting behind an nginx server. On a cold cache, two clients requesting the same file could trigger two resize jobs:

        client-01.net  GET /resize.do/avatar-1234567890/300x200.png
        client-02.net  GET /resize.do/avatar-1234567890/300x200.png

    It would be great if only one of the requests went through to the backend in this situation (while the other client is put on hold). In Varnish there seems to be such a feature, called request coalescing, but that seems to be a Varnish-specific term. Is there something similar for nginx?
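
    Newer nginx releases do have an equivalent knob: proxy_cache_lock, which lets only one request per cache key through to the upstream while other requests for the same key wait for the cached response. A hedged sketch; the zone name and upstream are placeholders.

        proxy_cache_path /var/cache/nginx/resize keys_zone=resize:10m max_size=1g;

        server {
            location /resize.do/ {
                proxy_pass http://resize_backend;
                proxy_cache              resize;
                proxy_cache_valid        200 10m;
                proxy_cache_lock         on;   # coalesce concurrent misses
                proxy_cache_lock_timeout 5s;   # waiters give up after this
            }
        }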

  • How do I rewrite *.example.com to www.example.com?

    - by Lekensteyn
    In my network, I have some Ubuntu machines which need to download files from nl.archive.ubuntu.com. Since it's quite a waste of time to download everything multiple times, I've set up a squid proxy for caching the data. Another use for this proxy was rewriting requests for archive.ubuntu.com or *.archive.ubuntu.com to nl.archive.ubuntu.com, because this mirror is faster than the US mirrors. This worked quite well, but after a recent reinstall of my caching machine, the configuration was lost. I remember having a separate Perl program for handling this rewrite. How do I set up such a squid proxy, which rewrites the host *.example.com to www.example.com and caches the result of the latter?
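
    A sketch of the usual shape of that setup; the script is a reconstruction of how squid redirectors generally work, not the lost original. Squid hands each requested URL to the rewriter on stdin and uses whatever comes back on stdout.

        # squid.conf
        url_rewrite_program /usr/local/bin/rewrite.pl
        url_rewrite_children 5

        # /usr/local/bin/rewrite.pl
        #!/usr/bin/perl
        $| = 1;                    # unbuffered output, required by squid
        while (<STDIN>) {
            my ($url) = split;     # first field of each request line is the URL
            $url =~ s{^http://([\w-]+\.)?archive\.ubuntu\.com}
                     {http://nl.archive.ubuntu.com};
            print "$url\n";        # rewritten (or unchanged) URL back to squid
        }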

  • Nginx save file to local disk

    - by Dean Chen
    My case: from our China office, we have to access a web server at the USA headquarters over the Internet, but the network is too slow and we download many big image files, so all our developers have to wait. We want to set up nginx as a reverse proxy whose upstream is the USA web server. The question is: can we make nginx save the image files from the USA web server onto its local disk? I mean, have nginx act as a cache server.
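
    This is what nginx's proxy cache is for, so a hedged sketch may help; host names, sizes, and paths are placeholders.

        proxy_cache_path /var/cache/nginx/usa levels=1:2 keys_zone=usa:50m
                         max_size=10g inactive=30d;

        server {
            listen 80;
            location / {
                proxy_pass http://usa.example.com;
                proxy_cache           usa;
                proxy_cache_valid     200 7d;           # keep good responses a week
                proxy_cache_use_stale error timeout;    # ride out WAN hiccups
            }
        }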

  • How do I turn off caching in IIS7?

    - by jammus
    Hello. I'm developing an ASP Classic site under Windows 7 (form a queue, ladies). The problem is that IIS seems to be making heavy use of its cache for both static and dynamic content, which really conflicts with my 'make a small change, alt-tab, hit Ctrl+F5' development style. Changes made to .asp files may take two or three refreshes to show up, whereas changes to .js files can take 20 times as many. How do I go about turning caching off on my development machine? Cheers. (in b4 "stop using ASP Classic")
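
    A hedged sketch of the IIS7-level switches, suitable for a dev box only: output caching and kernel-mode caching can be disabled in web.config under system.webServer, and static files can be stamped as uncacheable.

        <configuration>
          <system.webServer>
            <!-- disable IIS output caching and kernel-mode caching -->
            <caching enabled="false" enableKernelCache="false" />
            <!-- and tell clients not to cache static files either -->
            <staticContent>
              <clientCache cacheControlMode="DisableCache" />
            </staticContent>
          </system.webServer>
        </configuration>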

  • How does the TuneIn mobile application resume a live stream, when the connection breaks?

    - by navnav
    So I have this TuneIn application on my phone, which allows me to listen to many radio stations. It also allows me to rewind and pause the live stream, which I know is doable with today's technology. What puzzles me, though, is that when the internet connection breaks, the live stream stops, but when the connection comes back, the app can just pick up from where it broke. This puzzles me because there is no way for the app to cache the live stream while the connection is gone, yet it can restart the live stream from the point where it broke. Is it possible to start a live stream at a specific point in the audio stream? Or are they using some other method?

  • Can Squid 2.7 proxy gzipped content

    - by Tom Styles
    We have a forward proxy for our network, which is Squid 2.7, managed for us by a third party. We noticed recently that HTTP requests going from our network to the web were having the Accept-Encoding header removed. This was resulting in all web traffic across our network (approx. 8000+ PCs) being uncompressed, even though the browsers and servers on each end were capable. We have asked the third party to look into this, and they have said it is because Squid 2.7 does not support compression. I understand this to be true, but I was under the impression that the compression happened on the web server rather than the proxy. So: can Squid 2.7 proxy and/or cache content that is gzipped? If it can, how or why might it be configured such that the Accept-Encoding header is being removed?
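
    For context, a hedged sketch of the Squid 2.x directive that can cause exactly this; whether the third party actually uses it is an assumption. header_access controls which request headers are forwarded, and denying Accept-Encoding strips it.

        # squid.conf (2.x syntax) - a line like this would strip the header:
        header_access Accept-Encoding deny all

        # letting it through (the default) lets the origin server negotiate
        # gzip; squid just relays, and can cache, the compressed bytes:
        header_access Accept-Encoding allow all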

  • Benefits of a RAID BBU in addition to a double UPS + PS system

    - by Wikser
    Today I roughly measured the benefits of enabling write-back on the RAID controller of a server at work. It has no RAID battery backup unit (BBU), so the write cache is currently disabled. As the server is nowhere near capacity, the results in most tests were spectacular, e.g.:

        Database CRUD:            before 35s,    after 4s
        Saving a 1 MB Excel file: before 20s (!), after 0.5s

    Of course, having a BBU is always recommended, but what are the main benefits of installing a BBU in a system which has redundant power supplies and is attached to UPSs? Does this depend on the type of system (database, file, terminal)? What is a realistic failure scenario that a BBU would prevent? Thanks in advance!

  • How to view dirty page count in Windows Server 2003

    - by Mark Wilkins
    Is there a way to view the number of dirty pages (cached file pages that need to still be written to disk) in Windows Server 2003? In Windows 7, for example, I can use Performance Monitor and use the "Dirty Pages" counter (one of the cache counters). This counter does not seem to be available in Server 2003. Also on Windows 7 (and other later systems), I can use Sysinternals RAMMap and effectively see the dirty pages on a file-by-file basis. Is there something similar for Server 2003?

  • ASP.NET GridView second header row to span main header row

    - by Dana Robinson
    I have an ASP.NET GridView which has columns that look like this:

        | Foo | Bar | Total1 | Total2 | Total3 |

    Is it possible to create a header on two rows that looks like this?

        |     |     |    Totals    |
        | Foo | Bar |  1 |  2 |  3 |

    The data in each row will remain unchanged, as this is just to pretty up the header and decrease the horizontal space that the grid takes up. The entire GridView is sortable, in case that matters. I don't intend for the added "Totals" spanning column to have any sort functionality.

    Edit: Based on one of the articles given below, I created a class which inherits from GridView and adds the second header row in.

        namespace CustomControls
        {
            public class TwoHeadedGridView : GridView
            {
                protected Table InnerTable
                {
                    get
                    {
                        if (this.HasControls())
                        {
                            return (Table)this.Controls[0];
                        }
                        return null;
                    }
                }

                protected override void OnDataBound(EventArgs e)
                {
                    base.OnDataBound(e);
                    this.CreateSecondHeader();
                }

                private void CreateSecondHeader()
                {
                    GridViewRow row = new GridViewRow(0, -1, DataControlRowType.Header, DataControlRowState.Normal);

                    TableCell left = new TableHeaderCell();
                    left.ColumnSpan = 3;
                    row.Cells.Add(left);

                    TableCell totals = new TableHeaderCell();
                    totals.ColumnSpan = this.Columns.Count - 3;
                    totals.Text = "Totals";
                    row.Cells.Add(totals);

                    this.InnerTable.Rows.AddAt(0, row);
                }
            }
        }

    In case you are new to ASP.NET like I am, I should also point out that you need to:

    1) Register your class by adding a line like this to your web form:

        <%@ Register TagPrefix="foo" NameSpace="CustomControls" Assembly="__code" %>

    2) Change asp:GridView in your previous markup to foo:TwoHeadedGridView. Don't forget the closing tag.

    Another edit: You can also do this without creating a custom class. Simply add an event handler for the DataBound event of your grid, like this:

        protected void gvOrganisms_DataBound(object sender, EventArgs e)
        {
            GridView grid = sender as GridView;
            if (grid != null)
            {
                GridViewRow row = new GridViewRow(0, -1, DataControlRowType.Header, DataControlRowState.Normal);

                TableCell left = new TableHeaderCell();
                left.ColumnSpan = 3;
                row.Cells.Add(left);

                TableCell totals = new TableHeaderCell();
                totals.ColumnSpan = grid.Columns.Count - 3;
                totals.Text = "Totals";
                row.Cells.Add(totals);

                Table t = grid.Controls[0] as Table;
                if (t != null)
                {
                    t.Rows.AddAt(0, row);
                }
            }
        }

    The advantage of the custom control is that you can see the extra header row in the design view of your web form. The event handler method is a bit simpler, though.

  • JavaScript select box hanging on second select in IE7

    - by bsandrabr
    I have a drop-down select box inside a div. When the user clicks Change, a drop-down box appears next to the Change/Submit button; the user makes a selection, which updates the DB, and the selection then appears in place of the drop-down. All works fine in IE8 and Firefox, but in IE7 it allows one selection (there are several identical drop-downs), and the second time a selection is made it hangs on "Please wait". This is the relevant code:

        <td width=200>
            <input type="button" onclick="startChanging(this)" value="Change" />
        </td>

        <script type="text/javascript">
        var selectBox, isEditing = false;
        var recordvalue;

        if (window.XMLHttpRequest) {
            recordvalue = new XMLHttpRequest();
        } else if (window.ActiveXObject) {
            try {
                recordvalue = new ActiveXObject('Microsoft.XMLHTTP');
            } catch (e) {}
        }

        window.onload = function () {
            selectBox = document.getElementById('changer');
            selectBox.id = '';
            selectBox.parentNode.removeChild(selectBox);
        };

        function startChanging(whatButton) {
            if (isEditing && isEditing != whatButton) { return; }  // no editing of other entries
            if (isEditing == whatButton) { changeSelect(whatButton); return; }  // this time, act as "submit"
            isEditing = whatButton;
            whatButton.value = 'Submit';
            var theRow = whatButton.parentNode.parentNode;
            var stateCell = theRow.cells[3];   // the cell that says "present"
            stateCell.className = 'editing';   // so you can use CSS to remove the background colour
            stateCell.replaceChild(selectBox, stateCell.firstChild);  // PRESENT is replaced with the select input
            selectBox.selectedIndex = 0;
        }

        function changeSelect(whatButton) {
            isEditing = true;  // don't allow it to be clicked until submission is complete
            whatButton.value = 'Change';
            var stateCell = selectBox.parentNode;
            var theRow = stateCell.parentNode;
            var editid = theRow.cells[0].firstChild.firstChild.nodeValue;  // text inside the first cell
            var value = selectBox.firstChild.options[selectBox.firstChild.selectedIndex].value;  // the option they chose
            selectBox.parentNode.replaceChild(document.createTextNode('Please wait...'), selectBox);
            if (!recordvalue) {  // allow fallback to basic HTTP
                location.href = 'getupdate.php?id=' + editid + '&newvalue=' + value;
            } else {
                recordvalue.onreadystatechange = function () {
                    if (recordvalue.readyState != 4) { return; }
                    if (recordvalue.status >= 300) {
                        alert('An error occurred when trying to update');
                    }
                    isEditing = false;
                    newState = recordvalue.responseText.split("|");
                    stateCell.className = newState[0];
                    stateCell.firstChild.nodeValue = newState[1] || 'Server response was not correct';
                };
                recordvalue.open('GET', "getupdate.php?id=" + editid + "&newvalue=" + value, true);
                recordvalue.send(null);
            }
        }
        </script>

    If anyone has any idea why this is happening, I'd be very grateful.

  • JSON Parsing - How to show second-level ListView

    - by Sophie
    I am parsing JSON data into a ListView, and have successfully parsed the first level of the JSON in MainActivity.java, where I show a list of main locations:

        Inner Locations
        Outer Locations

    Now I want that, whenever I tap Inner Locations, the SecondActivity shows Delhi and NCR in a list. The same goes for Outer Locations; in that case the tap should show United States. The JSON looks like:

        {
            "all": [
                {
                    "title": "Inner Locations",
                    "maps": [
                        {
                            "title": "Delhi",
                            "markers": [
                                { "name": "Connaught Place", "latitude": 28.632777800000000000, "longitude": 77.219722199999980000 },
                                { "name": "Lajpat Nagar", "latitude": 28.565617900000000000, "longitude": 77.243389100000060000 }
                            ]
                        },
                        {
                            "title": "NCR",
                            "markers": [
                                { "name": "Gurgaon", "latitude": 28.440658300000000000, "longitude": 76.987347699999990000 },
                                { "name": "Noida", "latitude": 28.570000000000000000, "longitude": 77.319999999999940000 }
                            ]
                        }
                    ]
                },
                {
                    "title": "Outer Locations",
                    "maps": [
                        {
                            "title": "United States",
                            "markers": [
                                { "name": "Virgin Islands", "latitude": 18.335765000000000000, "longitude": -64.896335000000020000 },
                                { "name": "Vegas", "latitude": 36.114646000000000000, "longitude": -115.172816000000010000 }
                            ]
                        }
                    ]
                }
            ]
        }

    Note: whenever I tap on any of the list items in the first activity, I don't get any list in SecondActivity. Why?

    MainActivity.java:

        @Override
        protected Void doInBackground(Void... params) {
            // Create an array
            arraylist = new ArrayList<HashMap<String, String>>();
            // Retrieve JSON Objects from the given URL address
            jsonobject = JSONfunctions.getJSONfromURL("http://10.0.2.2/locations.json");
            try {
                // Locate the array name in JSON
                jsonarray = jsonobject.getJSONArray("all");
                for (int i = 0; i < jsonarray.length(); i++) {
                    HashMap<String, String> map = new HashMap<String, String>();
                    jsonobject = jsonarray.getJSONObject(i);
                    // Retrieve JSON Objects
                    map.put("title", jsonobject.getString("title"));
                    arraylist.add(map);
                }
            } catch (JSONException e) {
                Log.e("Error", e.getMessage());
                e.printStackTrace();
            }
            return null;
        }

        @Override
        protected void onPostExecute(Void args) {
            // Locate the listview in listview_main.xml
            listview = (ListView) findViewById(R.id.listview);
            // Pass the results into ListViewAdapter.java
            adapter = new ListViewAdapter(MainActivity.this, arraylist);
            // Set the adapter to the ListView
            listview.setAdapter(adapter);
            // Close the progressdialog
            mProgressDialog.dismiss();
            listview.setOnItemClickListener(new OnItemClickListener() {
                @Override
                public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                    Toast.makeText(MainActivity.this, String.valueOf(position), Toast.LENGTH_LONG).show();
                    Intent sendtosecond = new Intent(MainActivity.this, SecondActivity.class);
                    // Pass the tapped title along
                    sendtosecond.putExtra("title", arraylist.get(position).get(MainActivity.TITLE));
                    Log.d("Tapped Item::", arraylist.get(position).get(MainActivity.TITLE));
                    startActivity(sendtosecond);
                }
            });
        }

    SecondActivity.java:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Get the view from listview_main.xml
            setContentView(R.layout.listview_main);
            Intent in = getIntent();
            strReceived = in.getStringExtra("title");
            Log.d("Received Data::", strReceived);
            // Execute DownloadJSON AsyncTask
            new DownloadJSON().execute();
        }

        // DownloadJSON AsyncTask
        private class DownloadJSON extends AsyncTask<Void, Void, Void> {
            @Override
            protected void onPreExecute() {
                super.onPreExecute();
            }

            @Override
            protected Void doInBackground(Void... params) {
                // Create an array
                arraylist = new ArrayList<HashMap<String, String>>();
                // Retrieve JSON Objects from the given URL address
                jsonobject = JSONfunctions.getJSONfromURL("http://10.0.2.2/locations.json");
                try {
                    // Locate the array name in JSON
                    jsonarray = jsonobject.getJSONArray("maps");
                    for (int i = 0; i < jsonarray.length(); i++) {
                        HashMap<String, String> map = new HashMap<String, String>();
                        jsonobject = jsonarray.getJSONObject(i);
                        // Retrieve JSON Objects
                        map.put("title", jsonobject.getString("title"));
                        arraylist.add(map);
                    }
                } catch (JSONException e) {
                    Log.e("Error", e.getMessage());
                    e.printStackTrace();
                }
                return null;
            }
        }

  • Second query to SQLite (on iPhone) errors.

    - by Luke
    Hi all, on the iPhone I am developing a class for a cart that connects directly to a database. To view the cart, all items can be pulled from the database; however, it seems that removing them doesn't work. It is surprising to me because the error occurs while connecting to the database, except not the second time I connect, even after the DB has been closed.

        #import "CartDB.h"
        #import "CartItem.h"

        @implementation CartDB

        @synthesize database, databasePath;

        - (NSMutableArray *)getAllItems {
            NSMutableArray *items = [[NSMutableArray alloc] init];
            if ([self openDatabase]) {
                const char *sqlStatement = "SELECT * FROM items;";
                sqlite3_stmt *compiledStatement;
                if (sqlite3_prepare_v2(cartDatabase, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) {
                    while (sqlite3_step(compiledStatement) == SQLITE_ROW) {
                        int rowId = sqlite3_column_int(compiledStatement, 0);
                        int productId = sqlite3_column_int(compiledStatement, 1);
                        NSString *features = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 2)];
                        int quantity = sqlite3_column_int(compiledStatement, 3);
                        CartItem *cartItem = [[CartItem alloc] initWithRowId:rowId productId:productId features:features quantity:quantity];
                        [items addObject:cartItem];
                        [cartItem release];
                    }
                }
                sqlite3_finalize(compiledStatement);
            }
            [self closeDatabase];
            return items;
        }

        - (BOOL)removeCartItem:(CartItem *)item {
            sqlite3_stmt *deleteStatement;
            [self openDatabase];
            const char *sql = "DELETE FROM items WHERE id = ?";
            if (sqlite3_prepare_v2(cartDatabase, sql, -1, &deleteStatement, NULL) != SQLITE_OK) {
                return NO;
            }
            sqlite3_bind_int(deleteStatement, 1, item.rowId);
            if (SQLITE_DONE != sqlite3_step(deleteStatement)) {
                sqlite3_reset(deleteStatement);
                [self closeDatabase];
                return NO;
            } else {
                sqlite3_reset(deleteStatement);
                [self closeDatabase];
                return YES;
            }
        }

        - (BOOL)openDatabase {
            if (sqlite3_open([databasePath UTF8String], &cartDatabase) == SQLITE_OK) {
                return YES;
            } else {
                return NO;
            }
        }

        - (void)closeDatabase {
            sqlite3_close(cartDatabase);
        }

    The error occurs on the line where the connection is opened in openDatabase. Any ideas? Need to flush something? Something gets autoreleased? I really can't figure it out.

    --Edit-- The error that I receive is GDB: Program received signal "EXC_BAD_ACCESS".

    --Edit-- I ended up just connecting in the init and closing in the free methods, which might not be the proper way, but that's another question altogether; so it's effectively persistent instead of connecting multiple times. Still, it would be nice to know what was up with this for future reference.

  • APC not caching many files

    - by tetranz
    Hello, I have a Drupal site running on a VPS at Linode with PHP 5.2.10 and APC 3.1.6. It never caches more than about 25 files and barely uses any of its available memory, yet Drupal has hundreds of PHP files. I have another server where APC seems to work well and does indeed cache hundreds of files. The only difference with that site is that it runs Ubuntu 10.04 and PHP 5.3.2; the config settings are the same. What could be wrong? I'll paste the config from apc.php below. This is after hitting multiple parts of Drupal. Thanks.

        APC Version          3.1.6
        PHP Version          5.2.10-2ubuntu6.5
        APC Host             xxx.example.com
        Server Software      Apache/2.2.12 (Ubuntu)
        Shared Memory        1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
        Start Time           2010/12/02 11:32:17
        Uptime               3 minutes
        File Upload Support  1

    File Cache Information:

        Cached Files                 21 (1.4 MBytes)
        Hits                         169
        Misses                       21
        Request Rate (hits, misses)  1.00 cache requests/second
        Hit Rate                     0.89 cache requests/second
        Miss Rate                    0.11 cache requests/second
        Insert Rate                  0.17 cache requests/second
        Cache full count             0

    User Cache Information:

        Cached Variables             0 (0.0 Bytes)
        Hits                         0
        Misses                       0
        Request Rate (hits, misses)  0.00 cache requests/second
        Hit Rate                     0.00 cache requests/second
        Miss Rate                    0.00 cache requests/second
        Insert Rate                  0.00 cache requests/second
        Cache full count             0

    Runtime Settings:

        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           1M
        apc.mmap_file_mask
        apc.num_files_hint          1000
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.shm_segments            1
        apc.shm_size                32M
        apc.slam_defense            1
        apc.stat                    1
        apc.stat_ctime              0
        apc.ttl                     0
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                0
        apc.write_lock              1

  • AppFabric Cache - An existing connection was forcibly closed by the remote host

    - by Wallace Breza
    I'm trying to get AppFabric cache up and running on my local development environment. I have Windows Server AppFabric Beta 2 Refresh installed, and the cache cluster and host configured and started, running on Windows 7 64-bit. I'm running my MVC2 website in a local IIS website under a v4.0 app pool in integrated mode.

        HostName : CachePort   Service Name              Service Status   Version Info
        --------------------   ------------              --------------   ------------
        SN-3TQHQL1:22233       AppFabricCachingService   UP               1 [1,1][1,1]

    I have my web.config configured with the following:

        <configSections>
          <section name="dataCacheClient"
                   type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                   allowLocation="true"
                   allowDefinition="Everywhere"/>
        </configSections>

        <dataCacheClient>
          <hosts>
            <host name="SN-3TQHQL1" cachePort="22233" />
          </hosts>
        </dataCacheClient>

    I'm getting an error when I attempt to initialize the DataCacheFactory:

        protected CacheService()
        {
            _cacheFactory = new DataCacheFactory();   // <-- Error here
            _defaultCache = _cacheFactory.GetDefaultCache();
        }

    I'm getting the ASP.NET yellow error screen with the following:

        An existing connection was forcibly closed by the remote host

        Description: An unhandled exception occurred during the execution of the
        current web request. Please review the stack trace for more information
        about the error and where it originated in the code.

        Exception Details: System.Net.Sockets.SocketException: An existing
        connection was forcibly closed by the remote host

        Source Error:
        Line 21:     protected CacheService()
        Line 22:     {
        Line 23:         _cacheFactory = new DataCacheFactory();
        Line 24:         _defaultCache = _cacheFactory.GetDefaultCache();
        Line 25:     }

  • Java OutOfMemoryError due to Linux RAM disk cache not freed

    - by Markus Jevring
    The process will run fine all day, then, bam, without warning, it will throw this error, sometimes seemingly in the middle of doing nothing. It will happen at seemingly random times during the day. I checked to see if anything else was running on the machine, like scheduled backups or something, but found nothing. The machine has enough physical memory (2 GB, with about 1 GB free for a 300-500 MB load) and has sufficient -Xmx specified. According to our sysadmin, the problem is that the RAM the kernel uses as a disk cache (apparently all but 8 MB) is not freed when the JVM needs to allocate memory, so the JVM process throws an OutOfMemoryError. This could be because Java asks the kernel whether enough memory is available before allocating and finds that it is insufficient, resulting in a crash. I would like to think, however, that Java simply tries to allocate the memory via the kernel, and when the kernel gets such a request, it makes room for the application by throwing out some of the disk cache. Has anyone else run into this issue, and if so, what was the error, and how did you solve it? We are currently using jdk1.6.0_20 on SLES 10 SP2, Linux 2.6.16.60-0.42.9-smp, in VMware ESX.
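
    Two hedged diagnostics for the page-cache theory, standard Linux commands rather than a fix: free shows how much "used" memory is actually reclaimable cache, and drop_caches empties the page cache on demand so you can see whether the OutOfMemoryError still occurs with the cache gone.

        # how much memory is buffers/cache vs. truly used by processes
        free -m

        # drop the page cache (root only; purely a diagnostic experiment)
        sync
        echo 1 > /proc/sys/vm/drop_caches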

  • jQuery image preload/cache halting browser

    - by Nathan Loding
    In short, I have a very large photo gallery and I'm trying to cache as many of the thumbnail images as I can when the first page loads. There could be 1000+ thumbnails. First question: is it stupid to try to preload/cache that many? Second question: when the preload() function fires, the entire browser stops responding for a minute or two, at which time the callback fires, so the preload is complete. Is there a way to accomplish "smart preloading" that doesn't impede the user experience when attempting to load this many objects? The $.preLoadImages function is taken from here: http://binarykitten.me.uk/dev/jq-plugins/107-jquery-image-preloader-plus-callbacks.html. Here's how I'm implementing it:

        $(document).ready(function() {
            setTimeout("preload()", 5000);
        });

        function preload() {
            var images = ['image1.jpg', ... 'image1000.jpg'];
            $.preLoadImages(images, function() {
                alert('done');
            });
        }

    1000 images is a lot. Am I asking too much?
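
    One direction for the "smart preloading" question, a sketch of my own using the same plugin callback; the batch size and delay are arbitrary. Feed the preloader small batches and yield back to the UI between them, instead of handing it all 1000 URLs at once.

        // hypothetical helper, not part of the plugin
        function preloadInBatches(images, batchSize) {
            var i = 0;
            function next() {
                var batch = images.slice(i, i + batchSize);
                if (batch.length === 0) { return; }   // all done
                $.preLoadImages(batch, function() {
                    i += batchSize;
                    setTimeout(next, 250);            // let the UI breathe
                });
            }
            next();
        }

        // e.g. preloadInBatches(images, 25);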

  • Solution for cleaning an image cache directory on the SD card

    - by synic
    I've got an app that is heavily based on remote images. They are usually displayed alongside some data in a ListView. A lot of these images are new, and a lot of the old ones will never be seen again. I'm currently storing all of these images on the SD card in a custom cache directory (a la evancharlton's Magnatune app). I noticed that after about 10 days, the directory totals ~30 MB. This is quite a bit more than I expected, and it leads me to believe that I need to come up with a good solution for cleaning out old files... and I just can't think of a great one. Maybe you can help. These are the ideas that I've had:

    1. Delete old files. When the app starts, start a background thread and delete all files older than X days. This seems to pose a problem, though, in that if the user actively uses the app, this could make the device sluggish if there are hundreds of files to delete.

    2. After creating the files on the SD card, call new File("/path/to/file").deleteOnExit();. This will cause all files to be deleted when the VM exits (I don't even know if this method works on Android). This is acceptable because, even though the files need to be cached for the session, they don't need to be cached for the next session. It seems like this will also slow the device down if there are a lot of files to be deleted when the VM exits.

    3. Delete old files, up to a max number of files. Same as #1, but only delete N files at a time. I don't really like this idea, and if the user was very active, it may never be able to catch up and keep the cache directory clean.

    That's about all I've got. Any suggestions would be appreciated.
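
    A fourth option worth sketching (my own suggestion, not from the post): cap the cache by total size rather than age, deleting the oldest-modified files first until the directory is back under budget. Trimming by bytes self-regulates no matter how active the user is.

        import java.io.File;
        import java.util.Arrays;
        import java.util.Comparator;

        public final class CacheTrimmer {
            /** Delete oldest files until cacheDir holds at most maxBytes. */
            public static void trimToSize(File cacheDir, long maxBytes) {
                File[] files = cacheDir.listFiles();
                if (files == null) return;
                // oldest first, by last-modified time
                Arrays.sort(files, new Comparator<File>() {
                    public int compare(File a, File b) {
                        return Long.valueOf(a.lastModified())
                                   .compareTo(Long.valueOf(b.lastModified()));
                    }
                });
                long total = 0;
                for (File f : files) total += f.length();
                for (int i = 0; i < files.length && total > maxBytes; i++) {
                    total -= files[i].length();
                    files[i].delete();
                }
            }
        }

    Run from a background thread at startup (say, trimToSize(cacheDir, 10 * 1024 * 1024) for a 10 MB budget), the cleanup work stays bounded by however far over budget the cache has drifted.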

  • Implementing Model-level caching

    - by Byron
    I was posting some comments in a related question about MVC caching, and some questions about actual implementation came up. How does one implement a Model-level cache that works transparently, without the developer needing to cache manually, yet still remains efficient?

    "I would keep my caching responsibilities firmly within the model. It is none of the controller's or view's business where the model is getting data. All they care about is that when data is requested, data is provided - this is how the MVC paradigm is supposed to work." (Source: post by Jarrod)

    The reason I am skeptical is that caching should usually not be done unless there is a real need, and shouldn't be done for things like search results. So somehow the Model itself has to know whether or not the SELECT statement being issued to it is worthy of being cached. Wouldn't the Model have to be astronomically smart, and/or store statistics on what is queried most often over a long period of time, in order to make that decision accurately? And wouldn't the overhead of all this make the caching useless anyway? Also, how would you uniquely identify one query against another (or, more accurately, one result set against another)? What if you're using prepared statements, with only the parameters changing according to user input? Another poster said this:

    "I would suggest using the md5 hash of your query combined with a serialized version of your input arguments."

    This would require twice the number of serialization operations. I was under the impression that serialization was quite expensive, and for large inputs this might be even worse than just re-querying. And is the minuscule chance of collision worth worrying about? Conceptually, caching in the Model seems like a good idea to me, but it seems that in practice the developer should have direct control over caching and write it into the controller. Thoughts/ideas? Edit: I'm using PHP and MySQL, if that helps to narrow your focus.
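
    To make the key-generation idea concrete, a minimal PHP sketch (my own illustration; run_query is a placeholder for the real DB call, and APC merely stands in for whichever cache store is used): the key is the md5 of the SQL plus the serialized bound parameters, as the other poster suggests.

        <?php
        // build a cache key from the statement text plus its bound parameters
        function cache_key($sql, array $params) {
            return 'q_' . md5($sql . '|' . serialize($params));
        }

        $sql    = 'SELECT * FROM users WHERE id = ?';
        $params = array(42);
        $key    = cache_key($sql, $params);

        $rows = apc_fetch($key, $hit);
        if (!$hit) {
            $rows = run_query($sql, $params);   // placeholder for the real DB call
            apc_store($key, $rows, 300);        // cache the result set for 5 minutes
        }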
