Search Results

Search found 6412 results on 257 pages for 'intersystems cache'.

Page 40 of 257

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing almost daily table index corruption on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but six months after we installed Advantage Database Server (which gives our website access to some of the tables) we started to get index corruption problems, and we don't know what to blame. We've tried to exclude all possible causes of the corruption. All users now work in terminal mode, so network problems can't be the cause, and OpLocks can't be a reason either. We changed hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS, because it has to keep running. Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the intermediate change. Is that possible in theory? Could the problem be caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Is it possible that each terminal user has its own cache? Or maybe RAID caching somehow interferes with Windows Server caching? Or are there special Windows Server settings for DBF tables that are written simultaneously by several terminal users? Is there perhaps a way to turn off caching for certain files so we can test this? Sometimes we get an index crash twice a day; sometimes everything is fine for five days in a row. Today only one user was working with the database in the evening (usually 30-50 users work simultaneously during working hours), so the server was under almost zero load. Synchronization with the website runs every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does run UPDATE/INSERT queries, but less frequently: not during the regular synchronizations, only when an order is placed by a website visitor. Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

  • Cache Simulator in C

    - by DuffDuff
    Ok, this is only my second question, and it's quite a doozy. It's for a school assignment, but no one (including the TAs) seems to be able to help me. It's kind of a tall order, but I'm not sure where else to turn. Essentially the assignment was to make a cache simulator. This version is direct-mapped and is actually only a small portion of the whole project, but if I can't even get this down I have no chance with the other associativities. I'm posting my whole code because I don't want to make any assumptions about where the problem is. This is the test case: http://www.mediafire.com/?ty5dnihydnw and you run the following command:

        ./sims 512 direct 32 fifo wt pinatrace.out

    You're supposed to get:

        hits: 604037 misses: 138349 writes: 239269 reads: 138349

    But I get:

        Hits: 587148 Misses: 155222 Writes: 239261 Reads: 155222

    If anyone could at least point me in the right direction it would be greatly appreciated. I've been stuck on this for about 12 hours.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <math.h>

        struct myCache {
            int valid;
            char *tag;
            char *block;
        };

        /* sim [-h] <cache size> <associativity> <block size> <replace alg> <write policy> <trace file> */

        // God willing I come up with a better Hex to Bin conversion that maintains the beginning 0s...
        void hex2bin(char input[], char output[])
        {
            int i;
            int a = 0;
            int b = 1;
            int c = 2;
            int d = 3;
            int x = 4;
            int size;

            size = strlen(input);
            for (i = 0; i < size; i++) {
                if (input[i] == '0') {
                    output[i*x + a] = '0'; output[i*x + b] = '0'; output[i*x + c] = '0'; output[i*x + d] = '0';
                } else if (input[i] == '1') {
                    output[i*x + a] = '0'; output[i*x + b] = '0'; output[i*x + c] = '0'; output[i*x + d] = '1';
                } else if (input[i] == '2') {
                    output[i*x + a] = '0'; output[i*x + b] = '0'; output[i*x + c] = '1'; output[i*x + d] = '0';
                } else if (input[i] == '3') {
                    output[i*x + a] = '0'; output[i*x + b] = '0'; output[i*x + c] = '1'; output[i*x + d] = '1';
                } else if (input[i] == 'x') {
                    output[i*x + a] = '0'; output[i*x + b] = '1'; output[i*x + c] = '0'; output[i*x + d] = '0';
                } else if (input[i] == '5') {
                    output[i*x + a] = '0'; output[i*x + b] = '1'; output[i*x + c] = '0'; output[i*x + d] = '1';
                } else if (input[i] == '6') {
                    output[i*x + a] = '0'; output[i*x + b] = '1'; output[i*x + c] = '1'; output[i*x + d] = '0';
                } else if (input[i] == '7') {
                    output[i*x + a] = '0'; output[i*x + b] = '1'; output[i*x + c] = '1'; output[i*x + d] = '1';
                } else if (input[i] == '8') {
                    output[i*x + a] = '1'; output[i*x + b] = '0'; output[i*x + c] = '0'; output[i*x + d] = '0';
                } else if (input[i] == '9') {
                    output[i*x + a] = '1'; output[i*x + b] = '0'; output[i*x + c] = '0'; output[i*x + d] = '1';
                } else if (input[i] == 'a') {
                    output[i*x + a] = '1'; output[i*x + b] = '0'; output[i*x + c] = '1'; output[i*x + d] = '0';
                } else if (input[i] == 'b') {
                    output[i*x + a] = '1'; output[i*x + b] = '0'; output[i*x + c] = '1'; output[i*x + d] = '1';
                } else if (input[i] == 'c') {
                    output[i*x + a] = '1'; output[i*x + b] = '1'; output[i*x + c] = '0'; output[i*x + d] = '0';
                } else if (input[i] == 'd') {
                    output[i*x + a] = '1'; output[i*x + b] = '1'; output[i*x + c] = '0'; output[i*x + d] = '1';
                } else if (input[i] == 'e') {
                    output[i*x + a] = '1'; output[i*x + b] = '1'; output[i*x + c] = '1'; output[i*x + d] = '0';
                } else if (input[i] == 'f') {
                    output[i*x + a] = '1'; output[i*x + b] = '1'; output[i*x + c] = '1'; output[i*x + d] = '1';
                }
            }
            output[32] = '\0';
        }

        int main(int argc, char* argv[])
        {
            FILE *tracefile;
            char readwrite;
            int trash;
            int cachesize, blocksize, setnumber, blockbytes, setbits, blockbits, tagsize;
            int m;
            int count = 0, count2 = 0, count3 = 0;
            int i, j;
            int xindex, jindex, kindex, lindex;
            int setadd, totalset;
            int writeMiss = 0, writeHit = 0, cacheMiss = 0, cacheHit = 0;
            int read = 0, write = 0;
            int size, extra;
            char bbits[100], sbits[100], tbits[100];
            char output[100], input[100], origtag[100];

            if (argc != 7) {
                if (strcmp(argv[0], "-h")) {
                    printf("./sim2 <cache size> <associativity> <block size> <replace alg> <write policy> <trace file>\n");
                    return 0;
                } else {
                    fprintf(stderr, "Error: wrong number of parameters.\n");
                    return -1;
                }
            }

            tracefile = fopen(argv[6], "r");
            if (tracefile == NULL) {
                fprintf(stderr, "Error: File is NULL.\n");
                return -1;
            }

            // Determining size of sbits, bbits, and tag
            cachesize = atoi(argv[1]);
            blocksize = atoi(argv[3]);
            setnumber = (cachesize / blocksize);
            printf("setnumber: %d\n", setnumber);
            setbits = (round((log(setnumber)) / (log(2))));
            printf("sbits: %d\n", setbits);
            blockbits = log(blocksize) / log(2);
            printf("bbits: %d\n", blockbits);
            tagsize = 32 - (blockbits + setbits);
            printf("t: %d\n", tagsize);

            struct myCache newCache[setnumber];

            // Allocating space for tag bits, initializing tag and valid to 0s
            for (i = 0; i < setnumber; i++) {
                newCache[i].tag = (char *)malloc(sizeof(char) * (tagsize + 1));
                for (j = 0; j < tagsize; j++) {
                    newCache[i].tag[j] = '0';
                }
                newCache[i].valid = 0;
            }

            while (fgetc(tracefile) != '#') {
                setadd = 0;
                totalset = 0;

                // read in file
                fseek(tracefile, -1, SEEK_CUR);
                fscanf(tracefile, "%x: %c %s\n", &trash, &readwrite, origtag);

                // shift input hex
                size = strlen(origtag);
                extra = (10 - size);
                for (i = 0; i < extra; i++)
                    input[i] = '0';
                for (i = extra, j = 0; i < (size - (2 - extra)); j++, i++)
                    input[i] = origtag[j + 2];
                input[8] = '\0';

                // Convert hex to binary
                hex2bin(input, output);

                // Resolving the address into tbits, sbits, bbits
                for (xindex = 0, jindex = (32 - blockbits); jindex < 32; jindex++, xindex++) {
                    bbits[xindex] = output[jindex];
                }
                bbits[xindex] = '\0';
                for (xindex = 0, kindex = (32 - (blockbits + setbits)); kindex < 32 - (blockbits); kindex++, xindex++) {
                    sbits[xindex] = output[kindex];
                }
                sbits[xindex] = '\0';
                for (xindex = 0, lindex = 0; lindex < (32 - (blockbits + setbits)); lindex++, xindex++) {
                    tbits[xindex] = output[lindex];
                }
                tbits[xindex] = '\0';

                // Convert set bits from char array into ints
                for (xindex = 0, kindex = (setbits - 1); xindex < setbits; xindex++, kindex--) {
                    if (sbits[xindex] == '1') setadd = 1;
                    if (sbits[xindex] == '0') setadd = 0;
                    setadd = setadd * pow(2, kindex);
                    totalset += setadd;
                }

                // Calculating hits and misses
                if (newCache[totalset].valid == 0) {
                    newCache[totalset].valid = 1;
                    strcpy(newCache[totalset].tag, tbits);
                } else if (newCache[totalset].valid == 1) {
                    if (strcmp(newCache[totalset].tag, tbits) == 0) {
                        if (readwrite == 'W') { cacheHit++; write++; }
                        if (readwrite == 'R') cacheHit++;
                    } else {
                        if (readwrite == 'R') { cacheMiss++; read++; }
                        if (readwrite == 'W') { cacheMiss++; read++; write++; }
                        strcpy(newCache[totalset].tag, tbits);
                    }
                }
            }

            printf("Hits: %d\n", cacheHit);
            printf("Misses: %d\n", cacheMiss);
            printf("Writes: %d\n", write);
            printf("Reads: %d\n", read);
        }
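    For comparison, the tag/set/offset split that the code above performs by slicing a binary string can also be done with a few bit operations. Here is a minimal sketch of that decomposition (written in Java purely as an illustration; the class and method names are mine, not part of the assignment):

        // Decomposes a 32-bit address for a direct-mapped cache with bit arithmetic.
        // cacheSize and blockSize are in bytes and assumed to be powers of two.
        public final class AddressDecoder {
            private final int blockBits; // number of block-offset bits
            private final int setBits;   // number of set-index bits

            public AddressDecoder(int cacheSize, int blockSize) {
                this.blockBits = Integer.numberOfTrailingZeros(blockSize);
                this.setBits = Integer.numberOfTrailingZeros(cacheSize / blockSize);
            }

            public int blockOffset(int address) {
                return address & ((1 << blockBits) - 1);               // lowest bits
            }

            public int setIndex(int address) {
                return (address >>> blockBits) & ((1 << setBits) - 1); // middle bits
            }

            public int tag(int address) {
                return address >>> (blockBits + setBits);              // remaining high bits
            }
        }

    For the 512-byte cache with 32-byte blocks in the question this gives 5 offset bits, 4 index bits and a 23-bit tag; an access is a hit exactly when the line at setIndex(address) is valid and holds the same tag(address).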

  • Load and Web Performance Testing using Visual Studio Ultimate 2010, Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler and some closing thoughts.

    Test Results - I see some creepy worms! In Part 2 we put together a web performance test and a load test; let's now run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

    Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples are maintained for each performance counter. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

    After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary, which is generated after the test has completed or been stopped.

    Analyse the load test results of a previously run load test - We'll see this in the section where I discuss comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over a point to see more details of the test run.

    Generate Report => Test Run Comparisons. The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box, where you select an adapter to analyse the result type; for example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results. This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
    Leaving Thoughts. While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might just escalate the problem; the better suggestion is to try and drill down to the actual root cause. Whenever garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size is allocated on the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: all objects that are supposed to be short-lived live in Gen-0, and the long-lived objects eventually move to Gen-2 as garbage collection goes through.

    As you can see in the picture below, all objects under 85 KB are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process starts: it checks for all the dead objects, marks them as candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected, the rest are moved to Gen-2, and the Gen-0 objects are moved to Gen-1 to free up Gen-0; but by this time the garbage collection process has started to take much more time than it usually does. As I mentioned earlier, while garbage collection is running all page requests that come in are queued up. That may explain why page requests are getting queued up; apart from this, it could also be that you are waiting for a long-running database process to complete.

    Let's explore the heap a bit more. The real crisis case is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache, then run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. That is a great way to free up all the memory in the heap, but it also clears the cache. The problem with this is that if a customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
    How can you address this? Well, there are two ways:

    1. Workaround - An x86 (32-bit) processor only allows a maximum of 4 GB of address space, which means the machine effectively has around 3.4 GB of RAM available; the OS needs about 1.5 GB of it to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default compile your application as 'Any CPU', the application is built such that it will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode, by changing the 'Any CPU' selection to x86 in the team build, you will be able to run it on an x64 machine in x86 mode (via WOW64, Windows on Windows). What that means is that on a machine with 8 GB+ of RAM, once you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

    2. Solution - Now that you have a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working on the actual root cause of the memory leak. But this begs the question: "How do I identify possible memory leaks in my application?" I won't say there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you going in the right direction. Let's have a look at how. Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session Properties page and, as shown in the screen shot below, check the check boxes in the '.NET memory profiling collection' group, namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

    Now if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and IntelliTrace data can be captured alongside, you can drill in and narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with best-of-breed tools in the product.

    Caching. One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, it should keep the data on the web server and, when the user asks for it, serve it from the web server itself. BUT that can have consequences!
    Let's look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the sketch at the end of this section. As long as a user comes in and finds that the cache is empty, the user takes the lock and starts to build the cache: no more concurrency issues. But let's say you are processing 10 requests per second. By the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster, because the next set of 10 users and so on will get data from the cache. BUT if we added another null check after taking the lock, and before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase performance. But didn't I say the code won't be very intuitive? Maybe you should leave a comment: you don't want another developer to come in and wonder why you are checking the cache key for null twice!

    The downside of caching is that you are storing the data outside of the database, and that data can be wrong, because updates applied to the database leave the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, say a single entry point for data updates, you could write some logic to set the cache object to null every time new data is entered. But this approach stops working as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The practical solution to this is micro caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that whenever the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which to micro cache the query results really depends on the application. Say your website gets 10 requests per second: if you retain the cached results for even one minute, you will see immense performance gains, reducing database hits for searching by about 90%. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is no longer valid? Yes, exactly => that is a stale cache hit! These travel sites and price comparison engines are not going to hit the database every time you hit the compare button; instead, the results are served from the cache, because the query results are micro cached. It's a perfect trade-off: by micro caching the results the site gains enormous performance benefits, but every once in a while it annoys a customer because the fare has expired.
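    To make the double null check concrete, here is a minimal sketch of the locking pattern described above (the original post is ASP.NET; this is written in Java for illustration, and all names are invented):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public final class SearchCache {
            private final Map<String, Object> cache = new ConcurrentHashMap<>();
            private final Object buildLock = new Object();

            public Object getResults(String cacheKey) {
                Object results = cache.get(cacheKey);
                if (results == null) {                     // first check, taken without the lock
                    synchronized (buildLock) {
                        results = cache.get(cacheKey);     // second check: another request may
                        if (results == null) {             // have filled the cache while we
                            results = queryDatabase(cacheKey); // waited for the lock
                            cache.put(cacheKey, results);
                        }
                    }
                }
                return results;
            }

            private Object queryDatabase(String cacheKey) {
                // Stands in for the expensive search query the post describes.
                return "results for " + cacheKey;
            }
        }

    Without the second check, the nine waiting requests would each repeat the database query after acquiring the lock; with it, only the first request pays that cost.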
    That trade-off still works in favour of these sites: they are still able to process 30+ page requests per second, which means they cater to their traffic even if they occasionally lose a customer to a competitor - who is probably using a similar caching technique anyway, so what are the odds that the user will not come back sooner or later?

    Recap

    Resources. Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead. Thank you for taking the time to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc.: please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running test controllers and agents in the cloud. See you on the other side! Thank you!

    Share this post: CodeProject

  • CacheManager.getCacheFileBaseDir() always returns null

    - by Leon
    Hi, I've been trying to use the CacheManager for caching some HTTP requests, but it fails every time with a NullPointerException. After some digging I believe I've found out why: CacheManager.getCacheFileBaseDir() always returns null, so when I try to use CacheManager.getCacheFile() or CacheManager.saveCacheFile() they fail. CacheManager.cacheDisabled() returns false :S I hadn't created a cache partition via the AVD manager, so I thought the problem lay there, but after creating a cache partition getCacheFile() still returns null:

        03-16 00:25:16.321: ERROR/AndroidRuntime(296): Caused by: java.lang.NullPointerException
        03-16 00:25:16.321: ERROR/AndroidRuntime(296):     at android.webkit.CacheManager.getCacheFile(CacheManager.java:296)

    What could be the problem? I've got the code posted here: http://pastebin.com/eaJwfXEK But it's a bit messy, because I've been trying tons of stuff. Why does CacheManager.getCacheFileBaseDir() return null and not a File object? Thanks in advance! Leon

  • How to prevent caching from jQuery Ajax?

    - by cynwong
    Hi, could anyone please help me with this? I have a web page using a .manifest for offline storage caching. In that page, I use a jQuery AJAX call to get the data from the server. If I first load the page, it is OK, and I can switch between online and offline. But the problem is when I go back online and refresh the page: the jQuery AJAX call cannot talk to the server anymore. Is there a way for AJAX to talk to the server, or to clear the offline cache? My AJAX call is as follows:

        $.ajax({
            type: "GET",
            url: requestUrl,
            success: localSuccess,
            error: error,
            dataType: "text",
            cache: false
        });

  • Getting Started with CacheMoney

    - by Matt Grande
    I recently installed cache-money. After some difficulty getting memcached and cache-money set up, I thought I had it working: it cached the one query on my login page fine. But when I log in and go to my message index page, I get this error:

        indices delegated to @cache_config.indices, but @cache_config is nil: Slug(id: integer, name: string, sluggable_id: integer, sequence: integer, sluggable_type: string, scope: string, created_at: datetime)

    Searching for the first part of that error message returns 0 hits on Google, so I'm at a loss as to where to even begin. Any suggestions?

  • Memory mapped files cause low physical memory

    - by harik
    I have 2 GB of RAM and am running a memory-intensive application; the machine gets into a low-available-physical-memory state and the system stops responding to user actions, like opening an application or invoking a menu. How do I trigger, or tell the system, to swap memory out to the pagefile and free physical memory? I'm using Windows XP. If I run the same application on a 4 GB RAM machine this does not happen; system response is good there. Once available physical memory is exhausted, that system automatically swaps to the pagefile and frees physical memory, not as badly as the 2 GB system. To overcome this problem (on the 2 GB machine) I attempted to use memory-mapped files for the large datasets the application allocates. In this case the virtual memory of the application (process) is fine, but the system cache is high, and the same problem occurs: physical memory runs low. Even though the memory-mapped file is not mapped into the process's virtual memory, the system cache is high. Why???!!! :( Any help is appreciated. Thanks.

  • How to retrieve only updated/new records since the last query in SQL?

    - by William Choi
    Hi all, I was asked to design a class for caching SQL query results. Calling the class's query method will query and cache the entire result set the first time; afterwards, each subsequent query will retrieve only the updated portion and merge the result into the cache. If the class is required to be generic, i.e. with NO knowledge of the database and the tables, do you have any idea how to do this? Is it possible, and how would you retrieve only updated/new records since the last query? Thanks! William
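    A common way to make the "updated portion" retrievable at all is to require each cached table to expose a last-modified timestamp column. A sketch of that idea in Java/JDBC follows (the updated_at column and the surrounding class are assumptions for illustration, not something from the question):

        import java.sql.*;
        import java.util.HashMap;
        import java.util.Map;

        public final class DeltaCache {
            private final Map<Object, Object[]> rowsByKey = new HashMap<>();
            private Timestamp lastSync = new Timestamp(0); // epoch, so the first call loads everything

            // Fetches only the rows modified since the previous call and merges them in.
            public void refresh(Connection conn, String table) throws SQLException {
                // Take the timestamp before querying: re-fetching a row next time is harmless
                // (the merge simply overwrites it), but missing a concurrent update is not.
                Timestamp syncStart = new Timestamp(System.currentTimeMillis());
                String sql = "SELECT * FROM " + table + " WHERE updated_at > ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setTimestamp(1, lastSync);
                    try (ResultSet rs = ps.executeQuery()) {
                        int cols = rs.getMetaData().getColumnCount();
                        while (rs.next()) {
                            Object[] row = new Object[cols];
                            for (int i = 0; i < cols; i++) row[i] = rs.getObject(i + 1);
                            rowsByKey.put(rs.getObject(1), row); // assumes column 1 is the key
                        }
                    }
                }
                lastSync = syncStart;
            }
        }

    Fully generic delta retrieval, with no schema convention at all, is not really possible: something in the database has to record what changed, whether that is a timestamp or version column, triggers feeding a change table, or the DBMS's own change-tracking feature.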

  • IE form input data disappear after browser refresh

    - by RWW
    Hi, I'm trying to achieve sticky forms without PHP; my setup is AJAX-style JavaScript. Back/forward work fine on both IE and FF, but refresh only works on FF, not IE. It doesn't matter what cache options I use; I've even set IE's temporary files option to never check for updates, and the input value is still gone after a page refresh (the refresh button or F5). I've read many posts where people have the opposite problem and do not want form data to persist across a page refresh or ever be read from the browser cache, but I do. Any help is appreciated, thanks!

  • How to return a 304 status with FileResult in ASP.NET MVC RC1

    - by Maysam
    As you may know, we have a new ActionResult called FileResult in the RC1 version of ASP.NET MVC. Using it, your action methods can return an image to the browser dynamically, something like this:

        public ActionResult DisplayPhoto(int id)
        {
            Photo photo = GetPhotoFromDatabase(id);
            return File(photo.Content, photo.ContentType);
        }

    In the HTML code, we can use something like this:

        <img src="http://mysite.com/controller/DisplayPhoto/657">

    Since the image is returned dynamically, we need a way to cache the returned stream so that we don't need to read the image from the database again. I guess we can do it with something like this, though I'm not sure:

        Response.StatusCode = 304;

    This tells the browser that it already has the image in its cache. I just don't know what to return from my action method after setting StatusCode to 304. Should I return null, or something else?
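    The HTTP exchange underneath is framework-neutral, so for what it's worth, here is the same conditional-GET idea sketched as a Java servlet (an illustration of the protocol, not of the ASP.NET MVC API; the helper methods are stubs):

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        public class PhotoServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String id = req.getParameter("id");
                long lastModified = photoLastModified(id);
                long ifModifiedSince = req.getDateHeader("If-Modified-Since"); // -1 if absent

                // The browser told us the date of its cached copy; nothing changed since,
                // so answer 304 with an empty body and let it reuse that copy.
                if (ifModifiedSince != -1 && lastModified <= ifModifiedSince) {
                    resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
                    return;
                }

                resp.setDateHeader("Last-Modified", lastModified);
                resp.setContentType("image/jpeg");
                resp.getOutputStream().write(loadPhotoBytes(id));
            }

            private long photoLastModified(String id) { return 0L; }         // stub for the sketch
            private byte[] loadPhotoBytes(String id) { return new byte[0]; } // stub for the sketch
        }

    The point the question circles around is that a 304 response carries no body at all: once the status is set, there is nothing further to return.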

  • Very long strings as primary keys in a database for caching

    - by Bill Zimmerman
    Hi, I am working on a web app that allows users to create dynamic PDF files based on what they enter into a form (it is not very structured data). The idea is that User 1 enters several words (an arbitrary number of words, practically capped of course), for example: A B C D E. There is no such string in the database yet, so I was thinking: store this string as a primary key in a MySQL database (it could be up to maybe 50-100 KB of text, but usually probably fewer than 200 words); generate the PDF file and create a link to it in the database; then, when the next user requests A B C D E, I can just serve the file instead of recreating it each time (a simple cache). The PDF is CPU-intensive to generate, so I am trying to cache as much as I can. My questions are: Does anyone have any alternative ideas to my approach? What will the database performance be like? Is there a better way to design the schema than using the input string as the primary key?
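    One standard alternative is to key the cache on a fixed-length digest of the input rather than the raw text, keeping the text itself in an ordinary column. A minimal Java illustration (the schema in the comment is invented for the example):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public final class CacheKeys {
            // A 64-character SHA-256 hex digest makes a compact, fixed-size primary key,
            // e.g. for a schema like:
            //   CREATE TABLE pdf_cache (
            //     input_hash CHAR(64) PRIMARY KEY,
            //     input_text MEDIUMTEXT NOT NULL,
            //     pdf_path   VARCHAR(255) NOT NULL
            //   );
            public static String sha256Hex(String input) {
                try {
                    MessageDigest md = MessageDigest.getInstance("SHA-256");
                    byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
                    StringBuilder hex = new StringBuilder(digest.length * 2);
                    for (byte b : digest) hex.append(String.format("%02x", b));
                    return hex.toString();
                } catch (NoSuchAlgorithmException e) {
                    throw new IllegalStateException("SHA-256 is mandatory on every JVM", e);
                }
            }
        }

    Collisions are astronomically unlikely with SHA-256, comparing a 64-byte key is far cheaper for MySQL than comparing a 100 KB one, and InnoDB caps index key length far below 100 KB anyway, so the raw string could not serve as a primary key in the first place.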

  • Zend_Cache_Backend_Sqlite vs Zend_Cache_Backend_File

    - by Alekc
    Hi, currently I'm using Zend_Cache_Backend_File for caching my project (especially responses from external web services). I was wondering if I could gain some benefit by migrating the structure to Zend_Cache_Backend_Sqlite. Possible advantages: the file system stays well-ordered (only one file in the cache folder), and removing expired entries should be quicker (my assumption, since Zend wouldn't need to scan the internal metadata of each cache entry for its expiry date). Possible disadvantage: finding a record to read may be slower (with files, Zend checks whether the file exists based on the filename, which should be a bit quicker). I've tried to search the internet a bit, but it seems there isn't a lot of discussion on the matter. What do you think about it? Thanks in advance.

  • XmlDocument caching memory usage

    - by mdsharpe
    We are seeing very high memory usage in .NET web applications which use XmlDocument. A small (~5 MB) XML document is loaded into an XmlDocument object and stored in HttpContext.Cache for easy querying and XSLT transformation on each page load. The XML is modified on disk periodically, so the cache has a dependency on the file. Such an application appears to be using hundreds of megabytes of RAM. I have experimented with requesting garbage collection at the start of each request, and this keeps the RAM usage far lower, but I cannot imagine this is good practice. Does anyone have any suggestions as to how we can achieve the same goal but with lower RAM usage?

  • A scheme for expiring downloaded content?

    - by Chad Johnson
    I am going to offer a web API service that allows users to download and "rent" content for a monthly subscription fee. The API will either be open to everyone or possibly just to select parties (not sure yet). Each developer must agree to a license, and they receive a developer key for themselves. Each software application will have its own key as well. End-users will then download the software, which will interact with my service's API. Each user will have a key for each application as well (probably using OAuth). Content will be cached on first download and accessible offline only via the third-party application that cached it. If a user cancels their subscription, I plan on doing the following: deactivate the user's OAuth key for all applications, and no longer allow the user's account to download new content via the API (and subsequently via any software that uses the API).

    Now, the big question is: how do I make content expire if they cancel their subscription? If they cancel, they should not have access to the content anymore. Here are the ideas I've thought of (some of these are half-solutions, not yet fully fleshed out):

    1. Require that applications encrypt downloaded content using the user's OAuth key, making it available only to the application. This will prevent most users from going to the cache directory and just copying and keeping files.

    2. Update the user's key once a month, forcing content to re-cache on a monthly basis. Users could then access content for up to a month after they cancel their subscription.

    3. Require applications to "phone home" to the service periodically and check whether the user's subscription has terminated. If so, require in the API developer license that applications expire the cache. If it is found that applications do not comply, their keys (and possibly the keys for all of that developer's applications) are permanently deactivated as a consequence.

    One major worry is that some applications may blatantly ignore the constraints of the license. Is it generally acceptable to rely on applications abiding by licensing constraints, or is that a bad idea? Any other ideas? Maybe a way to make content auto-expire after x days? Something else? I'm open to out-of-the-box ideas.
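    On the auto-expire idea, one possible shape (a sketch only; every name here is invented) is for the server to issue a signed lease alongside each download, which a compliant application checks before opening cached content and renews whenever it successfully phones home:

        import java.nio.charset.StandardCharsets;
        import java.time.Instant;
        import java.util.Base64;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public final class Lease {
            // Token format: "<contentId>|<expiresEpochSeconds>|<hmac>".
            // Only a holder of the key can produce the HMAC, so the payload cannot be
            // edited to extend the lease without detection.
            public static String issue(String contentId, long ttlSeconds, byte[] key)
                    throws Exception {
                long expires = Instant.now().getEpochSecond() + ttlSeconds;
                String payload = contentId + "|" + expires;
                return payload + "|" + hmac(payload, key);
            }

            public static boolean stillValid(String token, byte[] key) throws Exception {
                String[] parts = token.split("\\|");
                if (parts.length != 3) return false;
                String payload = parts[0] + "|" + parts[1];
                boolean authentic = hmac(payload, key).equals(parts[2]);
                boolean unexpired = Long.parseLong(parts[1]) > Instant.now().getEpochSecond();
                return authentic && unexpired;
            }

            private static String hmac(String payload, byte[] key) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(key, "HmacSHA256"));
                return Base64.getUrlEncoder().withoutPadding()
                        .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
            }
        }

    The honest caveat is the one the question already raises: the validity check still runs inside the client, so like every offline content-expiry scheme this deters casual copying rather than determined or non-compliant applications.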

  • Querying an external Oracle DB in a Rails application

    - by railscoder
    I have a website which uses a MySQL database for its whole operation, but for a new requirement I need to query an external Oracle database (used by another component), compile a list of items and display them on a page of the website. How is it possible to connect to an external database just for rendering a single page? And is it possible to cache the queried result for, say, one month before invalidating the cache and getting the updated list of items? I don't want to query the external Oracle database on every request.

  • PHP APC - Why is loading cached array op codes slow?

    - by Aaron Kreider
    I'm using APC to reduce the load time of my PHP files. My files load very fast, except for one file in which I define more than 100 arrays: this 270 KB file takes 200 ms to load. The rest of my files are full of objects, methods, and functions, so I'm wondering: does opcode caching not work as well for arrays? My APC cache should be big enough to handle all of my classes; currently 40% of the cache is free, and my hit rate is 99%.

        apc.shm_size = 32M
        apc.max_file_size = 1M
        apc.shm_segments = 1

    APC 3.1.6. I'm using PHP 5.2, Apache 2, and Windows Vista.

  • node_load in drupal gets incorrect node when you are NOT logged in

    - by Alaa
    Hi all, I have a module and I am using node_load(array('nid' => arg(1))). The problem is that this function keeps getting its data from the DB cache. How can I force this function not to use the DB cache or a static value? Example: my link is http://mydomain.com/node/344983, and now:

        $node = node_load(array('nid' => arg(1)), NULL, TRUE);
        echo $node->nid;

    Output: 435632, which is a random node ID (one that does exist in the database), and every time I Ctrl+F5 my browser I get a new nid!! Note: if I am logged in, it gives the correct result; this problem happens only when I am browsing the website as an anonymous user. I really appreciate any ideas!! Thanks

  • How to specify HTTP expiration header? (ASP.NET MVC+IIS)

    - by Marek
    I am already using output caching in my ASP.NET MVC application. PageSpeed tells me to specify HTTP cache expiration for CSS and images in the response header. I know that the Response object contains some properties that control cache expiration, and that these properties can be used to control HTTP caching for responses that I serve from my own code:

        Response.Expires
        Response.ExpiresAbsolute
        Response.CacheControl

    or alternatively:

        Response.AddHeader("Expires", "Thu, 01 Dec 1994 16:00:00 GMT");

    The question is: how do I set the Expires header for resources that are served automatically, e.g. images, CSS and such?
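    In practice this fix usually lives in server configuration rather than application code (in IIS 7 it is the staticContent/clientCache section of web.config), but the mechanics are easy to show. Here is a framework-neutral sketch as a Java servlet filter that stamps far-future cache headers on whatever static paths it is mapped to (the seven-day TTL is an arbitrary choice for the example):

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletResponse;

        // Map this in web.xml to the static resources, e.g. url-pattern /images/* and *.css.
        public class StaticCacheFilter implements Filter {
            private static final long SEVEN_DAYS_SECONDS = 7L * 24 * 60 * 60;

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletResponse resp = (HttpServletResponse) res;
                resp.setHeader("Cache-Control", "public, max-age=" + SEVEN_DAYS_SECONDS);
                resp.setDateHeader("Expires",
                        System.currentTimeMillis() + SEVEN_DAYS_SECONDS * 1000);
                chain.doFilter(req, res);
            }

            @Override public void init(FilterConfig config) {}
            @Override public void destroy() {}
        }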

  • Caching stored procedure results in LINQ

    - by itdebeloper
    In our web application we have lots of stored procedures that look like this one: getSomeData(/* 7 different params */). This stored procedure doesn't make any updates. We are using LINQ. I know that the data changes no more often than once per day, so the results for the same set of parameter values will always be the same within a day. Does LINQ have a simple cache solution? I know how to write a cache mechanism manually in .NET, but I assumed that in LINQ this problem was already solved. I'm a lazy guy :) so I'm looking for something really simple, like:

        Linq_global_store_procedure_configuration.CacheDuration = "600"
        Linq_global_store_procedure_configuration.CacheVaryByParam = "*"

    I'm using .NET 3.5, but it's no problem to move to 4.0.
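    Nothing like the configuration imagined above ships in the box, so this usually ends up as a small hand-rolled layer. Here is a minimal sketch of the idea in Java: a time-boxed cache keyed on the procedure name plus all of its parameter values, mirroring CacheDuration and CacheVaryByParam="*" (all names are invented for the example):

        import java.util.Arrays;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Supplier;

        public final class ProcResultCache {
            private record Entry(Object value, long loadedAtMs) {}

            private final Map<String, Entry> entries = new ConcurrentHashMap<>();
            private final long ttlMs;

            public ProcResultCache(long ttlMs) { this.ttlMs = ttlMs; }

            // The key varies by every parameter, so each distinct argument tuple
            // gets its own cached result.
            public Object get(String proc, Object[] params, Supplier<Object> dbCall) {
                String key = proc + Arrays.deepToString(params);
                long now = System.currentTimeMillis();
                Entry e = entries.get(key);
                if (e == null || now - e.loadedAtMs() > ttlMs) {
                    e = new Entry(dbCall.get(), now); // stale or missing: hit the database
                    entries.put(key, e);
                }
                return e.value();
            }
        }

    With ttlMs set to 600_000, a call like cache.get("getSomeData", args, () -> runProc(args)) behaves like the wished-for CacheDuration="600": at most one database round trip per parameter tuple every ten minutes.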

  • static images aren't caching with php-generated page

    - by scootklein
    Our website was just converted to being generated by mod_rewrite and PHP scripts, and static images are no longer being cached in browsers when they seemingly should be. All images follow the format:

        <img src="/images/header.png" />

    I must avoid the script output being cached at all, because the PHP parser needs to handle each page dynamically on every request; however, the download overhead of the large images is cumbersome on every single page load. I would ideally send the headers "Cache-Control: no-cache, must-revalidate" and "Expires: some_date_in_the_past" to force revalidation of the PHP script output only. Why isn't the browser caching static images with consistent src values across all pages?

  • Dot Net Nuke module works in "Edit" mode but not for "View": cache problem?

    - by Godeke
    I have a DNN module that simply runs some JavaScript to compute a price based on a few input fields. This module works fine on our production site, but we had a company create a skin for us to improve the look of the site, and the module fails under this new system. (DNN 05.06.00 (459), although it was 5.5 prior... I updated in the futile hope that it was a bug in the old revision.)

    What is incredibly odd about this is that the module works fine when I'm logged in to DNN and using Edit mode as an administrator. In this case the small snippet of JavaScript loads fine and filling in the fields produces a price. On the other hand, if I click "View" (or, more importantly, if I'm not logged in at all) the page loads a cached copy. Even odder, I have found that the cache files in \Portals\2\Cache\Pages are generated and then only the cached data is used. When the cached copy is loaded, the JavaScript doesn't appear (it is normally created via Page.ClientScript.RegisterClientScriptBlock()). Additionally, the button which posts the data to the server doesn't execute any of its server-side code (confirmed with a debugger) but instead just reloads the cached copy. If I manually delete the files in \Portals\2\Cache\Pages then everything works properly, but I have to do so after every page load; failing to do so simply loads the page as it was last generated, repeatedly. Resetting the application (either via the UI or by editing web.config) doesn't change this, and clearing the cache from the Host Settings page doesn't actually clear these cached pages.

    I'm guessing that Edit mode bypasses the cache in some way, but I have gone as far as turning off all caching on the site (which is horrible for performance) and the cached version is still loaded. Has anyone seen anything like this? Shouldn't clearing the cache clear the files (I'm using the File provider for caching)? Shouldn't even a cached page go back to the server if the user posts back?

    EDIT: I should point out that permissions don't appear to be a problem on the cache directory: other pages' cached output is deleted from this folder; just this page has the issue.

    EDIT 2: Clarifying some settings and conditions which I didn't provide. First, this module works fine in production under DNN 5.6.0. In our test environment with the consulting company's changes it fails (the changes are skin and page layout only, in theory: the module source itself verifies as unchanged). All cache settings and the like have been verified to be the same between the two, and we only resorted to setting the module cache to 0 and -1 (and disabling the test site's cache entirely) when we couldn't find another cause for the problem. I have watched the cache work correctly on many other pages in test: there is something about this page that is causing the problem. We have punted and are creating an installable skin based on the consultant's work, as I suspect they have somehow corrupted the DNN install (on the database side, I think).

  • Using Queries with Coherence Read-Through Caches

    - by jpurdy
    Applications that rely on partial caches of databases, and use read-through to maintain those caches, face some trade-offs if queries are required. Coherence does not support push-down queries, so queries will apply only to data that currently exists in the cache. This is technically consistent with "read committed" semantics, but the potential absence of data may make the results so unintuitive as to be useless for most use cases (depending on how much of the database is held in cache).

    Alternatively, the application itself may manually "push down" queries to the database, either retrieving results equivalent to querying the cache directly, or querying the database for a key set and reading the values from the cache (relying on read-through to handle any missing values). Obviously, if the result set is too large, reading through the cache may cause significant thrashing.

    It's also worth pointing out that if the cache is asynchronously synchronized with the database (perhaps via a database change listener), an application may commit a transaction to the database, then generate a key set from the database via a query, then read the entries through the cache, possibly resulting in a race condition where the application sees older data than it had previously committed. In theory this is not problematic, but in practice it is very unintuitive. For this reason it often makes sense to invalidate the cache when updating the database, forcing the next read-through to update the cache.
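    A sketch of the manual push-down pattern described above, using the standard NamedCache API (the cache name, SQL and surrounding class are illustrative):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        import java.sql.*;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        public final class PushDownQuery {
            // Ask the database which keys match, then read those entries through the cache.
            public static Map findExpensiveOrders(Connection conn, double minTotal)
                    throws SQLException {
                Set<Long> keys = new HashSet<>();
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT order_id FROM orders WHERE total > ?")) {
                    ps.setDouble(1, minTotal);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) keys.add(rs.getLong(1));
                    }
                }
                NamedCache cache = CacheFactory.getCache("orders");
                // getAll() triggers read-through for any keys not already in the cache,
                // which is exactly where a huge key set can start to thrash.
                return cache.getAll(keys);
            }
        }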

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer. I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above, I combine the data from rows which belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to it (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing.

    All of the above is working perfectly. However, my company now wishes to run some statistics against this data, and the tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's query cache, but since I have limited control over how long the data is stored there, it's not working ideally.

    The main advantages I can see of using something like memcache: no blocking on my correlated table after the query has been cached; greater flexibility in sharing the collected data between the backend collector and the frontend processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key which is then shared with anyone who wants to see the data of that report); and redundancy and scalability if we start sharing this data with a large number of customers. The main disadvantage: the data is not persistent if the machine is rebooted or the cache is flushed.

    The main advantages of using MySQL: persistent data, and fewer code changes (although adding something like memcache is trivial anyway). The main disadvantages: I have to define table templates every time I want to store a new set of grouped data; I have to write a program which loops through the correlated data and fills these new tables; and it will potentially still grow slower as the data continues to accumulate.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan

  • [Oracle Coherence article (Japanese); title garbled in source]

    - by ???02
    [Most of this Japanese-language article's prose is garbled in the source; what is recoverable is its outline: caching data outside the RDBMS to avoid repeated SQL round trips and disk I/O, first with a local in-process cache (JBoss Cache, driven through the Java Map API with put/get) and then with a distributed cache/data grid (Oracle Coherence, which partitions data across nodes and supports MapReduce-style parallel processing), plus its two code listings.]

    Listing 1: Storing a Person object in JBoss Cache 3.0 (comments reconstructed from context; the Japanese string literals are garbled in the source):

        // CacheFactory, DefaultCacheFactory, Cache, Fqn and Node come from JBoss Cache
        CacheFactory factory = new DefaultCacheFactory();
        // Create the cache
        Cache cache = factory.createCache();
        Fqn personData = Fqn.fromString("/person");
        // Add a node for person data under the cache root
        Node personNode = cache.getRoot().addChild(personData);
        // Create a Person object to cache
        Person p1 = new Person(1234, "??", "??", "?????");
        // Put it into the cache node
        personNode.put(1234, p1);

    Listing 2: The same operation against an Oracle Coherence named cache (note how compact the Map-style API is):

        // CacheFactory comes from Oracle Coherence
        // Obtain the named cache for Person data
        Map personCache = CacheFactory.getCache("person");
        // Create a Person object to cache
        Person p1 = new Person(1234, "??", "??", "?????");
        // Put it into the cache
        personCache.put(1234, p1);
