Search Results

Search found 7920 results on 317 pages for 'drupal cache'.


  • Organize address cache

    - by Orsol
    Hi, I need to organize a cache in a MySQL database for address-to-coordinate lookups. What is the best practice for storing the address? Do I need to compress the address string or use it as is? Edit: OK, let me restate my question: how do I store a long string (up to 512 characters) in the database if I need to search by exactly this string later?
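    One common approach to exact-match lookups on long strings is to index a fixed-length hash of the string rather than the string itself. A minimal MySQL sketch (table and column names are made up for illustration):

        -- Store the address verbatim; index an MD5 of it for fast exact-match lookups.
        CREATE TABLE address_cache (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
            address     VARCHAR(512) NOT NULL,   -- stored as-is, no compression
            address_md5 CHAR(32)     NOT NULL,   -- MD5 of the exact string
            lat         DECIMAL(10,7),
            lng         DECIMAL(10,7),
            PRIMARY KEY (id),
            UNIQUE KEY idx_address_md5 (address_md5)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        -- Lookup: match on the hash, then re-check the full string to rule out collisions.
        SELECT lat, lng
        FROM address_cache
        WHERE address_md5 = MD5('10 Downing Street, London')
          AND address     = '10 Downing Street, London';

    A plain index on the VARCHAR(512) column also works (possibly as a prefix index), but the hash keeps the index small and the comparison cheap.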

    Read the article

  • All PHP sites stopped working on IIS7, internal server error 500

    - by TimothyP
    I installed multiple Drupal 7 sites using the Web Platform Installer on Windows Server 2008. Until now they worked without any problems, but recently internal server error 500 started to show up (once every so many requests); now it happens for all requests to any of the PHP sites. There's not much detail to go on, and nothing changed between the time when it was working and now (well, nothing I know of anyway). The log file is flooded with messages such as
    [09-Aug-2011 09:08:04] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
    [09-Aug-2011 09:08:16] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
    (the same error repeats many times per minute).
    I have tried increasing the memory limit in php.ini as such: memory_limit = 512MB. But that doesn't seem to solve the problem either. This is in the global PHP configuration in IIS.
    When I looked at the sites one by one, I noticed that PHP seemed to have been disabled: "PHP is not enabled. Register new PHP version to enable PHP via FastCGI". So I tried to register the PHP version again (C:\Program Files\PHP\v5.3\php-cgi.exe), but when I try to apply the changes I get "There was an error while performing this operation. Details: Operation is not valid due to the current state of the object". There doesn't seem to be any other information than that. I have no idea why all of a sudden PHP isn't available for the sites anymore.
    PS: I have rebooted IIS, the server, etc. This server is hosted on Amazon S3, so I gave the server some more power.
    Update: These seem to be two different issues. (1) I used memory_limit=128MB instead of memory_limit=128M; notice the "MB" instead of "M". (2) A memory_limit of 128M was not enough; I had to increase it to 512M. The first issue caused internal server errors for every request. Increasing to 512M seemed to have solved the problem for a little while, but after a while the server errors return.
    Note that the PHP Manager inside IIS still shows there is no PHP available for the sites (the global config does see it as available), so the problem remains unsolved.
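    For reference, PHP's shorthand byte notation in php.ini takes a single-letter suffix (K, M or G); a value like 512MB is not valid shorthand and will not be interpreted as 512 megabytes. A minimal php.ini sketch of the intended setting:

        ; php.ini - shorthand byte values use a single-letter suffix (K, M, G)
        ; "512MB" is not valid shorthand; "512M" means 512 megabytes
        memory_limit = 512M

    After editing the global php.ini, the FastCGI processes need to be recycled (for example by restarting the application pools or IIS) before the new limit takes effect.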

    Read the article

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 instance, m1.medium, Ubuntu 12.04, Apache 2.2.22, running a Drupal site using a MySQL DB server.
    RAM info: ~$ free -gt total used free shared buffers cached Mem: 3 1 2 0 0 0 -/+ buffers/cache: 0 2 Swap: 0 0 0 Total: 3 1 2
    Hard drive info: Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.9G 4.7G 2.9G 62% / udev 1.9G 8.0K 1.9G 1% /dev tmpfs 751M 180K 750M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/xvdb 394G 199M 374G 1% /mnt
    The problem: about two days ago the site started failing because the MySQL server was shut down by Apache with the following message:
    kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld
    kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child
    kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB
    kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal
    kernel: [2963686.169294] init: mysql main process ended, respawning
    That says the process was occupying 0.9 GB of virtual memory, but my RAM has 2 GB free, so 1 GB was still left over. I understand that on Linux applications can allocate more memory than is physically available. I don't know if this is the problem; it's the first time it has started to happen. Obviously the MySQL server tries to restart, but apparently there's no memory for it and it won't restart. Here is its error log:
    Plugin 'FEDERATED' is disabled. The InnoDB memory heap is disabled Mutexes and rw_locks use GCC atomic builtins Compressed tables use zlib 1.2.3.4 Initializing buffer pool, size = 128.0M InnoDB: mmap(137363456 bytes) failed; errno 12 Completed initialization of buffer pool Fatal error: cannot allocate memory for the buffer pool Plugin 'InnoDB' init function returned error. Plugin 'InnoDB' registration as a STORAGE ENGINE failed. Unknown/unsupported storage engine: InnoDB [ERROR] Aborting [Note] /usr/sbin/mysqld: Shutdown complete
    I simply restarted the MySQL service. About two hours later it happened again, and I restarted it. Then it happened again 9 hours later. So then I thought of the MaxClients parameter of apache.conf and went to check it. It was set to 150; I decided to drop it to 60, as so: <IfModule mpm_prefork_module> ... MaxClients 60 </IfModule> <IfModule mpm_worker_module> ... MaxClients 60 </IfModule> <IfModule mpm_event_module> ... MaxClients 60 </IfModule>
    Once I did that, I restarted the apache2 service and it all went smoothly for three quarters of a day. Then, at night, the MySQL service shut down once again, but this time it wasn't killed by the Apache2 service. Instead it invoked the OOM killer, with the following message:
    kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
    kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0
    kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal
    Now I'm out of ideas. Some articles state that the ideal thing to do is to change the kernel behaviour with the following (added to /etc/sysctl.conf): vm.overcommit_memory = 2 vm.overcommit_ratio = 80 so that no overcommit takes place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator; I have basic knowledge. Thanks a bunch in advance.
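    Two mitigations that come up for small instances like this are adding swap (the free output above shows the instance has none) and, as the question suggests, tightening overcommit. A hedged shell sketch; sizes and paths are arbitrary, and whether overcommit_memory=2 suits the workload is for the reader to judge:

        # 1. Add a 2 GB swap file so the kernel has somewhere to page to
        #    before resorting to the OOM killer.
        sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # persist across reboots

        # 2. The overcommit settings quoted in the question, applied immediately:
        sudo sysctl vm.overcommit_memory=2
        sudo sysctl vm.overcommit_ratio=80
        # keep them in /etc/sysctl.conf to survive a reboot

    Lowering innodb_buffer_pool_size and Apache's MaxClients, as already tried, attacks the same shortage from the other side; note that with overcommit_memory=2 and no swap, allocations fail outright instead of being killed later, which MySQL may not handle any more gracefully.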

    Read the article

  • mysql 5.1 - innodb - query_cache_size - 9,418,108 queries have been removed from the query cache due to lack of memory

    - by Tom C
    Currently running on a 16GB system, Ubuntu 64-bit. The InnoDB buffer pool is set to 10GB. tuning-primer shows the following: QUERY CACHE Query cache is enabled Current query_cache_size = 512 M Current query_cache_used = 501 M Current query_cache_limit = 4 M Current Query cache Memory fill ratio = 97.87 % Current query_cache_min_res_unit = 4 K However, 9418108 queries have been removed from the query cache due to lack of memory Perhaps you should raise query_cache_size That is over 9 million queries removed, with a system uptime of only 8 days. Should I remove the query cache altogether? Our DB is always under heavy I/O. Thanks in advance.
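    If the answer is to drop the query cache (a common recommendation for write-heavy InnoDB workloads, where invalidation churn causes exactly this kind of pruning), a minimal my.cnf sketch:

        [mysqld]
        # Disable the query cache entirely; with frequent writes its invalidation
        # and pruning overhead often outweighs the hits it provides.
        query_cache_type = 0
        query_cache_size = 0

    The effect can be tried at runtime first with SET GLOBAL query_cache_size = 0; (which may stall briefly while the cache is flushed). The alternative is simply raising query_cache_size, but a 97% full, heavily pruned cache on a busy server usually points the other way.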

    Read the article

  • Why isn't the Cache invalidated after table update using the SqlCacheDependency?

    - by Jason
    I have been trying to get SqlCacheDependency working. I think I have everything set up correctly, but when I update the table, the item in the Cache isn't invalidated. Can you look at my code and see if I am missing anything? I enabled the Service Broker for the Sandbox database. I have placed the following code in the Global.asax file. I also restart IIS to make sure it is called. void Application_Start(object sender, EventArgs e) { SqlDependency.Start(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString); } I have placed this entry in the web.config file: <system.web> <caching> <sqlCacheDependency enabled="true" pollTime="10000"> <databases> <add name="Sandbox" connectionStringName="SandboxConnectionString"/> </databases> </sqlCacheDependency> </caching> </system.web> I call this code to put the item into the cache: protected void CacheDataSetButton_Click(object sender, EventArgs e) { using (SqlConnection sqlConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString)) { using (SqlCommand sqlCommand = new SqlCommand("SELECT PetID, Name, Breed, Age, Sex, Fixed, Microchipped FROM dbo.Pets", sqlConnection)) { using (SqlDataAdapter sqlDataAdapter = new SqlDataAdapter(sqlCommand)) { DataSet petsDataSet = new DataSet(); sqlDataAdapter.Fill(petsDataSet, "Pets"); SqlCacheDependency petsSqlCacheDependency = new SqlCacheDependency(sqlCommand); Cache.Insert("Pets", petsDataSet, petsSqlCacheDependency, DateTime.Now.AddSeconds(10), Cache.NoSlidingExpiration); } } } } Then I bind the GridView with this code: protected void BindGridViewButton_Click(object sender, EventArgs e) { if (Cache["Pets"] != null) { GridView1.DataSource = Cache["Pets"] as DataSet; GridView1.DataBind(); } } Between attempts to DataBind the GridView, I change the table's values expecting it to invalidate the Cache["Pets"] item, but it seems to stay in the Cache indefinitely.
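    One detail that may be worth checking, independent of the configuration: with the query-notification-based constructor used here, the SqlCacheDependency generally has to be attached to the SqlCommand before the command executes, because the notification subscription travels with the execution; in the code above the adapter's Fill runs first. A reordered sketch of just that part (everything else unchanged):

        // Sketch: create the dependency before the query runs so the notification
        // request is registered when the command executes.
        using (SqlCommand sqlCommand = new SqlCommand(
            "SELECT PetID, Name, Breed, Age, Sex, Fixed, Microchipped FROM dbo.Pets",
            sqlConnection))
        {
            SqlCacheDependency petsSqlCacheDependency = new SqlCacheDependency(sqlCommand);

            DataSet petsDataSet = new DataSet();
            using (SqlDataAdapter sqlDataAdapter = new SqlDataAdapter(sqlCommand))
            {
                sqlDataAdapter.Fill(petsDataSet, "Pets");   // executes with the notification attached
            }

            Cache.Insert("Pets", petsDataSet, petsSqlCacheDependency,
                         DateTime.Now.AddSeconds(10), Cache.NoSlidingExpiration);
        }

    Also note that the <sqlCacheDependency> section with pollTime in web.config drives the older polling-based (SQL Server 2000-style) dependencies; the command-based constructor does not use it, so it can be removed without affecting this path.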

    Read the article

  • Does the Java Memory Model (JSR-133) imply that entering a monitor flushes the CPU data cache(s)?

    - by Durandal
    There is something that bugs me about the Java memory model (if I even understand everything correctly). If there are two threads A and B, there is no guarantee that B will ever see a value written by A unless both A and B synchronize on the same monitor. For any system architecture that guarantees cache coherency between threads, there is no problem. But if the architecture does not support cache coherency in hardware, this essentially means that whenever a thread enters a monitor, all memory changes made before must be committed to main memory, and the cache must be invalidated. And it needs to be the entire data cache, not just a few lines, since the monitor has no information about which variables in memory it guards. But that would surely impact the performance of any application that needs to synchronize frequently (especially things like job queues with short-running jobs). So can Java work reasonably well on architectures without hardware cache coherency? If not, why doesn't the memory model make stronger guarantees about visibility? Wouldn't it be more efficient if the language required information about what is guarded by a monitor? As I see it, the memory model gives us the worst of both worlds: the absolute need to synchronize even when cache coherency is guaranteed in hardware, and on the other hand bad performance on incoherent architectures (full cache flushes). So shouldn't it be either more strict (require information about what is guarded by a monitor) or more loose, restricting potential platforms to cache-coherent architectures? As it is now, it doesn't make too much sense to me. Can somebody clear up why this specific memory model was chosen? EDIT: My use of "strict" and "loose" was a bad choice in retrospect. I used "strict" for the case where fewer guarantees are made and "loose" for the opposite. To avoid confusion, it's probably better to speak in terms of stronger or weaker guarantees.
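    For readers less familiar with the rule being debated, a minimal sketch of the one case where the JMM does guarantee visibility: both threads synchronizing on the same monitor (class and field names are purely illustrative):

        // A value written by thread A is only guaranteed visible to thread B if
        // both synchronize on the same monitor (or use another happens-before
        // mechanism such as a volatile field).
        class SharedCounter {
            private int value = 0;            // deliberately not volatile
            private final Object monitor = new Object();

            void write(int v) {               // called from thread A
                synchronized (monitor) {      // releasing the monitor publishes all
                    value = v;                // writes made before the release
                }
            }

            int read() {                      // called from thread B
                synchronized (monitor) {      // acquiring the same monitor guarantees
                    return value;             // B sees A's published writes
                }
            }
        }

    A volatile field gives the same visibility edge without a monitor, which is why the specification is phrased in terms of happens-before relationships rather than caches; how a JVM realizes them (coherent caches, memory fences, or full flushes) is left to the platform.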

    Read the article

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing almost daily table index corruption on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but six months after we installed Advantage Database Server (which allows access to some tables from our website) we started to get index corruption problems, and we don't know whom to blame. We've tried to exclude all possible causes of this corruption. Now all users work in terminal mode, so no network problems can cause it, and OpLocks can't be a reason either. We changed hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS, because it should be working.
    Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the other change. Is that possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS is using cached data, or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are special Windows Server settings for working with DBF tables that are written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check this?
    Sometimes we get an index crash twice a day, sometimes everything is fine for 5 days in a row. Today only one user was working with the database in the evening (usually 30-50 users work simultaneously during working hours), so there was almost zero load on the server. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and at weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does run UPDATE/INSERT queries, but less frequently: not during regular synchronizations, only when an order is placed by a website visitor.
    Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or any clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

    Read the article

  • Cache Simulator in C

    - by DuffDuff
    Ok this is only my second question, and it's quite a doozy. It's for a school assignment, but no one (including the TAs) seems to be able to help me. It's kind of a tall order but I'm not sure where else to turn. Essentially the assignment was to make a cache simulator. This version is direct mapping and is actually only a small portion of the whole project, but if I can't even get this down I have no chance with other associativities. I'm posting my whole code because I don't want to make any assumptions about where the problem is. This is the test case: http://www.mediafire.com/?ty5dnihydnw And you run the following command: ./sims 512 direct 32 fifo wt pinatrace.out You're supposed to get: hits: 604037 misses 138349 writes: 239269 reads: 138349 But I get: Hits: 587148 Misses: 155222 Writes: 239261 Reads: 155222 If anyone could at least point me in the right direction it would be greatly appreciated. I've been stuck on this for about 12 hours. #include <stdio.h> #include <stdlib.h> #include <string.h> #include <math.h> struct myCache { int valid; char *tag; char *block; }; /* sim [-h] <cache size> <associativity> <block size> <replace alg> <write policy> <trace file> */ //God willing I come up with a better Hex to Bin convertion that maintains the beginning 0s... void hex2bin(char input[], char output[]) { int i; int a = 0; int b = 1; int c = 2; int d = 3; int x = 4; int size; size = strlen(input); for (i = 0; i < size; i++) { if (input[i] =='0') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='1') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='2') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='3') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '1'; } else if (input[i] =='x') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='5') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='6') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='7') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '1'; } else if (input[i] =='8') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='9') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='a') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='b') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '1'; } else if (input[i] =='c') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='d') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='e') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='f') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '1'; } } output[32] = '\0'; } int main(int argc, char* argv[]) { FILE *tracefile; char readwrite; int trash; int cachesize; int blocksize; int setnumber; int blockbytes; int setbits; int blockbits; int tagsize; int m; int count = 0; int 
count2 = 0; int count3 = 0; int i; int j; int xindex; int jindex; int kindex; int lindex; int setadd; int totalset; int writeMiss = 0; int writeHit = 0; int cacheMiss = 0; int cacheHit = 0; int read = 0; int write = 0; int size; int extra; char bbits[100]; char sbits[100]; char tbits[100]; char output[100]; char input[100]; char origtag[100]; if (argc != 7) { if (strcmp(argv[0], "-h")) { printf("./sim2 <cache size> <associativity> <block size> <replace alg> <write policy> <trace file>\n"); return 0; } else { fprintf(stderr, "Error: wrong number of parameters.\n"); return -1; } } tracefile = fopen(argv[6], "r"); if(tracefile == NULL) { fprintf(stderr, "Error: File is NULL.\n"); return -1; } //Determining size of sbits, bbits, and tag cachesize = atoi(argv[1]); blocksize = atoi(argv[3]); setnumber = (cachesize/blocksize); printf("setnumber: %d\n", setnumber); setbits = (round((log(setnumber))/(log(2)))); printf("sbits: %d\n", setbits); blockbits = log(blocksize)/log(2); printf("bbits: %d\n", blockbits); tagsize = 32 - (blockbits + setbits); printf("t: %d\n", tagsize); struct myCache newCache[setnumber]; //Allocating Space for Tag Bits, initiating tag and valid to 0s for(i=0;i<setnumber;i++) { newCache[i].tag = (char *)malloc(sizeof(char)*(tagsize+1)); for(j=0;j<tagsize;j++) { newCache[i].tag[j] = '0'; } newCache[i].valid = 0; } while(fgetc(tracefile)!='#') { setadd = 0; totalset = 0; //read in file fseek(tracefile,-1,SEEK_CUR); fscanf(tracefile, "%x: %c %s\n", &trash, &readwrite, origtag); //shift input Hex size = strlen(origtag); extra = (10 - size); for(i=0; i<extra; i++) input[i] = '0'; for(i=extra, j=0; i<(size-(2-extra)); j++, i++) input[i]=origtag[j+2]; input[8] = '\0'; // Convert Hex to Binary hex2bin(input, output); //Resolving the Address into tbits, sbits, bbits for (xindex=0, jindex=(32-blockbits); jindex<32; jindex++, xindex++) { bbits[xindex] = output[jindex]; } bbits[xindex]='\0'; for (xindex=0, kindex=(32-(blockbits+setbits)); kindex<32-(blockbits); kindex++, xindex++){ sbits[xindex] = output[kindex]; } sbits[xindex]='\0'; for (xindex=0, lindex=0; lindex<(32-(blockbits+setbits)); lindex++, xindex++){ tbits[xindex] = output[lindex]; } tbits[xindex]='\0'; //Convert set bits from char array into ints for(xindex = 0, kindex = (setbits -1); xindex < setbits; xindex ++, kindex--) { if (sbits[xindex] == '1') setadd = 1; if (sbits[xindex] == '0') setadd = 0; setadd = setadd * pow(2, kindex); totalset += setadd; } //Calculating Hits and Misses if (newCache[totalset].valid == 0) { newCache[totalset].valid = 1; strcpy(newCache[totalset].tag, tbits); } else if (newCache[totalset].valid == 1) { if(strcmp(newCache[totalset].tag, tbits) == 0) { if (readwrite == 'W') { cacheHit++; write++; } if (readwrite == 'R') cacheHit++; } else { if (readwrite == 'R') { cacheMiss++; read++; } if (readwrite == 'W') { cacheMiss++; read++; write++; } strcpy(newCache[totalset].tag, tbits); } } } printf("Hits: %d\n", cacheHit); printf("Misses: %d\n", cacheMiss); printf("Writes: %d\n", write); printf("Reads: %d\n", read); }

    Read the article

  • Using a CMS with an external database

    - by George Reith
    I am looking at building an external site with a CMS, probably Drupal or ExpressionEngine. The problem is that our company already has a membership database that is designed to work with our existing enterprise software. Migrating data from the database manually is not an option, as modifications and new data must be accessible in real time. Because the design of the external database will differ from the CMS's own, I have decided the best way forward is to use two databases and force the CMS to use the external one for reading user information (it cannot write to it) and a local one for everything else the CMS needs to do (read + write). Is this feasible with Drupal or ExpressionEngine? Ideally I need to be able to use hooks, as I do not want to modify core CMS files. Sifting through the docs I am not able to find what I would hook into for either CMS. (Note: I know it is possible, but I want to know if it's feasible.) Finally, if there is a better way of handling this situation please also chime in. Perhaps there is something at the database level to reference a field or table in an external database? I'm clutching at straws; someone can point me in the right direction, I'm sure.
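    For the Drupal 7 side at least, the database layer supports additional named connections without touching core; a sketch (connection key, credentials and table names are invented for illustration):

        // settings.php - declare the external membership database alongside Drupal's own.
        $databases['default']['default'] = array(
          'driver'   => 'mysql',
          'database' => 'drupal',
          'username' => 'drupal_user',
          'password' => 'secret',
          'host'     => 'localhost',
        );
        $databases['membership']['default'] = array(
          'driver'   => 'mysql',
          'database' => 'enterprise_members',
          'username' => 'readonly_user',   // a read-only account enforces "cannot write to"
          'password' => 'secret',
          'host'     => 'members.example.com',
        );

        // In a custom module, query it without modifying core:
        $result = Database::getConnection('default', 'membership')
          ->query('SELECT member_id, email FROM members WHERE active = 1');

    Making Drupal treat those rows as its own user accounts is a bigger job (typically a custom module implementing the authentication hooks, or a synchronization layer), but read-only lookups like the above need no core changes.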

    Read the article

  • Nginx - Treats PHP as binary

    - by Think Floyd
    We are running Nginx + FastCGI as the backend for our Drupal site. Everything seems to work fine, except for this one URL: http:///sites/all/modules/tinymce/tinymce/jscripts/tiny_mce/plugins/smimage/index.php (we use the TinyMCE module in Drupal, and the URL above is invoked when a user tries to upload an image). When we were using Apache, everything worked fine. However, Nginx treats the above URL as binary and tries to download it. (We've verified that the file pointed to by the URL is a valid PHP file.) Any idea what could be wrong here? I think it's something to do with the Nginx configuration, but I'm not entirely sure what. Any help is greatly appreciated. Config: here's the snippet from the Nginx configuration file: root /var/www/; index index.php; if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; } error_page 404 index.php; location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ { deny all; } location ~* ^.+\.(jpg|jpeg|gif|png|ico)$ { access_log off; expires 7d; } location ~* ^.+\.(css|js)$ { access_log off; expires 7d; } location ~ .php$ { include /etc/nginx/fcgi.conf; fastcgi_pass 127.0.0.1:8888; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; } location ~ /\.ht { deny all; }
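    One possible explanation (an assumption, not a confirmed diagnosis): the PHP block only matches URIs that end exactly in .php, so if the plugin requests the script with trailing path info (e.g. .../index.php/something), no location sets a handler and Nginx serves the file with its default MIME type, i.e. as a download. A hedged sketch of a PHP location that also handles path info:

        # Match .php followed by an optional /path-info and split it off for FastCGI.
        location ~ \.php(/|$) {
            include /etc/nginx/fcgi.conf;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO       $fastcgi_path_info;
            fastcgi_pass 127.0.0.1:8888;
            fastcgi_index index.php;
        }

    This is only one of several possibilities; checking the access log for the exact URI Nginx received on the failing upload request will show whether the existing \.php$ pattern could ever have matched it.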

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance and load tests as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET profiler and some closing thoughts.
    Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test; let's run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close-to-real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:
    Monitor a running load test – A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.
    After the load test run is completed – The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary, which is generated after the test has completed or been stopped.
    Analyse the load test results of a previously run load test – We'll see this in the section where I discuss comparison between two test runs.
    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart where you can hover over to see more details of the test run.
    Generate Report => Test Run Comparisons The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.
    Compare results This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
    Leaving Thoughts While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is by increasing the default number of allocated threads, but that might escalate the problem; the better suggestion is to try and drill down to the actual root cause. Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, the garbage collection completes in a fraction of a second. To understand this better let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 kB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so objects that are supposed to be short-lived live in Gen-0 and the long-living objects eventually move to Gen-2 as garbage collection goes through. As you can see in the picture below, all objects under 85 kB in size are first assigned to Gen-0; when Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started. The process checks for all the dead objects, marks them as candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected, the rest are moved to Gen-2, and the Gen-0 objects are moved to Gen-1 to free up Gen-0 – but by this time your garbage collection process has started to take much more time than it usually does. Now, as I mentioned earlier, while garbage collection is being run all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete. Let's explore the heap a bit more…
    How can you address this? Well, two ways of addressing this: 1. Workaround – An x86 (32-bit) processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available; the OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application as 'Any CPU', the application is built such that it will run in 32-bit mode on an x86 machine and in 64-bit mode on an x64 machine. The problem with this is that not all applications are really x64-compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compiled your application in x86 mode by changing the 'Any CPU' selection to x86 in the team build, you would be able to run your application on an x64 machine in 32-bit mode (via WOW – Windows on Windows), and what that means is you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application. 2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs the question "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you headed in the right direction. Let's have a look at how. Performance Wizard – Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the performance session properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'. Now if you fire off the profiling session on your pages you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if more than 5% of your application's objects die right after making it to Gen-2, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. Well, now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product. Caching One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request you keep the data on the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences!
    Let's look at some code – trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache – a significant performance improvement – EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to populate the cache – no more concurrency issues. But let's say you are processing 10 requests per second: by the time I have taken the lock to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in and fetch the results again. The application will still be faster, because the next set of 10 users and so on would continue to get data from the cache. BUT if we added another null check after taking the lock to build the cache and before the actual call to the DB, then the 9 users who follow me would not make the extra trip to the database at all, and that would really improve performance. But didn't I say the code won't be very intuitive? Maybe you should leave a comment; you don't want another developer to come in and wonder why you are checking the cache key for null twice! The downside of caching is that you are storing the data outside of the database, and the data could be wrong, because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data – let's say only one entry point for data updates – you could write some logic to set the cache object to null every time new data is entered. But this approach will not work as soon as you have several ways of feeding data into the system or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which you micro-cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute you will have immense performance gains, reducing the search hits to the database by 90% or more. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight, you click on the book button because the fare seems too exciting and you get an error message telling you that the fare is not valid any more? Yes, exactly: that is a cache failure! These travel sites or price comparison engines are not going to hit the database every time you hit the compare button; instead the results are served from the cache, because the query results are micro-cached. It's a perfect trade-off: by micro-caching the results the site gains huge performance benefits, but every once in a while it annoys a customer because the fare has expired.
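    The original post shows the caching code as a screenshot, which is not reproduced in this excerpt. A rough C# sketch of the pattern being described – lock, second null check, and a short absolute expiry for micro-caching – with made-up type and method names, not the author's actual code:

        // using System; using System.Collections.Generic;
        // using System.Web; using System.Web.Caching;
        private static readonly object CacheLock = new object();

        public IList<SearchResult> GetSearchResults(string cacheKey, SearchCriteria criteria)
        {
            var results = HttpRuntime.Cache[cacheKey] as IList<SearchResult>;
            if (results == null)
            {
                lock (CacheLock)
                {
                    // Second check: requests queued behind the lock while another
                    // request populated the cache must not hit the database again.
                    results = HttpRuntime.Cache[cacheKey] as IList<SearchResult>;
                    if (results == null)
                    {
                        results = SearchDatabase(criteria);          // the expensive call
                        HttpRuntime.Cache.Insert(cacheKey, results, null,
                            DateTime.UtcNow.AddMinutes(1),           // micro-cache for 1 minute
                            System.Web.Caching.Cache.NoSlidingExpiration);
                    }
                }
            }
            return results;
        }

    With a one-minute expiry the worst case is serving slightly stale results for that minute, which is exactly the fare-expiry trade-off described above.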
    But the trade-off works in favour of these sites: they are still able to process 30+ page requests per second and cater to the site traffic, while maybe losing one customer every once in a while to a competitor who is probably using a similar caching technique anyway – and what are the odds that the user will not come back to their site sooner or later?
    Recap
    Resources Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on Codeplex might also be of interest to you.
    The Road Ahead Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc. – please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running test controllers/agents in the cloud. See you on the other side! Thank You!

    Read the article

  • CacheManager.getCacheFileBaseDir() always returns null

    - by Leon
    Hi, I've been trying to use the CacheManager for caching some HTTP requests, but it fails every time with a NullPointerException. After some digging I believe I found out why: CacheManager.getCacheFileBaseDir() always returns null, so when I try to use CacheManager.getCacheFile() or CacheManager.saveCacheFile() they fail. CacheManager.cacheDisabled() returns false :S I hadn't created a cache partition via the AVD manager, so I thought the problem lay there. But after creating a cache partition getCacheFile() still returns null: 03-16 00:25:16.321: ERROR/AndroidRuntime(296): Caused by: java.lang.NullPointerException 03-16 00:25:16.321: ERROR/AndroidRuntime(296): at android.webkit.CacheManager.getCacheFile(CacheManager.java:296) What could be the problem? I've got the code posted here: http://pastebin.com/eaJwfXEK But it's a bit messy because I've been trying tons of stuff. Why does CacheManager.getCacheFileBaseDir() return null and not a File object? Thanks in advance! Leon

    Read the article

  • How to prevent caching from jQuery Ajax?

    - by cynwong
    Hi, Could anyone please help me with this? I have a web page using a .manifest file for offline storage caching. On that page, I use a jQuery Ajax call to get data from the server. If I load the page for the first time, it is OK; I can switch between online and offline. But the problem is when I go back online and refresh the page: the jQuery Ajax call can no longer talk to the server. Is there a way for Ajax to talk to the server, or to clear the offline cache? My Ajax call is as such: $.ajax({ type: "GET", url: requestUrl, success: localSuccess, error: error, dataType: "text", cache:false });
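    Note that cache: false only appends a timestamp parameter to defeat HTTP caching; it does not bypass the HTML5 application cache. When a page is served from the application cache, requests to URLs not listed in the manifest are blocked unless they appear in a NETWORK section. A sketch of such a manifest (the file names and the /api/data path are placeholders for the real requestUrl):

        CACHE MANIFEST
        # Resources copied into the application cache:
        CACHE:
        index.html
        app.js
        style.css

        # Requests that must always go to the network when online:
        NETWORK:
        /api/data
        # or simply "*" to let every non-cached URL through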

    Read the article

  • Getting Started with CacheMoney

    - by Matt Grande
    I recently installed cache-money. After some difficulties getting memcached and cache-money set up, I thought I had it working. It cached the one query on my login page fine. I login, and go to my message index page and get this error: indices delegated to @cache_config.indices, but @cache_config is nil: Slug(id: integer, name: string, sluggable_id: integer, sequence: integer, sluggable_type: string, scope: string, created_at: datetime) Searching for the first part of that error message returns 0 hits on Google, so I'm at a loss on where to even begin. Any suggestions?

    Read the article

  • Memory mapped files causes low physical memory

    - by harik
    I have 2GB of RAM and am running a memory-intensive application; the system goes into a low-available-physical-memory state and stops responding to user actions, like opening any application or invoking a menu. How do I trigger or tell the system to swap memory to the pagefile and free physical memory? I'm using Windows XP. If I run the same application on a 4GB RAM machine this does not happen: system response is good. There, after available physical memory is exhausted, the system automatically swaps to the pagefile and frees physical memory, so it is not as bad as on the 2GB system. To overcome this problem (on the 2GB machine) I attempted to use memory-mapped files for the large datasets allocated by the application. In this case the virtual memory of the application (process) is fine, but the system cache is high and I have the same problem as above: little physical memory is left. Even though the memory-mapped file is not mapped into the process's virtual memory, the system cache is high. Why??? :( Any help is appreciated. Thanks.

    Read the article

  • How to retrieve only updated/new records since the last query in SQL?

    - by William Choi
    Hi all, I was asked to design a class for caching SQL query results. Calling the class's query method will query and cache the entire result set the first time; afterwards, each subsequent query will retrieve only the updated portion and merge the result into the cache. If the class is required to be generic, i.e. with NO knowledge about the DB and the tables, do you have any ideas? Is it possible, and how would you retrieve only the updated/new records since the last query? Thanks! William
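    Without some cooperation from the database (a change-tracking feature, triggers, or at least a known schema convention) there is no fully generic way to ask "what changed since last time". The usual compromise is to assume every cached table exposes a last-modified column and to remember a high-water mark per query. A sketch in SQL, with an assumed updated_at column and a :last_sync bind parameter:

        -- First load: fetch everything and remember MAX(updated_at) as the high-water mark.
        SELECT id, name, price, updated_at
        FROM   products;

        -- Each refresh: fetch only rows changed since the stored high-water mark,
        -- then merge them into the cached set and advance the mark.
        SELECT id, name, price, updated_at
        FROM   products
        WHERE  updated_at > :last_sync;

    Deleted rows are the usual blind spot of this scheme; they need either soft deletes (a deleted flag plus an updated_at bump) or a periodic full reload.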

    Read the article

  • IE form input data disappear after browser refresh

    - by RWW
    Hi, I'm trying to achieve sticky forms without PHP. My setup is Ajax-like JavaScript. Back/forward navigation works fine in both IE and FF, but refresh only works in FF, not IE. It doesn't matter what cache options I use; I've even set IE's temporary files option to never check for updates, and the input value is still gone after a page refresh (the refresh button or F5). I've read many posts where people have the opposite problem and do not want form data to persist across a page refresh or ever be read from the browser cache, but I do. Any help is appreciated, thanks!

    Read the article

  • How return 304 status with FileResult in ASP.NET MVC RC1

    - by Maysam
    As you may know, we have a new ActionResult called FileResult in the RC1 version of ASP.NET MVC. Using it, your action methods can return an image to the browser dynamically. Something like this: public ActionResult DisplayPhoto(int id) { Photo photo = GetPhotoFromDatabase(id); return File(photo.Content, photo.ContentType); } In the HTML code, we can use something like this: <img src="http://mysite.com/controller/DisplayPhoto/657"> Since the image is returned dynamically, we need a way to cache the returned stream so that we don't need to read the image from the database again. I guess we can do it with something like this, but I'm not sure: Response.StatusCode = 304; This tells the browser that it already has the image in its cache. I just don't know what to return from my action method after setting StatusCode to 304. Should I return null or something?
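    A sketch of one way this is commonly wired up (not taken from the MVC documentation): emit a Last-Modified header with the image, and on later requests compare the If-Modified-Since header before touching the content. It assumes the Photo class has a LastModified property (stored in UTC), which the snippet above does not show:

        public ActionResult DisplayPhoto(int id)
        {
            Photo photo = GetPhotoFromDatabase(id);

            // If the browser's copy is still current, answer 304 with no body.
            string header = Request.Headers["If-Modified-Since"];
            DateTime since;
            if (header != null && DateTime.TryParse(header, out since)
                && photo.LastModified <= since.ToUniversalTime().AddSeconds(1)) // header is GMT, 1s precision
            {
                Response.StatusCode = 304;
                Response.SuppressContent = true;   // headers only, no body
                return new EmptyResult();          // i.e. the "return null or something" part
            }

            Response.Cache.SetCacheability(HttpCacheability.Public);
            Response.Cache.SetLastModified(photo.LastModified);
            return File(photo.Content, photo.ContentType);
        }

    For simple cases the [OutputCache] attribute can often take care of client and server caching without manual 304 handling, at the cost of less control over validation.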

    Read the article

  • Very long strings as primary keys in a database for caching

    - by Bill Zimmerman
    Hi, I am working on a web app that allows users to create dynamic PDF files based on what they enter into a form (it is not very structured data). The idea is that User 1 enters several words (an arbitrary number of words, practically capped of course), for example: A B C D E. There is no such string in the database, so I was thinking: store this string as a primary key in a MySQL database (it could be maybe around 50-100 kB of text, but usually probably fewer than 200 words), generate the PDF file, and create a link to it in the database. When the next user requests A B C D E, I can then just serve the file instead of recreating it each time (a simple cache). The PDF is CPU-intensive to generate, so I am trying to cache as much as I can. My questions are: Does anyone have any alternative ideas to my approach? What will the database performance be like? Is there a better way to design the schema than using the input string as the primary key?

    Read the article

  • Zend_Cache_Backend_Sqlite vs Zend_Cache_Backend_File

    - by Alekc
    Hi, Currently I'm using Zend_Cache_Backend_File for caching my project (especially responses from external web services). I was wondering if I could find some benefit in migrating the structure to Zend_Cache_Backend_Sqlite. Possible advantages: the file system stays well-ordered (only one file in the cache folder), and removing expired entries should be quicker (my assumption, since Zend wouldn't need to scan the internal metadata of each cache entry for its expiry date). Possible disadvantage: finding a record to read might be slower (with files, Zend checks whether the file exists based on the filename, which should be a bit quicker). I've tried to search a bit on the internet but it seems there is not a lot of discussion about the matter. What do you think about it? Thanks in advance.

    Read the article

  • XmlDocument caching memory usage

    - by mdsharpe
    We are seeing very high memory usage in .NET web applications which use XmlDocument. A small (~5MB) XML document is loaded into an XmlDocument object and stored in HttpContext.Cache for easy querying and XSLT transformation on each page load. The XML is modified on disk periodically, so the cache entry has a dependency on the file. Such an application appears to be using hundreds of megabytes of RAM. I have experimented with requesting garbage collection at the start of each request, and this keeps the RAM usage far lower, but I cannot imagine this is good practice. Does anyone have any suggestions as to how we can achieve the same goal but with lower RAM usage?

    Read the article

  • A scheme for expiring downloaded content?

    - by Chad Johnson
    I am going to offer a web API service that allows users to download and "rent" content for a monthly subscription fee. The API will either be open to everyone or possibly just to select parties (not sure yet). Each developer must agree to a license and receives a personal developer key; each software application will have its own key as well. End users will then download the software, which will interact with my service's API. Each user will have a key for each application as well (probably using OAuth). Content will be cached on first download and accessible offline via just the third-party application that cached it. If a user cancels their subscription, I plan on doing the following: deactivate the user's OAuth key for all applications, and no longer allow the user's account to download new content via the API (and subsequently any software that uses the API). Now, the big question is: how do I make content expire if they cancel their subscription? If they cancel, they should not have access to the content anymore. Here are ideas I've thought of (some of these are half-solutions, not yet fully fleshed out): Require that applications encrypt downloaded content using the user's OAuth key, making it available only to the application; this will prevent most users from going to the cache directory and just copying and keeping files. Update the user's key once a month, forcing content to be re-cached on a monthly basis; users could then access content for a month after they cancel their subscription. Require applications to "phone home" [to the service] periodically and check whether the user's subscription has terminated; if so, require in the API developer license that applications expire the cache. If it is found that applications do not comply, their keys (and possibly keys for all developers) are permanently deactivated as a consequence. One major worry is that some applications may blatantly ignore the constraints of the license. Is it generally acceptable to rely on applications abiding by licensing constraints? Bad idea? Any other ideas? Maybe a way to make content auto-expire after x days? Something else? I'm open to out-of-the-box ideas.

    Read the article

  • quering an external oracle db in rails application

    - by railscoder
    I have a website which uses a MySQL database for its whole operation. But for a new requirement I need to query an external Oracle database (used by another component), compile a list of items and display them on a page of the website. How is it possible to connect to an external database just for rendering a single page? And is it possible to cache the queried result for, say, one month before invalidating the cache and getting the updated list of items? I don't want to query the external Oracle DB for each request.
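    A sketch of how this is often done in Rails (class, connection and table names are invented; an Oracle adapter such as oracle_enhanced must be installed, and Rails.cache needs a persistent store such as memcached for a month-long expiry to be meaningful):

        # config/database.yml gains an extra entry, e.g. "oracle_reporting".

        # app/models/external_item.rb
        class ExternalItem < ActiveRecord::Base
          establish_connection :oracle_reporting   # only this model talks to Oracle
          set_table_name 'ITEMS'                   # legacy table on the Oracle side
        end

        # In the controller: hit Oracle at most once a month, otherwise serve from cache.
        def index
          @items = Rails.cache.fetch('external_items', :expires_in => 30.days) do
            ExternalItem.all(:select => 'id, name', :conditions => { :active => 1 })
          end
        end

    If the list only changes monthly anyway, an even simpler alternative is a scheduled rake task that copies the rows into a table in the local MySQL database.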

    Read the article

  • PHP APC - Why is loading cached array op codes slow?

    - by Aaron Kreider
    I'm using APC to reduce the loading time of my PHP files. My files load very fast, except for one file in which I define more than 100 arrays. This 270 kB file takes 200 ms to load. The rest of the files are full of objects, methods, and functions. I'm wondering: does opcode caching not work as well for arrays? My APC cache should be big enough to handle all of my classes; currently 40% of my cache is free and my hit rate is 99%. apc.shm_size=32M apc.max_file_size = 1M apc.shm_segments= 1 APC 3.1.6 I'm using PHP 5.2, Apache 2, and Windows Vista.
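    The opcode cache still has to execute the cached opcodes on every request, and building 100+ large array literals is real work even when compilation is skipped. One common workaround is to build the arrays once and keep the resulting data in APC's user cache. A sketch (the key name and build_config_arrays() are placeholders for whatever the file actually defines):

        <?php
        // Fetch the prebuilt arrays from APC's user cache; rebuild on a miss.
        $config = apc_fetch('site_config_arrays', $hit);
        if ($hit === false) {
            $config = build_config_arrays();                // the expensive definitions
            apc_store('site_config_arrays', $config, 3600); // cache for an hour
        }

    Whether this beats the opcode path is worth measuring; copying a large array out of shared memory is not free either.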

    Read the article
