Search Results

Search found 15401 results on 617 pages for 'memory optimization'.

Page 44/617 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • C++ iterators & loop optimization

    - by Quantum7
    I see a lot of C++ code that looks like this: for( const_iterator it = list.begin(), ite = list.end(); it != ite; ++it) As opposed to the more concise version: for( const_iterator it = list.begin(); it != list.end(); ++it) Will there be any difference in speed between these two conventions? Naively, the first will be slightly faster since list.end() is only called once. But since the iterator is const, it seems like the compiler will pull this test out of the loop, generating equivalent assembly for both.
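
    For reference, a minimal sketch of the two forms as they would actually compile, using std::list<int> as a stand-in for the asker's container:

        #include <list>

        long sumBothWays(const std::list<int>& values) {
            long total = 0;

            // Form 1: end() cached once, before the loop starts.
            for (std::list<int>::const_iterator it = values.begin(), ite = values.end();
                 it != ite; ++it)
                total += *it;

            // Form 2: end() re-evaluated on every iteration. For most standard
            // containers end() is trivially cheap, and if the compiler can prove the
            // container is not modified inside the body it may hoist the call itself,
            // producing the same machine code as Form 1.
            for (std::list<int>::const_iterator it = values.begin(); it != values.end(); ++it)
                total += *it;

            return total;
        }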

    Read the article

  • MySQL optimization

    - by Jens
    I have this mysql table called comments which looks like this: commentID parentID type userID date comment The commentID is set as Primary key, but most of the time I fetch the data using the parentID. How should I set my indexes? Should I just add an index on parentID and let commentID be the primary key?

    Read the article

  • Constant embedded for loop condition optimization in C++ with gcc

    - by solinent
    Will a compiler optimize this: bool someCondition = someVeryTimeConsumingTask(/* ... */); for (int i=0; i<HUGE_INNER_LOOP; ++i) { if (someCondition) doCondition(i); else bacon(i); } into: bool someCondition = someVeryTimeConsumingTask(/* ... */); if (someCondition) for (int i=0; i<HUGE_INNER_LOOP; ++i) doCondition(i); else for (int i=0; i<HUGE_INNER_LOOP; ++i) bacon(i); someCondition is trivially constant within the for loop. This may seem obvious, and something I should just do myself, but if you have more than one condition you are dealing with permutations of for loops, so the code would get quite a bit longer (one way around that is sketched below). I am deciding whether to do it by hand (I am already optimizing) or whether it would be a waste of my time.
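
    One way to avoid writing the loop permutations by hand - a sketch only, with doCondition and bacon standing in for the question's helpers - is to make the condition a template parameter, so each instantiation keeps a single branch and the test is resolved at compile time. This is essentially manual loop unswitching, which GCC can also perform automatically with -funswitch-loops (enabled at -O3) when the pass fires:

        void doCondition(int i);   // stand-ins for the question's helpers
        void bacon(int i);

        template <bool Cond>
        void runLoop(int n) {
            for (int i = 0; i < n; ++i) {
                if (Cond)              // Cond is a compile-time constant here, so the
                    doCondition(i);    // dead branch is eliminated in each instantiation
                else
                    bacon(i);
            }
        }

        void run(bool someCondition, int n) {
            // One runtime branch selects the specialized loop.
            someCondition ? runLoop<true>(n) : runLoop<false>(n);
        }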

    Read the article

  • MySQL: optimization of table (indexing, foreign key) with no primary keys

    - by Haradzieniec
    Each member has 0 or more orders. Each order contains at least 1 item. memberid is a varchar, not an integer - that's OK (please do not mention that's not very good, I can't change it). So, there are 3 tables: members, orders and order_items. Orders and order_items are below: CREATE TABLE `orders` ( `orderid` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT, `memberid` VARCHAR( 20 ), `Time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP , `info` VARCHAR( 3200 ) NULL , PRIMARY KEY (orderid) , FOREIGN KEY (memberid) REFERENCES members(memberid) ) ENGINE = InnoDB; CREATE TABLE `order_items` ( `orderid` INT(11) UNSIGNED NOT NULL, `item_number_in_cart` tinyint(1) NOT NULL , --- 5 items in cart= 5 rows `price` DECIMAL (6,2) NOT NULL, FOREIGN KEY (orderid) REFERENCES orders(orderid) ) ENGINE = InnoDB; So, the order_items table looks like: orderid - item_number_in_cart - price: ... 1000456 - 1 - 24.99 1000456 - 2 - 39.99 1000456 - 3 - 4.99 1000456 - 4 - 17.97 1000457 - 1 - 20.00 1000458 - 1 - 99.99 1000459 - 1 - 2.99 1000459 - 2 - 69.99 1000460 - 1 - 4.99 ... As you can see, the order_items table has no primary key (and I think there is no sense in creating an auto_increment id for this table, because when we extract data we always extract the whole block, as in WHERE orderid='1000456' ORDER BY item_number_in_cart ASC, so an id wouldn't be helpful in queries). Once data is inserted into order_items, it's not UPDATEd, just SELECTed. The questions are: I think it's a good idea to put an index on item_number_in_cart - could anybody please confirm that? Is there anything else I have to do with order_items to increase performance, or does that look pretty good? I may be missing something because I'm a newbie. Thank you in advance.

    Read the article

  • LINQ optimization

    - by Budda
    Here is a piece of code: void MyFunc(List<MyObj> objects) { MyFunc1(objects); foreach( MyObj obj in objects.Where(obj1=>obj1.Good)) { // Do Action With Good Object } } void MyFunc1(List<MyObj> objects) { int iGoodCount = objects.Where(obj1=>obj1.Good).Count(); BeHappy(iGoodCount); // do other stuff with 'objects' collection } Here we see that the collection is analyzed twice, and each time the value of the 'Good' property is checked for every member: the 1st time when calculating the count of good objects, the 2nd when iterating through all good objects. It would be desirable to optimize that, and here is a straightforward solution: before the call to MyFunc1, create an additional temporary collection of good objects only (goodObjects; it can be IEnumerable); get the count of these objects and pass it as an additional parameter to MyFunc1; and in the 'MyFunc' method iterate not through 'objects.Where(...)' but through the 'goodObjects' collection. Not too bad an approach (as far as I can see), but an additional parameter has to be passed. Question: is there any LINQ out-of-the-box functionality that provides caching during the 1st Where().Count(), remembering the processed collection so it can be reused in the next iteration? Any thoughts are welcome. Thanks.

    Read the article

  • NSMutableArray memory leak when reloading objects

    - by Davin
    I am using Three20/TTThumbsviewcontroller to load photos. I am struggling since quite a some time now to fix memory leak in setting photosource. I am beginner in Object C & iOS memory management. Please have a look at following code and suggest any obvious mistakes or any errors in declaring and releasing variables. -- PhotoViewController.h @interface PhotoViewController : TTThumbsViewController <UIPopoverControllerDelegate,CategoryPickerDelegate,FilterPickerDelegate,UISearchBarDelegate>{ ...... NSMutableArray *_photoList; ...... @property(nonatomic,retain) NSMutableArray *photoList; -- PhotoViewController.m @implementation PhotoViewController .... @synthesize photoList; ..... - (void)LoadPhotoSource:(NSString *)query:(NSString *)title:(NSString* )stoneName{ NSLog(@"log- in loadPhotosource method"); if (photoList == nil) photoList = [[NSMutableArray alloc] init ]; [photoList removeAllObjects]; @try { sqlite3 *db; NSFileManager *fileMgr = [NSFileManager defaultManager]; NSString* documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0]; NSString *dbPath = [documentsPath stringByAppendingPathComponent: @"DB.s3db"]; BOOL success = [fileMgr fileExistsAtPath:dbPath]; if(!success) { NSLog(@"Cannot locate database file '%@'.", dbPath); } if(!(sqlite3_open([dbPath UTF8String], &db) == SQLITE_OK)) { NSLog(@"An error has occured."); } NSString *_sql = query;//[NSString stringWithFormat:@"SELECT * FROM Products where CategoryId = %i",[categoryId integerValue]]; const char *sql = [_sql UTF8String]; sqlite3_stmt *sqlStatement; if(sqlite3_prepare(db, sql, -1, &sqlStatement, NULL) != SQLITE_OK) { NSLog(@"Problem with prepare statement"); } if ([stoneName length] != 0) { NSString *wildcardSearch = [NSString stringWithFormat:@"%@%%",[stoneName stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]]; sqlite3_bind_text(sqlStatement, 1, [wildcardSearch UTF8String], -1, SQLITE_STATIC); } while (sqlite3_step(sqlStatement)==SQLITE_ROW) { NSString* urlSmallImage = @"Mahallati_NoImage.png"; NSString* urlThumbImage = @"Mahallati_NoImage.png"; NSString *designNo = [NSString stringWithUTF8String:(char *) sqlite3_column_text(sqlStatement,2)]; designNo = [designNo stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; NSString *desc = [NSString stringWithUTF8String:(char *) sqlite3_column_text(sqlStatement,7)]; desc = [desc stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; NSString *caption = designNo;//[designNo stringByAppendingString:desc]; caption = [caption stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; NSString *smallFilePath = [documentsPath stringByAppendingPathComponent: [NSString stringWithFormat:@"Small%@.JPG",designNo] ]; smallFilePath = [smallFilePath stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; if ([fileMgr fileExistsAtPath:smallFilePath]){ urlSmallImage = [NSString stringWithFormat:@"Small%@.JPG",designNo]; } NSString *thumbFilePath = [documentsPath stringByAppendingPathComponent: [NSString stringWithFormat:@"Thumb%@.JPG",designNo] ]; thumbFilePath = [thumbFilePath stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; if ([fileMgr fileExistsAtPath:thumbFilePath]){ urlThumbImage = [NSString stringWithFormat:@"Thumb%@.JPG",designNo]; } NSNumber *photoProductId = [NSNumber numberWithInt:(int)sqlite3_column_int(sqlStatement, 0)]; NSNumber *photoPrice = 
[NSNumber numberWithInt:(int)sqlite3_column_int(sqlStatement, 6)]; char *productNo1 = sqlite3_column_text(sqlStatement, 3); NSString* productNo; if (productNo1 == NULL) productNo = nil; else productNo = [NSString stringWithUTF8String:productNo1]; Photo *jphoto = [[[Photo alloc] initWithCaption:caption urlLarge:[NSString stringWithFormat:@"documents://%@",urlSmallImage] urlSmall:[NSString stringWithFormat:@"documents://%@",urlSmallImage] urlThumb:[NSString stringWithFormat:@"documents://%@",urlThumbImage] size:CGSizeMake(123, 123) productId:photoProductId price:photoPrice description:desc designNo:designNo productNo:productNo ] autorelease]; [photoList addObject:jphoto]; [jphoto release]; } } @catch (NSException *exception) { NSLog(@"An exception occured: %@", [exception reason]); } self.photoSource = [[[MockPhotoSource alloc] initWithType:MockPhotoSourceNormal title:[NSString stringWithFormat: @"%@",title] photos: photoList photos2:nil] autorelease]; } Memory leaks happen when calling above LoadPhotosource method again with different query... I feel its something wrong in declaring NSMutableArray (photoList), but can't figure out how to fix memory leak. Any suggestion is really appreciated.

    Read the article

  • Testing shared memory, strange things happen

    - by barfatchen
    I have 2 program compiled in 4.1.2 running in RedHat 5.5 , It is a simple job to test shared memory , shmem1.c like following : #define STATE_FILE "/program.shared" #define NAMESIZE 1024 #define MAXNAMES 100 typedef struct { char name[MAXNAMES][NAMESIZE]; int heartbeat ; int iFlag ; } SHARED_VAR; int main (void) { int first = 0; int shm_fd; static SHARED_VAR *conf; if((shm_fd = shm_open(STATE_FILE, (O_CREAT | O_EXCL | O_RDWR), (S_IREAD | S_IWRITE))) > 0 ) { first = 1; /* We are the first instance */ } else if((shm_fd = shm_open(STATE_FILE, (O_CREAT | O_RDWR), (S_IREAD | S_IWRITE))) < 0) { printf("Could not create shm object. %s\n", strerror(errno)); return errno; } if((conf = mmap(0, sizeof(SHARED_VAR), (PROT_READ | PROT_WRITE), MAP_SHARED, shm_fd, 0)) == MAP_FAILED) { return errno; } if(first) { for(idx=0;idx< 1000000000;idx++) { conf->heartbeat = conf->heartbeat + 1 ; } } printf("conf->heartbeat=(%d)\n",conf->heartbeat) ; close(shm_fd); shm_unlink(STATE_FILE); exit(0); }//main And shmem2.c like following : #define STATE_FILE "/program.shared" #define NAMESIZE 1024 #define MAXNAMES 100 typedef struct { char name[MAXNAMES][NAMESIZE]; int heartbeat ; int iFlag ; } SHARED_VAR; int main (void) { int first = 0; int shm_fd; static SHARED_VAR *conf; if((shm_fd = shm_open(STATE_FILE, (O_RDWR), (S_IREAD | S_IWRITE))) < 0) { printf("Could not create shm object. %s\n", strerror(errno)); return errno; } ftruncate(shm_fd, sizeof(SHARED_VAR)); if((conf = mmap(0, sizeof(SHARED_VAR), (PROT_READ | PROT_WRITE), MAP_SHARED, shm_fd, 0)) == MAP_FAILED) { return errno; } int idx ; for(idx=0;idx< 1000000000;idx++) { conf->heartbeat = conf->heartbeat + 1 ; } printf("conf->heartbeat=(%d)\n",conf->heartbeat) ; close(shm_fd); exit(0); } After compiled : gcc shmem1.c -lpthread -lrt -o shmem1.exe gcc shmem2.c -lpthread -lrt -o shmem2.exe And Run both program almost at the same time with 2 terminal : [test]$ ./shmem1.exe First creation of the shm. Setting up default values conf->heartbeat=(840825951) [test]$ ./shmem2.exe conf->heartbeat=(1215083817) I feel confused !! since shmem1.c is a loop 1,000,000,000 times , how can it be possible to have a answer like 840,825,951 ? I run shmem1.exe and shmem2.exe this way,most of the results are conf-heartbeat will larger than 1,000,000,000 , but seldom and randomly , I will see result conf-heartbeat will lesser than 1,000,000,000 , either in shmem1.exe or shmem2.exe !! if run shmem1.exe only , it is always print 1,000,000,000 , my question is , what is the reason cause conf-heartbeat=(840825951) in shmem1.exe ? Update: Although not sure , but I think I figure it out what is going on , If shmem1.exe run 10 times for example , then conf-heartbeat = 10 , in this time shmem1.exe take a rest and then back , shmem1.exe read from shared memory and conf-heartbeat = 8 , so shmem1.exe will continue from 8 , why conf-heartbeat = 8 ? I think it is because shmem2.exe update the shared memory data to 8 , shmem1.exe did not write 10 back to shared memory before it took a rest ....that is just my theory... i don't know how to prove it !!
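
    The asker's closing theory is essentially right: conf->heartbeat = conf->heartbeat + 1 is a non-atomic read-modify-write, so when both processes run at once each can read a stale value and overwrite the other's increments, which is why totals below 1,000,000,000 appear. A minimal sketch (not the original test program) of the same loop with an atomic counter, which removes the lost updates:

        // Build (assumption): g++ -O2 shmem_atomic.cpp -lrt -o shmem_atomic.exe
        #include <atomic>
        #include <cstdio>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        struct SharedVar {
            std::atomic<int> heartbeat;   // lock-free for int on x86, so usable across processes
        };

        int main() {
            int fd = shm_open("/program.shared", O_CREAT | O_RDWR, 0600);
            if (fd < 0) { std::perror("shm_open"); return 1; }
            ftruncate(fd, sizeof(SharedVar));      // newly created segment is zero-filled, so the counter starts at 0

            void* p = mmap(nullptr, sizeof(SharedVar), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
            SharedVar* conf = static_cast<SharedVar*>(p);

            for (int i = 0; i < 1000000000; ++i)
                conf->heartbeat.fetch_add(1, std::memory_order_relaxed);   // atomic increment, no lost updates

            std::printf("conf->heartbeat=(%d)\n", conf->heartbeat.load());
            close(fd);
            return 0;
        }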

    Read the article

  • Memory leak involving jQuery Ajax requests

    - by Eli Courtwright
    I have a webpage that's leaking memory in both IE8 and Firefox; the memory usage displayed in the Windows Process Explorer just keeps growing over time. The following page requests the "unplanned.json" url, which is a static file that never changes (though I do set my Cache-control HTTP header to no-cache to make sure that the Ajax request always goes through). When it gets the results, it clears out an HTML table, loops over the json array it got back from the server, and dynamically adds a row to an HTML table for each entry in the array. Then it waits 2 seconds and repeats this process. Here's the entire webpage: <html> <head> <title>Test Page</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script> </head> <body> <script type="text/javascript"> function kickoff() { $.getJSON("unplanned.json", resetTable); } function resetTable(rows) { $("#content tbody").empty(); for(var i=0; i<rows.length; i++) { $("<tr>" + "<td>" + rows[i].mpe_name + "</td>" + "<td>" + rows[i].bin + "</td>" + "<td>" + rows[i].request_time + "</td>" + "<td>" + rows[i].filtered_delta + "</td>" + "<td>" + rows[i].failed_delta + "</td>" + "</tr>").appendTo("#content tbody"); } setTimeout(kickoff, 2000); } $(kickoff); </script> <table id="content" border="1" style="width:100% ; text-align:center"> <thead><tr> <th>MPE</th> <th>Bin</th> <th>When</th> <th>Filtered</th> <th>Failed</th> </tr></thead> <tbody></tbody> </table> </body> </html> If it helps, here's an example of the json I'm sending back (it's this exact array wuith thousands of entries instead of just one): [ { mpe_name: "DBOSS-995", request_time: "09/18/2009 11:51:06", bin: 4, filtered_delta: 1, failed_delta: 1 } ] EDIT: I've accepted Toran's extremely helpful answer, but I feel I should post some additional code, since his removefromdom jQuery plugin has some limitations: It only removes individual elements. So you can't give it a query like `$("#content tbody tr")` and expect it to remove all of the elements you've specified. Any element that you remove with it must have an `id` attribute. So if I want to remove my `tbody`, then I must assign an `id` to my `tbody` tag or else it will give an error. It removes the element itself and all of its descendants, so if you simply want to empty that element then you'll have to re-create it afterwards (or modify the plugin to empty instead of remove). So here's my page above modified to use Toran's plugin. For the sake of simplicity I didn't apply any of the general performance advice offered by Peter. Here's the page which now no longer memory leaks: <html> <head> <title>Test Page</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script> </head> <body> <script type="text/javascript"> <!-- $.fn.removefromdom = function(s) { if (!this) return; var el = document.getElementById(this.attr("id")); if (!el) return; var bin = document.getElementById("IELeakGarbageBin"); //before deleting el, recursively delete all of its children. 
while (el.childNodes.length > 0) { if (!bin) { bin = document.createElement("DIV"); bin.id = "IELeakGarbageBin"; document.body.appendChild(bin); } bin.appendChild(el.childNodes[el.childNodes.length - 1]); bin.innerHTML = ""; } el.parentNode.removeChild(el); if (!bin) { bin = document.createElement("DIV"); bin.id = "IELeakGarbageBin"; document.body.appendChild(bin); } bin.appendChild(el); bin.innerHTML = ""; }; var resets = 0; function kickoff() { $.getJSON("unplanned.json", resetTable); } function resetTable(rows) { $("#content tbody").removefromdom(); $("#content").append('<tbody id="id_field_required"></tbody>'); for(var i=0; i<rows.length; i++) { $("#content tbody").append("<tr><td>" + rows[i].mpe_name + "</td>" + "<td>" + rows[i].bin + "</td>" + "<td>" + rows[i].request_time + "</td>" + "<td>" + rows[i].filtered_delta + "</td>" + "<td>" + rows[i].failed_delta + "</td></tr>"); } resets++; $("#message").html("Content set this many times: " + resets); setTimeout(kickoff, 2000); } $(kickoff); // --> </script> <div id="message" style="color:red"></div> <table id="content" border="1" style="width:100% ; text-align:center"> <thead><tr> <th>MPE</th> <th>Bin</th> <th>When</th> <th>Filtered</th> <th>Failed</th> </tr></thead> <tbody id="id_field_required"></tbody> </table> </body> </html> FURTHER EDIT: I'll leave my question unchanged, though it's worth noting that this memory leak has nothing to do with Ajax. In fact, the following code would memory leak just the same and be just as easily solved with Toran's removefromdom jQuery plugin: function resetTable() { $("#content tbody").empty(); for(var i=0; i<1000; i++) { $("#content tbody").append("<tr><td>" + "DBOSS-095" + "</td>" + "<td>" + 4 + "</td>" + "<td>" + "09/18/2009 11:51:06" + "</td>" + "<td>" + 1 + "</td>" + "<td>" + 1 + "</td></tr>"); } setTimeout(resetTable, 2000); } $(resetTable);

    Read the article

  • Google I/O 2011: Memory management for Android Apps

    Google I/O 2011: Memory management for Android Apps Patrick Dubroy Android apps have more memory available to them than ever before, but are you sure you're using it wisely? This talk will cover the memory management changes in Gingerbread and Honeycomb (concurrent GC, heap-allocated bitmaps, "largeHeap" option) and explore tools and techniques for profiling the memory usage of Android apps. From: GoogleDevelopers Views: 5698 45 ratings Time: 58:42 More in Science & Technology

    Read the article

  • My C: drive shows 33 GB less space than it should because of hidden or encrypted files I can't find

    - by Peter
    Hello, I was hoping someone could help me. My drive shows 92 GB used and 95 GB free out of a 220 GB partition, so about 33 GB is unaccounted for. I have already run Disk Cleanup, emptied the Recycle Bin, and cleared history and temp files. From what I have seen before, the missing space is probably files my brother hid (possibly encrypted) with a program he uses; I don't know its name, but I have seen him do it on a USB stick and on this PC, and the files show up neither as visible nor as hidden. Is there any way of finding them so I can delete them (my brother is nowhere to be found), or could it be something else? I have already tried FreeCommander as well.

    Read the article

  • Can I upgrade an Asus M51Sn laptop to 2x4GB of RAM? (DDR2)

    - by matteo
    My Asus M51Sn has 2 RAM slots which currently hold 1x1GB + 1x2GB DDR2-800 SODIMM modules. I've found out that 4GB DDR2 SODIMM modules do exist, though they are impossible to find in local stores near here, but I've found them in online stores like this one: http://www.pccomponentes.com/g_skill_ddr2_800_pc2_6400_4gb_so_dimm.html They seem to meet the specification, so can I replace both of my current modules with 2x4GB modules and reach a total of 8GB? Or should I worry about some limit (e.g. 4GB max, or 2GB per slot) imposed by the motherboard, chipset or whatever? (I currently use Ubuntu 12.04 32-bit, so I plan to use the PAE kernel, which supposedly supports 4GB of RAM on a 32-bit system; or I may consider switching to 64-bit Ubuntu. The question is about hardware limitations, not OS limitations.)

    Read the article

  • Exploring In-memory OLTP Engine (Hekaton) in SQL Server 2014 CTP1

    The continuing drop in the price of memory has made fast in-memory OLTP increasingly viable. SQL Server 2014 allows you to migrate the most-used tables in an existing database to memory-optimised 'Hekaton' technology, but how you balance between disk tables and in-memory tables for optimum performance requires judgement and experiment. What is this technology, and how can you exploit it? Rob Garrison explains.

    Read the article

  • How can we tell what is driving the private bytes spike?

    - by ronin
    I have websites running in a .NET Framework 2.0 environment. Recently, every day at a certain time my website becomes slow and I need to recycle my app pool. I checked the logs and found that private bytes spike during that time slot. Through some research I already know that private bytes include both managed and unmanaged allocations, and that the "# Bytes in all Heaps" counter can help identify which of the two is causing the spike. But I can't find a way to dig deeper. Is there any way to find out what is driving my private bytes spike? How can we see what the private bytes are being used for? Thanks, Ronin

    Read the article

  • When to unload graphics object from main memory?

    - by piotrek
    I'm writing my resource manager, and I'm thinking about how it should handle graphics objects (like textures and meshes). Here is the idea. I want to load a texture (in pseudocode): Texture t = resMgr.GetTex("image.png"); and GetTex does something like this: load the texture from disk into main memory; create the texture object (load it into GPU memory); unload the texture from main memory. My question is about step 3: do the game engines you know unload meshes/textures from main memory after loading them into GPU memory? (A sketch of these steps is below.)
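
    A common pattern is indeed to free the main-memory copy once the GPU copy exists, unless CPU-side access (readbacks, collision data, re-upload after device loss) is still needed. A minimal sketch of the three steps above; Image, Texture, loadImageFromDisk and uploadToGpu are illustrative placeholders, not a real engine API:

        #include <cstdint>
        #include <string>
        #include <vector>

        struct Image   { int width = 0, height = 0; std::vector<std::uint8_t> pixels; };
        struct Texture { unsigned gpuHandle = 0; };     // e.g. an OpenGL texture id

        // Placeholder stubs standing in for real decode/upload code.
        Image    loadImageFromDisk(const std::string& /*path*/) { return Image{}; }
        unsigned uploadToGpu(const Image& /*img*/)              { return 1; }

        Texture GetTex(const std::string& path) {
            Image cpuCopy = loadImageFromDisk(path);    // 1. disk -> main memory
            Texture t;
            t.gpuHandle = uploadToGpu(cpuCopy);         // 2. main memory -> GPU memory
            // 3. cpuCopy is destroyed when it goes out of scope, so the main-memory
            //    pixels are released here; keep them (or re-load on demand) only if
            //    the engine still needs CPU-side access to the data.
            return t;
        }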

    Read the article

  • Search Engine Optimization Crucial For Site Page Rank

    Search engine optimization is a process for driving traffic to your blog or site. Search engines are one of the best ways to bring in the traffic that will boost your product sales, and as far as internet marketing is concerned, search engine optimization is among the most effective approaches. The rewards are numerous, but two stand out: your blog will rank higher, and you will generate traffic that translates directly into higher sales of your product. For a long time now, sitemaps have helped online business people achieve web page optimization.

    Read the article

  • Is an average RAM usage per Apache process of 43 MB "normal" for a Social Networking site? [closed]

    - by Programmer
    I have a Social Networking site that runs on a single LAMP Server that handles everything. The average RAM usage per Apache process is 43 MB. Is that amount roughly within the expected range for a Social Networking site, or is it too high? If it's too high, where and how can I look to bring that average number down? (If you need more details to determine whether it's within the expected range or not, just let me know and I'll edit my question to provide them as best I can.)

    Read the article

  • DB12c In-Memory & JSON

    - by katsumii
    Oracle Database 12c PS1 (Patch Set 1, 12.1.0.2.0) adds JSON support and the In-Memory Option. [Related] "Oracle Database 12c In-Memory" on the Oracle Technology Network Japan Blog, and an Oracle Database 12c In-Memory Option seminar on August 28, 19:00-20:40.

    Read the article

  • PostgreSQL - Why are some queries on large datasets so incredibly slow

    - by Brad Mathews
    Hello, I have two types of queries I run often on two large datasets. They run much slower than I would expect them to. The first type is a sequential scan updating all records: Update rcra_sites Set street = regexp_replace(street,'/','','i') rcra_sites has 700,000 records. It takes 22 minutes from pgAdmin! I wrote a vb.net function that loops through each record and sends an update query for each record (yes, 700,000 update queries!) and it runs in less than half the time. Hmmm.... The second type is a simple update with a relation and then a sequential scan: Update rcra_sites as sites Set violations='No' From narcra_monitoring as v Where sites.agencyid=v.agencyid and v.found_violation_flag='N' narcra_monitoring has 1,700,000 records. This takes 8 minutes. The query planner refuses to use my indexes. The query runs much faster if I start with a set enable_seqscan = false;. I would prefer if the query planner would do its job. I have appropriate indexes, I have vacuumed and analyzed. I optimized my shared_buffers and effective_cache_size best I know to use more memory since I have 4GB. My hardware is pretty darn good. I am running v8.4 on Windows 7. Is PostgreSQL just this slow? Or am I still missing something? Thanks! Brad

    Read the article

  • UITableView's NSString memory leak on iPhone when encoding with NSUTF8StringEncoding

    - by vince
    my UITableView have serious memory leak problem only when the NSString is NOT encoding with NSASCIIStringEncoding. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"cell"; UILabel *textLabel1; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; textLabel1 = [[UILabel alloc] initWithFrame:CGRectMake(105, 6, 192, 22)]; textLabel1.tag = 1; textLabel1.textColor = [UIColor whiteColor]; textLabel1.backgroundColor = [UIColor blackColor]; textLabel1.numberOfLines = 1; textLabel1.adjustsFontSizeToFitWidth = NO; [textLabel1 setFont:[UIFont boldSystemFontOfSize:19]]; [cell.contentView addSubview:textLabel1]; [textLabel1 release]; } else { textLabel1 = (UILabel *)[cell.contentView viewWithTag:1]; } NSDictionary *tmpDict = [listOfInfo objectForKey:[NSString stringWithFormat:@"%@",indexPath.row]]; textLabel1.text = [tmpDict objectForKey:@"name"]; return cell; } -(void) readDatabase { NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDir = [documentPaths objectAtIndex:0]; databasePath = [documentsDir stringByAppendingPathComponent:[NSString stringWithFormat:@"%@",myDB]]; sqlite3 *database; if(sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) { const char sqlStatement = [[NSString stringWithFormat:@"select id,name from %@ order by orderid",myTable] UTF8String]; sqlite3_stmt *compiledStatement; if(sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) { while(sqlite3_step(compiledStatement) == SQLITE_ROW) { NSString *tmpid = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)]; NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSUTF8StringEncoding]; [listOfInfo setObject:[[NSMutableDictionary alloc] init] forKey:tmpid]; [[listOfInfo objectForKey:tmpid] setObject:[NSString stringWithFormat:@"%@", tmpname] forKey:@"name"]; } } sqlite3_finalize(compiledStatement); debugNSLog(@"sqlite closing"); } sqlite3_close(database); } when i change the line NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSUTF8StringEncoding]; to NSString *tmpname = [NSString stringWithCString:(const char *)sqlite3_column_text(compiledStatement, 1) encoding:NSASCIIStringEncoding]; the memory leak is gone i tried NSString stringWithUTF8String and it still leak. i've also tried: NSData *dtmpname = [NSData dataWithBytes:sqlite3_column_blob(compiledStatement, 1) length:sqlite3_column_bytes(compiledStatement, 1)]; NSString *tmpname = [[[NSString alloc] initWithData:dtmpname encoding:NSUTF8StringEncoding] autorelease]; and the problem remains, the leak occur when u start scrolling the tableview. i've actually tried other encoding and it seems that only NSASCIIStringEncoding works(no memory leak) any idea how to get rid of this problem?

    Read the article

  • AVAudioRecorder Memory Leak

    - by Eric Ranschau
    I'm hoping someone out there can back me up on this... I've been working on an application that allows the end user to record a small audio file for later playback and am in the process of testing for memory leaks. I continue to very consistently run into a memory leak when the AVAudioRecorder's "stop" method attempts to close the audio file to which it's been recording. This really seems to be a leak in the framework itself, but if I'm being a bonehead you can tell me. To illustrate, I've worked up a stripped down test app that does nothing but start/stop a recording w/ the press of a button. For the sake of simplicty, everything happens in app. delegate as follows: @synthesize audioRecorder, button; @synthesize window; - (BOOL) application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { // create compplete path to database NSString *tempPath = NSTemporaryDirectory(); NSString *audioFilePath = [tempPath stringByAppendingString:@"/customStatement.caf"]; // define audio file url NSURL *audioFileURL = [[NSURL alloc] initFileURLWithPath:audioFilePath]; // define audio recorder settings NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys: [NSNumber numberWithInt:kAudioFormatAppleIMA4], AVFormatIDKey, [NSNumber numberWithInt:1], AVNumberOfChannelsKey, [NSNumber numberWithInt:AVAudioQualityLow], AVSampleRateConverterAudioQualityKey, [NSNumber numberWithFloat:44100], AVSampleRateKey, [NSNumber numberWithInt:8], AVLinearPCMBitDepthKey, nil ]; // define audio recorder audioRecorder = [[AVAudioRecorder alloc] initWithURL:audioFileURL settings:settings error:nil]; [audioRecorder setDelegate:self]; [audioRecorder setMeteringEnabled:YES]; [audioRecorder prepareToRecord]; // define record button button = [UIButton buttonWithType:UIButtonTypeRoundedRect]; [button addTarget:self action:@selector(handleTouch_recordButton) forControlEvents:UIControlEventTouchUpInside]; [button setFrame:CGRectMake(110.0, 217.5, 100.0, 45.0)]; [button setTitle:@"Record" forState:UIControlStateNormal]; [button setTitle:@"Stop" forState:UIControlStateSelected]; // configure the main view controller UIViewController *viewController = [[UIViewController alloc] init]; [viewController.view addSubview:button]; // add controllers to window [window addSubview:viewController.view]; [window makeKeyAndVisible]; // release [audioFileURL release]; [settings release]; [viewController release]; return YES; } - (void) handleTouch_recordButton { if ( ![button isSelected] ) { [button setSelected:YES]; [audioRecorder record]; } else { [button setSelected:NO]; [audioRecorder stop]; } } - (void) dealloc { [audioRecorder release]; [button release]; [window release]; [super dealloc]; } The stack trace from Instruments that shows pretty clearly that the "closeFile" method in the AVFoundation code is leaking...something. You can see a screen shot of the Instruments session here: Developer Forums: AVAudioRecorder Memory Leak Any thoughts would be greatly appreciated!

    Read the article

  • .NET RegEx "Memory Leak" investigation

    - by Kevin Pullin
    I recently looked into some .NET "memory leaks" (i.e. unexpected, lingering GC rooted objects) in a WinForms app. After loading and then closing a huge report, the memory usage did not drop as expected even after a couple of gen2 collections. Assuming that the reporting control was being kept alive by a stray event handler I cracked open WinDbg to see what was happening... Using WinDbg, the !dumpheap -stat command reported a large amount of memory was consumed by string instances. Further refining this down with the !dumpheap -type System.String command I found the culprit, a 90MB string used for the report, at address 03be7930. The last step was to invoke !gcroot 03be7930 to see which object(s) were keeping it alive. My expectations were incorrect - it was not an unhooked event handler hanging onto the reporting control (and report string), but instead it was held on by a System.Text.RegularExpressions.RegexInterpreter instance, which itself is a descendant of a System.Text.RegularExpressions.CachedCodeEntry. Now, the caching of Regexs is (somewhat) common knowledge as this helps to reduce the overhead of having to recompile the Regex each time it is used. But what then does this have to do with keeping my string alive? Based on analysis using Reflector, it turns out that the input string is stored in the RegexInterpreter whenever a Regex method is called. The RegexInterpreter holds onto this string reference until a new string is fed into it by a subsequent Regex method invocation. I'd expect similar behaviour by hanging onto Regex.Match instances and perhaps others. The chain is something like this: Regex.Split, Regex.Match, Regex.Replace, etc Regex.Run RegexScanner.Scan (RegexScanner is the base class, RegexInterpreter is the subclass described above). The offending Regex is only used for reporting, rarely used, and therefore unlikely to be used again to clear out the existing report string. And even if the Regex was used at a later point, it would probably be processing another large report. This is a relatively significant problem and just plain feels dirty. All that said, I found a few options on how to resolve, or at least work around, this scenario. I'll let the community respond first and if no takers come forward I will fill in any gaps in a day or two.

    Read the article

  • Flex 4 Spark VideoDisplay in Popup causes memory leak

    - by Ben
    Hi, I'm currently building an AIR app with FB 4. I have a custom control that contains a VideoDisplay control and is loaded using the PopupManager. Using the profiler, I've noticed that every time my popup is loaded the memory for it gets allocated, but when it's closed the memory is never recovered. There's nothing else holding a reference to the popup. And if I don't set the source of the VideoDisplay object, then there is no leak - but as soon as the source is set I get a leak. I can't see any method to force-close the stream or anything on the Spark VideoDisplay control. Any ideas or suggestions? EDIT: I have tried setting the source to null before closing the popup but that doesn't change anything. Also, I'm not holding any event listener on the video.

    Read the article

  • LINQ to SQL DataContext Windsor IoC memory leak problem

    - by Mr. Flibble
    I have an ASP.NET MVC app that creates a Linq2SQL datacontext on a per-web-request basis using Castle Windsor IoC. For some reason that I do not fully understand, every time a new datacontext is created (on every web request) about 8 KB of memory is taken up and not released - which inevitably causes an OutOfMemory exception. If I force garbage collection the memory is released OK. My datacontext accessor class is very simple: public class DataContextAccessor : IDataContextAccessor { private readonly DataContext dataContext; public DataContextAccessor(string connectionString) { dataContext = new DataContext(connectionString); } public DataContext DataContext { get { return dataContext; } } } The Windsor IoC web.config entry to instantiate this looks like so: <component id="DataContextAccessor" service="DomainModel.Repositories.IDataContextAccessor, DomainModel" type="DomainModel.Repositories.DataContextAccessor, DomainModel" lifestyle="PerWebRequest"> <parameters> <connectionString> ... </connectionString> </parameters> </component> Does anyone know what the problem is, and how to fix it?

    Read the article
