Search Results

Search found 12417 results on 497 pages for 'memory leak'.


  • Python: Memory usage and optimization when modifying lists

    - by xApple
    The problem: My concern is the following. I am storing a relatively large dataset in a classical Python list, and in order to process the data I must iterate over the list several times, perform some operations on the elements, and often pop an item out of the list. It seems that deleting one item out of a Python list costs O(N), since Python has to copy all the items above the element at hand down one place. Furthermore, since the number of items to delete is approximately proportional to the number of elements in the list, this results in an O(N^2) algorithm. I am hoping to find a solution that is cost effective (time- and memory-wise). I have studied what I could find on the internet and have summarized my different options below. Which one is the best candidate?

    Keeping a local index:

        while processingdata:
            index = 0
            while index < len(somelist):
                item = somelist[index]
                dosomestuff(item)
                if somecondition(item):
                    del somelist[index]
                else:
                    index += 1

    This is the original solution I came up with. Not only is this not very elegant, but I am hoping there is a better way to do it that remains time and memory efficient.

    Walking the list backwards:

        while processingdata:
            for i in xrange(len(somelist) - 1, -1, -1):
                item = somelist[i]
                dosomestuff(item)
                if somecondition(somelist, i):
                    somelist.pop(i)

    This avoids incrementing an index variable but ultimately has the same cost as the original version. It also breaks the logic of dosomestuff(item), which wishes to process the items in the same order as they appear in the original list.

    Making a new list:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            newlist = []
            for item in somelist:
                if somecondition(item):
                    newlist.append(item)
            somelist = newlist
            gc.collect()

    This is a very naive strategy for eliminating elements from a list, and it requires lots of memory since an almost full copy of the list must be made.

    Using list comprehensions:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            somelist[:] = [x for x in somelist if somecondition(x)]

    This is very elegant, but under the covers it walks the whole list one more time and must copy most of the elements in it. My intuition is that this operation probably costs more than the original del statement, at least memory-wise. Keep in mind that somelist can be huge and that any solution that iterates through it only once per run will probably always win.

    Using the filter function:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            somelist = filter(lambda x: not subtle_condition(x), somelist)

    This also creates a new list occupying lots of RAM.

    Using itertools' filter function:

        from itertools import ifilterfalse

        while processingdata:
            for item in ifilterfalse(somecondition, somelist):
                dosomestuff(item)

    This version of the filter call does not create a new list, but it will not call dosomestuff on every item, breaking the logic of the algorithm. I am including this example only for the purpose of creating an exhaustive list.

    Moving items up the list while walking:

        while processingdata:
            index = 0
            for item in somelist:
                dosomestuff(item)
                if not somecondition(item):
                    somelist[index] = item
                    index += 1
            del somelist[index:]

    This is a subtle method that seems cost effective. I think it will move each item (or the pointer to each item?) exactly once, resulting in an O(N) algorithm. Finally, I hope Python will be intelligent enough to resize the list at the end without allocating memory for a new copy of the list. Not sure though.
    Abandoning Python lists:

        class Doubly_Linked_List:
            def __init__(self):
                self.first = None
                self.last = None
                self.n = 0

            def __len__(self):
                return self.n

            def __iter__(self):
                return DLLIter(self)

            def iterator(self):
                return self.__iter__()

            def append(self, x):
                x = DLLElement(x)
                x.next = None
                if self.last is None:
                    x.prev = None
                    self.last = x
                    self.first = x
                    self.n = 1
                else:
                    x.prev = self.last
                    x.prev.next = x
                    self.last = x
                    self.n += 1

        class DLLElement:
            def __init__(self, x):
                self.next = None
                self.data = x
                self.prev = None

        class DLLIter:
            etc...

    This type of object resembles a python list in a limited way. However, deletion of an element is guaranteed O(1). I would not like to go here since this would require massive amounts of code refactoring almost everywhere.

    Read the article

  • Basic C question, concerning memory allocation and value assignment

    - by VHristov
    Hi there, I have recently started working on my master's thesis in C, a language I haven't used in quite a long time. Being used to Java, I'm now facing all kinds of problems all the time. I hope someone can help me with the following one, since I've been struggling with it for the past two days. So I have a really basic model of a database: tables, tuples, attributes, and I'm trying to load some data into this structure. Following are the definitions:

        typedef struct attribute {
            int type;
            char * name;
            void * value;
        } attribute;

        typedef struct tuple {
            int tuple_id;
            int attribute_count;
            attribute * attributes;
        } tuple;

        typedef struct table {
            char * name;
            int row_count;
            tuple * tuples;
        } table;

    Data is coming from a file with inserts (generated for the Wisconsin benchmark), which I'm parsing. I have only integer or string values. A sample row would look like:

        insert into table values (9205, 541, 1, 1, 5, 5, 5, 5, 0, 1, 9205, 10, 11, 'HHHHHHH', 'HHHHHHH', 'HHHHHHH');

    I've "managed" to load and parse the data and also to assign it. However, the assignment bit is buggy, since all values point to the same memory location, i.e. all rows look identical after I've loaded the data. Here is what I do:

        char value[10]; // assuming no value is longer than 10 chars
        int i, j, k;
        table * data = (table*) malloc(sizeof(data));
        data->name = "table";
        data->row_count = number_of_lines;
        data->tuples = (tuple*) malloc(number_of_lines*sizeof(tuple));
        tuple* current_tuple;
        for(i=0; i<number_of_lines; i++) {
            current_tuple = &data->tuples[i];
            current_tuple->tuple_id = i;
            current_tuple->attribute_count = 16; // static in our system
            current_tuple->attributes = (attribute*) malloc(16*sizeof(attribute));
            for(k = 0; k < 16; k++) {
                current_tuple->attributes[k].name = attribute_names[k];
                // for int values:
                current_tuple->attributes[k].type = DB_ATT_TYPE_INT;
                // write data into value-field
                int v = atoi(value);
                current_tuple->attributes[k].value = &v;
                // for string values:
                current_tuple->attributes[k].type = DB_ATT_TYPE_STRING;
                current_tuple->attributes[k].value = value;
            }
            // ...
        }

    While I am perfectly aware of why this is not working, I can't figure out how to get it working. I've tried the following things, none of which worked:

        memcpy(current_tuple->attributes[k].value, &v, sizeof(int));

    This results in a bad access error. Same for the following code (since I'm not quite sure which one would be the correct usage):

        memcpy(current_tuple->attributes[k].value, &v, 1);

    Not even sure if memcpy is what I need here... Also I've tried allocating memory, by doing something like:

        current_tuple->attributes[k].value = (int *) malloc(sizeof(int));

    only to get "malloc: *** error for object 0x100108e98: incorrect checksum for freed object - object was probably modified after being freed." As far as I understand this error, memory has already been allocated for this object, but I don't see where this happened. Doesn't the malloc(sizeof(attribute)) only allocate the memory needed to store an integer and two pointers (i.e. not the memory those pointers point to)? Any help would be greatly appreciated! Regards, Vassil
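
    Not the poster's own solution, but a minimal sketch of one way the inner loop could give every attribute its own storage, assuming the attribute struct, the value[] parse buffer, and the DB_ATT_TYPE_INT / DB_ATT_TYPE_STRING constants from the question. The key point is to allocate a fresh block per attribute and copy the bytes into it, instead of storing the address of the stack variable v or of the shared value buffer:

        #include <stdlib.h>
        #include <string.h>   /* strdup() is POSIX; memcpy works everywhere */

        /* Hypothetical helpers; error handling omitted for brevity. */
        static void set_int_attribute(attribute *a, const char *text) {
            int *slot = malloc(sizeof *slot);   /* one int per attribute */
            *slot = atoi(text);                 /* copy the value, not a stack address */
            a->type = DB_ATT_TYPE_INT;
            a->value = slot;
        }

        static void set_string_attribute(attribute *a, const char *text) {
            a->type = DB_ATT_TYPE_STRING;
            a->value = strdup(text);            /* allocates strlen(text)+1 bytes and
                                                   copies, so the shared parse buffer is
                                                   no longer aliased by every row */
        }

    Inside the loop the code would then call set_int_attribute(&current_tuple->attributes[k], value) or the string variant, and free each value (and the attributes array) when the table is torn down. Note also that malloc(sizeof(data)) only allocates the size of a pointer; malloc(sizeof(table)), or malloc(sizeof *data), is what is needed for the table itself.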

    Read the article

  • Reed-Solomon encoder for embedded application (memory-efficient)

    - by bjarkef
    Hi, I am looking for a very memory-efficient implementation of a Reed-Solomon encoder (at most around 500 bytes of memory for lookup tables etc.) for use in an embedded application. I am interested in coding blocks of 10 bytes with 5 bytes of parity. Speed is of little importance. Do you know of any freely available implementations that I can use for this purpose? Thanks in advance.
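
    Not a pointer to an existing library, but as an illustration of how small a systematic RS(15,10) encoder over GF(2^8) can be, here is a rough sketch that avoids lookup tables entirely by multiplying bit-serially (acceptable here since speed does not matter). The primitive polynomial 0x11d and the choice of alpha^0..alpha^4 as generator roots are assumptions and must match whatever decoder is used:

        #include <stdint.h>
        #include <string.h>

        #define NDATA 10            /* data bytes per block   */
        #define NPAR   5            /* parity bytes per block */

        /* Bit-serial multiply in GF(2^8), modulo x^8+x^4+x^3+x^2+1 (0x11d).
           No exp/log tables, so the RAM cost is essentially zero. */
        static uint8_t gf_mul(uint8_t a, uint8_t b) {
            uint8_t p = 0;
            while (b) {
                if (b & 1) p ^= a;
                b >>= 1;
                a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
            }
            return p;
        }

        /* Generator polynomial g(x) = (x+a^0)(x+a^1)...(x+a^(NPAR-1)),
           stored as gen[k] = coefficient of x^k, with gen[NPAR] == 1. */
        static void rs_generator(uint8_t gen[NPAR + 1]) {
            uint8_t root = 1;                        /* alpha^0 */
            memset(gen, 0, NPAR + 1);
            gen[0] = 1;
            for (int i = 0; i < NPAR; i++) {
                for (int j = i + 1; j > 0; j--)
                    gen[j] = gen[j - 1] ^ gf_mul(gen[j], root);
                gen[0] = gf_mul(gen[0], root);
                root = gf_mul(root, 0x02);           /* next power of alpha */
            }
        }

        /* Systematic encoding: parity = (data(x) * x^NPAR) mod g(x),
           computed by straightforward polynomial long division. */
        void rs_encode(const uint8_t data[NDATA], uint8_t parity[NPAR]) {
            uint8_t gen[NPAR + 1], buf[NDATA + NPAR];
            rs_generator(gen);
            memcpy(buf, data, NDATA);
            memset(buf + NDATA, 0, NPAR);
            for (int i = 0; i < NDATA; i++) {
                uint8_t coef = buf[i];               /* g is monic: this term cancels */
                if (coef)
                    for (int j = 1; j <= NPAR; j++)
                        buf[i + j] ^= gf_mul(gen[NPAR - j], coef);
            }
            memcpy(parity, buf + NDATA, NPAR);
        }

    The transmitted block is the 10 data bytes followed by the 5 parity bytes; working memory is the 15-byte buffer plus the 6-byte generator polynomial, well inside a 500-byte budget. If the budget ever relaxes, the usual exp/log tables (roughly 512 bytes) make gf_mul much faster.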

    Read the article

  • Linux Shared Memory

    - by Betamoo
    The function which creates shared memory in *nix programming takes a key as one of its parameters. What is the meaning of this key? And how can I use it? Edit: I mean the key, not the shared memory id.
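
    Assuming this refers to the System V call shmget(): the key is just an application-chosen integer name for the segment, so that unrelated processes which agree on the same key end up attached to the same memory; shmget() then returns the shared memory id that the edit mentions. A key is commonly derived from an existing file path with ftok(). A minimal sketch (the path /tmp/myapp.key and project id 'A' are made-up values):

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        int main(void) {
            /* Every process that calls ftok() with the same existing file and
               project id gets the same key, and therefore the same segment. */
            key_t key = ftok("/tmp/myapp.key", 'A');
            if (key == (key_t)-1) { perror("ftok"); return 1; }

            /* Create the 4 KiB segment for that key, or open it if it exists. */
            int shmid = shmget(key, 4096, IPC_CREAT | 0666);
            if (shmid == -1) { perror("shmget"); return 1; }

            /* Attach it into this process's address space and use it. */
            char *mem = shmat(shmid, NULL, 0);
            if (mem == (void *)-1) { perror("shmat"); return 1; }

            sprintf(mem, "hello from pid %d", (int)getpid());

            shmdt(mem);                 /* detach when done */
            /* shmctl(shmid, IPC_RMID, NULL) would mark the segment for removal. */
            return 0;
        }

    Passing the special key IPC_PRIVATE instead creates a segment that only related processes (for example a child after fork()) can reach, since no other process can name it.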

    Read the article

  • java memory management

    - by pavlos
    I have the following code snippet:

        public void getResults(String expression, String collection) {
            ReferenceList list;
            Vector lists = new Vector();
            list = invertedIndex.get(1); // invertedIndex is a class member
            lists.add(list);
        }

    When the method is finished, I suppose that the local objects (list, lists) are "destroyed". Can you tell me if the memory occupied by the list stored in invertedIndex is released as well? Or does Java allocate new memory for list when assigning list = invertedIndex.get(1)?

    Read the article

  • Can immutable be a memory hog?

    - by ciscoheat
    Let's say we have a memory-intensive class like an Image, with chainable methods like Resize() and ConvertTo(). If this class is immutable, won't it take a huge amount of memory when I start doing things like i.Resize(500, 800).Rotate(90).ConvertTo(Gif), compared to a mutable one which modifies itself? How do you handle a situation like this in a functional language?

    Read the article

  • Reopen error in bdb 4.7 memory

    - by user207634
    I create a memory pool with the DB_CREATE flag after opening a Db_Env with the flags DB_CREATE | DB_INIT_MPOOL | DB_SYSTEM_MEM. The first time I run the program it works and creates some db files such as _db.001, _db.002 and mpool, but after I close the program and run it again, it raises an error saying the system cannot find the specified file ("Mpool: PANIC: No such file or directory"). If I delete all the files in the memory pool folder and run it again, it works. How can I fix this problem?

    Read the article

  • How to remove NSString Related Memory Leaks?

    - by Rahul Vyas
    In my application this method shows a memory leak. How do I remove the leak?

        -(void)getOneQuestion:(int)flashcardId categoryID:(int)categoryId {
            flashCardText = [[NSString alloc] init];
            flashCardAnswer = [[NSString alloc] init];
            //NSLog(@"%s %d %s", __FILE__, __LINE__, __PRETTY_FUNCTION__, __FUNCTION__);
            sqlite3 *MyDatabase;
            sqlite3_stmt *CompiledStatement = nil;
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            NSString *MyDatabasePath = [documentsDirectory stringByAppendingString:@"/flashCardDatabase.sqlite"];
            if(sqlite3_open([MyDatabasePath UTF8String], &MyDatabase) == SQLITE_OK)
            {
                sqlite3_prepare_v2(MyDatabase, "select flashCardText,flashCardAnswer,flashCardTotalOption from flashcardquestionInfo where flashCardId=? and categoryId=?", -1, &CompiledStatement, NULL);
                sqlite3_bind_int(CompiledStatement, 1, flashcardId);
                sqlite3_bind_int(CompiledStatement, 2, categoryId);
                while(sqlite3_step(CompiledStatement) == SQLITE_ROW)
                {
                    self.flashCardText = [NSString stringWithUTF8String:(char *)sqlite3_column_text(CompiledStatement, 0)];
                    self.flashCardAnswer = [NSString stringWithUTF8String:(char *)sqlite3_column_text(CompiledStatement, 1)];
                    flashCardTotalOption = [[NSNumber numberWithInt:sqlite3_column_int(CompiledStatement, 2)] intValue];
                }
                sqlite3_reset(CompiledStatement);
                sqlite3_finalize(CompiledStatement);
                sqlite3_close(MyDatabase);
            }
        }

    This method also shows leaks. What's wrong with this method?

        -(void)getMultipleChoiceAnswer:(int)flashCardId {
            if(optionsList != nil)
                [optionsList removeAllObjects];
            else
                optionsList = [[NSMutableArray alloc] init];
            sqlite3 *MyDatabase;
            sqlite3_stmt *CompiledStatement = nil;
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            NSString *MyDatabasePath = [documentsDirectory stringByAppendingString:@"/flashCardDatabase.sqlite"];
            if(sqlite3_open([MyDatabasePath UTF8String], &MyDatabase) == SQLITE_OK)
            {
                sqlite3_prepare_v2(MyDatabase, "select OptionText from flashCardMultipleAnswer where flashCardId=?", -1, &CompiledStatement, NULL);
                sqlite3_bind_int(CompiledStatement, 1, flashCardId);
                while(sqlite3_step(CompiledStatement) == SQLITE_ROW)
                {
                    [optionsList addObject:[NSString stringWithUTF8String:(char *)sqlite3_column_text(CompiledStatement, 0)]];
                }
                sqlite3_reset(CompiledStatement);
                sqlite3_finalize(CompiledStatement);
                sqlite3_close(MyDatabase);
            }
        }

    Read the article

  • quick question about memory management in AS3

    - by TheDarkIn1978
    The following method will be called many times. I'm concerned that the continuous calls for new Rectangles will add potentially unnecessary memory consumption - or is the memory used to produce the previous rectangle released/overwritten to accommodate another rectangle, since it is assigned to the same instance variable?

        private function onDrag(evt:MouseEvent):void
        {
            this.startDrag(false, dragBounds());
        }

        private function dragBounds():Rectangle
        {
            var stagebounds = new Rectangle(0 - swatchRect.x, 0 - swatchRect.y,
                stage.stageWidth - swatchRect.width, stage.stageHeight - swatchRect.height);
            return stagebounds;
        }

    Read the article

  • Memory Mapped Files .NET

    - by CSharpAtl
    I have a project that needs to access a large amount of proprietary data in ASP.NET. On Linux/PHP this was done by loading the data into shared memory. I was wondering if trying to use Memory Mapped Files would be the way to go, or if there is a better way with better .NET support. I was also thinking of using the Data Cache, but I am not sure of all the pitfalls around the size of the data being saved in the Cache.

    Read the article

  • How does memory protection in a SASOS work?

    - by chris
    I'd like to know how it works: does it check whether a process can read/write/execute memory on every access, or does it check only once? And if it checks only once, while all processes share a single address space, how are hostile processes prevented from accessing memory in areas that are not theirs?

    Read the article

  • modalViewController uses a lot of memory

    - by burki
    Hi! I'm presenting a modalViewController that uses a certain amount of memory, of course. But now, if I call the method - (void)dismissModalViewControllerAnimated:(BOOL)animated it seems that the modalViewController remains in memory. How can I solve this problem? Thanks.

    Read the article

  • Memory Problem - Release whole ViewController ?

    - by Sebastian
    Hi, I'm using a TabBarController with a few tabs and I have memory problems when switching through the tabs and their contents. Is there a way to release and dealloc everything when I go to another ViewController? So when I am in Tab #1 with ViewController #1 and I go to Tab #2 with ViewController #2, how can I free all the memory ViewController #1 took? Thanks! Sebastian

    Read the article

  • iPhone memory leaks with store kit

    - by Nareshkumar
    Hello, I am trying to develop an application which uses the StoreKit API. The documentation (the Store Kit guide) says that the API will not work in the simulator, and I have found that checking for memory leaks will not work for me on a device. I was wondering if anyone can tell me how to check for memory leaks while using the StoreKit API in a project? How is it possible?

    Read the article

  • Reading email address from contacts fails with weird memory issue

    - by CapsicumDreams
    Hi all, I'm stumped. I'm trying to get a list of all the email addresses a person has. I'm using the ABPeoplePickerNavigationController to select the person, which all seems fine. I'm setting my ABRecordRef personDealingWith; from the person argument to

        - (BOOL)peoplePickerNavigationController:(ABPeoplePickerNavigationController *)peoplePicker
              shouldContinueAfterSelectingPerson:(ABRecordRef)person
                                        property:(ABPropertyID)property
                                      identifier:(ABMultiValueIdentifier)identifier

    and everything seems fine up till this point. The first time the following code executes, all is well. When subsequently run, I can get issues. First, the code:

        // following line seems to make the difference (issue 1)
        // NSLog(@"%d", ABMultiValueGetCount(ABRecordCopyValue(personDealingWith, kABPersonEmailProperty)));

        // construct array of emails
        ABMultiValueRef multi = ABRecordCopyValue(personDealingWith, kABPersonEmailProperty);
        CFIndex emailCount = ABMultiValueGetCount(multi);
        if (emailCount > 0) {
            // collect all emails in array
            for (CFIndex i = 0; i < emailCount; i++) {
                CFStringRef emailRef = ABMultiValueCopyValueAtIndex(multi, i);
                [emailArray addObject:(NSString *)emailRef];
                CFRelease(emailRef);
            }
        }
        // following line also matters (issue 2)
        CFRelease(multi);

    If compiled as written, there are no errors or static analysis problems. This crashes with a *** -[Not A Type retain]: message sent to deallocated instance 0x4e9dc60 error. But wait, there's more! I can fix it in either of two ways. Firstly, I can uncomment the NSLog at the top of the function. I get a leak from the NSLog's ABRecordCopyValue every time through, but the code seems to run fine. Also, I can comment out the CFRelease(multi); at the end, which does exactly the same thing: static analysis errors, but running code. So without a leak, this function crashes. To prevent a crash, I need to haemorrhage memory. Neither is a great solution. Can anyone point out what's going on?

    Read the article

  • Drawable advantage over bitmap for memory in android

    - by Tabish
    This question is linked with the answers in the following question: Error removing Bitmaps [Android]. Is there any advantage of using Drawable over Bitmap in Android in terms of memory de-allocation? I was looking at Romain Guy's project Shelves, where he uses SoftReference for the image caches, but I'm unable to find the code that de-allocates these Drawables when the SoftReference automatically reclaims the memory for the Bitmap. As far as I know, .recycle() has to be explicitly called on the Bitmap for it to be de-allocated.

    Read the article

  • Running out of memory with UIImage creation on an offscreen Bitmap Context by NSOperation

    - by sigsegv
    I have an app with multiple UIView subclasses that act as pages for a UIScrollView. UIViews are moved back and forth to provide a seamless experience to the user. Since the content of the views is rather slow to draw, it's rendered into a single shared CGBitmapContext, guarded by locks, by NSOperation subclasses - executed one at a time in an NSOperationQueue - wrapped up in a UIImage and then used by the main thread to update the content of the views.

        -(void)main {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
            if([self isCancelled]) {
                return;
            }
            if(nil == data) {
                return;
            }
            // Buffer is the shared instance of a CG Bitmap Context wrapper class
            // data is a dictionary
            CGImageRef img = [buffer imageCreateWithData:data];
            UIImage * image = [[UIImage alloc] initWithCGImage:img];
            CGImageRelease(img);
            if([self isCancelled]) {
                [image release];
                return;
            }
            NSDictionary * result = [[NSDictionary alloc] initWithObjectsAndKeys:image,@"image",id,@"id",nil];
            // target is the instance of the UIView subclass that will use
            // the image
            [target performSelectorOnMainThread:@selector(updateContentWithData:) withObject:result waitUntilDone:NO];
            [result release];
            [image release];
            [pool release];
        }

    The updateContentWithData: method of the UIView subclass, performed on the main thread, is just as simple:

        -(void)updateContentWithData:(NSDictionary *)someData {
            NSDictionary * data = [someData retain];
            if([[data valueForKey:@"id"] isEqualToString:[self pendingRequestId]]) {
                UIImage * image = [data valueForKey:@"image"];
                [self setCurrentImage:image];
                [self setNeedsDisplay];
            }
            // If the image has not been retained, it should be released together
            // with the dictionary retaining it
            [data release];
        }

    The drawLayer:inContext: method of the subclass will just get the CGImage from the UIImage and use it to update the backing layer or part of it. No retain or release is involved in the process. The problem is that after a while I run out of memory. The number of UIViews is static. CGImageRef and UIImage are created, retained and released correctly (or so it seems to me). Instruments does not show any leaks; the free memory available just dips constantly, rises a few times, and then dips even lower until the application is terminated. The app cycles through about 200-300 of the aforementioned pages before that, but I would expect the memory usage to reach a more or less stable level after a bunch of pages have already been skimmed at fast speed, or, since the images are up to 3MB in size, to run out much earlier. Any suggestion will be greatly appreciated.

    Read the article

  • What happens in memory when a C++ class is instantiated

    - by Jo Bucher
    I'm interested in the nuts and bolts of C++ and I wondered what actually changes when an object is instantiated. I'm particularly interested in whether the member functions are added to memory at that point, whether they are there from the start of the program, or whether they are never stored in memory at all. If anyone could direct me to a good site on some of the core internals of C and C++, I'd love that too. Thanks, Jo
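
    Not an authoritative answer, but a small C-flavoured sketch of the usual model: member functions are ordinary code, emitted once into the program's text segment when the executable is built and loaded, while instantiating an object only allocates space for its data members (plus one hidden vtable pointer per object if the class has virtual functions). The names below are made up for illustration:

        #include <stdio.h>
        #include <stdlib.h>

        /* A C analogue of a tiny class: the data lives in each instance,
           the "method" is compiled exactly once into the code segment. */
        typedef struct {
            double x, y;
        } Point;

        /* Like a non-virtual member function taking a hidden `this` pointer. */
        double point_norm2(const Point *self) {
            return self->x * self->x + self->y * self->y;
        }

        int main(void) {
            Point *p = malloc(sizeof *p);   /* instantiation allocates data only */
            p->x = 3.0;
            p->y = 4.0;
            /* Typically prints 16: just the two doubles, no space for functions. */
            printf("sizeof(Point) = %zu\n", sizeof(Point));
            printf("norm2 = %f\n", point_norm2(p));
            free(p);
            return 0;
        }

    A C++ compiler treats a non-virtual call such as obj.norm2() essentially as point_norm2(&obj): the function exists once in memory no matter how many objects are created, and creating zero objects does not remove it.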

    Read the article

  • mod_perl memory

    - by Pavel Georgiev
    Hi, I have a perl script running in mod_perl that needs to write a large amount of data to the client, possibly over a long period. The behavior I observe is that once I print and flush something, the buffer memory is not reclaimed even though I rflush (I know this can't be reclaimed back by the OS). Is that how mod_perl operates, and is there a way I can force it to periodically free the buffer memory, so that I can use it for new buffers instead of taking more from the OS?

    Read the article

  • CUDA: When to use shared memory and when to rely on L1 caching?

    - by Roger Dahl
    After Compute Capability 2.0 (Fermi) was released, I've wondered if there are any use cases left for shared memory. That is, when is it better to use shared memory than just let L1 perform its magic in the background? Is shared memory simply there to let algorithms designed for CC < 2.0 run efficiently without modifications? To collaborate via shared memory, threads in a block write to shared memory and synchronize with __syncthreads(). Why not simply write to global memory (through L1), and synchronize with __threadfence_block()? The latter option should be easier to implement since it doesn't have to keep track of two different locations for the values, and it should be faster because there is no explicit copying from global to shared memory. Since the data gets cached in L1, threads don't have to wait for data to actually make it all the way out to global memory. With shared memory, one is guaranteed that a value that was put there remains there throughout the duration of the block. This is as opposed to values in L1, which get evicted if they are not used often enough. Are there any cases where it's better to cache such rarely used data in shared memory than to let L1 manage them based on the usage pattern that the algorithm actually has?

    Read the article

  • Copy all current system data content in memory

    - by Tom Brito
    I'm studying security, and I would like to know: in a Windows or Unix-based OS environment, is there a way for a malicious program to copy all the content of the computer's memory? My worry is about a program that could get at my decrypted data while it is loaded in memory, and how to avoid that.

    Read the article
