Search Results

Search found 12267 results on 491 pages for 'in memory'.

Page 40/491 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • ffmpeg (libavcodec): memory leaks in avcodec_encode_video

    - by gavlig
    I'm trying to transcode a video with the help of libavcodec. When transcoding big video files (an hour or more) I get huge memory leaks in avcodec_encode_video. I have tried to debug it, but with different video files different functions produce leaks, which has left me a little confused :). Here (http://stackoverflow.com/questions/4201481/ffmpeg-with-qt-memory-leak) is the same issue that I have, but I have no idea how that person solved it. QtFFmpegwrapper seems to do the same thing I do (or I missed something). My method is below. I take care of aFrame and aPacket outside with av_free and av_free_packet.

        int Videocut::encode(
            AVStream *anOutputStream,
            AVFrame *aFrame,
            AVPacket *aPacket)
        {
            AVCodecContext *outputCodec = anOutputStream->codec;

            if (!anOutputStream || !aFrame || !aPacket) {
                return 1;
                /* NOTREACHED */
            }

            uint8_t *buffer = (uint8_t *)malloc(
                sizeof(uint8_t) * _DefaultEncodeBufferSize);
            if (NULL == buffer) {
                return 2;
                /* NOTREACHED */
            }

            int packetSize = avcodec_encode_video(
                outputCodec, buffer, _DefaultEncodeBufferSize, aFrame);
            if (packetSize < 0) {
                free(buffer);
                return 1;
                /* NOTREACHED */
            }

            aPacket->data = buffer;
            aPacket->size = packetSize;

            return 0;
        }
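    One hedged observation, assuming the pre-refcounting libavcodec API shown above: av_free_packet() only releases data the packet owns, so the malloc'd buffer assigned to aPacket->data has to be freed explicitly (or given a destruct callback) once the packet has been written. A minimal, hypothetical caller sketch; the names cutter, outputStream, frame and formatContext are assumptions, not from the post:

        // Hypothetical caller of Videocut::encode; the explicit free() of
        // packet.data is the only point being illustrated.
        AVPacket packet;
        av_init_packet(&packet);

        if (cutter.encode(outputStream, frame, &packet) == 0) {
            av_interleaved_write_frame(formatContext, &packet);

            // The buffer was malloc()'d inside encode(); av_free_packet()
            // does not own it and will not release it, so free it here.
            free(packet.data);
            packet.data = NULL;
            packet.size = 0;
        }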

    Read the article

  • How to debug anomalous C memory/stack problems

    - by EBM
    Hello. Sorry I can't be specific with code, but the problems I am seeing are anomalous. Character string values seem to be getting changed depending on other, unrelated code. For example, the value of the argument that is passed around below will change merely depending on whether I comment out one or two of the fprintf() calls! By the last fprintf() the value is typically completely empty (and no, I have checked to make sure I am not modifying the argument directly... all I have to do is comment out an fprintf() or add another fprintf() and the value of the string will change at certain points!):

        static process_args(char *arg) {
            /* debug */
            fprintf(stderr, "Function arg is %s\n", arg);

            ...do a bunch of stuff including call another function that uses alloc()...

            /* debug */
            fprintf(stderr, "Function arg is now %s\n", arg);
        }

        int main(int argc, char *argv[]) {
            char *my_arg;

            ... do a bunch of stuff ...

            /* just to show you it's nothing to do with the argv array */
            my_string = strdup(argv[1]);

            /* debug */
            fprintf(stderr, "Argument 1 is %s\n", my_string);

            process_args(my_string);
        }

    There's more code all around, so I can't ask someone to debug my program -- what I want to know is HOW I can debug why character strings like this are getting their memory changed or overwritten based on unrelated code. Is my memory limited? Is my stack too small? How do I tell? What else can I do to track down the issue? My program isn't huge; it's a thousand lines of code give or take, plus a couple of dynamically linked external libs, but nothing out of the ordinary. HELP! TIA!
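    Symptoms like these usually point to undefined behaviour elsewhere in the program - an out-of-bounds write, use of freed memory, or an overflowed stack buffer - corrupting memory that the "unrelated" code happens to reuse; tools such as valgrind, or rebuilding with -fsanitize=address, will generally locate the offending write. A tiny hypothetical illustration of the mechanism (not taken from the post):

        /* Hypothetical: an overflow of one buffer silently clobbers a
           neighbouring string, so unrelated code appears to change it. */
        #include <stdio.h>
        #include <string.h>

        int main(void) {
            char label[8];
            char name[8];

            strcpy(name, "Alice");              /* fits */
            strcpy(label, "0123456789abcdef");  /* overflows label; may spill into name */

            printf("name is now: %s\n", name);  /* often no longer "Alice" */
            return 0;
        }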

    Read the article

  • OpenGL/Carbon/Cocoa Memory Management Autorelease issue

    - by Stephen Furlani
    Hoooboy, I've got another doozy of a memory problem. I'm creating a Carbon (AGL) window in C++, and it's telling me that I'm autorelease-ing it without a pool in place. Uh... what? I thought Carbon existed outside of the NSAutoreleasePool... When I call glEnable(GL_TEXTURE_2D) to do some stuff, it gives me an EXC_BAD_ACCESS warning - but if the AGL window is never getting release'd, then shouldn't it exist? Setting "set objc-non-blocking-mode" at (gdb) doesn't make the problem go away. So I guess my question is WHAT IS UP WITH CARBON/COCOA/NSAutoreleasePool? And... are there any resources for Objective-C++? Because crap like this keeps happening to me. Thanks, -Stephen

    --- CODE ---

    Test draw function:

        void Channel::frameDraw( const uint32_t frameID)
        {
            eq::Channel::frameDraw( frameID );

            getWindow()->makeCurrent(false);
            glEnable(GL_TEXTURE_2D); // Throws Error Here
        }

    Make current (Equalizer API from Eyescale):

        void Window::makeCurrent( const bool useCache ) const
        {
            if( useCache && getPipe()->isCurrent( this ))
                return;

            _osWindow->makeCurrent();
        }

        void AGLWindow::makeCurrent() const
        {
            aglSetCurrentContext( _aglContext );
            AGLWindowIF::makeCurrent();

            if( _aglContext )
            {
                EQ_GL_ERROR( "After aglSetCurrentContext" );
            }
        }

    _aglContext is a valid memory location (i.e. not NULL) when I step through. -S!

    Read the article

  • Loading animation Memory leak

    - by Ayaz Alavi
    Hi, I have written a network class that manages all network calls for my application. There are two methods, showLoadingAnimationView and hideLoadingAnimationView, that show a UIActivityIndicatorView in a view over my current view controller with a faded background. I am getting memory leaks somewhere in these two methods. Here is the code:

        -(void)showLoadingAnimationView
        {
            textmeAppDelegate *textme = (textmeAppDelegate *)[[UIApplication sharedApplication] delegate];
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:YES];

            if(wrapperLoading != nil) {
                [wrapperLoading release];
            }

            wrapperLoading = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];
            wrapperLoading.backgroundColor = [UIColor clearColor];
            wrapperLoading.alpha = 0.8;

            UIView *_loadingBG = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];
            _loadingBG.backgroundColor = [UIColor blackColor];
            _loadingBG.alpha = 0.4;

            circlingWheel = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhiteLarge];
            CGRect wheelFrame = circlingWheel.frame;
            circlingWheel.frame = CGRectMake(((320.0 - wheelFrame.size.width) / 2.0), ((480.0 - wheelFrame.size.height) / 2.0), wheelFrame.size.width, wheelFrame.size.height);

            [wrapperLoading addSubview:_loadingBG];
            [wrapperLoading addSubview:circlingWheel];
            [circlingWheel startAnimating];
            [textme.window addSubview:wrapperLoading];

            [_loadingBG release];
            [circlingWheel release];
        }

        -(void)hideLoadingAnimationView
        {
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:NO];
            wrapperLoading.alpha = 0.0;
            [self.wrapperLoading removeFromSuperview];
            //[NSTimer scheduledTimerWithTimeInterval:0.8 target:wrapperLoading selector:@selector(removeFromSuperview) userInfo:nil repeats:NO];
        }

    Here is how I am calling these two methods:

        [NSThread detachNewThreadSelector:@selector(showLoadingAnimationView) toTarget:self withObject:nil];

    and then somewhere later in the code I use the following call to hide the animation:

        [self hideLoadingAnimationView];

    I am getting memory leaks when I call the showLoadingAnimationView function. Is anything wrong in the code, or is there a better technique to show a loading animation while doing network calls?

    Read the article

  • Mysqli results memory usage

    - by Poe
    Why does the memory consumption in this query keep rising as the internal pointer progresses through the loop? How can I make this more efficient and lean?

        $link = mysqli_connect(...);
        $result = mysqli_query($link, $query); // 403,268 rows in result set

        while ($row = mysqli_fetch_row($result)) {
            // print time, (get memory usage), -- row number
        }

        mysqli_free_result();
        mysqli_close($link);

        /*
        06:55:25 (1240336) -- Run query
        06:55:26 (39958736) -- Query finished
        06:55:26 (39958784) -- Begin loop
        06:55:26 (39960688) -- Row 0
        06:55:26 (45240712) -- Row 10000
        06:55:26 (50520712) -- Row 20000
        06:55:26 (55800712) -- Row 30000
        06:55:26 (61080712) -- Row 40000
        06:55:26 (66360712) -- Row 50000
        06:55:26 (71640712) -- Row 60000
        06:55:26 (76920712) -- Row 70000
        06:55:26 (82200712) -- Row 80000
        06:55:26 (87480712) -- Row 90000
        06:55:26 (92760712) -- Row 100000
        06:55:26 (98040712) -- Row 110000
        06:55:26 (103320712) -- Row 120000
        06:55:26 (108600712) -- Row 130000
        06:55:26 (113880712) -- Row 140000
        06:55:26 (119160712) -- Row 150000
        06:55:26 (124440712) -- Row 160000
        06:55:26 (129720712) -- Row 170000
        06:55:27 (135000712) -- Row 180000
        06:55:27 (140280712) -- Row 190000
        06:55:27 (145560712) -- Row 200000
        06:55:27 (150840712) -- Row 210000
        06:55:27 (156120712) -- Row 220000
        06:55:27 (161400712) -- Row 230000
        06:55:27 (166680712) -- Row 240000
        06:55:27 (171960712) -- Row 250000
        06:55:27 (177240712) -- Row 260000
        06:55:27 (182520712) -- Row 270000
        06:55:27 (187800712) -- Row 280000
        06:55:27 (193080712) -- Row 290000
        06:55:27 (198360712) -- Row 300000
        06:55:27 (203640712) -- Row 310000
        06:55:27 (208920712) -- Row 320000
        06:55:27 (214200712) -- Row 330000
        06:55:27 (219480712) -- Row 340000
        06:55:27 (224760712) -- Row 350000
        06:55:27 (230040712) -- Row 360000
        06:55:27 (235320712) -- Row 370000
        06:55:27 (240600712) -- Row 380000
        06:55:27 (245880712) -- Row 390000
        06:55:27 (251160712) -- Row 400000
        06:55:27 (252884360) -- End loop
        06:55:27 (1241264) -- Free
        */

    Read the article

  • Delphi 6 OleServer.pas Invoke memory leak

    - by Mike Davis
    There's a bug in Delphi 6, for which you can find some references online: when you import a TLB, the order of the parameters in an event invocation is reversed. It is reversed once in the imported header and once in TServerEventDispatch.Invoke. You can find more information about it here: http://cc.embarcadero.com/Item/16496. Somewhat related to this issue, there appears to be a memory leak in TServerEventDispatch.Invoke with a parameter of a Variant of type Var_Array (maybe others, but this is the most obvious one I could see). The Invoke code copies the args into a VarArray to be passed to the event handler and then copies the VarArray back to the args after the call; the relevant code is pasted below:

        // Set our array to appropriate length
        SetLength(VarArray, ParamCount);

        // Copy over data
        for I := Low(VarArray) to High(VarArray) do
          VarArray[I] := OleVariant(TDispParams(Params).rgvarg^[I]);

        // Invoke Server proxy class
        if FServer <> nil then FServer.InvokeEvent(DispID, VarArray);

        // Copy data back
        for I := Low(VarArray) to High(VarArray) do
          OleVariant(TDispParams(Params).rgvarg^[I]) := VarArray[I];

        // Clean array
        SetLength(VarArray, 0);

    There are some obvious work-arounds in my case: if I skip the copying back in the case of a VarArray parameter, it fixes the leak. To keep the functionality unchanged I thought I should copy the data in the array, rather than the Variant, back to the params, but that can get complicated since it can hold other Variants, and it seems to me that would need to be done recursively. Since a change in OleServer will have a ripple effect, I want to make sure my change here is strictly correct. Can anyone shed some light on exactly why memory is being leaked here? I can't seem to walk the call stack any lower than TServerEventDispatch.Invoke (why is that?). I imagine that in the process of copying the Variant holding the VarArray back to the param list, a reference is added to the array, thus not allowing it to be released as normal, but that's just a rough guess and I can't track down the code to back it up. Maybe someone with a better understanding of all this could shed some light?

    Read the article

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting, with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is this: dump the database, tar.gz all the files into one backup named with the correct date of the backup, upload to S3 using the Python library provided by Amazon. Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup) it is using about 320MB just for the backup. This causes WebFaction to quit all our processes, meaning the backup doesn't happen and our site goes down. So this is the question: is there any way to not load the whole file into memory, or are there any other Python S3 libraries that are much better with RAM usage? Ideally it needs to be about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help. This is the section of code (in my backup script) that caused the processes to be quit:

        filedata = open(filename, 'rb').read()

        content_type = mimetypes.guess_type(filename)[0]
        if not content_type:
            content_type = 'text/plain'

        print 'Uploading to S3...'
        response = connection.put(BUCKET_NAME, 'daily/%s' % filename,
                                  S3.S3Object(filedata),
                                  {'x-amz-acl': 'public-read',
                                   'Content-Type': content_type})

    Read the article

  • Objective-C Getter Memory Management

    - by Marian André
    I'm fairly new to Objective-C and am not sure how to correctly deal with memory management in the following scenario: I have a Core Data entity with a to-many relationship for the key "children". In order to access the children as an array, sorted by the column "position", I wrote the model class this way:

        @interface AbstractItem : NSManagedObject
        {
            NSArray *arrangedChildren;
        }

        @property (nonatomic, retain) NSSet *children;
        @property (nonatomic, retain) NSNumber *position;
        @property (nonatomic, retain) NSArray *arrangedChildren;

        @end

        @implementation AbstractItem

        @dynamic children;
        @dynamic position;
        @synthesize arrangedChildren;

        - (NSArray*)arrangedChildren
        {
            NSArray* unarrangedChildren = [[self.children allObjects] retain];
            NSSortDescriptor* sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"position" ascending:YES];
            [arrangedChildren release];
            arrangedChildren = [unarrangedChildren sortedArrayUsingDescriptors:[NSArray arrayWithObject:sortDescriptor]];
            [sortDescriptor release];
            [unarrangedChildren release];
            return [arrangedChildren retain];
        }

        @end

    I'm not sure whether or not to retain unarrangedChildren and the returned arrangedChildren (first and last line of the arrangedChildren getter). Does the NSSet allObjects method already return a retained array? It's probably too late and I have a coffee overdose. I'd be really thankful if someone could point me in the right direction. I guess I'm missing vital parts of memory management knowledge and I will definitely look into it thoroughly.

    Read the article

  • Dataflow Pipeline holding on to memory

    - by Jesse Carter
    I've created a Dataflow pipeline consisting of 4 blocks (including one optional block) which is responsible for receiving a query object from my application across HTTP, retrieving information from a database, doing an optional transform on that data, and then writing the information back in the HTTP response. In some testing I've been pulling down a significant amount of data from the database (570 thousand rows), which is stored in a List object and passed between the different blocks, and it seems like even after the final block has completed, the memory isn't being released. RAM usage in Task Manager will spike up to over 2 GB and I can observe several large spikes as the List hits each block. The signatures for my blocks look like this:

        private TransformBlock<HttpListenerContext,
            Tuple<HttpListenerContext, QueryObject>> m_ParseHttpRequest;

        private TransformBlock<Tuple<HttpListenerContext, QueryObject>,
            Tuple<HttpListenerContext, QueryObject, List<string>>> m_RetrieveDatabaseResults;

        private TransformBlock<Tuple<HttpListenerContext, QueryObject, List<string>>,
            Tuple<HttpListenerContext, QueryObject, List<string>>> m_ConvertResults;

        private ActionBlock<Tuple<HttpListenerContext, QueryObject, List<string>>> m_ReturnHttpResponse;

    They are linked as follows:

        m_ParseHttpRequest.LinkTo(m_RetrieveDatabaseResults);
        m_RetrieveDatabaseResults.LinkTo(m_ConvertResults, tuple => tuple.Item2 is QueryObjectA);
        m_RetrieveDatabaseResults.LinkTo(m_ReturnHttpResponse, tuple => tuple.Item2 is QueryObjectB);
        m_ConvertResults.LinkTo(m_ReturnHttpResponse);

    Is it possible to set up the pipeline so that once each block is done with the List it no longer holds on to it, and so that once the entire pipeline is completed the memory is released?

    Read the article

  • Does SetFileBandwidthReservation affect memory-mapped file performance?

    - by Ghostrider
    Does this function affect memory-mapped file performance? Here's the problem I need to solve: I have two applications competing for disk access, a "reader" and an "updater". The whole system runs on Windows Server 2008 R2 x64. The "updater" constantly accesses the disk in a linear manner, updating data. The system is set up in such a way that the updater always has infinite data to update. Consider that it is constantly approximating a solution to a huge set of equations that takes up an entire 2TB disk drive. The updater uses ReadFile and WriteFile to process data in a linear fashion. The "reader" is occasionally invoked by the user to get some pieces of data. Usually the user reads several 4KB blocks from the drive and stops. Occasionally the user needs to read up to 100MB sequentially; in exceptional cases, up to several gigabytes. The reader maps files to memory to get the data it needs. What I would like to achieve is for the "reader" to have absolute priority, so that the "updater" would completely stop if needed and the "reader" could get the data the user needs ASAP. Is this problem solvable by using SetPriorityClass and SetFileBandwidthReservation calls? I would really hate to put synchronization logic in the "reader" and "updater" and would rather have the OS take care of priorities.
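    A hedged sketch of the process-priority half of that idea, assuming the updater's code can be changed: on Vista / Server 2008 and later, background processing mode lowers a process's CPU, memory and I/O scheduling priority, so a foreground reader wins whenever both are active. Whether this (or a bandwidth reservation) is enough for the memory-mapped reader is exactly the open question above.

        #include <windows.h>

        // Call from the "updater" before starting its linear read/write loop.
        void EnterBackgroundMode()
        {
            // Lowers CPU, memory and I/O scheduling priority of the whole process.
            if (!SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN)) {
                // Fallback for systems without background mode: plain low CPU priority.
                SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS);
            }
        }

        // Call if the updater ever needs to return to normal priority.
        void LeaveBackgroundMode()
        {
            SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
        }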

    Read the article

  • How do JVM's implicit memory barriers behave when chaining constructors

    - by Joonas Pulakka
    Referring to my earlier question on incompletely constructed objects, I have a second question. As Jon Skeet pointed out, there's an implicit memory barrier at the end of a constructor that makes sure that final fields are visible to all threads. But what if a constructor calls another constructor; is there such a memory barrier at the end of each of them, or only at the end of the one being called from outside? That is, when the "wrong" solution is:

        public class ThisEscape {
            public ThisEscape(EventSource source) {
                source.registerListener(
                    new EventListener() {
                        public void onEvent(Event e) {
                            doSomething(e);
                        }
                    });
            }
        }

    And the correct one would be a factory method version:

        public class SafeListener {
            private final EventListener listener;

            private SafeListener() {
                listener = new EventListener() {
                    public void onEvent(Event e) {
                        doSomething(e);
                    }
                };
            }

            public static SafeListener newInstance(EventSource source) {
                SafeListener safe = new SafeListener();
                source.registerListener(safe.listener);
                return safe;
            }
        }

    Would the following work too, or not?

        public class MyListener {
            private final EventListener listener;

            private MyListener() {
                listener = new EventListener() {
                    public void onEvent(Event e) {
                        doSomething(e);
                    }
                };
            }

            public MyListener(EventSource source) {
                this();
                source.register(listener);
            }
        }

    Read the article

  • Putting a C++ Vector as a Member in a Class that Uses a Memory Pool

    - by Deep-B
    Hey, I've been writing a multi-threaded DLL for database access using ADO/ODBC for use with a legacy application. I need to keep multiple database connections for each thread, so I've put the ADO objects for each connection in an object, and I'm thinking of keeping an array of them inside a custom threadInfo object. Obviously a vector would serve better here - I need to delete/rearrange objects on the go and a vector would simplify that. The problem is, I'm allocating a heap for each thread to avoid heap contention and so on, and allocating all my memory from there. So my question is: how do I make the vector allocate from the thread-specific heap? (Or would it know internally to allocate memory from the same heap as its wrapper class? Sounds unlikely, but I'm not a C++ guy.) I've googled a bit and it looks like I might need to write an allocator or something - which looks like so much work I don't want to do. Is there any other way? I've heard vector uses placement new for all its stuff inside, so can overloading operator new be worked into it? My scant knowledge of the insides of C++ doesn't help, seeing as I'm mainly a C programmer (even that - relatively). It's very possible I'm missing something elementary somewhere. If nothing easier comes up I might just go and do the array thing, but hopefully it won't come to that. I'm using MS-VC++ 6.0 (hey, it's rude to laugh! :-P ). Any/all help will be much appreciated.
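    For reference, a minimal sketch of the kind of allocator being described, written against the modern allocator interface (VC++ 6.0's pre-standard library would additionally need the old rebind/pointer typedef boilerplate); the class name, the Connection element type and the idea of passing in a HeapCreate()'d per-thread heap handle are assumptions for illustration:

        #include <windows.h>
        #include <cstddef>
        #include <new>
        #include <vector>

        // Hypothetical allocator that draws all memory from a caller-supplied Win32 heap.
        template <typename T>
        struct ThreadHeapAllocator {
            typedef T value_type;

            HANDLE heap;  // e.g. created per thread with HeapCreate(0, 0, 0)

            explicit ThreadHeapAllocator(HANDLE h) : heap(h) {}

            template <typename U>
            ThreadHeapAllocator(const ThreadHeapAllocator<U>& other) : heap(other.heap) {}

            T* allocate(std::size_t n) {
                void* p = HeapAlloc(heap, 0, n * sizeof(T));
                if (!p) throw std::bad_alloc();
                return static_cast<T*>(p);
            }

            void deallocate(T* p, std::size_t) {
                HeapFree(heap, 0, p);
            }
        };

        template <typename T, typename U>
        bool operator==(const ThreadHeapAllocator<T>& a, const ThreadHeapAllocator<U>& b) {
            return a.heap == b.heap;
        }

        template <typename T, typename U>
        bool operator!=(const ThreadHeapAllocator<T>& a, const ThreadHeapAllocator<U>& b) {
            return !(a == b);
        }

        // Usage sketch: the vector's internal buffer now comes from 'heap'.
        //   ThreadHeapAllocator<Connection> alloc(heap);
        //   std::vector<Connection, ThreadHeapAllocator<Connection> > connections(alloc);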

    Read the article

  • Memory management of objects returned by methods (iOS / Objective-C)

    - by iOSNewb
    I am learning Objective-C and iOS programming through the terrific iTunes U course posted by Stanford (http://www.stanford.edu/class/cs193p/cgi-bin/drupal/). Assignment 2 is to create a calculator with variable buttons. The chain of commands (e.g. 3+x-y) is stored in an NSMutableArray as "anExpression", and then we sub in random values for x and y based on an NSDictionary to get a solution. This part of the assignment is tripping me up: the final two [methods] “convert” anExpression to/from a property list:

        + (id)propertyListForExpression:(id)anExpression;
        + (id)expressionForPropertyList:(id)propertyList;

    You’ll remember from lecture that a property list is just any combination of NSArray, NSDictionary, NSString, NSNumber, etc., so why do we even need this method since anExpression is already a property list? (Since the expressions we build are NSMutableArrays that contain only NSString and NSNumber objects, they are, indeed, already property lists.) Well, because the caller of our API has no idea that anExpression is a property list. That’s an internal implementation detail we have chosen not to expose to callers. Even so, you may think, the implementation of these two methods is easy because anExpression is already a property list, so we can just return the argument right back, right? Well, yes and no. The memory management on this one is a bit tricky. We’ll leave it up to you to figure out. Give it your best shot. Obviously, I am missing something with respect to memory management because I don't see why I can't just return the passed arguments right back. Thanks in advance for any answers!

    Read the article

  • Is memory allocation in linux non-blocking?

    - by Mark
    I am curious to know whether allocating memory using the default new operator is a non-blocking operation. e.g.

        struct Node {
            int a, b;
        };
        ...
        Node *foo = new Node();

    If multiple threads tried to create a new Node, and one of them was suspended by the OS in the middle of allocation, would it block other threads from making progress? The reason I ask is that I had a concurrent data structure that created new nodes. I then modified the algorithm to recycle the nodes. The throughput performance of the two algorithms was virtually identical on a 24-core machine. However, I then created an interference program that ran on all the system cores in order to create as much OS pre-emption as possible. The throughput performance of the algorithm that created new nodes decreased by a factor of 5 relative to the algorithm that recycled nodes. I'm curious to know why this would occur. Thanks. *Edit: pointing me to the code for the C++ memory allocator for Linux would be helpful as well. I tried looking before posting this question, but had trouble finding it.
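    A rough illustration, not from the post, of why recycling can diverge so sharply from plain new under heavy pre-emption: a per-thread free list touches no shared state, whereas the default allocator may take a process-wide lock for at least some allocation paths, so a thread pre-empted while holding that lock can stall every other allocating thread. A minimal sketch (C++11 thread_local):

        #include <cstddef>

        struct Node {
            int a, b;
            Node* next;   // link used only while the node sits on the free list
        };

        // One free list per thread: acquire/release never contend with other threads.
        thread_local Node* freeList = nullptr;

        Node* acquireNode() {
            if (Node* n = freeList) {   // reuse a recycled node when possible
                freeList = n->next;
                return n;
            }
            return new Node();          // fall back to the global allocator
        }

        void releaseNode(Node* n) {
            n->next = freeList;         // recycle instead of deleting
            freeList = n;
        }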

    Read the article

  • How to control virtual memory management in linux?

    - by chmike
    I'm writing a program that uses an mmap'd file to hold a huge buffer organized as an array of 64MB blocks. The blocks are used to aggregate data received from different hosts over the network. As a consequence, the total data size written in each block is not known in advance. Most of the time it is only 2MB, but in some cases it can be up to 20MB or more. The data doesn't stay long in the buffer: 90% is deleted after less than a second and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that the RAM pages are not dirty anymore once the data is deleted. Should I use mmap and munmap each time a block is used and released, to control the virtual memory? What would be the overhead of doing this? Also, some colleagues expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages need to be considered.
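    One hedged option for the "pages are not dirty anymore" part is madvise() with MADV_DONTNEED on a block once its contents have been deleted or forwarded: for anonymous or private mappings the kernel can then reclaim the pages without writing anything back, while for shared file-backed mappings the semantics differ, so madvise(2) should be checked against the mapping flags actually in use. A minimal sketch; the block size and function name are assumptions:

        #include <sys/mman.h>

        #define BLOCK_SIZE (64UL * 1024 * 1024)

        /* Called when the data in 'block' (a page-aligned 64MB region inside
           the big mmap'd buffer) is no longer needed. */
        static void release_block(void *block)
        {
            /* Tell the kernel these pages may be reclaimed immediately instead
               of being kept around as dirty, resident data. */
            madvise(block, BLOCK_SIZE, MADV_DONTNEED);
        }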

    Read the article

  • C release dynamically allocated memory

    - by user1152463
    I have defined a function which returns a multidimensional array. Allocation for the rows:

        arr = (char **)malloc(size);

    Allocation for the columns (in a loop):

        arr[i] = (char *)malloc(v);

    and the returning type is char**. Everything works fine, except freeing the memory. If I call free(arr[i]) and/or free(arr) on the array returned by the function, it crashes. Thanks for help.

    EDIT: the allocating function:

        pole = malloc(zaznamov);

        char ulica[52], t[52], datum[10];
        float dan;
        int i = 0, v;
        *max = 0;

        while (!is_eof(f)) {
            get_record(t, ulica, &dan, datum, f);
            v = strlen(ulica) - 1;
            pole[i] = malloc(v);
            strcpy(pole[i], ulica);
            pole[i][v] = '\0';
            if (v > *max) {
                *max = v;
            }
            i++;
        }
        return pole;

    The part of main where I am calling the function:

        pole = function();

    Releasing the memory:

        int i;
        for (i = 0; i < zaznamov; i++) {
            free(pole[i]);
            pole[i] = NULL;
        }
        free(pole);
        pole = NULL;
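    A hedged sketch of an allocation pattern that frees cleanly, assuming zaznamov is the number of records (as the freeing loop suggests): the row array needs zaznamov * sizeof(char *) bytes rather than zaznamov bytes, and each string buffer needs room for the characters plus the terminating '\0' (even if a trailing character is being dropped on purpose, v + 1 bytes are still required), otherwise the heap is corrupted long before free() is called. Names follow the post; the helper functions are hypothetical:

        #include <stdlib.h>
        #include <string.h>

        /* room for 'zaznamov' pointers, not 'zaznamov' bytes */
        char **alloc_rows(size_t zaznamov)
        {
            return (char **)malloc(zaznamov * sizeof(char *));
        }

        /* copy one string into row i, including the terminating '\0' */
        char *store_string(char **pole, size_t i, const char *ulica)
        {
            size_t len = strlen(ulica);
            pole[i] = (char *)malloc(len + 1);
            if (pole[i] != NULL)
                memcpy(pole[i], ulica, len + 1);
            return pole[i];
        }

        /* with the sizes above, this matches the freeing loop in the post */
        void free_rows(char **pole, size_t zaznamov)
        {
            size_t i;
            for (i = 0; i < zaznamov; i++)
                free(pole[i]);
            free(pole);
        }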

    Read the article

  • Javascript memory leak/ performance issue?

    - by Tom
    I just cannot for the life of me figure out this memory leak in Internet Explorer. insertTags simply takes string str and places each word within start and end tags for HTML (usually anchor tags). transliterate is for Arabic numbers, and replaces normal numbers 0-9 with the &#..n; XML entity for their Arabic counterparts.

        fragment = document.createDocumentFragment();

        for (i = 0, e = response.verses.length; i < e; i++) {
            fragment.appendChild((function(){
                p = document.createElement('p');
                p.setAttribute('lang', (response.unicode) ? 'ar' : 'en');
                p.innerHTML = ((response.unicode)
                        ? (response.surah + ':' + (i+1)).transliterate()
                        : response.surah + ':' + (i+1))
                    + ' '
                    + insertTags(response.verses[i], '<a href="#" onclick="window.popup(this);return false;" class="match">', '</a>');
                try {
                    return p;
                } finally {
                    p = null;
                }
            })());
        }

        params[0].appendChild( fragment );
        fragment = null;

    I would love some links other than MSDN and about.com, because neither of them has sufficiently explained to me why my script leaks memory. I am sure this is the problem, because without it everything runs fast (but nothing displays). I've read that doing a lot of DOM manipulation can be dangerous, but the loop runs at most 286 times (the number of verses in surah 2, the longest surah in the Qur'an).

    Read the article

  • HashMap Memory Leak because of Dynamic Array

    - by Jake M
    I am attempting to create my own HashMap to understand how they work. I am using an array of linked lists to store the values (strings) in my hashmap. I am creating the array like this:

        Node** list;

    instead of this:

        Node* list[nSize];

    This is so the array can be any size at runtime. But I think I am getting a memory leak because of how I am doing this. I don't know where the error is, but when I run the following simple code the .exe crashes. Why is my application crashing and how can I fix it? Note: I am aware that using a vector would be much better than an array, but this is just for learning and I want to challenge myself to create the hashmap using a 'dynamic' array. PS: is that the correct term (dynamic array) for the kind of array I am using?

        struct Node {
            // to implement
        };

        class HashMap {
        public:
            HashMap(int dynSize) {
                *list = new Node[dynSize];
                size = dynSize;
                for (int i=0; i<size; i++)
                    list[i] = NULL;
                cout << "END\n";
            }

            ~HashMap() {
                for (int i=0; i<size; i++)
                    delete list[i];
            }

        private:
            Node** list; // I could use a vector here but I am experimenting with a pointer to an array (pointer); also it's more elegant
            int size;
        };

        int main() {
            // When I run this application it crashes. Where is my memory leak?
            HashMap h(5);
            system("PAUSE");
            return 0;
        }
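    For comparison, a hedged sketch of what the constructor and destructor appear to intend: list should be allocated as an array of Node* with new Node*[dynSize], and anything allocated with new[] must be released with delete[]. As posted, *list = new Node[dynSize] writes through an uninitialised pointer, which on its own is enough to crash; the leak question is secondary.

        #include <cstddef>

        struct Node { /* to implement */ };

        class HashMap {
        public:
            explicit HashMap(int dynSize)
                : list(new Node*[dynSize]),   // an array of dynSize bucket pointers
                  size(dynSize)
            {
                for (int i = 0; i < size; i++)
                    list[i] = NULL;           // every bucket starts out empty
            }

            ~HashMap() {
                for (int i = 0; i < size; i++)
                    delete list[i];           // free whatever each bucket points at
                delete[] list;                // the pointer array itself was new[]'d
            }

        private:
            Node** list;
            int size;
        };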

    Read the article

  • C++/Qt - Memory allocation question

    - by HardCoder1986
    Hello! I recently started investigating Qt for myself and have the following question: Suppose I have some QTreeWidget* widget. At some moment I want to add some items to it, and this is done via the following call:

        QList<QTreeWidgetItem*> items;

        // Prepare the items
        QTreeWidgetItem* item1 = new QTreeWidgetItem(...);
        QTreeWidgetItem* item2 = new QTreeWidgetItem(...);
        items.append(item1);
        items.append(item2);

        widget->addTopLevelItems(items);

    So far it looks ok, but I don't actually understand who should control the objects' lifetime. I should explain this with an example: let's say another function calls widget->clear(). I don't know what happens beneath this call, but I do think that the memory allocated for item1 and item2 doesn't get disposed of here, because their ownership wasn't actually transferred. And, bang, we have a memory leak. The question is the following - does Qt have something to offer for this kind of situation? I could use boost::shared_ptr or any other smart pointer and write something like

        shared_ptr<QTreeWidgetItem> ptr(new QTreeWidgetItem(...));
        items.append(ptr.get());

    but I don't know whether Qt itself would try to make explicit delete calls on my pointers (which would be disastrous, since I state them as shared_ptr-managed). How would you solve this problem? Maybe everything is evident and I'm missing something really simple?
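    If memory serves from the Qt documentation, ownership of top-level items does pass to the tree widget when they are inserted, and QTreeWidget::clear() removes and deletes them, so plain raw pointers do not leak here and shared_ptr would in fact risk a double delete. A hedged sketch that makes the ownership transfer explicit by parenting the items at construction time:

        #include <QTreeWidget>
        #include <QTreeWidgetItem>
        #include <QStringList>

        void populate(QTreeWidget* widget)
        {
            // Passing the widget to the constructor appends each item as a
            // top-level item and hands ownership to the widget immediately.
            QTreeWidgetItem* item1 = new QTreeWidgetItem(widget, QStringList("first"));
            QTreeWidgetItem* item2 = new QTreeWidgetItem(widget, QStringList("second"));
            Q_UNUSED(item1);
            Q_UNUSED(item2);

            // Later, widget->clear() removes *and deletes* item1 and item2,
            // so no smart pointer is needed for them.
        }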

    Read the article

  • Fatal error: Allowed memory size exhausted...

    - by Nano HE
    Hi, I uploaded my PHP testing script to an online VPS server just now. The script is used to parse a big XML file (about 4MB, 7000 lines), but my IE browser shows the online error message below:

        Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to allocate 77 bytes)
        in /var/www/test/result/index.php on line 26

    I am sure I already tested the PHP script on localhost successfully. Is there any configuration that needs to be enabled/modified on my VPS, such as php.ini or some setting for the Apache server? I just verified there are about 200MB of memory available for my VPS. How can I fix this?

        ......
        function startElementHandler ($parser, $name, $attrib) {
            global $usercount;
            global $userdata;
            global $state; // Line #26

            //Debug
            //print "name is: ".$name."\n";

            switch ($name) {
                case $name=="_ID" : {
                    $userdata[$usercount]["first"] = $attrib["FIRST"];
                    $userdata[$usercount]["last"] = $attrib["LAST"];
                    $userdata[$usercount]["nick"] = $attrib["NICK"];
                    $userdata[$usercount]["title"] = $attrib["TITLE"];
                    break;
                }
                ......
                default : {$state=$name;break;}
            }
        }

    Read the article

  • Memory not being returned after Python function call

    - by Dan
    I've got a function which parses a sentence by building up a big chart. For some reason, Python holds on to whatever memory was allocated during that function call. That is, I do

        best = translate(sentence, grammar)

    and somehow my memory goes up and stays up. Here is the function:

        from string import join
        from heapq import nsmallest, heappush

        def translate(f, g):
            words = f.split()
            chart = {}
            for col in range(len(words)):
                for row in reversed(range(0, col+1)):
                    # get rules for this subspan
                    rules = g[join(words[row:col+1], ' ')]
                    # ensure there's at least one rule on the diagonal
                    if not rules and row == col:
                        rules = [(0.0, join(words[row:col+1]))]
                    # pick up rules below & to the left
                    for k in range(row, col):
                        if (row, k) and (k+1, col) in chart:
                            for (w1, e1) in chart[row, k]:
                                for (w2, e2) in chart[k+1, col]:
                                    heappush(rules, (w1+w2, e1+' '+e2))
                    # add all rules to chart
                    chart[row, col] = nsmallest(MAX_TRANSLATIONS, rules)
            (w, best) = chart[0, len(words)-1][0]
            return best

    EDIT: Using Python 2.7 on OS X. The grammar g is just a dictionary from strings to heaps, e.g.:

        g['et']
        [(1.05, 'and'), (6.92, ', and'), (9.95, 'and ,'), (11.17, 'and to')]

    EDIT: If you want to run the code, try the sentence "Cela est difficile" with the following grammar:

        >>> g['cela']
        [(8.28, 'this'), (11.21, 'it'), (11.57, 'that'), (15.26, 'this ,')]
        >>> g['est']
        [(2.69, 'is'), (10.21, 'is ,'), (11.15, 'has'), (11.28, ', is')]
        >>> g['difficile']
        [(2.01, 'difficult'), (10.08, 'hard'), (10.19, 'difficult ,'), (10.57, 'a difficult')]

    Read the article

  • Memory fragmentation @ boost::asio ?

    - by Poni
    I'm pretty much stuck with a question I never got an answer for, a question which addresses an extremely important issue: memory fragmentation with boost::asio. I found nothing in the documentation nor here at SO. The functions in boost::asio, for example async_write() and async_read_some(), always allocate something (in my case it's 144 and 96 bytes respectively, in a VC9 debug build). How do I know about it? I connect a client to the "echo server" example provided with this library. I put a breakpoint in "new.cpp", in the code of "operator new(size_t size)". Then I send "123". The breakpoint is hit! Now, using the stack trace, I can clearly see that the root of the "new" call is coming from the async_write() and async_read_some() calls I make in the function handlers. So memory fragmentation will come sooner or later, and thus I can't use ASIO - and I wish I could! Any idea? Any helpful code example?

    Read the article

  • Java: How does ArrayList manage memory?

    - by cka3o4nik
    In my Data Structures class we have studied the Java ArrayList class and how it grows the underlying array when a user adds more elements. That is understood. However, I cannot figure out how exactly this class frees up memory when lots of elements are removed from the list. Looking at the source, there are three methods that remove elements:

        public E remove(int index) {
            RangeCheck(index);

            modCount++;
            E oldValue = (E) elementData[index];

            int numMoved = size - index - 1;
            if (numMoved > 0)
                System.arraycopy(elementData, index+1, elementData, index, numMoved);
            elementData[--size] = null; // Let gc do its work

            return oldValue;
        }

        public boolean remove(Object o) {
            if (o == null) {
                for (int index = 0; index < size; index++)
                    if (elementData[index] == null) {
                        fastRemove(index);
                        return true;
                    }
            } else {
                for (int index = 0; index < size; index++)
                    if (o.equals(elementData[index])) {
                        fastRemove(index);
                        return true;
                    }
            }
            return false;
        }

        private void fastRemove(int index) {
            modCount++;
            int numMoved = size - index - 1;
            if (numMoved > 0)
                System.arraycopy(elementData, index+1, elementData, index, numMoved);
            elementData[--size] = null; // Let gc do its work
        }

    None of them shrinks the backing array. I even started questioning whether memory free-up ever happens, but empirical tests show that it does. So there must be some other way it is done, but where and how? I checked the parent classes as well, with no success.

    Read the article

  • Emacs - nxhtml-mode - memory full

    - by mbutz
    Working with nxhtml-mode in Emacs, I have been running into problems for a few weeks. While I work, Emacs pauses unexpectedly until it shows the message "!MEM FULL!" in the mode line; apparently nxhtml-mode is filling up the memory until Emacs stops working. I am working with html, php and css files. I have no idea how I could debug this problem in a meaningful way. Also, I seem to be the only one having this problem, because googling did not deliver any answers to this question. I am using Emacs 23.2 on a Linux Mint 11 system. I cannot find out the version of nxhtml; it says revision 829, downloaded from http://bazaar.launchpad.net/~nxhtml/nxhtml/main/revision/829. I set up a test scenario with a minimal dot-emacs just to test nxhtml-mode. It seemed to be alright, but it does not reflect my production setup. It would probably take a week or so to gradually include everything I normally use within Emacs (e.g. org-mode) while testing whether nxhtml-mode dislikes anything called in my dot-emacs file. Is there another way? Can I find out what causes the memory overload? Does anyone have similar problems using nxhtml-mode? Greetings, Martin

    Read the article

  • Adding images to an array memory issue

    - by Friendlydeveloper
    Hello all, I'm currently facing the following issue: my app dynamically creates images (320 x 480 pixels) and adds them to an NSMutableArray. I need those images inside that array in order to allow users to browse through them back and forth. I only need to keep the latest 5 images, so I wrote a method like the one below:

        - (void)addImageToArray:(UIImage*)theImage
        {
            if ([myMutableArray count] < 5) {
                [myMutableArray addObject:theImage];
            } else {
                [myMutableArray removeObjectAtIndex:0];
                [myMutableArray addObject:theImage];
            }
        }

    This method basically does what it's supposed to do. However, in Instruments I can see that memory usage keeps climbing. At some point, even though I do not have any memory leaks, the app finally crashes. The way I see it, the image is removed from my array but never released. Is there a way I can make sure that the object I remove from my array also gets released? Maybe my approach is completely wrong and I need to find a different way. Any help appreciated. Thanks in advance.

    Read the article
