Search Results

Search found 12282 results on 492 pages for 'memory deallocation'.


  • Memory allocation error from MySql ODBC 5.1 driver in C# application on insert statement

    - by Chinjoo
    I have a .NET Windows application in C#. It's a simple Windows application using the MySql 5.1 community edition database. I've downloaded the MySql ODBC driver and created a DSN to my database on my local machine. From my application I can run retrieval queries without problems, but when I execute a given insert statement (not that I've tried doing any others), I get the following error: {"ERROR [HY001] [MySQL][ODBC 5.1 Driver][mysqld-5.0.27-community-nt]Memory allocation error"} I'm running on a Windows XP machine with 1 GB of memory. Anyone have any ideas? See the code below:

        OdbcConnection MyConn = DBConnection.getDBConnection();
        int result = -1;
        try
        {
            MyConn.Open();
            OdbcCommand myCmd = new OdbcCommand();
            myCmd.Connection = MyConn;
            myCmd.CommandType = CommandType.Text;
            OdbcParameter userName = new OdbcParameter("@UserName", u.UserName);
            OdbcParameter password = new OdbcParameter("@Password", u.Password);
            OdbcParameter firstName = new OdbcParameter("@FirstName", u.FirstName);
            OdbcParameter lastName = new OdbcParameter("@LastName", u.LastName);
            OdbcParameter sex = new OdbcParameter("@sex", u.Sex);
            myCmd.Parameters.Add(userName);
            myCmd.Parameters.Add(password);
            myCmd.Parameters.Add(firstName);
            myCmd.Parameters.Add(lastName);
            myCmd.Parameters.Add(sex);
            myCmd.CommandText = mySqlQueries.insertChatUser;
            result = myCmd.ExecuteNonQuery();
        }
        catch (Exception e)
        {
            // {"ERROR [HY001] [MySQL][ODBC 5.1 Driver][mysqld-5.0.27-community-nt]Memory
            //  allocation error"} EXCEPTION ALWAYS THROWN HERE
        }
        finally
        {
            try { if (MyConn != null) MyConn.Close(); } finally { }
        }

    Read the article

  • Memory management in ObjC/iPhone

    - by Manu
    Hi, I have a question about memory management (Objective-C). There are two ideal scenarios.

    Scenario 1:

        - (void) funcA {
            MyObj *c = [otherObj getMyObject];
            [c release];
        }

        // this method lives in OtherObj.m
        - (MyObj *) getMyObject {
            MyObj *temp = [[MyObj alloc] init];
            // do something here
            return temp;
        }

    Scenario 2:

        - (void) funcA {
            MyObj *c = [otherObj getMyObject];
        }

        // this method lives in OtherObj.m
        - (MyObj *) getMyObject {
            MyObj *temp = [[MyObj alloc] init];
            // do something here
            return [temp autorelease];
        }

    MyObj holds a huge chunk of data. In the first scenario I am getting myObj (allocated) from another file, so I have to release it in my own method (just as with a C/C++ library: strdup returns a duplicate string that is released later by the developer, not by strdup itself). In the second scenario I am getting myObj (allocated) from OtherObj.m, so OtherObj.m is responsible for releasing that allocated memory (meaning autorelease)? Is that right? Please let me know which scenario is more efficient and valid as per Apple's memory guidelines. Please don't show me any document links. Thanks, Manu

    Read the article

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read/written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness and performance)? A sketch of the read-loop pattern follows below. Thanks in advance for any ideas, Adam

    Read the article
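
    One way to frame the tradeoff the question describes: keep the buffer in the tens-of-kilobytes range (large enough that per-call overhead stops mattering, small enough that progress events stay frequent) and treat the exact figure as a tunable parameter. A minimal sketch of such a read loop, in Java rather than C# (the class name, the 64 KiB figure, and the callback shape are illustrative assumptions, not library defaults):

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.util.function.LongConsumer;

        public class BufferedCopy {
            // Copies in fixed-size chunks, reporting the cumulative byte count
            // after every read so the UI can update between chunks.
            static long copy(InputStream in, OutputStream out, int bufferSize,
                             LongConsumer progress) throws IOException {
                byte[] buffer = new byte[bufferSize];  // e.g. 64 * 1024
                long total = 0;
                int n;
                while ((n = in.read(buffer)) != -1) {
                    out.write(buffer, 0, n);
                    total += n;
                    progress.accept(total);  // fire the progress event
                }
                return total;
            }
        }

    For a network stream the same loop works; the buffer mainly bounds how stale the progress display can get, so smaller values read as more responsive even when throughput is unchanged.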

  • Java Memory Model: reordering and concurrent locks

    - by Steffen Heil
    Hi The Java memory model mandates that synchronized blocks that synchronize on the same monitor enforce a before-after relation on the variables modified within those blocks. Example:

        // in thread A
        synchronized( lock ) {
            x = true;
        }

        // in thread B
        synchronized( lock ) {
            System.out.println( x );
        }

    In this case it is guaranteed that thread B will see x==true as long as thread A has already passed that synchronized block. Now I am in the process of rewriting lots of code to use the more flexible (and said to be faster) locks in java.util.concurrent, especially the ReentrantReadWriteLock. So the example looks like this:

        // in thread A
        synchronized( lock ) {
            lock.writeLock().lock();
            x = true;
            lock.writeLock().unlock();
        }

        // in thread B
        synchronized( lock ) {
            lock.readLock().lock();
            System.out.println( x );
            lock.readLock().unlock();
        }

    However, I have not seen any hints within the memory model specification that such locks also imply the necessary ordering. Looking into the implementation, it seems to rely on access to volatile variables inside AbstractQueuedSynchronizer (for the Sun implementation at least). However this is not part of any specification, and moreover access to non-volatile variables is not really considered covered by the memory barrier given by those variables, is it? So, here are my questions: Is it safe to assume the same ordering as with the "old" synchronized blocks? Is this documented somewhere? Is accessing any volatile variable a memory barrier for any other variable? (A sketch of the documented guarantee follows below.) Regards, Steffen

    Read the article
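
    For the record, this is documented: the "Memory Consistency Properties" section of the java.util.concurrent package documentation specifies that actions prior to Lock.unlock() happen-before actions following a later successful lock() on the same lock object, exactly as with monitor exit/enter, so the synchronized wrapper in the second example is redundant. A minimal sketch of the idiomatic pattern (class and field names are illustrative):

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        class Flag {
            private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
            private boolean x;  // plain field; the lock supplies the ordering

            void set() {
                rw.writeLock().lock();
                try {
                    x = true;
                } finally {
                    rw.writeLock().unlock();  // happens-before a later lock()
                }
            }

            boolean get() {
                rw.readLock().lock();
                try {
                    return x;  // guaranteed to see a prior set()
                } finally {
                    rw.readLock().unlock();
                }
            }
        }

    The try/finally shape also fixes a latent bug in the question's version: if the assignment or the println throws, the lock would otherwise never be released.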

  • Scala Interpreter scala.tools.nsc.interpreter.IMain Memory leak

    - by Peter
    I need to write a program using the Scala interpreter to run Scala code on the fly. The interpreter must be able to run an unbounded amount of code without being restarted. I know that each time the interpret() method of the class scala.tools.nsc.interpreter.IMain is called, the request is stored, so the memory usage will keep going up forever. Here is the idea of what I would like to do:

        var interpreter = new IMain
        while (true) {
          interpreter.interpret(/* some code to be run on the fly */)
        }

    If the method interpret() stores the request each time, is there a way to clear the buffer of stored requests? What I am trying to do now is to count the number of times the method interpret() is called, then get a new instance of IMain when the count reaches 100, for instance. Here is my code:

        var interpreter = new IMain
        var counter = 0
        while (true) {
          interpreter.interpret(/* some code to be run on the fly */)
          counter = counter + 1
          if (counter > 100) {
            interpreter = new IMain
            counter = 0
          }
        }

    However, I still see that the memory usage goes up forever. It seems that the IMain instances are not garbage-collected by the JVM. Could somebody help me solve this issue? I really need to be able to keep my program running for a long time without restarting, but I cannot afford such memory usage just for the Scala interpreter. Thanks in advance, Pet

    Read the article

  • Dynamic memory inside a struct

    - by Maximilien
    Hello, I'm editing a piece of code, part of a big project, that uses consts to initialize a bunch of arrays. Because I want to parametrize these consts, I have to adapt the code to use malloc in order to allocate the memory. Unfortunately there is a problem with structs: I'm not able to allocate dynamic memory in the struct itself, and doing it outside would mean too much modification of the original code. Here's a small example:

        int globalx, globaly;

        struct bigStruct {
            struct subStruct {
                double info1;
                double info2;
                bool valid;
            };
            double data;
            // subStruct bar[globalx][globaly];
            subStruct ** bar = (subStruct**)malloc(globalx*sizeof(subStruct*));
            for(int i = 0; i < globalx; i++)
                bar[i] = (subStruct*)malloc(globaly*sizeof(subStruct));
        };

        int main() {
            globalx = 2;
            globaly = 3;
            bigStruct foo;
            for(int i = 0; i < globalx; i++)
                for(int j = 0; j < globaly; j++) {
                    foo.bar[i][j].info1 = i + j;
                    foo.bar[i][j].info2 = i * j;
                    foo.bar[i][j].valid = (i == j);
                }
            return 0;
        }

    Note: in the program code I'm editing, globalx and globaly were consts in a specified namespace. Now I've removed the const so they can act as parameters that are set exactly once. Summarized: how can I properly allocate memory for the subStruct inside the struct? Thank you very much! Max

    Read the article

  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down since the profiler keeps crashing (hotspot error). Before I go too deep into figuring it out, I'd like to know if I really have a problem or not :-) I have a few thread pools created via Executors.newFixedThreadPool(10). The threads connect to different web sites and, on occasion, I get "connection refused" and wind up throwing an exception. When I later call Future.get() to get the result, it catches the ExecutionException that wraps the exception that was thrown when the connection could not be made. The program uses a fairly constant amount of memory up until the point that the exceptions get thrown (they tend to happen in batches when a particular site is overloaded). After that point the memory again remains constant, but at a higher level. So my question is: is this memory behaviour (reported by "top" on Unix) expected because the exceptions just triggered something, or do I probably have an actual leak that I'll need to track down? Additionally, when Future.get() throws an exception, is there anything else I need to do besides catch the exception (such as call Future.cancel() on it)? (A sketch of the submit/collect/release pattern follows below.)

    Read the article
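
    Two facts worth knowing here: cancel() on an already-completed Future is a no-op, so catching the ExecutionException is all the handling required; and each Future keeps its result or wrapped exception reachable until the Future itself is dropped, which is a common cause of a one-time step up in memory after a batch of failures. A minimal sketch of the submit/collect/release pattern (the task body and counts are made-up placeholders):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        class FetchAll {
            public static void main(String[] args) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(10);
                List<Future<String>> futures = new ArrayList<Future<String>>();
                for (int i = 0; i < 100; i++) {
                    final int id = i;
                    futures.add(pool.submit(new Callable<String>() {
                        public String call() throws Exception {
                            return "site-" + id;  // may throw on refused connection
                        }
                    }));
                }
                for (Future<String> f : futures) {
                    try {
                        f.get();
                    } catch (ExecutionException e) {
                        e.getCause().printStackTrace();  // no cancel() needed
                    }
                }
                futures.clear();  // drop Futures so results/exceptions can be GC'd
                pool.shutdown();
            }
        }

    Note also that the resident size reported by top rarely shrinks even after a GC, because the JVM seldom returns committed heap to the operating system; a higher-but-stable plateau is not by itself evidence of a leak.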

  • how to make war file take up less memory

    - by Myy
    I need help decreasing the memory usage of my web app so I can fit more apps onto my webserver. I'm building a Java web app with JSF 2.0, developing in Eclipse Helios and running on an Apache Tomcat server, and I have a dedicated virtual server with a Tomcat as well where I deploy the war files. The webapp is about 35 MB in size (it has a lot of jars and such), but when I deploy it to my Tomcat webserver I can see it takes about 300 MB of RAM; is this normal? My dedicated server only has 2 GB of RAM, of which I normally have 1 GB to use, so as soon as I deploy 3 apps I get an OOM error. I've gotten a permgen OOM and an out-of-swap-memory error; to fix this I upped my permgen maximum to about a gig and restarted the server to get back some swap space. Then I tried deploying smaller, older apps (about 15 MB) and they take up way less memory. With 1 GB of RAM I want to be able to fit more apps into my webserver without getting any OOM errors. Now, I found this Stack Overflow question -- can that be applied to my case? And if so, which are the common folders in the Tomcat server? Has anyone done this before, or have a different, more effective, not so complicated approach? Any ideas and/or comments are more than appreciated. (A sketch of where the memory ceilings are set follows below.) Thanks! Myy

    Read the article
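
    A 35 MB war growing to ~300 MB of RAM is not unusual once classes are loaded and caches fill, but the JVM will also grow toward whatever ceilings it is given, so the first thing to check is where -Xmx and the permgen limit are set. A hedged example for Tomcat's bin/setenv.sh (the flag names are standard HotSpot/Tomcat ones; the sizes are placeholders to measure against, not recommendations):

        # bin/setenv.sh -- sourced by catalina.sh on startup.
        # Sizes below are examples only; profile before settling on values.
        export CATALINA_OPTS="-Xms128m -Xmx512m -XX:MaxPermSize=256m"

    Note that raising the permgen maximum to "about a gig" on a box with 1 GB to spare works against the goal: it licenses the JVM to commit that much, which is exactly what pushes the machine into swap.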

  • Accessing memory buffer after fread()

    - by xiongtx
    I'm confused as to how fread() is used. Below is an example from cplusplus.com:

        /* fread example: read a complete file */
        #include <stdio.h>
        #include <stdlib.h>

        int main () {
          FILE * pFile;
          long lSize;
          char * buffer;
          size_t result;

          pFile = fopen ( "myfile.bin" , "rb" );
          if (pFile==NULL) {fputs ("File error",stderr); exit (1);}

          // obtain file size:
          fseek (pFile , 0 , SEEK_END);
          lSize = ftell (pFile);
          rewind (pFile);

          // allocate memory to contain the whole file:
          buffer = (char*) malloc (sizeof(char)*lSize);
          if (buffer == NULL) {fputs ("Memory error",stderr); exit (2);}

          // copy the file into the buffer:
          result = fread (buffer,1,lSize,pFile);
          if (result != lSize) {fputs ("Reading error",stderr); exit (3);}

          /* the whole file is now loaded in the memory buffer. */

          // terminate
          fclose (pFile);
          free (buffer);
          return 0;
        }

    Let's say that I don't use fclose() just yet. Can I now just treat buffer as an array and access elements like buffer[i]? Or do I have to do something else?

    Read the article

  • Custom UITableView headerView disappears after memory warning

    - by psychotik
    I have a UITableViewController. I create a custom headerView for its tableView in the loadView method, like so:

        - (void)loadView {
            [super loadView];

            UIView* containerView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, width, height * 2)];
            containerView.tag = 'cont';
            containerView.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin |
                UIViewAutoresizingFlexibleRightMargin | UIViewAutoresizingFlexibleWidth |
                UIViewAutoresizingFlexibleTopMargin;

            UIButton* button = [UIButton buttonWithType:UIButtonTypeCustom];
            button.frame = CGRectMake(padding, height, width, height);
            ... // configure UIButton and events

            UIImageView* imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image.png"]
                                                       highlightedImage:[UIImage imageNamed:@"highlight.png"]];
            imageView.frame = CGRectMake(0, 0, width, height);
            ... // configure UIImageView

            [containerView addSubview:button];
            [containerView addSubview:imageView];
            self.tableView.tableHeaderView = containerView;

            [imageView release];
            [containerView release];
        }

    None of the other methods (viewDidLoad/Unload, etc.) are overridden. This controller is hosted in a tab. When I switch to another tab, simulate a memory warning, and then come back to this tab, my UITableView is missing my custom header. All the rows/sections are visible, as I would expect. Putting a breakpoint in the loadView code above, I see it being invoked when I switch back to the tab after the memory warning, and yet I can't actually see the header. Any ideas about what I'm missing here? EDIT: This happens on the device and in the simulator. On the device, I just let a memory warning occur by opening a bunch of different apps while mine is in the background.

    Read the article

  • entity framework navigation property further filter without loading into memory

    - by cellik
    Hi, I have two entities with a 1-to-N relation between them; let's say Books and Pages. Book has a navigation property Pages. Book has BookId as an identifier, and Page has an auto-generated id and a scalar property named PageNo. LazyLoading is set to true. I generated this using VS2010 & .NET 4.0 and created a database from it. In the partial class of Book, I need a GetPage function like the one below:

        public Page GetPage(int PageNumber)
        {
            // checking whether it exists etc. is omitted for simplicity
            return Pages.Where(p => p.PageNo == PageNumber).First();
        }

    This works. However, since the Pages property on Book is an EntityCollection, it has to load all Pages of a book into memory in order to get the one page (which slows down the app the first time this function is hit for a given book); i.e., the framework does not merge the queries and run them at once. It loads the Pages into memory and then uses LINQ to Objects to do the second part. To overcome this I changed the code as follows:

        public Page GetPage(int PageNumber)
        {
            MyContainer container = new MyContainer();
            return container.Pages.Where(p => p.PageNo == PageNumber && p.Book.BookId == BookId).First();
        }

    This works considerably faster; however, it doesn't take into account pages that have not yet been persisted to the db. So both options have their cons. Is there any trick in the framework to overcome this situation? This must be a common scenario where you don't want all the objects of a navigation property loaded into memory when you don't need them.

    Read the article

  • Memory usage does not drop -- no leaks though

    - by climbon
    I have a UINavigationController controlling several views. One of the views is composed of 20 scrollable pages. Each page is constructed on the fly from UIViews by adding buttons, labels, UIImageViews, etc. When this UIView is popped off the stack, the memory usage remains the same; hence it keeps rising if I keep pushing/popping that view. In my dealloc, I traverse all 20 pages, find each object that was added via addSubview, and release it, but Instruments says my memory usage never goes down! I am trying to use retainCount to see what is up with the objects I am releasing, but I am perhaps not getting the true picture via retainCount. For some elements retainCount shows 2, so I try to release the object twice, but then the app crashes; if I release it once it works, but memory usage never goes down :( Q1: Do I need to traverse and find each element and release it individually? Why can't I release the parent object and have all objects contained by it released automatically? Q2: Is retainCount a reliable indicator?

    Read the article

  • Sheet and thread memory problem

    - by Xident
    Hi guys, I recently started a project which exports some precalculated graphics/audio to files for post-processing. All I did was put a new window (with a progress indicator and an Abort button) in my main xib and open it using the following code:

        [NSApp beginSheet: REC_Sheet
           modalForWindow: MOTHER_WINDOW
            modalDelegate: self
           didEndSelector: nil
              contextInfo: nil];
        NSModalSession session = [NSApp beginModalSessionForWindow:REC_Sheet];
        RECISNOTDONE = YES;
        while (RECISNOTDONE) {
            if ([NSApp runModalSession:session] != NSRunContinuesResponse)
                break;
            usleep(100);
        }
        [NSApp endModalSession:session];

    A background thread (pthread) was started earlier to actually perform the work and save all the targa/wave files. This worked great, but after a while it turned out that the main thread was no longer responding and my memory footprint rose unstoppably. I tried to debug it with Instruments and saw a lot of CFHash etc. stuff growing toward infinity. By accident I clicked below the sheet, and temporarily that helped: the main thread (AppKit?) released its stuff, but only for a little while. I can't explain it. At first I thought it was my thread's access to the progress bar to update the progress (at 0.5 s intervals), so I cut that out. But even if I update nothing and never touch the progress bar, my application eats up all the memory because the main thread never releases its "main event" (or whatever) stuff. Is there any possibility to drain this main-thread memory (a runloop / NSApp call?). And why on earth doesn't the main thread respond anymore after this simple task? I don't have a clue anymore, please help! Thanks in advance! P.S. How do you guys implement "threaded long task" stuff and update your GUI?

    Read the article

  • Jquery plugin seems to leak memory no matter what I do

    - by ddombrow
    I've recently been tasked with ferreting out some memory leaks in an application for my work. I've narrowed down one of the big leaks to a jQuery plugin: it appears we're using a modified version of a popular context-menu jQuery plugin. It looks like one of the developers before me attempted to add a destroy method. I noticed it wasn't very well written and attempted a rewrite. Here's the meat of my destroy method:

        if (menu.childMenus) {
            for (var i = 0; i < menu.childMenus.length; i++) {
                $(menu.childMenus[i]).destroy(menu.childMenus[i], 'contextmenu');
            }
        }
        var recursiveUnbind = function(node) {
            $(node).unbind();
            //$(node).empty().remove();
            $.each(node, function(obj) {
                recursiveUnbind(obj);
            });
        };
        $.each(menu, function() {
            recursiveUnbind(menu);
        });
        $(menu).empty().remove();

    In my mind this code should blow away all the jQuery event bindings and remove the DOM elements, yet the plugin still leaks gobs of memory in IE7. The modified plugin with a test page can be found here: http://www.olduglyhead.com/jquery/leaks/ -- clicking the button repeatedly will cause IE7 to leak a bunch of memory.

    Read the article

  • VB.Net Memory Issue

    - by Skulmuk
    We have an application with some interesting memory usage issues. When it first opens, the program uses around 50-60 MB of memory, and this stays consistent on 32-bit machines. On 64-bit machines, however, re-activating the form in any way (clicking, dragging, alt-tabbing, etc.) adds around another 50 MB to its memory usage. It repeats this several times before resetting back to around 45 MB, at which point the cycle begins again. I've done some research, and a lot of people have said that VB in general has pretty poor garbage collection, which could be affecting the software in some way; however, I've yet to find a solution. No events are fired when the application is activated (as the 32-bit usage shows) -- the application is merely sitting there awaiting the user's actions. At load, the system pulls some data into a tree view, but that's the only external connection, and that routine only re-fires when the user makes a change to something and saves it. Has anyone else experienced anything this strange, and if so, does anyone know what might fix it? It seems strange that it only occurs on x64 systems. Thanks

    Read the article

  • OutOfMemoryException - out of ideas II

    - by Captain Comic
    Hello, this question is related to my previous question. The story so far: I have an application which consumes a lot of memory if you look at the Task Manager VM Size column -- 2.46 GB. Looking at the .NET performance counters, committed and reserved bytes add up to only 1.2 GB. Now let's look at WinDbg SOS debugging and run the !eeheap -gc command: the heap size used by the GC is only 340 MB. Where is the rest of the used memory? I need to discover why the VM size in Task Manager is 2.4 GB. (A couple of address-space commands worth trying follow below.)

    Read the article
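
    When the managed heap accounts for only a fraction of VM size, the remainder is typically native allocations, loader heaps, thread stacks, or memory that is reserved but not committed. Two WinDbg commands that break the address space down (hedged: exact output and availability depend on the debugger and SOS version in use):

        !address -summary      (native view: committed vs. reserved, by region type)
        !eeheap -loader        (managed loader heaps: per-AppDomain and module data)

    Comparing the reserved total from the native view against the .NET counters usually shows quickly whether the 2.4 GB is real usage or mostly reserved address space.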

  • Heap Dump Root Classes

    - by Adnan Memon
    We have a production system going into an infinite loop of full GCs, with memory dropping from 8 gigs to about 1 MB in just 2 minutes. After taking a heap dump, it tells me there is an array of java.lang.Object ([Ljava.lang.Object;) containing millions of java.lang.String objects with the same value, taking 99% of the heap. But it doesn't tell me which class is referencing this array, so that I can fix it in the code. I took the heap dump using the jmap tool on JDK 6 and used JProfiler, NetBeans, SAP Memory Analyzer and IBM Memory Analyzer, but none of them tell me what is causing this huge array of objects -- i.e., which class refers to it or contains it. Do I have to take a different dump, with a different config, to get that info? Or is there anything else that can help me find the culprit class? (A dump-flag sketch follows below.) It would help a lot.

    Read the article
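
    In MAT-style analyzers the usual move is to select the big object array and ask for the shortest paths to GC roots, which names the fields and classes keeping it alive; that information is present in any full hprof dump. If the dump looks incomplete, re-taking it with the JDK 6 jmap the question mentions looks like this (the live option also forces a full GC first, shrinking the dump to reachable objects):

        jmap -dump:live,format=b,file=heap.hprof <pid>

    If the same string value really occurs millions of times, the path to GC roots will typically land on a single collection field -- a cache or list that is appended to but never cleared.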

  • Is writing a reference atomic on 64bit VMs

    - by Steffen Heil
    Hi The Java memory model mandates that writing an int is atomic: that is, if you write a value to it (consisting of 4 bytes) in one thread and read it in another, you will get all bytes or none, but never 2 new bytes and 2 old bytes or such. This is not guaranteed for long: writing 0x1122334455667788 to a variable previously holding 0 could result in another thread reading 0x1122334400000000 or 0x0000000055667788. Now, the specification does not mandate that object references be either int- or long-sized. For type-safety reasons I suspect they are guaranteed to be written atomically, but on a 64-bit VM these references could very well be 64-bit values (merely memory addresses). So here are my questions: Are there any memory model specs covering this (that I haven't found)? Are long writes to be assumed atomic on 64-bit VMs? Are VMs forced to map references to 32 bits? (A short summary follows below.) Regards, Steffen

    Read the article
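
    The spec the question is looking for is JLS §17.7 ("Non-Atomic Treatment of double and long"): writes to and reads of references are always atomic, regardless of whether the VM implements them as 32-bit or 64-bit values, while plain long and double are the only types whose accesses may be split into two 32-bit halves. Declaring such a field volatile restores atomicity; whether a given 64-bit VM actually tears plain longs is implementation-dependent, so it cannot be relied on either way. A one-class summary:

        class AtomicityRules {
            Object ref;          // always written/read atomically (JLS 17.7),
                                 // on 32-bit and 64-bit VMs alike
            long plain;          // MAY tear into two 32-bit halves -- allowed
                                 // by the spec, typical only on 32-bit VMs
            volatile long safe;  // volatile guarantees atomic reads and writes
        }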

  • Follow up viewDidUnload vs. dealloc question...

    - by entaroadun
    A clarification question as a follow-up to: http://stackoverflow.com/questions/2261972/what-exactly-must-i-do-in-viewdidunload http://stackoverflow.com/questions/1158788/when-should-i-release-objects-in-voidviewdidunload-rather-than-in-dealloc So let's say there's a low-memory warning, the view is hidden, and viewDidUnload is called. We do the release-and-nil dance. Later the entire view stack is not needed, so dealloc is called. Since I already have the release-and-nil stuff in viewDidUnload, I don't have it in dealloc. Perfect. But if there's no low-memory warning, viewDidUnload is never called; dealloc is called, and since I don't have the release-and-nil stuff there, there's a memory leak. In other words, will dealloc ever be called without viewDidUnload being called first? And the practical follow-up to that is: if I alloc and set something in viewDidLoad, and I release it and set it to nil in viewDidUnload, do I leave it out of dealloc, or do I do a defensive nil check in dealloc and release/nil it if it's not nil?

    Read the article

  • Cuda program results are always zero in HW, correct in EMU??

    - by Orion Nebula
    Hi all! I am having a weird problem: I have written CUDA code which executes correctly in emulation, and all results show up; however, when executed on hardware (a "G210"), the results in the result memory are always 0. I pass two vectors to the kernel, one with random variables, the other initialized to zero; the code copies the first vector to shared memory, does some swapping and other operations, and then writes back the results into the second vector (the one with the initial 0's). I am using double precision, the -arch sm_13 flag is used, and all memory allocations also use sizeof(double). I have checked that the kernel is invoked -- it is, so no problems there -- and cudaMemcpy reports no problems. What could be the problem? :( Why would it work in emulation but not on hardware? I am quite confused... any ideas?

    Read the article

  • How to see what objects lie in which generation in YourKit?

    - by prams
    I am using YourKit (11.0) to try to profile my J2EE app. The app uses Java 6, running on 64-bit Linux (CentOS). I was told that YourKit can possibly tell us which objects exist in which generation (eden, old, etc.) at any given point in time. On a side note, I am trying to chase a problem where memory usage keeps increasing until a major collection happens (every 4 hrs), and I am suspicious about a few particular objects, so I am interested to know where those objects live at different times. Fortunately I know a lot of memory is being consumed in one particular area of code (so other objects are possibly being put directly into the old gen), but I don't know exactly how much of that memory is going into eden space, how much is being collected by the minor collections, etc. (A generation-watching sketch follows below.) Thanks.

    Read the article
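
    Whether or not YourKit exposes per-object generations, the JDK itself can show how the generations fill and drain over time: jstat ships with JDK 6 and samples a running VM by pid. A minimal invocation (1000 is the sampling interval in milliseconds):

        jstat -gcutil <pid> 1000

    The E and O columns are eden and old-generation occupancy as percentages (S0/S1 are survivor spaces, P is permgen, followed by young/full GC counts and times), so a steady climb in O between the 4-hour major collections shows up directly.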

  • Dynamic allocated array is not freed

    - by Stefano
    I'm using the code below to dynamically allocate an array, do some work inside the function, return an element of the array, and free the memory outside the function. But when I try to deallocate the array it doesn't free the memory, and I have a memory leak. The debugger, pointed at the myArray variable, shows me error CXX0030. Why?

        struct MYSTRUCT {
            char *myvariable1;
            int myvariable2;
            char *myvariable3;
            ....
        };

        void MyClass::MyFunction1()
        {
            MYSTRUCT *myArray = NULL;
            MYSTRUCT *myElement = this->MyFunction2(myArray);
            ...
            delete [] myArray;
        }

        MYSTRUCT* MyClass::MyFunction2(MYSTRUCT *array)
        {
            array = (MYSTRUCT*)operator new(bytesLength);
            ...
            return array[X];
        }

    Read the article

  • Application Servers(java) : Should adding RAM to server depend on each domain's -Xmx value?

    - by ring bearer
    We have the Glassfish application server running on Linux servers. Each Glassfish installation hosts 3 domains, and each domain has a JVM configuration such as -Xms 1GB and -Xmx 2GB. That means that if all three domains are running at max memory, the server should be able to allocate a total of 6 GB to the JVMs. With that math, each of our servers has 8 GB RAM (a 2 GB buffer). First of all -- is this a good approach? I did not think so, because when we analyzed memory utilization on these servers over the past few months, it only went up to 1 GB. Now there are requests to add an additional domain to these servers. Does that mean adding another 2 GB of RAM just to be safe, or, based on the trend, continuing with whatever memory the server has? (The sizing arithmetic is spelled out below.)

    Read the article
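
    The worst-case arithmetic behind the question, spelled out (heap figures are from the question; the per-JVM overhead line is an assumption -- permgen, thread stacks, code cache and native allocations typically add a few hundred MB per JVM on top of -Xmx):

        3 domains x 2 GB (-Xmx)            = 6.00 GB worst-case heap
        per-JVM overhead, ~0.25 GB x 3     = 0.75 GB (assumed)
        left for OS and everything else    = ~1.25 GB of 8 GB
        adding a 4th domain (+2 GB -Xmx)   = worst case exceeds 8 GB

    Since observed utilization peaked around 1 GB, the cheaper fix than adding RAM may be lowering -Xmx to what the trend supports: -Xmx is a ceiling the OS must be prepared to honor, not a reservation the JVM will necessarily use.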
