Search Results

Search found 12357 results on 495 pages for 'memory lane'.

  • Scala Interpreter scala.tools.nsc.interpreter.IMain Memory leak

    - by Peter
    I need to write a program that uses the Scala interpreter to run Scala code on the fly.
    The interpreter must be able to run an unlimited amount of code without being restarted.
    I know that each time the interpret() method of scala.tools.nsc.interpreter.IMain is
    called, the request is stored, so the memory usage will keep going up forever. Here is
    the idea of what I would like to do:

      var interpreter = new IMain
      while (true) {
        interpreter.interpret(some code to be run on the fly)
      }

    If the interpret() method stores each request, is there a way to clear the buffer of
    stored requests? What I am trying to do now is to count the number of times interpret()
    is called and get a new instance of IMain when the count reaches 100, for instance.
    Here is my code:

      var interpreter = new IMain
      var counter = 0
      while (true) {
        interpreter.interpret(some code to be run on the fly)
        counter = counter + 1
        if (counter > 100) {
          interpreter = new IMain
          counter = 0
        }
      }

    However, I still see the memory usage going up forever. It seems that the IMain
    instances are not garbage-collected by the JVM. Could somebody help me solve this
    issue? I really need to keep my program running for a long time without restarting,
    but I cannot afford this kind of memory usage just for the Scala interpreter. Thanks
    in advance, Pet

    Read the article

  • Dynamic memory inside a struct

    - by Maximilien
    Hello, I'm editing a piece of code, part of a big project, that uses consts to
    initialize a bunch of arrays. Because I want to parametrize these consts, I have to
    adapt the code to use malloc to allocate the memory. Unfortunately there is a problem
    with structs: I'm not able to allocate dynamic memory in the struct itself, and doing
    it outside would cause too much modification of the original code. Here's a small
    example:

      int globalx, globaly;
      struct bigStruct {
        struct subStruct {
          double info1;
          double info2;
          bool valid;
        };
        double data;
        //subStruct bar[globalx][globaly];
        subStruct ** bar = (subStruct**)malloc(globalx*sizeof(subStruct*));
        for (int i = 0; i < globalx; i++)
          bar[i] = (*subStruct)malloc(globaly*sizeof(subStruct));
      };

      int main() {
        globalx = 2;
        globaly = 3;
        bigStruct foo;
        for (int i = 0; i < globalx; i++)
          for (int j = 0; j < globaly; j++) {
            foo.bar[i][j].info1 = i + j;
            foo.bar[i][j].info2 = i * j;
            foo.bar[i][j].valid = (i == j);
          }
        return 0;
      }

    Note: in the program code I'm editing, globalx and globaly were consts in a specified
    namespace. Now I removed the const so they can act as parameters that are set exactly
    once. Summarized: how can I properly allocate memory for the substruct inside the
    struct? Thank you very much! Max
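
    A struct definition cannot contain executable statements, so the allocation has to
    happen in code that runs at run time, typically an init function that fills in a
    pointer member. A minimal C sketch of that pattern (bigStruct_init and bigStruct_free
    are illustrative names, not from the original code):

      #include <stdlib.h>
      #include <stdbool.h>

      struct subStruct { double info1; double info2; bool valid; };

      struct bigStruct {
          double data;
          struct subStruct **bar;   /* sized at run time */
      };

      /* allocate bar as an x-by-y grid; returns 0 on success, -1 on failure */
      int bigStruct_init(struct bigStruct *s, int x, int y) {
          s->bar = malloc(x * sizeof *s->bar);
          if (s->bar == NULL)
              return -1;
          for (int i = 0; i < x; i++) {
              s->bar[i] = malloc(y * sizeof *s->bar[i]);
              if (s->bar[i] == NULL)
                  return -1;        /* a full version would free what was already allocated */
          }
          return 0;
      }

      void bigStruct_free(struct bigStruct *s, int x) {
          for (int i = 0; i < x; i++)
              free(s->bar[i]);
          free(s->bar);
          s->bar = NULL;
      }

    The call sites then become bigStruct_init(&foo, globalx, globaly) before the loops and
    bigStruct_free(&foo, globalx) afterwards.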

    Read the article

  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down since the profiler keeps crashing (hotspot
    error), so before I go too deep into figuring it out I'd like to know if I really have
    a problem or not :-) I have a few thread pools created via:

      Executors.newFixedThreadPool(10);

    The threads connect to different web sites and, on occasion, I get "connection refused"
    and wind up throwing an exception. When I later call Future.get() to get the result, I
    then catch the ExecutionException that wraps the exception thrown when the connection
    could not be made. The program uses a fairly constant amount of memory up until the
    point that the exceptions get thrown (they tend to happen in batches when a particular
    site is overloaded). After that point the memory again remains constant, but at a
    higher level. So my question is: is the memory behaviour (reported by "top" on Unix)
    expected because the exceptions just triggered something, or do I probably have an
    actual leak that I'll need to track down? Additionally, when Future.get() throws an
    exception, is there anything else I need to do besides catch the exception (such as
    call Future.cancel() on it)?

    Read the article

  • How to make a WAR file take up less memory

    - by Myy
    I need help decreasing the memory usage of my web app so I can fit more apps onto my
    web server. I'm building a Java web app with JSF 2.0, developing in Eclipse Helios and
    running on an Apache Tomcat server, and I have a dedicated virtual server with Tomcat
    as well where I deploy the WAR files. The web app is about 35MB in size (it has a lot
    of jars and such), but when I deploy it to my Tomcat web server I can see it takes
    about 300MB of RAM. Is this normal? My dedicated server only has 2GB of RAM, of which I
    normally have 1GB to use, so as soon as I deploy 3 apps I get an OOM error; I've gotten
    a PermGen OOM and an "out of swap memory" error. To fix this I upped MaxPermGen to
    about a gig and restarted the server to get back some swap space. I then tried
    deploying smaller, older apps (about 15MB) and they take up way less memory. With 1GB
    of RAM I want to be able to fit more apps onto my web server without getting any OOM
    errors. Now, I found this Stack Overflow question: can that be applied to my case? And
    if so, which are the common folders in the Tomcat server? Has anyone done this before,
    or is there a different, more effective, less complicated approach? Any ideas and/or
    comments are more than appreciated. Thanks! Myy

    Read the article

  • Accessing memory buffer after fread()

    - by xiongtx
    I'm confused as to how fread() is used. Below is an example from cplusplus.com:

      /* fread example: read a complete file */
      #include <stdio.h>
      #include <stdlib.h>

      int main () {
        FILE * pFile;
        long lSize;
        char * buffer;
        size_t result;

        pFile = fopen ( "myfile.bin" , "rb" );
        if (pFile==NULL) {fputs ("File error",stderr); exit (1);}

        // obtain file size:
        fseek (pFile , 0 , SEEK_END);
        lSize = ftell (pFile);
        rewind (pFile);

        // allocate memory to contain the whole file:
        buffer = (char*) malloc (sizeof(char)*lSize);
        if (buffer == NULL) {fputs ("Memory error",stderr); exit (2);}

        // copy the file into the buffer:
        result = fread (buffer,1,lSize,pFile);
        if (result != lSize) {fputs ("Reading error",stderr); exit (3);}

        /* the whole file is now loaded in the memory buffer. */

        // terminate
        fclose (pFile);
        free (buffer);
        return 0;
      }

    Let's say that I don't use fclose() just yet. Can I now just treat buffer as an array
    and access elements like buffer[i]? Or do I have to do something else?
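
    For reference, once fread() has filled the buffer it is ordinary heap memory and can be
    indexed like any array, whether or not the FILE* has been closed; it only becomes
    invalid after free(). A minimal C sketch (the file name and the checksum loop are
    illustrative):

      #include <stdio.h>
      #include <stdlib.h>

      int main (void) {
        FILE *pFile = fopen("myfile.bin", "rb");
        if (pFile == NULL) return 1;

        fseek(pFile, 0, SEEK_END);
        long lSize = ftell(pFile);
        rewind(pFile);

        char *buffer = malloc(lSize);
        if (buffer == NULL) { fclose(pFile); return 2; }

        size_t result = fread(buffer, 1, lSize, pFile);
        fclose(pFile);                      /* closing the stream does not touch buffer */

        unsigned long sum = 0;
        for (size_t i = 0; i < result; i++)
          sum += (unsigned char) buffer[i]; /* plain array indexing on the heap block */
        printf("read %zu bytes, byte sum %lu\n", result, sum);

        free(buffer);                       /* only after this is buffer[i] invalid */
        return 0;
      }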

    Read the article

  • Custom UITableView headerView disappears after memory warning

    - by psychotik
    I have a UITableViewController. I create a custom headerView for its tableView in the
    loadView method like so:

      - (void)loadView {
        [super loadView];
        UIView* containerView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, width, height * 2)];
        containerView.tag = 'cont';
        containerView.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin |
            UIViewAutoresizingFlexibleRightMargin | UIViewAutoresizingFlexibleWidth |
            UIViewAutoresizingFlexibleTopMargin;
        UIButton* button = [UIButton buttonWithType:UIButtonTypeCustom];
        button.frame = CGRectMake(padding, height, width, height);
        ... //configure UIButton and events
        UIImageView* imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image.png"]
            highlightedImage:[UIImage imageNamed:@"highlight.png"]];
        imageView.frame = CGRectMake(0, 0, width, height);
        ... //configure UIImageView
        [containerView addSubview:button];
        [containerView addSubview:imageView];
        self.tableView.tableHeaderView = containerView;
        [imageView release];
        [containerView release];
      }

    None of the other methods (viewDidLoad/Unload, etc.) are overridden. This controller is
    hosted in a tab. When I switch to another tab, simulate a memory warning, and then come
    back to this tab, my UITableView is missing my custom header. All the rows/sections are
    visible as I would expect. Putting a breakpoint in the loadView code above, I see it
    being invoked when I switch back to the tab after the memory warning, and yet I can't
    actually see the header. Any ideas about what I'm missing here? EDIT: This happens on
    the device and in the simulator. On the device, I just let a memory warning occur by
    opening a bunch of different apps while mine is in the background.

    Read the article

  • Entity Framework navigation property: further filtering without loading into memory

    - by cellik
    Hi, I have two entities with a 1-to-N relation between them; let's say Books and Pages.
    Book has a navigation property Pages. Book has BookId as an identifier, and Page has an
    auto-generated id and a scalar property named PageNo. LazyLoading is set to true. I
    generated this using VS2010 & .NET 4.0 and created a database from it. In the partial
    class of Book, I need a GetPage function like the one below:

      public Page GetPage(int PageNumber)
      {
          // checking whether it exists etc. is not included, for simplicity
          return Pages.Where(p => p.PageNo == PageNumber).First();
      }

    This works. However, since the Pages property on Book is an EntityCollection, it has to
    load all pages of a book into memory in order to return the one page (which slows down
    the app the first time this function is hit for a given book). That is, the framework
    does not merge the queries and run them at once; it loads the Pages into memory and
    then uses LINQ to Objects for the second part. To overcome this I changed the code as
    follows:

      public Page GetPage(int PageNumber)
      {
          MyContainer container = new MyContainer();
          return container.Pages.Where(p => p.PageNo == PageNumber && p.Book.BookId == BookId).First();
      }

    This works considerably faster; however, it doesn't take into account pages that have
    not yet been persisted to the db. So both options have their cons. Is there any trick
    in the framework to overcome this situation? This must be a common scenario where you
    don't want all of the objects of a navigation property loaded in memory when you don't
    need them.

    Read the article

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. Part of this
    library performs various functions on a file, which is read / written via a FileStream
    object. On each StreamReader.Read(...) pass, I fire off an event which will be used in
    the main app to display progress information. The processing that goes on in the loop
    is varied, but is not too time consuming (it could just be a simple file copy, for
    example, or may involve encryption...). My main question is: what is the best memory
    buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would
    cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up
    the abstraction tree, you could go for a larger buffer which could read an entire FAT
    cluster at a time. I realise that with today's PCs I could go for a more memory-hungry
    option (a couple of MiB, for example), but then I increase the time between UI updates
    and the user perceives a less responsive app. As an aside, I'm eventually hoping to
    provide a similar interface to files hosted on FTP / HTTP servers (over a local network
    / fast-ish DSL). What would be the best memory buffer size for those (again, a
    "best-case" tradeoff between perceived responsiveness and performance)?

    Read the article

  • Memory usage does not drop -- no leaks though

    - by climbon
    I have a UINavigationController controlling several views. One of the views is composed
    of 20 scrollable pages. Each page is constructed on the fly from UIViews by adding
    buttons, labels, UIImageViews, etc. When this UIView is popped off the stack, the
    memory usage remains the same; hence it keeps rising if I keep pushing/popping that
    view. In my dealloc, I traverse all 20 pages, find each type of object that was added
    via addSubview, and release it, but Instruments says my memory usage never goes down!
    I am trying to use retainCount to see what is up with the objects I am releasing, but I
    am perhaps not getting the true picture via retainCount. For some elements retainCount
    shows 2, so I try to release that object twice, but then the app crashes. If I release
    it once it works, but memory usage never goes down :( Q1: Do I need to traverse and
    find each element and then release it? Why can't I release a parent object and have all
    objects contained by it released automatically? Q2: Is retainCount a reliable indicator?

    Read the article

  • Sheet and thread memory problem

    - by Xident
    Hi guys, I recently started a project which can export some precalculated graphics and
    audio to files for later processing. All I did was put a new window (with a progress
    indicator and an Abort button) in my main xib and open it using the following code:

      [NSApp beginSheet: REC_Sheet modalForWindow: MOTHER_WINDOW
             modalDelegate: self didEndSelector: nil contextInfo: nil];
      NSModalSession session = [NSApp beginModalSessionForWindow:REC_Sheet];
      RECISNOTDONE = YES;
      while (RECISNOTDONE) {
        if ([NSApp runModalSession:session] != NSRunContinuesResponse)
          break;
        usleep(100);
      }
      [NSApp endModalSession:session];

    A background thread (pthread) was started earlier to actually perform the work and save
    all the targa/wave files. This worked great, but after a while it turned out that the
    main thread was no longer responding and my memory footprint kept rising. I tried to
    debug it with Instruments and saw a lot of CFHash and similar objects growing without
    bound. By accident I clicked below the sheet, and that helped temporarily: the main
    thread (AppKit?) released its stuff, but only for a little while. I can't explain it.
    First I thought it was my thread's access to the progress bar to update the progress
    (at 0.5 s intervals), so I cut that out. But even if I don't update anything and do
    nothing with the progress bar, my application eats up all the memory because the main
    thread never releases its "main event" (or whatever) stuff. Is there any way to drain
    this main-thread memory (a run loop / NSApp call?), and why on earth doesn't the main
    thread respond anymore after this simple task? I don't have a clue anymore, please
    help! Thanks in advance! P.S. How do you guys implement "threaded long task" stuff and
    update your GUI?

    Read the article

  • Jquery plugin seems to leak memory no matter what I do

    - by ddombrow
    I've recently been tasked with ferreting out some memory leaks in an application for my
    work. I've narrowed one of the big leaks down to a jQuery plugin: it appears we're
    using a modified version of a popular context-menu jQuery plugin, and it looks like one
    of the developers before me attempted to add a destroy method. I noticed it wasn't very
    well written and attempted to rewrite it. Here's the meat of my destroy method:

      if (menu.childMenus) {
        for (var i = 0; i < menu.childMenus.length; i++) {
          $(menu.childMenus[i]).destroy(menu.childMenus[i], 'contextmenu');
        }
      }
      var recursiveUnbind = function(node) {
        $(node).unbind();
        //$(node).empty().remove();
        $.each(node, function(obj) {
          recursiveUnbind(obj);
        });
      };
      $.each(menu, function() {
        recursiveUnbind(menu);
      });
      $(menu).empty().remove();

    In my mind this code should blow away all the jQuery event bindings and remove the DOM
    elements, yet the plugin still leaks gobs of memory in IE7. The modified plugin with a
    test page can be found here: http://www.olduglyhead.com/jquery/leaks/ Clicking the
    button repeatedly causes IE7 to leak a bunch of memory.

    Read the article

  • VB.Net Memory Issue

    - by Skulmuk
    We have an application with some interesting memory usage issues. When it first opens,
    the program uses around 50-60MB of memory, and this stays consistent on 32-bit
    machines. On 64-bit machines, however, re-activating the form in any way (clicking,
    dragging, alt-tabbing, etc.) adds around another 50MB to its memory usage. It repeats
    this process several times before resetting back to around 45MB, at which point the
    cycle begins again. I've done some research, and a lot of people have said that VB in
    general has pretty poor garbage collection, which could be affecting the software in
    some way, but I've yet to find a solution. There are no events fired when the
    application is activated (as the 32-bit usage shows); the application is merely sitting
    awaiting the user's actions. At load, the system pulls some data into a tree view, but
    that's the only external connection, and it only re-fires that routine when the user
    makes a change to something and saves the change. Has anyone else experienced anything
    this strange, and if so, does anyone know what might fix it? It seems strange that it
    only occurs on x64 systems. Thanks

    Read the article

  • Bluetooth not found on BCM43228

    - by TK Kocheran
    I've got a Broadcom BCM43228 mPCIe card which came with my motherboard (ASUS ROG
    Maximus V Extreme; I can't seem to find a link to what the card is). It is working
    great for WiFi right now, but I can't detect the onboard Bluetooth hardware. In
    Windows, I have full Bluetooth 4.0 support.

      $ lspci
      00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
      00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
      00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
      00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
      00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
      00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
      00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
      00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
      00:1c.4 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 5 (rev c4)
      00:1c.6 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 7 (rev c4)
      00:1c.7 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 8 (rev c4)
      00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
      00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
      00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
      00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
      01:00.0 VGA compatible controller: NVIDIA Corporation Device 1189 (rev a1)
      01:00.1 Audio device: NVIDIA Corporation Device 0e0a (rev a1)
      0d:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
      0e:00.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
      0f:01.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
      0f:04.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
      0f:05.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
      0f:06.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
      0f:07.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
      0f:08.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
      0f:09.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
      10:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
      12:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
      15:00.0 Network controller: Broadcom Corporation BCM43228 802.11a/b/g/n
      17:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)

    The key line seems to be:

      15:00.0 Network controller: Broadcom Corporation BCM43228 802.11a/b/g/n

    If I try to detect the Bluetooth card, I don't see anything:

      $ hcitool dev
      Devices:

    rfkill list all: Output
    lspci: Output
    lsusb: Output

    I finally found the card with usb-devices:

      T: Bus=01 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#= 3 Spd=12 MxCh= 0
      D: Ver= 2.00 Cls=ff(vend.) Sub=01 Prot=01 MxPS=64 #Cfgs= 1
      P: Vendor=0b05 ProdID=17b5 Rev=01.12
      S: Manufacturer=Broadcom Corp
      S: Product=BCM20702A0
      S: SerialNumber=############
      C: #Ifs= 4 Cfg#= 1 Atr=e0 MxPwr=0mA
      I: If#= 0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=01 Prot=01 Driver=(none)
      I: If#= 1 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=01 Prot=01 Driver=(none)
      I: If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
      I: If#= 3 Alt= 0 #EPs= 0 Cls=fe(app. ) Sub=01 Prot=01 Driver=(none)

    I've heard that this card needs to have firmware injected into it in order to function.
    If that's the case, how do I do it?

    Read the article

  • OutOfMemoryException - out of ideas II

    - by Captain Comic
    Hello, this question is related to my previous question. The storyline: I have an
    application which consumes a lot of memory if you look at the VM Size column in Task
    Manager, and I am trying to find out what consumes this amount of memory. You can see
    in the picture below that VM size is 2.46 GB. OK, now I am looking at the .NET
    performance counters: committed and reserved bytes add up to only 1.2 GB. Now let's
    look at WinDbg SOS debugging and run the eeheap -gc command. The heap size used by the
    GC is only 340 MB. Where is the rest of the used memory? I need to discover why VM size
    in Task Manager is 2.4 GB.

    Read the article

  • Heap Dump Root Classes

    - by Adnan Memon
    We have a production system going into an infinite loop of full GCs, with memory
    dropping from 8 gigs to about 1 MB in just 2 minutes. After taking a heap dump, it
    tells me there is an array of java.lang.Object ([Ljava.lang.Object;) holding millions
    of java.lang.String objects with the same String value, taking 99% of the heap. But it
    doesn't tell me which class is referencing this array, so that I can fix it in the
    code. I took the heap dump using the jmap tool on JDK 6 and used JProfiler, NetBeans,
    SAP Memory Analyzer and IBM Memory Analyzer, but none of them tell me what is causing
    this huge array of objects, i.e. what class references it or contains it. Do I have to
    take a different dump with a different config in order to get that info? Or is there
    anything else that can help me find the culprit class? It would help a lot.

    Read the article

  • Is writing a reference atomic on 64bit VMs

    - by Steffen Heil
    Hi, the Java memory model mandates that writing an int is atomic: that is, if you write
    a value to it (consisting of 4 bytes) in one thread and read it in another, you will
    get all of the bytes or none of them, but never 2 new bytes and 2 old bytes or the
    like. This is not guaranteed for long: writing 0x1122334455667788 to a variable that
    previously held 0 could result in another thread reading 0x1122334400000000 or
    0x0000000055667788. Now, the specification does not mandate that object references be
    either int-sized or long-sized. For type-safety reasons I suspect they are guaranteed
    to be written atomically, but on a 64-bit VM these references could very well be 64-bit
    values (simply memory addresses). Now here are my questions: Are there any memory model
    specs covering this (that I haven't found)? Can long writes be expected to be atomic on
    64-bit VMs? Are VMs forced to map references to 32 bits? Regards, Steffen

    Read the article

  • Follow up viewDidUnload vs. dealloc question...

    - by entaroadun
    Clarification question as a follow-up to:
    http://stackoverflow.com/questions/2261972/what-exactly-must-i-do-in-viewdidunload
    http://stackoverflow.com/questions/1158788/when-should-i-release-objects-in-voidviewdidunload-rather-than-in-dealloc
    So let's say there's a low-memory warning, the view is hidden, and viewDidUnload is
    called. We do the release-and-nil dance. Later the entire view stack is not needed, so
    dealloc is called. Since I already have the release-and-nil code in viewDidUnload, I
    don't have it in dealloc. Perfect. But if there's no low-memory warning, viewDidUnload
    is never called; dealloc is called, and since I don't have the release-and-nil code
    there, there's a memory leak. In other words, will dealloc ever be called without
    viewDidUnload being called first? And the practical follow-up to that: if I alloc and
    set something in viewDidLoad, and I release it and set it to nil in viewDidUnload, do I
    leave it out of dealloc, or do I do a defensive nil check in dealloc and release/nil it
    if it's not nil?

    Read the article

  • Cuda program results are always zero in HW, correct in EMU??

    - by Orion Nebula
    Hi all! I am having a weird problem. I have written CUDA code which executes correctly
    in emulation, and all results show up; however, when executed on hardware (a "G210"),
    the results in the result memory are always 0. I am passing two vectors to the kernel,
    one with random variables, the other initialized to zero. The code copies the first
    vector to shared memory, does some swapping and other operations, and then writes the
    results back to the second vector (the one with the initial 0's). I am using double
    precision, the -arch sm13 flag is used, and all memory allocations also use
    sizeof(double). I have checked that the kernel is invoked (it is, so no problems here),
    and cudaMemcpy has no problems. What could be the problem? :( Why would it work in
    emulation but not on HW? I am quite confused; any ideas?
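
    One pattern worth checking in situations like this: a kernel launch that fails does not
    stop the host code, so the output buffer simply stays zeroed. Code built with -arch
    sm_13 (needed for double precision) only runs on devices of compute capability 1.3 or
    higher, and if the G210 is a lower-capability part the launch will fail with an
    "invalid device function" error. A minimal host-side C sketch of the error check
    (check_launch is an illustrative name; call it right after the kernel launch):

      #include <stdio.h>
      #include <cuda_runtime.h>

      /* call immediately after a kernel launch, e.g. check_launch("myKernel"); */
      static void check_launch(const char *name)
      {
          cudaError_t err = cudaGetLastError();   /* error from the launch itself */
          if (err != cudaSuccess)
              fprintf(stderr, "%s launch failed: %s\n", name, cudaGetErrorString(err));

          err = cudaThreadSynchronize();          /* wait for the kernel; cudaDeviceSynchronize in newer toolkits */
          if (err != cudaSuccess)
              fprintf(stderr, "%s execution failed: %s\n", name, cudaGetErrorString(err));
      }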

    Read the article

  • How to see what objects lie in which generation in YourKit?

    - by prams
    I am using YourKit (11.0) to try to profile my J2EE app. The app uses Java 6 and runs
    on 64-bit Linux (CentOS). I was told that YourKit can possibly tell us which objects
    exist in which generation (eden, old, etc.) at any given point in time. On a side note,
    I am trying to chase a problem where memory usage keeps increasing until a major
    collection happens (every 4 hrs), and I am suspicious of a few particular objects, so I
    am interested to know where those objects lie at different times. Fortunately I know a
    lot of memory is being consumed in one particular area of code (so other objects are
    possibly being put directly into the old gen), but I don't know exactly how much of
    that memory is going into eden space, how much is being collected by minor collections,
    etc. Thanks.

    Read the article

  • Dynamically allocated array is not freed

    - by Stefano
    I'm using the code below to dynamically allocate an array, do some work inside the
    function, return an element of the array, and free the memory outside the function.
    But when I try to deallocate the array it doesn't free the memory and I have a memory
    leak. The debugger, pointed at the myArray variable, shows me the error CXX0030. Why?

      struct MYSTRUCT {
        char *myvariable1;
        int myvariable2;
        char *myvariable2;
        ....
      };

      void MyClass::MyFunction1() {
        MYSTRUCT *myArray = NULL;
        MYSTRUCT *myElement = this->MyFunction2(myArray);
        ...
        delete [] myArray;
      }

      MYSTRUCT* MyClass::MyFunction2(MYSTRUCT *array) {
        array = (MYSTRUCT*)operator new(bytesLength);
        ...
        return array[X];
      }
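
    For what it's worth, in the code as shown the pointer is passed to MyFunction2 by
    value, so the allocation made inside it never reaches myArray in the caller; the
    delete [] then runs on NULL while the real block leaks (and raw operator new memory
    should not be released with delete []). A C-style sketch of keeping allocation and
    deallocation paired, with illustrative names (make_array, element) rather than the
    original ones:

      #include <stdio.h>
      #include <stdlib.h>

      struct mystruct { int value; };

      /* allocate in one place and hand the pointer back to the caller,
         so the same pointer can later be passed to free() */
      static struct mystruct *make_array(size_t n) {
          struct mystruct *a = malloc(n * sizeof *a);
          if (a == NULL) return NULL;
          for (size_t i = 0; i < n; i++) a[i].value = (int) i;
          return a;
      }

      int main(void) {
          size_t n = 10;
          struct mystruct *array = make_array(n);
          if (array == NULL) return 1;

          struct mystruct element = array[3];   /* copy of one element, safe to keep */
          printf("%d\n", element.value);

          free(array);                          /* frees exactly what malloc returned */
          return 0;
      }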

    Read the article

  • Application servers (Java): should adding RAM to a server depend on each domain's -Xmx value?

    - by ring bearer
    We have the Glassfish application server running on Linux servers. Each Glassfish
    installation hosts 3 domains, and each domain has a JVM configuration such as -Xms 1GB
    and -Xmx 2GB. That means if all three domains are running at max memory, the server
    should be able to allocate a total of 6GB to the JVMs. With that math, each of our
    servers has 8GB RAM (a 2GB buffer). First of all: is this a good approach? I did not
    think so, because when we analyzed memory utilization on this server over the past few
    months, it only went up to about 1GB. Now there are requests to add an additional
    domain to these servers. Does that mean adding another 2GB of RAM just to be safe, or,
    based on the trend, continuing with whatever memory the server has?

    Read the article

  • Why is address zero used for null pointer?

    - by Joel
    In C (or C++ for that matter), pointers are special if they have the value zero: I am
    advised to set pointers to zero after freeing their memory, because it means freeing
    the pointer again isn't dangerous; when I call malloc, it returns a pointer with the
    value zero if it can't get me memory; and I use if (p != 0) all the time to make sure
    passed pointers are valid, etc. But since memory addressing starts at 0, isn't 0 just
    as valid an address as any other? How can 0 be used for handling null pointers if that
    is the case? Why isn't a negative number null instead?
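
    The conventions listed above, in a minimal C sketch (the constant 0 or NULL in pointer
    context is the null pointer constant; the compiler maps it to whatever bit pattern the
    platform uses for "no object", which need not be a usable address):

      #include <stdio.h>
      #include <stdlib.h>

      int main(void) {
          int *p = malloc(100 * sizeof *p);
          if (p == NULL) {              /* malloc reports failure with a null pointer */
              fputs("out of memory\n", stderr);
              return 1;
          }
          p[0] = 42;
          printf("%d\n", p[0]);

          free(p);
          p = NULL;                     /* a later free(p) is now a harmless no-op */
          free(p);                      /* free(NULL) is defined to do nothing */

          if (p != 0)                   /* same test as p != NULL */
              puts("never reached");
          return 0;
      }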

    Read the article

  • How does C free() work?

    - by slee
      #include <stdio.h>
      #include <stdlib.h>

      int * alloc() {
        int *p = (int *)calloc(5,4);
        printf("%d\n",p);
        return p;
      }

      int main() {
        int *p = alloc();
        free(p);
        printf("%d\n",p);
        p[0] = 1;
        p[1] = 2;
        printf("%d %d\n",p[0],p[1]);
      }

    As to this code: I allocate 5 ints first, and then I free the memory. When I printf p,
    why does p still have a value equal to the memory address allocated earlier? And I can
    also assign values to p[0] and p[1]. Does this mean free() does nothing? Once I
    allocate memory, can I use it later even though I have freed it?
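
    For reference, free() only tells the allocator that the block may be reused; it does
    not change the pointer variable or wipe the memory, so reading or writing through a
    freed pointer is undefined behaviour that merely happens to "work" here. A minimal C
    sketch of the safer pattern (pointers printed with %p, and the pointer nulled after
    free):

      #include <stdio.h>
      #include <stdlib.h>

      int main(void) {
          int *p = calloc(5, sizeof *p);
          if (p == NULL) return 1;

          printf("allocated at %p\n", (void *) p);
          p[0] = 1;                    /* valid: the block is still ours */
          p[1] = 2;
          printf("%d %d\n", p[0], p[1]);

          free(p);                     /* the block may now be reused by the allocator */
          p = NULL;                    /* forget the stale address so it cannot be used */

          /* p[0] = 3;  would now crash immediately instead of silently corrupting memory */
          return 0;
      }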

    Read the article

  • C or C++: how do loaders/wrappers work?

    - by guitar-
    Here's an example of what I mean...

      User runs LOADER.EXE
      LOADER.EXE downloads another EXE but keeps it all in memory without saving it to disk
      It runs the downloaded EXE just as it would if it were executed from disk, but does it straight from memory

    I've seen a few applications like this, and I've never seen an example or an
    explanation of how it works. Does anyone know? Another example is having an encrypted
    EXE embedded in another one: it gets extracted and decrypted in memory, without ever
    being saved to disk before it gets executed. I've seen that one used in some
    applications to prevent piracy.
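
    On Windows this kind of loader typically maps the PE image by hand (allocate memory,
    copy the sections, fix up relocations and imports, then jump to the entry point), which
    is too involved to sketch here. On Linux there is a shortcut: an anonymous in-memory
    file handed straight to exec. A hedged, Linux-only C sketch (run_from_memory is an
    illustrative name; it assumes the downloaded bytes are already in buf/len and requires
    memfd_create, i.e. Linux 3.17+ with a recent glibc):

      #define _GNU_SOURCE
      #include <sys/mman.h>
      #include <unistd.h>

      /* execute an ELF image that exists only in memory, without writing it to disk */
      static int run_from_memory(const void *buf, size_t len,
                                 char *const argv[], char *const envp[])
      {
          int fd = memfd_create("loaded", MFD_CLOEXEC);  /* anonymous in-memory file */
          if (fd < 0)
              return -1;
          if (write(fd, buf, len) != (ssize_t) len) {
              close(fd);
              return -1;
          }
          fexecve(fd, argv, envp);                       /* replaces this process on success */
          close(fd);
          return -1;                                     /* reached only if exec failed */
      }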

    Read the article
