Search Results

Search found 992 results on 40 pages for 'garbage'.

Page 11/40 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Java GC periodically enters into several full GC cycles

    - by Peter
    Environment: Sun JDK 1.6.0_16. VM settings: -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -Xms1024M -Xmx1024M -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=4 (6 also checked) -XX:MaxPermSize=128M. OS: Windows Server 2003. Processor: 4 cores of Intel Xeon 5130, 2 GHz. My application: a high intensity of concurrent operations (Java 5 concurrency used), each completed by a commit to Oracle; about 20-30 threads run non-stop, doing tasks. The application runs in a JBoss web container. The GC starts off working normally: I see a lot of small GCs, and all that time the CPU shows a good load, with all 4 cores at 40-50% and a stable CPU graph. Then, after about a minute of good work, the CPU drops to 0% on 2 of the 4 cores, its graph becomes unstable, going up and down ("teeth"). I see that my threads work more slowly (I have monitoring) and that the GC starts producing a lot of full GCs; for the next 4-5 minutes this situation remains, then for a short period, about 1 minute, it gets back to normal, but shortly after that the whole thing repeats. Question: why do I have such frequent full GCs, and how do I prevent them? I played with SurvivorRatio - it does not help. I noticed that the application behaves normally until the first full GC occurs, while I have enough memory; then it runs badly. My GC log starts well, then shows a long period of full GCs (many of them):
        1027.861: [GC 942200K->623526K(991232K), 0.0887588 secs]
        1029.333: [GC 803279K(991232K), 0.0927470 secs]
        1030.551: [GC 967485K->625549K(991232K), 0.0823024 secs]
        1030.634: [GC 625957K(991232K), 0.0763656 secs]
        1033.126: [GC 969613K->632963K(991232K), 0.0850611 secs]
        1033.281: [GC 649899K(991232K), 0.0378358 secs]
        1035.910: [GC 813948K(991232K), 0.3540375 secs]
        1037.994: [GC 967729K->637198K(991232K), 0.0826042 secs]
        1038.435: [GC 710309K(991232K), 0.1370703 secs]
        1039.665: [GC 980494K->972462K(991232K), 0.6398589 secs]
        1040.306: [Full GC 972462K->619643K(991232K), 3.7780597 secs]
        1044.093: [GC 620103K(991232K), 0.0695221 secs]
        1047.870: [Full GC 991231K->626514K(991232K), 3.8732457 secs]
        1053.739: [GC 942140K(991232K), 0.5410483 secs]
        1056.343: [Full GC 991232K->634157K(991232K), 3.9071443 secs]
        1061.257: [GC 786274K(991232K), 0.3106603 secs]
        1065.229: [Full GC 991232K->641617K(991232K), 3.9565638 secs]
        1071.192: [GC 945999K(991232K), 0.5401515 secs]
        1073.793: [Full GC 991231K->648045K(991232K), 3.9627814 secs]
        1079.754: [GC 936641K(991232K), 0.5321197 secs]
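
    A hedged diagnostic suggestion (these are standard HotSpot options available on JDK 6, not taken from the post; the log file name is only an example): adding the GC-logging flags below records generation sizes, timestamps and per-pause stopped time, which makes a log like the one above much easier to interpret.

        -verbose:gc -Xloggc:gc.log
        -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
        -XX:+PrintGCApplicationStoppedTime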

    Read the article

  • Forcing deallocation of large cache object in Java

    - by Jack
    I use a large HashMap (millions of entries) to cache values needed by an algorithm; the key is a combination of two objects encoded as a long. Since it grows continuously (the keys in the map change, so old ones are not needed anymore), it would be nice to be able to force wiping all the data contained in it and start again during execution. Is there a way to do this effectively in Java? I mean releasing the associated memory (about 1-1.5 GB of HashMap) and restarting from an empty HashMap.
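
    A minimal sketch of the usual approach, assuming the cache is held in a single field (the class and value type here are illustrative, not from the post): replace the map rather than clearing it entry by entry, so that once nothing else references the old map, the GC is free to reclaim it.

        import java.util.HashMap;
        import java.util.Map;

        public class ValueCache {
            private Map<Long, Double> cache = new HashMap<Long, Double>();

            public Double get(long key) {
                return cache.get(key);
            }

            public void put(long key, double value) {
                cache.put(key, value);
            }

            // Drop the old map and start again; the memory it held becomes
            // eligible for collection as soon as no other reference to the
            // old map remains.
            public void reset() {
                cache = new HashMap<Long, Double>();
            }
        }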

    Read the article

  • Remove pointer object whose reference is maintained in three different lists

    - by brainydexter
    I am not sure how to approach this problem. The 'Player' class maintains a list of Bullet* objects: class Player { protected: std::list< Bullet* > m_pBullet_list; } When the player fires a Bullet, it is added to this list. Also, inside the constructor of Bullet, a pointer to the same object is registered with CollisionMgr, which also maintains a list of Bullet*: Bullet::Bullet(GameGL* a_pGameGL, Player* a_pPlayer) : GameObject( a_pGameGL ) { m_pPlayer = a_pPlayer; m_pGameGL->GetCollisionMgr()->AddBullet(this); } class CollisionMgr { void AddBullet(Bullet* a_pBullet); protected: std::list< Bullet*> m_BulletPList; } In CollisionMgr::Update(), based on some conditions, I populate a Cell class which again contains a list of Bullet*. Finally, certain conditions qualify a Bullet for deletion. These conditions are tested while iterating through a Cell's list, so if I have to delete the Bullet object from all these places, how should I do it so that there are no dangling references left to it? std::list< Bullet*>::iterator bullet_it; for( bullet_it = (a_pCell->m_BulletPList).begin(); bullet_it != (a_pCell->m_BulletPList).end(); bullet_it++) { bool l_Bullet_trash = false; Bullet* bullet1 = *bullet_it; // conditions would set this to true if ( l_Bullet_trash ) // TrashBullet( bullet1 ); continue; } Also, I was reading about list::remove, and it mentions that it calls the destructor of the object being removed. Given this, if I delete the object through one list, it no longer exists, but the other lists would still contain pointers to it. How do I handle all these problems? Can someone please help me here? Thanks. PS: If you want me to post more code or provide an explanation, please do let me know.

    Read the article

  • Disposing ActiveX resources owned by another thread

    - by Stefan Teitge
    I've got a problem with threading and disposing of resources. I have a C# Windows Forms application which runs an expensive operation in a thread. This thread instantiates an ActiveX control (AxControl). The control must be disposed, as it uses a high amount of memory, so I implemented a Dispose() method and even a destructor. After the thread ends, the destructor is called - sadly, by the UI thread - so invoking activexControl.Dispose(); fails with the message "COM object that has been separated from its underlying RCW", as the object belongs to another thread. How do I do this correctly, or is it just a bad design on my part? (I stripped the code down to the minimum, including removing any safety concerns.) class Program { [STAThread] static void Main() { // do stuff here, e.g. open a form new Thread(new ThreadStart(RunStuff)).Start(); // do more stuff } private static void RunStuff() { DoStuff stuff = new DoStuff(); stuff.PerformStuff(); } } class DoStuff : IDisposable { private AxControl activexControl; DoStuff() { activexControl = new AxControl(); activexControl.CreateControl(); // force instance } ~DoStuff() { Dispose(); } public void Dispose() { activexControl.Dispose(); } public void PerformStuff() { // invent perpetuum mobile here, takes time } }

    Read the article

  • Why "Finalize method should not reference any other objects" ?

    - by mishal153
    I have been pondering why it is recommended that we should not release managed resources inside Finalize. If you look at the code example at http://msdn.microsoft.com/en-us/library/system.gc.suppressfinalize.aspx, search for the string "Dispose(bool disposing) executes in two distinct scenarios" and read that comment, you will understand what I mean. The only possibility I can think of is that it probably has something to do with the fact that it is not possible to predict when the finalizer will be called. Does anyone know the right answer? Thanks, mishal

    Read the article

  • Android Force Recycle Large Bitmap?

    - by GuyNoir
    From another Stack Overflow question, it seems that Android handles large bitmaps differently from other memory. It also seems that there is a way to force Android to recycle bitmaps to free up memory. Can anyone enlighten me on how to do this? My application uses 2-6 huge bitmaps at all times, so it nearly exhausts the phone's memory while running, and I want to clear it up when the user quits.
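
    A minimal sketch of the usual approach, assuming the bitmaps are held in fields of an Activity (class and field names here are illustrative): Bitmap.recycle() releases the native pixel memory immediately, and nulling the field lets the GC reclaim the Java object itself.

        import android.app.Activity;
        import android.graphics.Bitmap;
        import android.os.Bundle;

        public class ViewerActivity extends Activity {
            private Bitmap largeBitmap; // one of the "huge" bitmaps in question

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // ... decode and display largeBitmap here ...
            }

            @Override
            protected void onDestroy() {
                super.onDestroy();
                if (largeBitmap != null && !largeBitmap.isRecycled()) {
                    largeBitmap.recycle(); // frees the native pixel memory immediately
                    largeBitmap = null;    // lets the GC reclaim the Bitmap object itself
                }
            }
        }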

    Read the article

  • Web service performing asynchronous call

    - by kornelijepetak
    I have a web service method FetchNumber() that fetches a number from a database and then returns it to the caller. But just before it returns the number to the caller, it needs to send this number to another service, so it instantiates and runs a BackgroundWorker whose job is to send this number to that other service. public class FetchingNumberService : System.Web.Services.WebService { [WebMethod] public int FetchNumber() { int value = Database.GetNumber(); AsyncUpdateNumber async = new AsyncUpdateNumber(value); return value; } } public class AsyncUpdateNumber { public AsyncUpdateNumber(int number) { sendingNumber = number; worker = new BackgroundWorker(); worker.DoWork += asynchronousCall; worker.RunWorkerAsync(); } private void asynchronousCall(object sender, DoWorkEventArgs e) { // Sending the number to a service (which is synchronous) here } private int sendingNumber; private BackgroundWorker worker; } I don't want to block the web service (FetchNumber()) while sending this number to the other service, because it can take a long time and the caller does not care about it; the caller expects the call to return as soon as possible. FetchNumber() creates the background worker, runs it and then finishes (while the worker keeps running on the background thread). I don't need any progress reports or a return value from the background worker; it's more of a fire-and-forget concept. My question is this: since the web service object is instantiated per method call, what happens when the called method (FetchNumber() in this case) finishes while the background worker it instantiated and ran is still running? What happens to the background thread? When does the GC collect the service object? Does this prevent the background thread from executing correctly to the end? Are there any other side effects on the background thread? Thanks for any input.

    Read the article

  • ASP.NET cached objects staying in memory

    - by GordonB
    I have an ASP.NET Web Forms app that uses System.Web.Caching.Cache to cache XML data from a number of web services for 2 hours: webCacheObj.Remove(dataCacheKey) webCacheObj.Insert(dataCacheKey, dataToCache, Nothing, DateTime.Now.AddHours(2), Nothing) Every 90 minutes a Microsoft Search Server hits a particular (spider) page which calls the code to put the objects into the cache. The issue I have is that over time the memory usage of the application keeps growing; say that within a week the memory usage of the application pool grows to over 1 GB. I'm using IIS7 and no application pool recycling is currently enabled.

    Read the article

  • Use PermGen space or roll-my-own intern method?

    - by Adamski
    I am writing a Codec to process messages sent over TCP using a bespoke wire protocol. During the decode process I create a number of Strings, BigDecimals and dates. The client-server access patterns mean that it is common for the client to issue a request and then decode thousands of response messages, which results in a large number of duplicate Strings, BigDecimals, etc. Therefore I have created an InternPool<T> class allowing me to intern each class of object. Internally, the pool uses a WeakHashMap<T, WeakReference<T>>. For example: InternPool<BigDecimal> pool = new InternPool<BigDecimal>(); ... // Read BigDecimal from in buffer and then intern. BigDecimal quantity = pool.intern(readBigDecimal(in)); My question: I am using InternPool for BigDecimal, but should I consider also using it for String instead of String's intern() method, which I believe uses PermGen space? What is the advantage of using PermGen space?
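
    A minimal sketch of an intern pool along the lines described in the post (illustrative, not the poster's actual class): entries are held through weak references, so canonical instances that are no longer referenced elsewhere can be garbage collected, and nothing is pushed into PermGen.

        import java.lang.ref.WeakReference;
        import java.util.Map;
        import java.util.WeakHashMap;

        public class InternPool<T> {
            // Keys are weak by virtue of WeakHashMap; values are wrapped in
            // WeakReference so the map itself does not keep instances alive.
            private final Map<T, WeakReference<T>> pool = new WeakHashMap<T, WeakReference<T>>();

            public synchronized T intern(T value) {
                WeakReference<T> ref = pool.get(value);
                T canonical = (ref == null) ? null : ref.get();
                if (canonical == null) {
                    pool.put(value, new WeakReference<T>(value));
                    canonical = value;
                }
                return canonical;
            }
        }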

    Read the article

  • JVM GC demote object to eden space?

    - by Kevin
    I'm guessing this isn't possible... but here goes. My understanding is that eden space is cheaper to collect than old gen space, especially when you start getting into very large heaps. Large heaps tend to come up in long-running applications (server apps), and server apps often want to use some kind of cache. Caches with some kind of eviction (LRU) tend to defeat the assumptions the GC makes (temporary objects die quickly), so cache evictions end up filling up old gen faster than you'd like and you end up with a more costly old gen collection. Now, it seems like this sort of thing could be avoided if Java provided a way to mark a reference as about to die (a delete keyword)? The difference between this and C++ is that its use would be optional. And calling delete would not actually delete the object, but rather be a hint to the GC that it should demote the object back to eden space (where it will be more easily collected). I'm guessing this feature doesn't exist, but why not (is there a reason it's a bad idea)?

    Read the article

  • Displaying Flex Object References

    - by Pie21
    I have a bit of a memory leak issue in my Flex application, and the short version of my question is: is there any way (in ActionScript 3) to find all live references to a given object? What I have is a number of views with presentation models behind each of them (using Swiz). The views of interest are children of a TabNavigator, so when I close the tab, the view is removed from the stage. When the view is removed from the stage, Swiz sets the model reference in the view to null, as it should. I also call removeAllChildren() on the view. However, when profiling the application, after I do this and run a GC, neither the view nor the presentation model is freed (though both set their references to each other to null). One model object used by the view (not a presenter, though) IS freed, so it's not completely broken. I've only just started profiling today (firmly believing in not optimising too early), so I imagine there's some kind of reference floating around somewhere, but I can't see where, and what would be super helpful would be the ability to debug and see a list of objects that reference the target object. Is this at all possible, and if not natively, is there some lightweight way to code this into future apps for debugging purposes? Cheers.

    Read the article

  • Why does GC not clear the Dialog references?

    - by Pavel
    I have a dialog. Every time I create it and then dispose of it, it stays in memory. There seems to be a memory leak somewhere, but I can't figure it out. Do you have any ideas? See the screenshot of the heap dump for more information. Thanks in advance. http://img441.imageshack.us/img441/5764/leak.png
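
    A hypothetical Java Swing illustration (not taken from the post) of one common way a disposed dialog stays strongly reachable: a running javax.swing.Timer is referenced by the shared Swing timer queue, and its listener holds the enclosing dialog, so the dialog cannot be collected until the timer is stopped.

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import javax.swing.JDialog;
        import javax.swing.Timer;

        public class LeakyDialog extends JDialog {
            // The anonymous listener captures "this", so while the timer is running
            // the shared timer queue indirectly keeps the dialog reachable.
            private final Timer refreshTimer = new Timer(1000, new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    repaint();
                }
            });

            public LeakyDialog() {
                refreshTimer.start();
            }

            @Override
            public void dispose() {
                refreshTimer.stop(); // without this, the dialog remains reachable after dispose()
                super.dispose();
            }
        }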

    Read the article

  • What could be the reason for continuous full GCs during application startup?

    - by Kumar225
    What could be the reason for continuous full GCs during startup of a web application deployed on Tomcat? JDK 1.6. Memory settings: -Xms1024M -Xmx1024M -XX:PermSize=200M -XX:MaxPermSize=512M -XX:+UseParallelOldGC. The jmap output is below.
    Heap Configuration:
        MinHeapFreeRatio = 40
        MaxHeapFreeRatio = 70
        MaxHeapSize      = 1073741824 (1024.0MB)
        NewSize          = 2686976 (2.5625MB)
        MaxNewSize       = 17592186044415 MB
        OldSize          = 5439488 (5.1875MB)
        NewRatio         = 2
        SurvivorRatio    = 8
        PermSize         = 209715200 (200.0MB)
        MaxPermSize      = 536870912 (512.0MB)
    GC log:
        0.194: [GC [PSYoungGen: 10489K->720K(305856K)] 10489K->720K(1004928K), 0.0061190 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
        0.200: [Full GC (System) [PSYoungGen: 720K->0K(305856K)] [ParOldGen: 0K->594K(699072K)] 720K->594K(1004928K) [PSPermGen: 6645K->6641K(204800K)], 0.0516540 secs] [Times: user=0.10 sys=0.00, real=0.06 secs]
        6.184: [GC [PSYoungGen: 262208K->14797K(305856K)] 262802K->15392K(1004928K), 0.0354510 secs] [Times: user=0.18 sys=0.04, real=0.03 secs]
        9.549: [GC [PSYoungGen: 277005K->43625K(305856K)] 277600K->60736K(1004928K), 0.0781960 secs] [Times: user=0.56 sys=0.07, real=0.08 secs]
        11.768: [GC [PSYoungGen: 305833K->43645K(305856K)] 322944K->67436K(1004928K), 0.0584750 secs] [Times: user=0.40 sys=0.05, real=0.06 secs]
        15.037: [GC [PSYoungGen: 305853K->43619K(305856K)] 329644K->72932K(1004928K), 0.0688340 secs] [Times: user=0.42 sys=0.01, real=0.07 secs]
        19.372: [GC [PSYoungGen: 273171K->43621K(305856K)] 302483K->76957K(1004928K), 0.0573890 secs] [Times: user=0.41 sys=0.01, real=0.06 secs]
        19.430: [Full GC (System) [PSYoungGen: 43621K->0K(305856K)] [ParOldGen: 33336K->54668K(699072K)] 76957K->54668K(1004928K) [PSPermGen: 36356K->36296K(204800K)], 0.4569500 secs] [Times: user=1.77 sys=0.02, real=0.46 secs]
        19.924: [GC [PSYoungGen: 4280K->128K(305856K)] 58949K->54796K(1004928K), 0.0041070 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
        19.928: [Full GC (System) [PSYoungGen: 128K->0K(305856K)] [ParOldGen: 54668K->54532K(699072K)] 54796K->54532K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.3531480 secs] [Times: user=1.19 sys=0.10, real=0.35 secs]
        20.284: [GC [PSYoungGen: 4280K->64K(305856K)] 58813K->54596K(1004928K), 0.0040580 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
        20.288: [Full GC (System) [PSYoungGen: 64K->0K(305856K)] [ParOldGen: 54532K->54532K(699072K)] 54596K->54532K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.2360580 secs] [Times: user=1.01 sys=0.01, real=0.24 secs]
        20.525: [GC [PSYoungGen: 4280K->96K(305856K)] 58813K->54628K(1004928K), 0.0030960 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
        20.528: [Full GC (System) [PSYoungGen: 96K->0K(305856K)] [ParOldGen: 54532K->54533K(699072K)] 54628K->54533K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.2311320 secs] [Times: user=0.88 sys=0.00, real=0.23 secs]
        20.760: [GC [PSYoungGen: 4280K->96K(305856K)] 58814K->54629K(1004928K), 0.0034940 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
        20.764: [Full GC (System) [PSYoungGen: 96K->0K(305856K)] [ParOldGen: 54533K->54533K(699072K)] 54629K->54533K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.2381600 secs] [Times: user=0.85 sys=0.01, real=0.24 secs]
        21.201: [GC [PSYoungGen: 5160K->354K(305856K)] 59694K->54888K(1004928K), 0.0019950 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
        21.204: [Full GC (System) [PSYoungGen: 354K->0K(305856K)] [ParOldGen: 54533K->54792K(699072K)] 54888K->54792K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.2358570 secs] [Times: user=0.98 sys=0.01, real=0.24 secs]
        21.442: [GC [PSYoungGen: 4280K->64K(305856K)] 59073K->54856K(1004928K), 0.0022190 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
        21.444: [Full GC (System) [PSYoungGen: 64K->0K(305856K)] [ParOldGen: 54792K->54792K(699072K)] 54856K->54792K(1004928K) [PSPermGen: 36300K->36300K(204800K)], 0.2475970 secs] [Times: user=0.95 sys=0.00, real=0.24 secs]
        21.773: [GC [PSYoungGen: 11200K->741K(305856K)] 65993K->55534K(1004928K), 0.0030230 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
        21.776: [Full GC (System) [PSYoungGen: 741K->0K(305856K)] [ParOldGen: 54792K->54376K(699072K)] 55534K->54376K(1004928K) [PSPermGen: 36538K->36537K(204800K)], 0.2550630 secs] [Times: user=1.05 sys=0.00, real=0.25 secs]
        22.033: [GC [PSYoungGen: 4280K->96K(305856K)] 58657K->54472K(1004928K), 0.0032130 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
        22.036: [Full GC (System) [PSYoungGen: 96K->0K(305856K)] [ParOldGen: 54376K->54376K(699072K)] 54472K->54376K(1004928K) [PSPermGen: 36537K->36537K(204800K)], 0.2507170 secs] [Times: user=1.01 sys=0.01, real=0.25 secs]
        22.289: [GC [PSYoungGen: 4280K->96K(305856K)] 58657K->54472K(1004928K), 0.0038060 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
        22.293: [Full GC (System) [PSYoungGen: 96K->0K(305856K)] [ParOldGen: 54376K->54376K(699072K)] 54472K->54376K(1004928K) [PSPermGen: 36537K->36537K(204800K)], 0.2640250 secs] [Times: user=1.07 sys=0.02, real=0.27 secs]
        22.560: [GC [PSYoungGen: 4280K->128K(305856K)] 58657K->54504K(1004928K), 0.0036890 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
        22.564: [Full GC (System) [PSYoungGen: 128K->0K(305856K)] [ParOldGen: 54376K->54377K(699072K)] 54504K->54377K(1004928K) [PSPermGen: 36537K->36536K(204800K)], 0.2585560 secs] [Times: user=1.08 sys=0.01, real=0.25 secs]
        22.823: [GC [PSYoungGen: 4533K->96K(305856K)] 58910K->54473K(1004928K), 0.0020840 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
        22.825: [Full GC (System) [PSYoungGen: 96K->0K(305856K)] [ParOldGen: 54377K->54377K(699072K)] 54473K->54377K(1004928K) [PSPermGen: 36536K->36536K(204800K)], 0.2505380 secs] [Times: user=0.99 sys=0.01, real=0.25 secs]
        23.077: [GC [PSYoungGen: 4530K->32K(305856K)] 58908K->54409K(1004928K), 0.0016220 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
        23.079: [Full GC (System) [PSYoungGen: 32K->0K(305856K)] [ParOldGen: 54377K->54378K(699072K)] 54409K->54378K(1004928K) [PSPermGen: 36536K->36536K(204800K)], 0.2320970 secs] [Times: user=0.95 sys=0.00, real=0.23 secs]
        24.424: [GC [PSYoungGen: 87133K->800K(305856K)] 141512K->55179K(1004928K), 0.0038230 secs] [Times: user=0.01 sys=0.01, real=0.01 secs]
        24.428: [Full GC (System) [PSYoungGen: 800K->0K(305856K)] [ParOldGen: 54378K->54950K(699072K)] 55179K->54950K(1004928K) [PSPermGen: 37714K->37712K(204800K)], 0.4105190 secs] [Times: user=1.25 sys=0.17, real=0.41 secs]
        24.866: [GC [PSYoungGen: 4280K->256K(305856K)] 59231K->55206K(1004928K), 0.0041370 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
        24.870: [Full GC (System) [PSYoungGen: 256K->0K(305856K)] [ParOldGen: 54950K->54789K(699072K)] 55206K->54789K(1004928K) [PSPermGen: 37720K->37719K(204800K)], 0.4160520 secs] [Times: user=1.12 sys=0.19, real=0.42 secs]
        29.041: [GC [PSYoungGen: 262208K->12901K(275136K)] 316997K->67691K(974208K), 0.0170890 secs] [Times: user=0.11 sys=0.00, real=0.02 secs]

    Read the article

  • How to make Android not recycle my Bitmap until I no longer need it?

    - by RankoR
    I'm getting the drawing cache of the view that is set as the content view of the Activity. Then I set a new content view on the activity and pass that drawing cache to it. But Android recycles my bitmaps and I'm getting this exception:
        06-13 01:58:04.132: E/AndroidRuntime(15106): java.lang.RuntimeException: Canvas: trying to use a recycled bitmap android.graphics.Bitmap@40e72dd8
    Any way to fix it? I had the idea of extending the Bitmap class, but it's final. Why is the GC recycling it?
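
    A minimal sketch of a common workaround, meant to run inside the Activity before switching content views (the variable names and layout id are illustrative): copy the drawing cache into a bitmap you own, so it stays valid even after the view recycles its own cache.

        // Inside the Activity, before switching to the new content view:
        View contentRoot = findViewById(android.R.id.content);
        contentRoot.setDrawingCacheEnabled(true);
        contentRoot.buildDrawingCache();
        Bitmap cache = contentRoot.getDrawingCache();                  // owned and managed by the view
        // (getDrawingCache() can return null if the view has not been measured/laid out yet)
        Bitmap snapshot = cache.copy(Bitmap.Config.ARGB_8888, false);  // an independent copy we own
        contentRoot.setDrawingCacheEnabled(false);                     // the view may now recycle its cache
        setContentView(R.layout.next_screen);                          // hypothetical layout id
        // ... pass "snapshot" to the new view; recycle it yourself when it is no longer needed.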

    Read the article

  • What is the best way to find a process's memory allocations in terms of C# objects?

    - by Shantaram
    I have written various C# console-based applications, some long-running and some not, which can over time have a large memory footprint. When looking at the Windows performance monitor via the task manager, the same question keeps cropping up in my mind: how do I get a breakdown of the number of objects by type that are contributing to this footprint, and which of those are f-reachable versus those which aren't and hence can be collected? On numerous occasions I've performed a code inspection to ensure that I am not unnecessarily holding on to objects longer than required and that I am disposing of objects with the using construct. I have also recently looked at employing the GC.Collect method when I have released a large number of objects (for example, held in a collection which has just been cleared); however, I am not sure that this made much difference, so I threw that code away. I am guessing that there are tools in the Sysinternals suite that can help to resolve these memory questions, but I am not sure which, or how to use them. The alternative would be to pay for a third-party profiling tool such as JetBrains dotTrace, but I need to make sure that I've explored the free options first before going cap in hand to my manager.

    Read the article

  • for line in open(filename)

    - by foosion
    I frequently see Python code similar to:
        for line in open(filename):
            do_something(line)
    When does the file get closed with this code? Would it be better to write:
        with open(filename) as f:
            for line in f.readlines():
                do_something(line)

    Read the article

  • Full GC real time is much more than user+sys times

    - by Stas
    Hi. We have a Java web application running on JBoss with a maximum heap size of about 1.2 GB (total machine physical memory is 2 GB). At some point the application stops responding to clients for several minutes. After some analysis we found out that the culprit is the full GC. Here's an excerpt from the verbose GC log:
        74477.402: [Full GC [PSYoungGen: 3648K->0K(332160K)] [PSOldGen: 778476K->589497K(819200K)] 782124K->589497K(1151360K) [PSPermGen: 102671K->102671K(171328K)], 646.1546860 secs] [Times: user=3.84 sys=3.72, real=646.17 secs]
    What I don't understand is how it is possible that the real time spent on the full GC is about 11 minutes (646 seconds), while the user+sys times are just 7.5 seconds. 7.5 seconds sounds to me like a much more logical amount of time for cleaning <200 MB out of the old generation. Where does all the other time go? Thanks a lot.

    Read the article

  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid round trips. At startup we get a "large" graph (hundreds of thousands of objects of different kinds). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory usage didn't seem to fit with how much we should need "after" we're done initializing), and I end up with this report. What we can gather from the report is that:
    1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialization, but now that it's free, I'd like it to return to the OS).
    2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).
    3) I have no clue why the space is fragmented, because when I look at the large object heap there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in the code.
    So my question is: what can I do to go further with this? I'm not really sure what to look for in debugging/tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release. Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to release memory that is already free in .NET to the system, as well as to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large heap I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run. Thanks for your help! A few notes I forgot:
    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and likewise if I run it in a .NET 4 pool, convert it to .NET 4 and recompile.
    2) These are the results of a release build, so a debug build cannot be the issue.
    3) And this is probably quite important: I do not get these issues at all in the webdev server, only in IIS; in the webdev server I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!)

    Read the article

  • C# - What is a root reference?

    - by DotNetGuy
    class GCTest
    {
        static Object r1;

        static void Main()
        {
            r1 = new Object();
            Object r2 = new Object();
            Object r3 = new Object();

            System.GC.Collect(); // what can be reclaimed here?

            r1 = null;
            r3.ToString();

            System.GC.Collect(); // what can be reclaimed here?
        }
    }
    // code from Don Box's Essential .NET

    Read the article

  • What would happen to GC if I run process with priority = RealTime?

    - by Bobb
    I have a C# app which runs with priority RealTime. It was all fine until I made a few hectic changes in the past 2 days; now it runs out of memory in a few hours. I am trying to find out whether it is a memory leak I created, or whether this is because I consume a lot more objects than before and the GC simply can't collect them because it runs at the same priority. My question is: what could happen to the GC when it tries to collect objects in an application with RealTime priority (there is also at least one thread running with Highest thread priority)? (P.S. By real-time priority I mean Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.RealTime.) Sorry, I forgot to mention: the GC is in server mode.

    Read the article

  • .NET object creation and generations

    - by nimoraca
    Is there a way to tell .NET to allocate a new object directly in the generation 2 heap? I have a problem where I need to allocate approximately 200 MB of objects, do something with them, and throw them away. What happens here is that all the data gets copied twice (from gen0 to gen1 and then from gen1 to gen2).

    Read the article

  • What can cause a large discrepancy between minor GC time and total pause time?

    - by cxcg
    We have a latency-sensitive application, and are experiencing some GC-related pauses we don't fully understand. We occasionally have a minor GC that results in application pause times that are much longer than the reported GC time itself. Here is an example log snippet:
        485377.257: [GC 485378.857: [ParNew: 105845K->621K(118016K), 0.0028070 secs] 136492K->31374K(1035520K), 0.0028720 secs] [Times: user=0.01 sys=0.00, real=1.61 secs]
        Total time for which application threads were stopped: 1.6032830 seconds
    The total pause time here is orders of magnitude longer than the reported GC time. These are isolated and occasional events: the immediately preceding and succeeding minor GC events do not show this large discrepancy. The process is running on a dedicated machine, with lots of free memory, 8 cores, running Red Hat Enterprise Linux ES Release 4 Update 8 with kernel 2.6.9-89.0.1EL-smp. We have observed this with (32 bit) JVM versions 1.6.0_13 and 1.6.0_18. We are running with these flags: -server -ea -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:NewSize=128m -XX:MaxNewSize=128m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:-TraceClassUnloading Can anybody offer some explanation as to what might be going on here, and/or some avenues for further investigation?

    Read the article
