Search Results

Search found 1329 results on 54 pages for 'garbage collecting'.

Page 5/54 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How can I write a unit test to determine whether an object can be garbage collected?

    - by driis
    In relation to my previous question, I need to check whether a component that will be instantiated by Castle Windsor can be garbage collected after my code has finished using it. I have tried the suggestion in the answers to the previous question, but it does not seem to work as expected, at least for my code. So I would like to write a unit test that tests whether a specific object instance can be garbage collected after some of my code has run. Is it possible to do that in a reliable way? EDIT: I currently have the following test, based on Paul Stovell's answer, which succeeds:

        [TestMethod]
        public void ReleaseTest()
        {
            WindsorContainer container = new WindsorContainer();
            container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
            container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);
            Assert.AreEqual(0, ReleaseTester.refCount);
            var weakRef = new WeakReference(container.Resolve<ReleaseTester>());
            Assert.AreEqual(1, ReleaseTester.refCount);
            GC.Collect();
            GC.WaitForPendingFinalizers();
            Assert.AreEqual(0, ReleaseTester.refCount, "Component not released");
        }

        private class ReleaseTester
        {
            public static int refCount = 0;

            public ReleaseTester()
            {
                refCount++;
            }

            ~ReleaseTester()
            {
                refCount--;
            }
        }

    Am I right in assuming that, based on the test above, I can conclude that Windsor will not leak memory when using the NoTrackingReleasePolicy?
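
    A variation that avoids relying on finalizer side effects is to assert on a WeakReference directly. This is a minimal sketch, reusing the Windsor registration and ReleaseTester class from the test above and assuming System.Runtime.CompilerServices is imported for the NoInlining attribute:

        [TestMethod]
        public void ReleaseTest_WeakReference()
        {
            WindsorContainer container = new WindsorContainer();
            container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
            container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);

            WeakReference weakRef = ResolveWeakly(container);

            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            Assert.IsFalse(weakRef.IsAlive, "Component is still strongly reachable");
        }

        // Resolving in a separate, non-inlined method keeps the JIT from extending
        // the lifetime of the resolved instance across the GC.Collect() calls.
        [MethodImpl(MethodImplOptions.NoInlining)]
        private static WeakReference ResolveWeakly(WindsorContainer container)
        {
            return new WeakReference(container.Resolve<ReleaseTester>());
        }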

    Read the article

  • Are C or C++ the Only Viable Languages for Writing a GC?

    - by user95312
    Background: I have just finished writing a compiler for a functional language that compiles to the JVM, as a learning project. However, since I'm just doing this to learn, I thought it might be interesting to write a native backend and an RTS for it. As I've been planning out what this new backend will look like, the one point I'm stumbling on is the garbage collector. I've implemented the compiler in Haskell, but I have no desire to write the GC in Haskell since, while it may be possible, it'd suck. Question: I've looked at several FOSS garbage collectors prior to posting, and most of them were implemented in good old ANSI C. Is this still the most accepted choice for writing a GC nowadays? I've seen that this site tends to frown upon questions with multiple answers, so I hope this will make it more specific: if some startup were writing a professional-grade GC today, are the only viable choices for them C or C++? It's my first question here, so please comment and let me know if this question is ill-suited for Programmers.

    Read the article

  • Why is free() not allowed in garbage-collected languages?

    - by sundar
    I was reading the C# entry on Wikipedia, and came across: Managed memory cannot be explicitly freed; instead, it is automatically garbage collected. Why is it that in languages with automatic memory management, manual management isn't even allowed? I can see that in most cases it wouldn't be necessary, but wouldn't it come in handy where you are tight on memory and don't want to rely on the GC being smart?
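
    For comparison, what managed languages do offer is deterministic cleanup of resources (not memory) through IDisposable/using, plus a way to suggest a collection. A small illustrative C# sketch; the PixelBuffer class and its size are made up:

        using System;

        class PixelBuffer : IDisposable
        {
            private byte[] data = new byte[1024 * 1024];

            public int Length { get { return data.Length; } }

            // Deterministic cleanup is allowed for *resources* via Dispose/using,
            // but the managed memory itself is reclaimed only when the GC runs.
            public void Dispose()
            {
                data = null;
            }
        }

        class Program
        {
            static void Main()
            {
                using (var buf = new PixelBuffer())
                {
                    Console.WriteLine(buf.Length); // work with the buffer
                } // Dispose runs here; no memory has actually been freed yet

                // The closest thing to free(): drop all references and *suggest* a collection.
                // There is no way to force an individual object to be reclaimed.
                GC.Collect();
            }
        }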

    Read the article

  • What's a good library to do computational geometry (like CGAL) in a garbage-collected language?

    - by Squash Monster
    I need a library to handle computational geometry in a project, especially boolean operations, but just about every feature is useful. The best library I can find for this is CGAL, but this is the sort of project I would hesitate to make without garbage collection. What language/library pairs can you recommend? So far my best bet is importing CGAL into D. There is also a project for making Python bindings for CGAL, but it's very incomplete.

    Read the article

  • JavaScript: How can I determine what objects are being collected by the garbage collector?

    - by shino
    I have significant garbage collection pauses. I'd like to pinpoint the objects most responsible for this collection before I try to fix the problem. I've looked at the heap snapshot on Chrome, but (correct me if I am wrong) I cannot seem to find any indicator of what is being collected, only what is taking up the most memory. Is there a way to answer this empirically, or am I limited to educated guesses?

    Read the article

  • How to get rid of garbage in an array of chars?

    - by fang_dejavu
    Hi, I'm writing a C program but I keep having problems with my array of chars. I keep getting garbage when I print it using printf. Here is an example of what I get when I print it:

        char at t.symbol is Aôÿ¿
        char at tabl[0].symbol is A
        char at tabl[1].symbol is a
        char at tabl[2].symbol is a
        char at tabl[3].symbol is d
        char at tabl[4].symbol is e
        char at tabl[5].symbol is f
        char at tabl[6].symbol is g
        char at tabl[7].symbol is h
        char at tabl[8].symbol is i
        char at tabl[9].symbol is x
        char at t[0].symbol is a0AÃ
        char at t[1].symbol is b)@Ã4
        char at t[2].symbol is ckU*
        char at t[3].symbol is Aôÿ¿
        char at t[4].symbol is gØ

    Could someone tell me how to get rid of the garbage in the array of chars? Here is my code:

        #include <stdio.h>

        #define MAX 100
        #ifndef SYMBSIZE
        #define SYMBSIZE 1
        #endif

        typedef struct tableme
        {
            char symbol[SYMBSIZE];
            int value;
            int casenmbr;
            int otherinfo;
        } tabletype;

        int main(int argc, char **argv)
        {
            tabletype t[MAX];
            t[3].symbol[0] = 'A';
            t[0].value = 1;
            t[0].casenmbr = 7;
            t[0].otherinfo = 682;

            tabletype tabl[MAX];
            tabl[0].value = 1;  tabl[0].symbol[0] = 'A';
            tabl[1].value = 11; tabl[1].symbol[0] = 'a';
            tabl[2].value = 12; tabl[2].symbol[0] = 'a';
            tabl[3].value = 13; tabl[3].symbol[0] = 'd';
            tabl[4].value = 14; tabl[4].symbol[0] = 'e';
            tabl[5].value = 15; tabl[5].symbol[0] = 'f';
            tabl[6].value = 16; tabl[6].symbol[0] = 'g';
            tabl[7].value = 17; tabl[7].symbol[0] = 'h';
            tabl[8].symbol[0] = 'i';
            tabl[9].symbol[0] = 'x';

            t[1].symbol[0] = 'b';
            t[0].symbol[0] = 'a';
            t[2].symbol[0] = 'c';
            t[4].symbol[0] = 'g';

            printf("char at t.symbol is %s \n", t[3].symbol);

            int x;
            for (x = 0; x < 10; x++)
            {
                printf("char at tabl[%d].symbol is %s \n", x, tabl[x].symbol);
            }

            int j;
            for (j = 0; j < 5; j++)
            {
                printf("char at t[%d].symbol is %s \n", j, t[j].symbol);
            }

            return 0;
        }

    Read the article

  • What garbage collection algorithms do all 5 major browsers use?

    - by Martin Wittemann
    I am currently rethinking the object dispose handling of the qooxdoo JavaScript framework. Have a look at the following diagram (A is currently in scope): Let's say we want to delete B. Generally, we cut all references between all objects. This means we cut connections 1 to 5 in the example. Is this really necessary? As far as I have read here [1], browsers use the mark-and-sweep algorithm. In that case, we would just need to cut reference 1 (the connection to the scope) and reference 5 (the connection to the DOM), which could be much faster. But can I be sure that all browsers use the mark-and-sweep algorithm or something similar? [1] http://stackoverflow.com/questions/864516/what-is-javascript-garbage-collection

    Read the article

  • Does allocation speed depend on the garbage collector being used?

    - by jkff
    My app is allocating a ton of objects (1 million per second; most objects are byte arrays of size ~80-100 and strings of the same size) and I think it might be the source of its poor performance. The app's working set is only tens of megabytes. Profiling the app shows that GC time is negligibly small. However, I suspect that perhaps the allocation procedure depends on which GC is being used, and that some settings might make allocation faster or perhaps have a positive influence on cache hit rate, etc. Is that so? Or is allocation performance independent of GC settings, under the assumption that garbage collection itself takes little time?

    Read the article

  • Does invoking System.gc() in Java suggest garbage collection of the tenured generation as well as the young generation?

    - by Markus Jevring
    When invoking System.gc() in java (via JMX), it will dutifully (attempt to) clean the young generation. This generally works very well. I have never seen it attempt to clean the tenured generation, though. This leads me to two questions: Can the tenured generation even be collected (i.e. is there actually garbage in this generation, or do all objects in the tenured generation actually still have live references to them)? If the tenured generation can be collected, can this be done via System.gc(), or is there another way to do it (unlikely), or will I simply have to wait until I run out of space in the tenured generation?

    Read the article

  • How do I get .NET to garbage collect aggressively?

    - by mmr
    I have an application that is used in image processing, and I find myself typically allocating arrays in the 4000x4000 ushort size, as well as the occasional float and the like. Currently, the .NET framework tends to crash in this app apparently randomly, almost always with an out of memory error. 32mb is not a huge declaration, but if .NET is fragmenting memory, then it's very possible that such large continuous allocations aren't behaving as expected. Is there a way to tell the garbage collector to be more aggressive, or to defrag memory (if that's the problem)? I realize that there's the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling dll routines that use native code a lot, but I'm not sure. I've gone over that C++ code, and make sure that any memory I declare I delete, but still I get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave behind memory because it once interacted with a native call-- is that possible? If so, can I turn that functionality off? EDIT: Here is some very specific code that will cause the crash. According to this SO question, I do not need to be disposing of the BitmapSource objects here. Here is the naive version, no GC.Collects in it. It generally crashes on iteration 4 to 10 of the undo procedure. This code replaces the constructor in a blank WPF project, since I'm using WPF. I do the wackiness with the bitmapsource because of the limitations I explained in my answer to @dthorpe below as well as the requirements listed in this SO question. public partial class Window1 : Window { public Window1() { InitializeComponent(); //Attempts to create an OOM crash //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops int theRows = 4000, currRows; int theColumns = 4000, currCols; int theMaxChange = 30; int i; List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack byte[] displayBuffer = null;//the buffer used as a bitmap source BitmapSource theSource = null; for (i = 0; i < theMaxChange; i++) { currRows = theRows - i; currCols = theColumns - i; theList.Add(new ushort[(theRows - i) * (theColumns - i)]); displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create(currCols, currRows, 96, 96, PixelFormats.Gray8, null, displayBuffer, (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8); System.Console.WriteLine("Got to change " + i.ToString()); System.Threading.Thread.Sleep(100); } //should get here. If not, then theMaxChange is too large. //Now, go back up the undo stack. for (i = theMaxChange - 1; i >= 0; i--) { displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create((theColumns - i), (theRows - i), 96, 96, PixelFormats.Gray8, null, displayBuffer, ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8); System.Console.WriteLine("Got to undo change " + i.ToString()); System.Threading.Thread.Sleep(100); } } } Now, if I'm explicit in calling the garbage collector, I have to wrap the entire code in an outer loop to cause the OOM crash. 
For me, this tends to happen around x = 50 or so: public partial class Window1 : Window { public Window1() { InitializeComponent(); //Attempts to create an OOM crash //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops for (int x = 0; x < 1000; x++){ int theRows = 4000, currRows; int theColumns = 4000, currCols; int theMaxChange = 30; int i; List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack byte[] displayBuffer = null;//the buffer used as a bitmap source BitmapSource theSource = null; for (i = 0; i < theMaxChange; i++) { currRows = theRows - i; currCols = theColumns - i; theList.Add(new ushort[(theRows - i) * (theColumns - i)]); displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create(currCols, currRows, 96, 96, PixelFormats.Gray8, null, displayBuffer, (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8); } //should get here. If not, then theMaxChange is too large. //Now, go back up the undo stack. for (i = theMaxChange - 1; i >= 0; i--) { displayBuffer = new byte[theList[i].Length]; theSource = BitmapSource.Create((theColumns - i), (theRows - i), 96, 96, PixelFormats.Gray8, null, displayBuffer, ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8); GC.WaitForPendingFinalizers();//force gc to collect, because we're in scenario 2, lots of large random changes GC.Collect(); } System.Console.WriteLine("Got to changelist " + x.ToString()); System.Threading.Thread.Sleep(100); } } } If I'm mishandling memory in either scenario, if there's something I should spot with a profiler, let me know. That's a pretty simple routine there. Unfortunately, it looks like @Kevin's answer is right-- this is a bug in .NET and how .NET handles objects larger than 85k. This situation strikes me as exceedingly strange; could Powerpoint be rewritten in .NET with this kind of limitation, or any of the other Office suite applications? 85k does not seem to me to be a whole lot of space, and I'd also think that any program that uses so-called 'large' allocations frequently would become unstable within a matter of days to weeks when using .NET. EDIT: It looks like Kevin is right, this is a limitation of .NET's GC. For those who don't want to follow the entire thread, .NET has four GC heaps: gen0, gen1, gen2, and LOH (Large Object Heap). Everything that's 85k or smaller goes on one of the first three heaps, depending on creation time (moved from gen0 to gen1 to gen2, etc). Objects larger than 85k get placed on the LOH. The LOH is never compacted, so eventually, allocations of the type I'm doing will eventually cause an OOM error as objects get scattered about that memory space. We've found that moving to .NET 4.0 does help the problem somewhat, delaying the exception, but not preventing it. To be honest, this feels a bit like the 640k barrier-- 85k ought to be enough for any user application (to paraphrase this video of a discussion of the GC in .NET). For the record, Java does not exhibit this behavior with its GC.
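
    For readers hitting the same large object heap fragmentation on newer runtimes: .NET Framework 4.5.1 and later (and .NET Core/.NET) expose an opt-in switch that compacts the LOH on the next full blocking collection; it was not available when this question was written. A minimal sketch, assuming a runtime new enough to have GCSettings.LargeObjectHeapCompactionMode:

        using System;
        using System.Runtime;

        class LohCompactionExample
        {
            static void Main()
            {
                // Allocate and drop some large (>85,000 byte) arrays so they land on the LOH.
                for (int i = 0; i < 10; i++)
                {
                    var big = new ushort[4000 * 4000]; // ~32 MB, goes to the Large Object Heap
                    GC.KeepAlive(big);
                }

                // Request that the next blocking full collection also compacts the LOH
                // (opt-in and one-shot; the setting resets after the collection runs).
                GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
                GC.Collect();
            }
        }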

    Read the article

  • C#: creating a class having objects as member variables? I think the objects are garbage collected

    - by Bryan
    So I have a class that has the following member variables. I have get and set functions for every piece of data in this class.

        public class NavigationMesh
        {
            public Vector3 node;
            int weight;
            bool isWall;
            bool hasTreasure;

            public NavigationMesh(int x, int y, int z, bool setWall, bool setTreasure)
            {
                //default constructor
                //Console.WriteLine(x + " " + y + " " + z);
                node = new Vector3(x, y, z);
                //Console.WriteLine(node.X + " " + node.Y + " " + node.Z);
                isWall = setWall;
                hasTreasure = setTreasure;
                weight = 1;
            }// end constructor

            public float getX() { Console.WriteLine(node.X); return node.X; }
            public float getY() { Console.WriteLine(node.Y); return node.Y; }
            public float getZ() { Console.WriteLine(node.Z); return node.Z; }
            public bool getWall() { return isWall; }
            public void setWall(bool item) { isWall = item; }
            public bool getTreasure() { return hasTreasure; }
            public void setTreasure(bool item) { hasTreasure = item; }
            public int getWeight() { return weight; }
        }// end class

    In another class, I have a 2-dimensional array that looks like this:

        NavigationMesh[,] mesh;
        mesh = new NavigationMesh[502, 502];

    I use a double for loop to assign this. My problem is that I cannot get the data I need out of the Vector3 node object with my "getters" after I create the objects in my array. I've tried making the Vector3 a static variable, however I think it then refers to the last instance of the object. How do I keep all of these objects in memory? I think they're being garbage collected. Any thoughts?
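
    In case it helps narrow things down: a C# array of a reference type starts out full of null references, so every element has to be constructed before its getters can be called, and the instances stay reachable (and therefore uncollectable) for as long as the array itself is reachable. A minimal sketch of the fill-and-read pattern, assuming the NavigationMesh class above; the wall/treasure arguments are placeholders:

        NavigationMesh[,] mesh = new NavigationMesh[502, 502];

        for (int x = 0; x < 502; x++)
        {
            for (int z = 0; z < 502; z++)
            {
                // Each cell must get its own instance; until this runs, mesh[x, z] is null.
                mesh[x, z] = new NavigationMesh(x, 0, z, false, false);
            }
        }

        // Reading back later: the instances are reachable through 'mesh',
        // so the GC cannot collect them while 'mesh' itself is reachable.
        float firstX = mesh[0, 0].getX();
        float lastZ = mesh[501, 501].getZ();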

    Read the article

  • How do I get C# to garbage collect aggressively?

    - by mmr
    I have an application that is used in image processing, and I find myself typically allocating arrays in the 4000x4000 ushort size, as well as the occasional float and the like. Currently, the .NET framework tends to crash in this app apparently randomly, almost always with an out of memory error. 32mb is not a huge declaration, but if .NET is fragmenting memory, then it's very possible that such large continuous allocations aren't behaving as expected. Is there a way to tell the garbage collector to be more aggressive, or to defrag memory (if that's the problem)? I realize that there's the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling dll routines that use native code a lot, but I'm not sure. I've gone over that C++ code, and make sure that any memory I declare I delete, but still I get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave behind memory because it once interacted with a native call-- is that possible? If so, can I turn that functionality off?
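
    A mitigation that does not depend on GC behavior at all is to allocate the large buffers once and reuse them, so the large object heap never has to satisfy a stream of fresh ~32 MB requests. A rough sketch under that assumption; the pool class and names here are made up:

        using System;

        class FrameBufferPool
        {
            // One 4000x4000 ushort buffer (~32 MB), allocated once and reused,
            // instead of a fresh allocation per image operation.
            private readonly ushort[] buffer = new ushort[4000 * 4000];

            public ushort[] Rent()
            {
                Array.Clear(buffer, 0, buffer.Length); // reset previous contents
                return buffer;
            }
        }

        class Example
        {
            static void Main()
            {
                var pool = new FrameBufferPool();
                for (int i = 0; i < 1000; i++)
                {
                    ushort[] frame = pool.Rent(); // no new large allocation per iteration
                    frame[0] = 42;                // ... image processing work ...
                }
            }
        }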

    Read the article

  • Zabbix Proxy not collecting data

    - by Jordan Eunson
    I have a working Zabbix 1.8.2 server collecting data for our office and our colo facility. However, the link between the colo and the office is flaky. What I'm trying to do is set up a proxy on the colo side with a 1-hour cache that relays the data to our primary server at the office. Our Zabbix server is compiled from source and uses a MySQL database. I've followed the instructions found in the Zabbix documentation to compile the proxy using a SQLite3 database. I add the proxy to Zabbix under Administration-DM-Proxies. The Zabbix server "sees" the proxy because the "last seen" field is always under 60s. However, when I assign a colo host to the proxy I stop receiving data from it. The colo host's zabbix_agentd.log file says this:

        29343:20100622:124847 Timeout while answering request
        29343:20100622:124847 Getting list of active checks failed. Will retry after 60 seconds

    The zabbix_proxy.log says this:

        2041:20100622:123131.760 Deleted 0 records from history [0.000994 seconds]
        2028:20100622:124131.671 Error while receiving answer from server [ZBX_TCP_READ() failed

    I am also unable to receive any SNMP data, which is more important to me than the Zabbix agent data. Has anyone had this problem before?

    Zabbix Server OS: CentOS 5.4
    Zabbix Server Build: 1.8.2 from source
    Zabbix Proxy OS: CentOS 5.4
    Zabbix Proxy Build: 1.8.2 from source

    P.S. The SQLite database on the Zabbix proxy never gets any data written to it; it is identical to when I created it from the blank schema in zabbix-1.8.2/create/schema. (Yes, I've checked the permissions.)

    Read the article

  • Zabbix Proxy not collecting data

    - by syntaxcollector
    Hi all. I have a working Zabbix 1.8.2 server collecting data for our office and our colo facility. However, the link between the colo and the office is flaky. What I'm trying to do is set up a proxy on the colo side with a 1-hour cache that relays the data to our primary server at the office. Our Zabbix server is compiled from source and uses a MySQL database. I've followed the instructions found in the Zabbix documentation to compile the proxy using a SQLite3 database. I add the proxy to Zabbix under Administration-DM-Proxies. The Zabbix server "sees" the proxy because the "last seen" field is always under 60s. However, when I assign a colo host to the proxy I stop receiving data from it. The colo host's zabbix_agentd.log file says this:

        29343:20100622:124847 Timeout while answering request
        29343:20100622:124847 Getting list of active checks failed. Will retry after 60 seconds

    The zabbix_proxy.log says this:

        2041:20100622:123131.760 Deleted 0 records from history [0.000994 seconds]
        2028:20100622:124131.671 Error while receiving answer from server [ZBX_TCP_READ() failed

    I am also unable to receive any SNMP data, which is more important to me than the Zabbix agent data. Has anyone had this problem before?

    Zabbix Server OS: CentOS 5.4
    Zabbix Server Build: 1.8.2 from source
    Zabbix Proxy OS: CentOS 5.4
    Zabbix Proxy Build: 1.8.2 from source

    P.S. The SQLite database on the Zabbix proxy never gets any data written to it; it is identical to when I created it from the blank schema in zabbix-1.8.2/create/schema. (Yes, I've checked the permissions.)

    Read the article

  • Investigating .NET Memory Management and Garbage Collection

    Investigating a subtle memory leak can be tricky business, but things are made easier by using the .NET Framework's tool SOS (Son of Strike), which is a debugger extension for debugging managed code, used in collaboration with the Windows debugger.

    Read the article

  • Techniques to refactor garbage code and maintain sanity?

    - by Incognito
    So I'm sitting down to a nice bowl of C# spaghetti and need to add something or remove something... but I have challenges everywhere: functions passing arguments that don't make sense, someone who doesn't understand data structures abusing strings, redundant variables, comments that are red herrings, internationalization done separately at every output point, SQL that doesn't use any kind of DBAL, database connections left open everywhere... Are there any tools or techniques I can use to at least keep track of the "functional integrity" of the code (meaning my "improvements" don't break it), or a resource online with common "bad patterns" that explains a good way to transition the code? I'm basically looking for a guidebook on how to spin straw into gold. Here are some samples from the same 500-line function (a characterization-test sketch follows after them):

        protected void DoSave(bool cIsPostBack)
        {
            //ALWAYS a cPostBack
            cIsPostBack = true;
            SetPostBack("1");
            string inCreate = "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~";
            parseValues = new string []{"","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","",""};
            if (!cIsPostBack)
            {
                //.......
                //....
                //....
                if (!cIsPostBack)
                {
                }
                else
                {
                }
                //....
                //....
                strHPhone = StringFormat(s1.Trim());
                s1 = parseValues[18].Replace(encStr, " ");
                strWPhone = StringFormat(s1.Trim());
                s1 = parseValues[11].Replace(encStr, " ");
                strWExt = StringFormat(s1.Trim());
                s1 = parseValues[21].Replace(encStr, " ");
                strMPhone = StringFormat(s1.Trim());
                s1 = parseValues[19].Replace(encStr, " ");
                //(hundreds of lines of this)
                //....
                //....
                SQL = "...... lots of SQL .... ";
                SqlCommand curCommand;
                curCommand = new SqlCommand();
                curCommand.Connection = conn1;
                curCommand.CommandText = SQL;
                try
                {
                    curCommand.ExecuteNonQuery();
                }
                catch {}
                //....
            }

    I've never had to refactor something like this before, and I want to know if there's something like a guidebook or knowledge base on how to do this sort of thing: finding common bad patterns and offering the best solutions to repair them. I don't want to just nuke it from orbit,
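
    On the "functional integrity" point, one standard technique is a characterization (golden master) test: record what the untouched code produces for a fixed set of inputs, and assert after every refactoring step that the output has not changed. The sketch below uses entirely hypothetical names (LegacyFormatPhone and the golden file path); the real DoSave writes to a database, so it would first need a seam that captures its SQL instead of executing it:

        using System.IO;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class CharacterizationTests
        {
            // Stand-in for a legacy routine whose exact behavior nobody fully understands.
            // (Hypothetical; the real target would be DoSave behind a SQL-capturing seam.)
            static string LegacyFormatPhone(string raw) =>
                raw == null ? "" : raw.Replace("~", " ").Trim();

            [TestMethod]
            public void LegacyFormatPhone_MatchesRecordedGoldenMaster()
            {
                // The golden master file is generated once from the *unmodified* code and
                // checked in; every refactoring step must reproduce it exactly.
                string[] inputs = { null, " 555-1234 ", "~555~1234~" };
                string actual = string.Join("|", System.Array.ConvertAll(inputs, LegacyFormatPhone));

                string expected = File.ReadAllText("golden/format_phone.txt");
                Assert.AreEqual(expected, actual);
            }
        }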

    Read the article

  • Why did the team at LMAX use Java and design the architecture to avoid GC at all costs?

    - by kadaj
    Why did the team at LMAX design the LMAX Disruptor in Java when all of their design points to minimizing GC use? If one does not want to have the GC run, then why use a garbage-collected language? Their optimizations, their level of hardware knowledge, and the thought they put in are just awesome, but why Java? I'm not against Java or anything, but why a GC language? Why not use something like D, or any other language without GC that still allows efficient code? Is it that the team is most familiar with Java, or does Java possess some unique advantage that I am not seeing? Say they developed it using D with manual memory management; what would be the difference? They would have to think low-level (which they already do), but they could squeeze the best performance out of the system since it's native.

    Read the article

  • Can I trigger PHP garbage collection to happen automatically if I have circular references?

    - by Beau Simensen
    I seem to recall a way to set up the __destruct for a class in such a way that it would ensure that circular references are cleaned up as soon as the outside object falls out of scope. However, the simple test I built seems to indicate that this is not behaving as I had expected/hoped. Is there a way to set up my classes in such a way that PHP would clean them up correctly when the outermost object falls out of scope? I am not looking for alternate ways to write this code; I am looking for whether or not this can be done, and if so, how? I generally try to avoid these types of circular references where possible.

        class Bar {
            private $foo;
            public function __construct($foo) {
                $this->foo = $foo;
            }
            public function __destruct() {
                print "[destroying bar]\n";
                unset($this->foo);
            }
        }

        class Foo {
            private $bar;
            public function __construct() {
                $this->bar = new Bar($this);
            }
            public function __destruct() {
                print "[destroying foo]\n";
                unset($this->bar);
            }
        }

        function testGarbageCollection() {
            $foo = new Foo();
        }

        for ( $i = 0; $i < 25; $i++ ) {
            echo memory_get_usage() . "\n";
            testGarbageCollection();
        }

    The output looks like this:

        60440
        61504
        62036
        62564
        63092
        63620
        [ destroying foo ]
        [ destroying bar ]
        [ destroying foo ]
        [ destroying bar ]
        [ destroying foo ]
        [ destroying bar ]
        [ destroying foo ]
        [ destroying bar ]
        [ destroying foo ]
        [ destroying bar ]

    What I had hoped for:

        60440
        [ destroying foo ]
        [ destroying bar ]
        60440
        [ destroying foo ]
        [ destroying bar ]
        60440
        [ destroying foo ]
        [ destroying bar ]
        60440
        [ destroying foo ]
        [ destroying bar ]
        60440
        [ destroying foo ]
        [ destroying bar ]
        60440
        [ destroying foo ]
        [ destroying bar ]

    Read the article

  • How does the Garbage Collector decide when to kill objects held by WeakReferences?

    - by Kennet Belenky
    I have an object, which I believe is held only by a WeakReference. I've traced its reference holders using SOS and SOSEX, and both confirm that this is the case (I'm not an SOS expert, so I could be wrong on this point). The standard explanation of WeakReferences is that the GC ignores them when doing its sweeps. Nonetheless, my object survives an invocation to GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced). Is it possible for an object that is only referenced with a WeakReference to survive that collection? Is there an even more thorough collection that I can force? Or, should I re-visit my belief that the only references to the object are weak?
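
    One detail worth ruling out first: an object with a finalizer survives the collection that first finds it unreachable, because it is parked on the finalization queue until its finalizer has run; only a later collection can actually reclaim it. Below is a minimal sketch of the usual two-pass check; the 'suspect' object is a stand-in for the real one:

        using System;

        class WeakSurvivalCheck
        {
            static void Main()
            {
                object suspect = new object();
                var weak = new WeakReference(suspect);

                suspect = null; // drop the (known) strong reference

                // Two passes: the first collection may only queue the object for finalization;
                // the second can reclaim it once finalizers have run.
                GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
                GC.WaitForPendingFinalizers();
                GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);

                Console.WriteLine(weak.IsAlive
                    ? "Still alive: something (or a pending finalizer) holds it."
                    : "Collected: only weak references remained.");
            }
        }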

    Read the article

  • dotTrace: Dead vs. Garbage Objects

    - by Moshe
    After reading the dotTrace documentation I realized that: Dead objects are objects deleted before the end point of the snapshot. Garbage objects are objects allocated after the starting point and deleted before the end point; in other words, "Garbage objects" is a subset of "Dead objects". But after doing some profiling sessions, I could see that sometimes the number of "Garbage objects" is far greater than the number of "Dead objects" of the same class (for example System.String). How should I interpret this phenomenon?

    Read the article
