Search Results

Search found 12476 results on 500 pages for 'memory leak detector'.


  • How to find number of memory accesses

    - by Sharat Chandra
    Can anybody tell me a unix command that can be used to find the number of memory accesses that took place in a given interval? vmstat, top, and sar only give the amount of physical memory occupied/available, but they do not give the number of memory accesses in a given interval.
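
    There is no standard Unix command that reports raw memory accesses, but hardware performance counters get close. A sketch using perf (assuming a Linux box with perf installed; cache-references counts last-level-cache accesses, so it is a proxy rather than an exact load/store count):

        # system-wide counts over a 10-second interval
        perf stat -e instructions,cache-references,cache-misses -a sleep 10

        # or scoped to a single process by PID
        perf stat -e cache-references,cache-misses -p <pid> sleep 10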

    Read the article

  • Memory Profiling: How to detect which application/package is consuming too much memory

    - by malvim
    Hi, I have a situation here at work where we run a JEE server with several applications deployed on it. Lately, we've been having frequent OutOfMemoryExceptions. We suspect some of the apps might be behaving badly, maybe leaking, or something. The problem is, we can't really tell which one. We have run some memory profilers (like YourKit), and they're pretty good at telling what classes use the most memory. But they don't show relationships between classes, so that leaves us with a situation like this: we see that there are, say, lots of Strings, int arrays, and HashMap entries, but we can't really tell which application or package they come from. Is there a way of knowing where these objects come from, so we can try to pinpoint the packages (or apps) that are allocating the most memory? Thank you in advance.
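
    One approach that often works in a container (a sketch assuming a HotSpot JVM; <pid> is the server's process id): pull a class histogram or heap dump with the stock JDK tools, then attribute objects by classloader, since a JEE server typically gives each deployed app its own loader:

        # top classes by live instance count and size
        jmap -histo:live <pid> | head -n 30

        # full dump of live objects for offline analysis
        jmap -dump:live,format=b,file=server-heap.hprof <pid>

    A dump analyzer that can group retained size by classloader (Eclipse MAT, for example) will then trace those Strings and HashMap entries back to the web application whose loader owns them.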

    Read the article

  • debugging a resource leak in a printer driver

    - by MK
    I'm trying to debug a memory leak in a printer driver. I'm pretty sure it's a resource leak, not just a plain memory leak, because analyzing the heap with !heap -s in windbg doesn't show any increase. How do I monitor other kinds of objects with windbg? The number of GDI objects and open handles is not growing either, so what could it be?
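
    For the handle angle, WinDbg's handle-tracing extension can diff what was opened between two points in time - a sketch of the standard !htrace workflow (commands quoted from memory; attach to the process hosting the driver first):

        !handle 0 0         list every handle in the process
        !htrace -enable     start recording stack traces for handle operations
        (exercise the driver: send a few print jobs)
        !htrace -diff       show handles opened since -enable, with call stacks

    If neither handles nor GDI objects grow, also consider kernel pool: printer driver stacks can leak pool allocations that never show in the user-mode heap, and poolmon (from the WDK) tracks those by pool tag.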

    Read the article

  • Memory leak - debugger and memory analyzer disagreeing

    - by Joe
    There is a memory leak in my Android game - I've managed to narrow it down to a certain object, which has a list of objects to render on a texture. This object clears the list every time it draws, though - so I can't work out how it has managed to get thousands of elements in the list. I checked in the debugger and it doesn't have all these thousands of elements - usually about 2-20, which is what I'd expect... The game definitely slows down progressively, but only if I have render-to-texture turned on. Here is a picture of Memory Analyzer showing 6,111 items: Memory Analyzer. And here is a picture of the debugger showing 2: Debugger. Can anyone help me find out what's wrong?

    Read the article

  • .NET memory leak?

    - by SA
    I have an MDI application with a child form. The child form has a DataGridView in it, into which I load a huge amount of data. When I close the child form, the disposing method is called, in which I dispose of the grid: this.dataGrid.Dispose(); this.dataGrid = null; When I close the form, the memory doesn't go down. I use the .NET Memory Profiler to track memory usage. I see that usage goes up when I initially load the data grid (as expected) and then stays constant once loading is complete. When I close the form it still remains constant. However, when I take a snapshot of the memory using the profiler, it drops to what it was before loading the file - taking a memory snapshot forces the garbage collector to run. What is going on? Is there a memory leak? Or do I need to run the garbage collector forcefully? More information: when I close the form, I no longer need the information; that is why I am not holding a reference to the data.
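
    A quick way to distinguish "leaked" from "the GC just hasn't run yet" (a diagnostic sketch only - forced collections like this shouldn't ship in production code) is to do, after closing the child form, exactly what the profiler's snapshot does:

        // Force a blocking full collection, let finalizers run, then
        // collect whatever those finalizers released.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine("managed heap: {0} bytes", GC.GetTotalMemory(true));

    If memory falls here just as it does after the profiler snapshot, nothing is leaking - the runtime is simply deferring collection until there is actual memory pressure.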

    Read the article

  • Memory dump much smaller than available memory

    - by Daniel
    I have a Tomcat application server that is configured to create a memory dump on OOM, and it is started with -Xmx1024M, so a gigabyte should be available to it. Now I have found one such dump, and it contains only 260MB of unretained memory. How is it possible that the dump is so much smaller than the amount the server should have available?
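
    One plausible explanation (an assumption about typical heap-dump behaviour, not a diagnosis of this particular dump): -Xmx is a ceiling, not a fill level, and heap dumps generally record only reachable objects, so anything that was already garbage at the moment of the OOM never appears in them. The difference is easy to see with the two jmap variants (<pid> is the Tomcat process id):

        # everything on the heap, including unreachable objects
        jmap -dump:format=b,file=full.hprof <pid>

        # only live (reachable) objects - typically much smaller
        jmap -dump:live,format=b,file=live.hprof <pid>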

    Read the article

  • Too much memory consumed during TFS automated build

    - by Bernard Chen
    We're running TFS 2010 Standard Edition, and we've set up an automated build to run whenever someone checks in code. We run all of the automated tests (built with MSTest) as part of the build. We've configured the build to run the tests as a 64-bit process, but the QTAgent.exe that runs the tests grows in memory while the tests are running. It currently reaches 8GB for the ~650 tests we have, and the process slowed significantly when we went from 450 tests to 650 tests. When we run all of the tests in the local development environment, memory seems to be freed at least with each TestClass and never exceeds a certain level, and the time to run all tests has not increased significantly there. Is there a way to configure the build service to free up memory with each Test or each TestClass? With the way things currently run, the build process gets very slow when we start to run out of memory on the machine. Edit: I found the MSTest invocation in the build log, ran it manually, and saw the same runaway memory. I removed the /publish, /publishbuild, /teamproject, /platform, and /flavor parameters from the invocation of MSTest, in case the test runner was holding onto results until the end, but the behavior didn't change. I ran the same command line on a dev box, separate from the build server, and the memory was freed up frequently. It seems there must be something wrong or different about the build server that is causing it to behave differently, but I'm stumped where to look. I've looked at qtagent.exe.config, mstest.exe.config, and the versions of both executables. What else might affect this?

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I'm running a couple of Django apps in production. Ideally, I would have addressed my current memory footprint issues by optimizing the apps, adding more RAM, or adding swap. But the problem is that I doubt there's much memory optimization I'd milk from the Django apps (the stack being open-source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer options to use swap! So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory, which currently leave me having to request a VPS restart (at that point, I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or total system memory usage) exceeds a critical threshold - say, free RAM falls below 10% - which I've noticed occurs after the VPS has been up for long, and when traffic suddenly spikes on some of the heavy apps (most are just staging apps anyway), and then kill/restart the offending process(es), most likely Apache. Doing that manually in these situations has restored sane memory usage levels - a hint that possibly one or more of the Django apps has a memory leak. In brief:

    - Monitor overall system RAM usage.
    - When free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es) - or, simpler, since my current log analysis (using linux-dash) suggests Apache is often the offender, just kill/restart it.
    - Rinse and repeat...
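
    A minimal watchdog sketch along those lines (assumptions on my part: Debian's apache2 service name, treating buffers/cache as reclaimable, and a script named mem-watchdog.sh driven by cron - tune the threshold and the restart action to your box):

        #!/bin/bash
        # Restart Apache when reclaimable-free RAM drops below THRESHOLD percent.
        # Schedule from cron, e.g.:  * * * * * /usr/local/bin/mem-watchdog.sh
        THRESHOLD=10

        total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
        free=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
        buffers=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
        cached=$(awk '/^Cached:/ {print $2}' /proc/meminfo)

        avail=$(( (free + buffers + cached) * 100 / total ))

        if [ "$avail" -lt "$THRESHOLD" ]; then
            logger "mem-watchdog: only ${avail}% RAM available, restarting apache2"
            service apache2 restart
        fi

    Counting Buffers and Cached as available matters on a box with no swap; raw MemFree alone would fire constantly, since Linux deliberately keeps it low.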

    Read the article

  • what kind of memory can be categorized as Modified Memory in Resource Monitor

    - by Kavin
    In Windows 7 and Windows Server 2008 R2, there is a new Resource Monitor that is very useful and powerful for monitoring the system. In the Memory section, I see a category called Modified (orange). The official description is: "Memory whose contents must be written to disk before it can be used for another purpose." But I am still confused. What kinds of memory are Modified? In which cases can we say that some amount of memory is Modified? Can anyone give me a specific example? Is the following guess correct? When a program wants to write something to disk, it actually writes the content to an I/O buffer, which is in memory. After the OS flushes this area of memory to disk, is the memory Modified or Standby?

    Read the article

  • very large string in memory

    - by bushman
    Hi, I am writing a program that formats 100s of MB of string data (nearing a gig) into XML, and I am required to return it as the response to an HTTP GET request. I am using a StringWriter/XmlWriter to build the XML of the records in a loop and returning stringWriter.ToString(). During testing I saw a few out-of-memory exceptions and am quite clueless about how to find a solution. Do you have any suggestions for a memory-optimized delivery of the response? Is there a memory-efficient way of encoding the data, or maybe chunking it? I just cannot think of how to return it without building the whole thing into one HUGE string object. Thanks
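
    The usual cure (a sketch assuming classic ASP.NET, where the response is exposed as a stream; "records" and the Record type are placeholders for your actual data source) is to never materialize the document at all - point the XmlWriter at the response stream and let each record flush out as it is written:

        // Stream the XML straight to the client; no giant string is ever built.
        Response.ContentType = "text/xml";
        Response.BufferOutput = false;               // flush chunks as they are written

        using (XmlWriter writer = XmlWriter.Create(Response.OutputStream))
        {
            writer.WriteStartElement("records");
            foreach (Record record in records)       // placeholder loop over your data
            {
                writer.WriteStartElement("record");
                writer.WriteElementString("value", record.Value);
                writer.WriteEndElement();
            }
            writer.WriteEndElement();                // </records>
        }

    With BufferOutput off, the server emits a chunked response, so peak memory stays at roughly one record rather than the whole gigabyte.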

    Read the article

  • Decreasing cached memory and increasing Free memory in RAM

    - by Greenhorn
    Hi, I'm using a Windows 2007 server with a 64-bit OS. I've uploaded a snapshot of my Task Manager with the minimum processes running. It shows:

    Total memory  8190 MB
    Cached memory 4315 MB
    Free          3402 MB

    So effectively I get only 3402 MB of my total RAM. My question: more than half is used as cached memory - is there any means by which I can decrease this cached memory and in turn increase my free memory? I need to do this because my application requires at least 5 GB of RAM, and it crashed when run on this system. Please give me a solution. Thanks in advance.

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014, Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP lets us create memory-optimized tables, which in turn offer a significant performance improvement for a typical OLTP workload. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory, and remain there, without losing a single record. The most significant part is that it still supports the majority of Transact-SQL statements, and Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking: In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control, and it substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember:

    - Memory-optimized tables are tables using the new data structures and keywords added as part of In-Memory OLTP.
    - Disk-based tables are the normal tables we have created in SQL Server since its inception; they use fixed-size 8 KB pages that are read from and written to disk as a unit.
    - Natively compiled stored procedures are a new object type, supported by the In-Memory OLTP engine, that is converted to machine code for even faster data access against memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they cannot reference any disk-based table.
    - Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    - Cross-container transactions are transactions that reference both memory-optimized tables and disk-based tables.
    - Interop is interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP

    The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

    Creating Databases

    Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating it is almost the same as for a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA.
    Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

        CREATE DATABASE InMemoryDB
        ON PRIMARY (
            NAME = [InMemoryDB_data],
            FILENAME = 'D:\data\InMemoryDB_data.mdf',
            SIZE = 500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
            (NAME = [InMemoryDB_mod_dir],  FILENAME = 'S:\data\InMemoryDB_mod_dir'),
            (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (
            NAME = [SampleDB_log],
            FILENAME = 'L:\log\InMemoryDB_log.ldf',
            SIZE = 500MB)
        COLLATE Latin1_General_100_BIN2;

    The example above creates files on three different drives (D:, S:, and R:), so if you would like to run this code, kindly change the drive and folder locations to suit your machine. (The second memory-optimized container is given a distinct logical name, InMemoryDB_mod_dir2, since logical file names must be unique within a database.) Also notice that a Windows (non-SQL) binary collation was specified: BIN2 is the only collation supported at this point for indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same:

        ALTER DATABASE AdventureWorks2012
            ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012
            ADD FILE (NAME = 'hekaton_mod', FILENAME = 'S:\data\hekaton_mod')
            TO FILEGROUP hekaton_mod;
        GO

    Creating Tables

    There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY): a memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that the data is persisted along with the schema.

    Indexing memory-optimized tables: a memory-optimized table must always have an index if it is created with DURABILITY = SCHEMA_AND_DATA, and this can be satisfied by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
            [Name] VARCHAR(32) NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
            [City] VARCHAR(32) NULL,
            [State_Province] VARCHAR(32) NULL,
            [LastModified] DATETIME NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see in the example above, the MEMORY_OPTIMIZED = ON clause marks this as a memory-optimized table rather than a normal table, and DURABILITY = SCHEMA_AND_DATA means it will persist data along with the metadata. Notice also that the table declares a PRIMARY KEY upfront, which is mandatory for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage in memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and the key things to remember while using them, let's explore some examples to understand the performance gains they offer.
    I will be using the database created earlier in this article, InMemoryDB, in the demo exercise below.

        USE InMemoryDB
        GO

        -- Create a disk-based table
        CREATE TABLE dbo.Disktable
        (
            Id INT IDENTITY,
            Name CHAR(40)
        )
        GO
        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO

        -- Create a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO

        -- Create another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
        CREATE TABLE dbo.Memorytable_nondurable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
        GO

        -- Insert 100000 records into dbo.Disktable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Disktable(Name) VALUES ('sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

        -- Do the same inserts for the memory table dbo.Memorytable_durable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_durable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

        -- Finally, do the same inserts for the memory table dbo.Memorytable_nondurable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_nondurable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

    The three inserts above took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB of RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with DURABILITY = SCHEMA_ONLY is faster still, as it does not persist its data to disk at all. Now I leave the decision on using memory-optimized tables to you; I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Linux bizarre memory report

    - by Igor Liner
    I took the following meminfo captures. I can't figure out how the free memory went from 8GB to almost 25GB when only about 4GB of slab was freed. There was no change in process memory consumption at the times the meminfo outputs were taken. First capture, with 8GB free:

        MemTotal:       66054256 kB
        MemFree:         8344960 kB
        Buffers:            1120 kB
        Cached:         30172312 kB
        SwapCached:            0 kB
        Active:         10795428 kB
        Inactive:        1914512 kB
        Active(anon):   10193124 kB
        Inactive(anon):  1441288 kB
        Active(file):     602304 kB
        Inactive(file):   473224 kB
        Unevictable:    26348912 kB
        Mlocked:        26348960 kB
        SwapTotal:             0 kB
        SwapFree:              0 kB
        Dirty:                 0 kB
        Writeback:             0 kB
        AnonPages:       8886304 kB
        Mapped:         26383052 kB
        Shmem:          29097904 kB
        Slab:            6006384 kB
        SReclaimable:    3512404 kB
        SUnreclaim:      2493980 kB
        KernelStack:       15240 kB
        PageTables:        78724 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:    33027128 kB
        Committed_AS:   44446908 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      426656 kB
        VmallocChunk:   34325375716 kB
        HardwareCorrupted:     0 kB
        AnonHugePages:   7696384 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:        6144 kB
        DirectMap2M:     2058240 kB
        DirectMap1G:    65011712 kB

    Second capture, with almost 25GB free:

        MemTotal:       66054256 kB
        MemFree:        24949116 kB
        Buffers:            1120 kB
        Cached:         29085016 kB
        SwapCached:            0 kB
        Active:         10168904 kB
        Inactive:        1461156 kB
        Active(anon):   10168216 kB
        Inactive(anon):  1441956 kB
        Active(file):        688 kB
        Inactive(file):    19200 kB
        Unevictable:    26317328 kB
        Mlocked:        26317376 kB
        SwapTotal:             0 kB
        SwapFree:              0 kB
        Dirty:                 0 kB
        Writeback:             0 kB
        AnonPages:       8861224 kB
        Mapped:         26351488 kB
        Shmem:          29066248 kB
        Slab:            1503440 kB
        SReclaimable:     232880 kB
        SUnreclaim:      1270560 kB
        KernelStack:       15256 kB
        PageTables:        79664 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:    33027128 kB
        Committed_AS:   44418280 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      426656 kB
        VmallocChunk:   34325375716 kB
        HardwareCorrupted:     0 kB
        AnonHugePages:   7665664 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:        6144 kB
        DirectMap2M:     2058240 kB
        DirectMap1G:    65011712 kB
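
    When chasing a discrepancy like this, it can help to trigger the kernel's reclaim paths by hand and watch which meminfo lines move together (a diagnostic sketch, not a fix - run as root, and note it only discards clean, reclaimable data such as page cache and dentry/inode slabs):

        sync; echo 1 > /proc/sys/vm/drop_caches   # drop page cache only
        sync; echo 2 > /proc/sys/vm/drop_caches   # drop reclaimable slab (dentries, inodes)
        sync; echo 3 > /proc/sys/vm/drop_caches   # drop both

    Comparing meminfo before and after each step shows how much of Cached and Slab is actually reclaimable at a given moment - reclaim the kernel may also perform on its own under memory pressure between two captures.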

    Read the article

  • sizes of RAM, of virtual memory and of swap for 32-bit OS

    - by Tim
    If I understand correctly, a 32-bit OS (Ubuntu) can only address 4GiB of memory, so of any RAM larger than 4GiB, only 4GiB will be used and the rest is wasted. I am now confused about how this situation for RAM carries over to virtual memory and to swap.

    - With virtual memory being swap + RAM, if the size of the virtual memory exceeds 4GiB, will the exceeding part be wasted on a 32-bit OS?
    - If I now have to choose the size of my swap partition, is the fact that a 32-bit OS can only address 4GiB a factor to consider? Does the size of swap have to be chosen with respect to the 4GiB addressable limitation? Will any swap beyond 4GiB always be wasted?
    - Is virtual memory equal to RAM plus swap, or can virtual memory use space on the hard drive outside the swap partition?

    Thanks and regards!

    Read the article

  • DNSCache excesive memory usage Windows Server 2008 R2

    - by MikeT
    We are having an issue with the dnscache service where its memory usage becomes excessive (~6GB) after a week or two. Restarting the service frees this memory, but performing ipconfig /flushdns does not, and ipconfig /displaydns shows approximately 15-20 entries in the cache. We have checked, and there appear to be approximately 150 DNS queries per second taking place, but I would not expect this to cause such a memory issue. I have searched MSDN for hotfixes or bug reports, but I could only find a reference to a memory leak in Windows 2003. Can anyone suggest how to proceed?

    Read the article

  • Memory Usage of SQL Server

    - by Ashish
    The SQL Server instance on my server is using almost all the memory available in the physical server: if I have 8GB of RAM, SQL Server is using 7.8 GB of it. I have read articles and many similar questions on this forum, and I understand that this memory is reserved and in use. But I have 2 identical servers with 2 SQL Servers - why is this happening on one SQL instance and not on the other? Also, when I run DBCC MEMORYSTATUS, it shows: VM Reserved 8282008, VM Committed 537936. So from this we know that SQL Server reserved the whole 8GB of memory, but why does VM Committed keep increasing? What I understand is that VM Committed shows the overall amount of VAS that SQL Server has committed, and VAS that is committed has been associated with physical memory - so this is the memory SQL Server has committed (from which I understand it is the physical memory the instance is actually using). So I'd like to know the reason behind this ever-increasing VM Committed memory on one server and not the other. Thanks in advance.
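
    If the practical goal is to stop one instance from claiming nearly all of the machine's RAM (rather than to explain the asymmetry), the standard lever is an explicit ceiling via sp_configure - the 6144 MB below is an arbitrary example value, not a recommendation:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 6144;
        RECONFIGURE;

    Comparing this setting between the two servers is also the first thing to rule out, since an instance left at the default will keep committing memory up toward its reservation as the buffer pool warms.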

    Read the article

  • Why does jquery leak memory so badly?

    - by Thomas Lane
    This is kind of a follow-up to a question I posted last week: http://stackoverflow.com/questions/2429056/simple-jquery-ajax-call-leaks-memory-in-ie

    I love the jQuery syntax and all of its nice features, but I've been having trouble with a page that automatically updates table cells via ajax calls leaking memory. So I created two simple test pages for experimenting. Both pages do an ajax call every .1 seconds. After each successful ajax call, a counter is incremented and the DOM is updated. The script stops after 1000 cycles. One uses jQuery for both the ajax call and to update the DOM. The other uses the Yahoo API for the ajax and does a document.getElementById(...).innerHTML to update the DOM.

    The jQuery version leaks memory badly. Running in Drip (on XP Home with IE7), it starts at 9MB and finishes at about 48MB, with memory growing linearly the whole time. If I comment out the line that updates the DOM, it still finishes at 32MB, suggesting that even simple DOM updates leak a significant amount of memory. The non-jQuery version starts and finishes at about 9MB, regardless of whether it updates the DOM.

    Does anyone have a good explanation of what is causing jQuery to leak so badly? Am I missing something obvious? Is there a circular reference that I'm not aware of? Or does jQuery just have some serious memory issues?

    Here is the source for the leaky (jQuery) version:

        <html>
        <head>
        <script type="text/javascript" src="http://www.google.com/jsapi"></script>
        <script type="text/javascript">
          google.load('jquery', '1.4.2');
        </script>
        <script type="text/javascript">
          var counter = 0;
          leakTest();

          function leakTest() {
            $.ajax({
              url: '/html/delme.x',
              type: 'GET',
              success: incrementCounter
            });
          }

          function incrementCounter(data) {
            if (counter < 1000) {
              counter++;
              $('#counter').text(counter);
              setTimeout(leakTest, 100);
            } else
              $('#counter').text('finished.');
          }
        </script>
        </head>
        <body>
        <div>Why is memory usage going up?</div>
        <div id="counter"></div>
        </body>
        </html>

    And here is the non-leaky version:

        <html>
        <head>
        <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/yahoo/yahoo-min.js"></script>
        <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/event/event-min.js"></script>
        <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/connection/connection_core-min.js"></script>
        <script type="text/javascript">
          var counter = 0;
          leakTest();

          function leakTest() {
            YAHOO.util.Connect.asyncRequest('GET', '/html/delme.x', { success: incrementCounter });
          }

          function incrementCounter(o) {
            if (counter < 1000) {
              counter++;
              document.getElementById('counter').innerHTML = counter;
              setTimeout(leakTest, 100);
            } else
              document.getElementById('counter').innerHTML = 'finished.';
          }
        </script>
        </head>
        <body>
        <div>Memory usage is stable, right?</div>
        <div id="counter"></div>
        </body>
        </html>

    Read the article

  • Unmanaged Code calling leads to heavy memory leak!!

    - by konnychen
    Maybe I should change the title to "Unmanaged code calling leads to heavy memory leak!" The leak is around 30M/hour. I think I need to show my complete code here, because the leak may not come from a static string: my real code derives this string from an external device (see the new code attached), so I also handle unmanaged code. Could it be possible the leak comes from the unmanaged code? But I freed the resources with Marshal.FreeCoTaskMem(pos);

        oThread2 = new Thread(new ThreadStart(Cyclic_Call));
        oThread2.Start();

        delegate void SetText_lab_Statubar(string text);

        private void m_SetText_lab_Statubar(string text)
        {
            if (this.lab_Statubar.InvokeRequired)
            {
                SetText_lab_Statubar d = new SetText_lab_Statubar(m_SetText_lab_Statubar);
                this.Invoke(d, new object[] { text });
            }
            else
            {
                this.lab_Statubar.Text = text;
            }
        }

        private void Cyclic_Call()
        {
            do
            {
                //... ...
                ReadMatrixCode(Station6, 0, str_Code);
                this.m_SetText_lab_Statubar(str_Code[4]);
                Thread.Sleep(100);
            } while (!b_AbortThraed);
        }

        private void ReadMatrixCode(Station st, int ItemNr, string[] str_Code)
        {
            IntPtr pItemStates = IntPtr.Zero;
            IntPtr pErrors = IntPtr.Zero;
            int NumItems = itemServerHandles.Length;

            // This calls an external DLL which has some "out IntPtr" parameters
            m_SyncIO.Read(DataSrc, NumItems, itemServerHandles, out pItemStates, out pErrors);

            errors = new int[NumItems];
            Marshal.Copy(pErrors, errors, 0, NumItems);
            IntPtr pos = pItemStates;

            // Now get the read values and check errors
            for (int dwCount = 0; dwCount < NumItems; dwCount++)
            {
                result[dwCount] = (ITEMSTATE)Marshal.PtrToStructure(pos, typeof(ITEMSTATE));
                pos = (IntPtr)(pos.ToInt32() + Marshal.SizeOf(typeof(ITEMSTATE)));
            }

            // Free allocated COM resources
            Marshal.FreeCoTaskMem(pItemStates);
            Marshal.FreeCoTaskMem(pErrors);
            pItemStates = IntPtr.Zero;
            pErrors = IntPtr.Zero;
        }

    m_SyncIO is a class, and finally it will call a COM component which is defined below:

        [Guid("39C12B52-011E-11D0-9675-1020AFD8ADB3")]
        [InterfaceType(1)]
        [ComConversionLoss]
        public interface ISyncIO
        {
            void Read(DATASOURCE dwSource, int dwCount, int[] phServer, out IntPtr ppItemValues, out IntPtr ppErrors);
            void Write(int dwCount, int[] phServer, object[] pItemValues, out IntPtr ppErrors);
        }

    Read the article

  • Yet another Memory Leak Issue (memory is still gone when program terminates) - C program on SLES

    - by user1426181
    I run my C program on SUSE Linux Enterprise; it compresses several thousand large files (between 10MB and 100MB in size), and the program gets slower and slower as it runs (it's running multi-threaded with 32 threads on an Intel Sandy Bridge board). When the program completes and is run again, it's still very slow.

    When I watch the program running, I see that memory is being depleted, which you would think is just a classic memory leak problem. But with a normal malloc()/free() mismatch, I would expect all the memory to return when the program terminates. Yet most of the memory isn't reclaimed when the program completes. The free or top command shows "Mem: 63996M total, 63724M used, 272M free" when the program has slowed to a halt, but after termination the free memory only grows back to about 3660M. When the program is rerun, the free memory is quickly used up again. The top program shows that the program, while running, uses at most 4% or so of the memory.

    I thought it might be a memory fragmentation problem, but I built a small test program that simulates all the memory allocation activity in the program (with many randomized aspects built in - size/quantity), and it always returns all the memory upon completion. So I don't think that's it. Questions:

    - Can there be a malloc()/free() mismatch that loses memory permanently, i.e. even after the process completes?
    - What other things in a C program (not C++) can cause permanent memory loss, i.e. after the program completes and even the terminal window closes? Only a reboot brings the memory back. I've read other posts about files not being closed causing problems, but I don't think I have that problem.
    - Is it valid to be looking at top and free for the memory statistics, i.e. do they accurately describe the memory situation? They do seem to correspond to the slowness of the program.
    - If the program only shows a 4% memory usage, will something like valgrind find this problem?
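
    One avenue worth eliminating (an assumption on my part, not a diagnosis): after compressing thousands of 10-100MB files, the kernel page cache holds the freshly written data, which free and top count as "used" even though no process owns it. A sketch of telling the kernel to drop those pages as each file is finished (finish_output_file is a hypothetical helper, assuming each compressed file goes out through a plain file descriptor):

        #define _XOPEN_SOURCE 600
        #include <fcntl.h>
        #include <unistd.h>

        /* After one output file is complete: flush its dirty pages, then
         * advise the kernel to evict them from the page cache so thousands
         * of finished files don't accumulate there. */
        static void finish_output_file(int fd)
        {
            fsync(fd);                                     /* make pages clean  */
            posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);  /* drop cached pages */
            close(fd);
        }

    On the last question: valgrind tracks allocations inside your process, so it won't see memory the kernel is holding; /proc/meminfo (Slab, Shmem, the cache lines) is the place to look for where memory that outlives the process has gone.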

    Read the article

  • Virtual memory on Linux doesn't add up?

    - by Brendan Long
    I was looking at the System Monitor on Linux and noticed that Firefox is using 441 MB of memory, and several other applications are using 274, 257, 232, etc. (adding up to over 3 GB of virtual memory). So I switch over to the Resources tab, and it says I'm using 462 MB of memory and not touching swap. I'm confused: what does the virtual memory amount mean, then, if the programs aren't actually using it? I was thinking it might be memory they've requested but aren't using, but how would the OS know that? I can't think of any "I might need this much memory in the future" function...

    Read the article

  • Free / Cached / Available memory on Linux

    - by pkoraca
    I have read that Linux uses free memory for caching, to make the system faster. However, both Nagios and the Paessler PRTG monitoring system show me that my memory usage is critical. I could change the Nagios mem_usage script to sum free and cached memory, but would that be correct information? I doubt that they misunderstood Linux memory usage.

    Let's say I have 8 GB RAM, 5 GB are used, 2 GB are cached, and I have 1 GB of free memory. Shouldn't the real available memory be free + cached (3 GB)? If some new application needed an additional 3 GB of RAM, could it take everything from cache and free without using swap, or is there a minimum that should stay in cache? A real example:

        $ cat /proc/meminfo
        MemTotal:        5984256 kB
        MemFree:          137052 kB
        Buffers:          140484 kB
        Cached:          3439616 kB
        SwapCached:          244 kB
        Active:          3148824 kB
        Inactive:        2341768 kB
        ...

    My monitoring tools show that I have 137 MB of free RAM; however, I have ~3.5 GB in cache. Thanks!
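
    A rough way to report "available" memory the way the kernel can actually free it (a sketch; on newer kernels /proc/meminfo exposes a MemAvailable line directly, which is the better source when present):

        awk '/^MemFree:/ {f=$2} /^Buffers:/ {b=$2} /^Cached:/ {c=$2} END {printf "available: %.1f GB\n", (f+b+c)/1024/1024}' /proc/meminfo

    Summing free + buffers + cached is the convention most monitoring plugins use; the kernel keeps back only a small reserve (vm.min_free_kbytes), so a new 3 GB allocation would indeed be served largely by evicting clean cache rather than by swapping.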

    Read the article

  • Problem with memory leaks

    - by user191723
    Sorry, I'm having difficulty formatting the code to appear correctly here. I am trying to understand the readings I get from running Instruments on my app, which is telling me I am leaking memory. There are a number, quite a few in fact, reported from inside Foundation, AVFoundation, CoreGraphics etc. that I assume I have no control over and so should ignore, such as:

        Malloc 32 bytes: 96 bytes, AVFoundation, prepareToRecordQueue
        Malloc 128 bytes: 128 bytes, CoreGraphics, open_handle_to_dylib_path

    Am I correct in assuming these are something the system will resolve? But then there are leaks reported that I believe I am responsible for, such as the 2.31KB reported against this line:

        [self createAVAudioRecorder:frameAudioFile];

    Immediately followed by this:

        -(NSError*) createAVAudioRecorder:(NSString *)fileName
        {
            // flush recorder to start afresh
            [audioRecorder release];
            audioRecorder = nil;

            // delete existing file to ensure we have a clean start
            [self deleteFile:fileName];

            VariableStore *singleton = [VariableStore sharedInstance];

            // get full path to target file to create
            NSString *destinationString = [singleton.docsPath stringByAppendingPathComponent:fileName];
            NSURL *destinationURL = [NSURL fileURLWithPath:destinationString];

            // configure the recording settings
            NSMutableDictionary *recordSettings = [[NSMutableDictionary alloc] initWithCapacity:6]; //****** LEAKING 384 BYTES
            [recordSettings setObject:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey]; //***** LEAKING 32 BYTES
            float sampleRate = 44100.0;
            [recordSettings setObject:[NSNumber numberWithFloat:sampleRate] forKey:AVSampleRateKey]; //***** LEAKING 48 BYTES
            [recordSettings setObject:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
            int bitDepth = 16;
            [recordSettings setObject:[NSNumber numberWithInt:bitDepth] forKey:AVLinearPCMBitDepthKey]; //***** LEAKING 48 BYTES
            [recordSettings setObject:[NSNumber numberWithBool:YES] forKey:AVLinearPCMIsBigEndianKey];
            [recordSettings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

            NSError *recorderSetupError = nil;

            // create the new recorder with the target file
            audioRecorder = [[AVAudioRecorder alloc] initWithURL:destinationURL
                                                        settings:recordSettings
                                                           error:&recorderSetupError]; //***** LEAKING 1.31KB
            [recordSettings release];
            recordSettings = nil;

            // check for errors
            if (recorderSetupError)
            {
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Can't record"
                                                                message:[recorderSetupError localizedDescription]
                                                               delegate:nil
                                                      cancelButtonTitle:@"OK"
                                                      otherButtonTitles:nil];
                [alert show];
                [alert release];
                alert = nil;
                return recorderSetupError;
            }

            [audioRecorder prepareToRecord]; //***** LEAKING 512 BYTES
            audioRecorder.delegate = self;
            return recorderSetupError;
        }

    I do not understand why there is a leak, as I release audioRecorder at the start and set it to nil, and I release recordSettings and set it to nil. Can anyone enlighten me please? Thanks

    Read the article

  • Memory leak when returning object

    - by Yakattak
    I have this memory leak that has been very stubborn for the past week or so. I have a class method in a class called "ArchiveManager" that will unarchive a specific .dat file for me and return an array with its contents. Here is the method:

        +(NSMutableArray *)unarchiveCustomObject
        {
            NSMutableArray *array = [NSMutableArray arrayWithArray:
                [NSKeyedUnarchiver unarchiveObjectWithFile:/* Archive Path */]];
            return array;
        }

    I understand I don't have ownership of it at this point, and I return it:

        CustomObject *myObject = [[ArchiveManager unarchiveCustomObject] objectAtIndex:0];

    Then, later, when I unarchive it in a view controller to be used, I don't even create an array of it, nor do I make a pointer to it; I just reference it to get something out of the array returned by unarchiveCustomObject (objectAtIndex). This is where Instruments reports a memory leak, yet I don't see how this can leak! Any ideas? Thanks in advance.

    Edit: CustomObject's initWithCoder added:

        -(id)initWithCoder:(NSCoder *)aDecoder
        {
            if (self = [super init])
            {
                self.string1 = [aDecoder decodeObjectForKey:kString1];
                self.string2 = [aDecoder decodeObjectForKey:kString2];
                self.string3 = [aDecoder decodeObjectForKey:kString3];
                UIImage *picture = [[UIImage alloc] initWithData:[aDecoder decodeObjectForKey:kPicture]];
                self.picture = picture;
                self.array = [aDecoder decodeObjectForKey:kArray];
                [picture release];
            }
            return self;
        }

    Read the article
