Search Results

Search found 12357 results on 495 pages for 'memory lane'.


  • How to detect a .NET WPF memory leak or a long-running GC?

    - by Néstor Sánchez A.
    I have the following very strange situation and problem: a .NET 4.0 application for diagram editing (WPF). It runs fine on my PC: 8GB RAM, 3.0GHz, i7 quad-core. While creating objects (mostly diagram nodes and connectors, plus all the undo/redo information), Task Manager shows, as expected, some memory-usage "jumps" (up and down). These memory-usage "jumps" also keep occurring AFTER user interaction has ended. Maybe this is the GC cleaning/reorganizing memory? To see what is going on, I used the ANTS memory profiler, but somehow it prevents those "jumps" from happening after user interaction. PROBLEM: the application freezes/hangs after seconds or minutes of usage on some slow/weak laptops/netbooks of my beta testers (under 2GHz and under 2GB of RAM). I was thinking of a memory leak, but... EDIT: there is also the case where the memory usage grows and grows until collapse (only on slow machines). On a Windows XP Mode machine (a VM in Win 7) with only 512MB of RAM assigned, it works fine, without memory-usage "jumps" after user interaction (no GC cleaning?!). So I'm really in trouble, because I cannot reproduce the error; I only see this strange behaviour (memory jumps), and the tool that is supposed to show me what is happening hides the problem (like the "observer's paradox"). Any ideas on what's happening and how to solve it?

    Read the article

  • Huge burst of memory in a C# service, what could be the cause?

    - by Daniel
    I'm working on a C# service application and I have this problem where, out of nowhere and for no obvious reason, the memory for the process will climb from 150MB to almost 2GB in about 5 seconds and then drop back to 150MB. Nothing in our system should be using anywhere near that amount of memory (so it's probably a bug somewhere). It might be a tight while(true) loop somewhere, but the CPU usage at the time was very low, so I thought I'd look for other ideas. The weirder thing is that when I compile the service for 64-bit, the same massive burst occurs, except that it exceeded 10GB of RAM (paging most of it) and caused lots of problems for the computer and everything running on it. After a while it shuts down, but it looks like Windows is still willing to give it more memory. Would you have any ideas, or tools I could use, to track this down? Yes, it has lots of logging, but nothing in the logs stands out as to why this is happening. I can run the service in console-app mode, so my next test was going to be running it under the Visual Studio debugger to see if I can find anything. It only happens occasionally, usually about 10-20 minutes after startup. In 32-bit mode it cleans up and continues on normally; in 64-bit mode it crashes after a while and uses a stupid amount of memory. I'm really stumped as to why this is happening!

    Read the article

  • Users take over a minute to log onto a Windows Server 2008 machine; lsm.exe is running at 100MB+ of memory

    - by seanyboy
    We have a 64-bit Windows Server 2008 machine running Remote Desktop. The application lsm.exe (the Local Session Manager) appears to be leaking memory: although its memory usage is quite low after the server is rebooted, it continues to climb until people can no longer log in. The server has no audio card and does not have any AV software installed, and it is fully service packed (Service Pack 2). What could be causing this, and how would I fix it?

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are in your cache? During SQL Server trainings I am usually asked whether there is any way to know how much of a table's data is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and they are used in a query), is the data of that single table stored in the memory cache multiple times, or only once? Here is a query you can run to figure out what kind of data is stored in the cache.

    USE AdventureWorks
    GO
    SELECT COUNT(*) AS cached_pages_count,
           name AS BaseTableName, IndexName, IndexTypeDesc
    FROM sys.dm_os_buffer_descriptors AS bd
        INNER JOIN (
            SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID,
                   i.name IndexName, i.type_desc IndexTypeDesc
            FROM (
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                    INNER JOIN sys.partitions AS p
                        ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
                UNION ALL
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                    INNER JOIN sys.partitions AS p
                        ON au.container_id = p.partition_id AND au.type = 2
            ) AS s_obj
                LEFT JOIN sys.indexes i
                    ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
        ) AS obj
            ON bd.allocation_unit_id = obj.allocation_unit_id
    WHERE database_id = DB_ID()
    GROUP BY name, index_id, IndexName, IndexTypeDesc
    ORDER BY cached_pages_count DESC;
    GO

    Now let us run the query above and observe its output. There are four columns: Cached_Pages_Count lists the pages cached in memory; BaseTableName lists the original base table whose data pages are cached; IndexName lists the name of the index from which pages are cached; and IndexTypeDesc lists the type of index. Now, let us do one more experiment. Please note that you should not run this test on a production server, as it can severely reduce the performance of the database.

    DBCC DROPCLEANBUFFERS

    This drops all the clean buffers so we can start again from scratch. Now run the following script and check its execution plan.

    USE AdventureWorks
    GO
    SELECT UnitPrice, ModifiedDate
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderDetailID BETWEEN 1 AND 100
    GO

    The execution plan shows the use of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It gives us the following output. It is clear from the resultset that when more than one index is used, the data pages related to both (or all) of the indexes are stored separately in the memory cache. Let me know what you think of this article; I had great pleasure writing it because I was able to write about the subject I like the most. In the next article, we will see exactly which data is cached and which is not, using a few undocumented commands. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV

    Read the article

  • Game Object Factory: Fixing Memory Leaks

    - by Bunkai.Satori
    Dear all, this is going to be tough: I have created a game object factory that generates objects of my wish. However, I get memory leaks which I cannot fix. The leaks are caused by the return new Object(); in the bottom part of the code sample:

    static BaseObject * CreateObjectFunc()
    {
        return new Object();
    }

    How and where should I delete the pointers? I wrote bool ReleaseClassTypes(), and although the factory works well, ReleaseClassTypes() does not fix the memory leaks:

    bool ReleaseClassTypes()
    {
        unsigned int nRecordCount = vFactories.size();
        for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++)
        {
            // if the object exists in the container and is valid, then render it
            if (vFactories[nLoop] != NULL)
                delete vFactories[nLoop]();
        }
        return true;
    }

    Before taking a look at the code below, let me explain that my CGameObjectFactory stores pointers to functions that create a particular object type. The pointers are kept in the vFactories vector container. I chose this approach because I parse an object map file: I have object type IDs (integer values) which I need to translate into real objects. Because I have over 100 different object types, I wanted to avoid traversing a very long switch() statement. Therefore, to create an object, I call vFactories[nEnumObjectTypeID] via CGameObjectFactory::create(), which invokes the stored function that generates the desired object. The position of the appropriate function in vFactories is identical to nObjectTypeID, so I can use indexing to access the function. So the question remains: how do I proceed with garbage collection and avoid the reported memory leaks?

    #ifndef GAMEOBJECTFACTORY_H_UNIPIXELS
    #define GAMEOBJECTFACTORY_H_UNIPIXELS

    //#include "MemoryManager.h"
    #include <vector>

    template <typename BaseObject>
    class CGameObjectFactory
    {
    public:
        // cleanup and release registered object data types
        bool ReleaseClassTypes()
        {
            unsigned int nRecordCount = vFactories.size();
            for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++)
            {
                if (vFactories[nLoop] != NULL)
                    delete vFactories[nLoop]();
            }
            return true;
        }

        // register new object data type
        template <typename Object>
        bool RegisterClassType(unsigned int nObjectIDParam)
        {
            if (vFactories.size() < nObjectIDParam)
                vFactories.resize(nObjectIDParam);
            vFactories[nObjectIDParam] = &CreateObjectFunc<Object>;
            return true;
        }

        // create new object by calling the pointer to the appropriate type function
        BaseObject* create(unsigned int nObjectIDParam) const
        {
            return vFactories[nObjectIDParam]();
        }

        // resize the vector array containing pointers to function calls
        bool resize(unsigned int nSizeParam)
        {
            vFactories.resize(nSizeParam);
            return true;
        }

    private:
        //DECLARE_HEAP;
        template <typename Object>
        static BaseObject * CreateObjectFunc()
        {
            return new Object();
        }

        typedef BaseObject* (*factory)();
        std::vector<factory> vFactories;
    };

    //DEFINE_HEAP_T(CGameObjectFactory, "Game Object Factory");
    #endif // GAMEOBJECTFACTORY_H_UNIPIXELS
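
    One possible direction (a sketch added here, not part of the original question): note that delete vFactories[nLoop](); constructs a brand-new object and immediately deletes it; it never touches the objects that create() handed out earlier, which is why the leaks remain. A way to make ownership explicit, assuming callers can be changed to hold smart pointers, is to have create() return std::unique_ptr; the names below mirror the question, the rest is illustrative only.

    #include <memory>
    #include <vector>

    template <typename BaseObject>
    class CGameObjectFactory
    {
    public:
        template <typename Object>
        void RegisterClassType(unsigned int nObjectID)
        {
            if (vFactories.size() <= nObjectID)          // <= so that index nObjectID itself is valid
                vFactories.resize(nObjectID + 1, nullptr);
            vFactories[nObjectID] = &CreateObjectFunc<Object>;
        }

        // The caller owns the result; the object is deleted automatically when the
        // unique_ptr goes out of scope, so no separate ReleaseClassTypes() pass is needed.
        std::unique_ptr<BaseObject> create(unsigned int nObjectID) const
        {
            return std::unique_ptr<BaseObject>(vFactories[nObjectID]());
        }

    private:
        template <typename Object>
        static BaseObject* CreateObjectFunc() { return new Object(); }

        typedef BaseObject* (*factory)();
        std::vector<factory> vFactories;
    };

    If the objects must instead be owned centrally, the alternative is to record every pointer create() returns in a separate container and delete those pointers (not freshly constructed ones) during cleanup.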

    Read the article

  • Is it possible to share a C struct in shared memory between apps compiled with different compilers?

    - by Joseph Garvin
    I realize that in general the C and C++ standards give compiler writers a lot of latitude. But in particular they guarantee that the members of POD types like C structs have to be laid out in memory in the same order as they are listed in the struct's definition, and most compilers provide extensions letting you fix the alignment of members. So if you had a header that defined a struct and manually specified the alignment of its members, then compiled two apps with different compilers using that header, shouldn't one app be able to write an instance of the struct into shared memory and the other app be able to read it without errors? I am assuming, though, that the size of the contained types is consistent across the two compilers on the same architecture (it has to be the same platform already, since we're talking about shared memory). I realize this is not always true for some types (e.g. long vs. long long in GCC and MSVC 64-bit), but nowadays there are uint16_t, uint32_t, etc. types, and float and double are specified by IEEE standards.
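
    A sketch of the kind of header both applications could share, assuming both compilers accept #pragma pack and C++11 static_assert; the struct name and fields are invented for illustration. Fixed-width types plus an explicit packing pragma pin down the layout, and the asserts turn any remaining disagreement into a compile error in each toolchain rather than silent corruption in shared memory.

    #include <cstdint>
    #include <cstddef>

    #pragma pack(push, 1)              // no implicit padding, so both compilers agree on the layout
    struct SharedFrame                 // hypothetical record placed in shared memory
    {
        std::uint32_t sequence;        // fixed-width types avoid long vs. long long surprises
        std::uint32_t payload_bytes;
        double        timestamp;       // IEEE 754 double on both sides of the same platform
        char          payload[256];
    };
    #pragma pack(pop)

    // Each translation unit, whichever compiler builds it, verifies the layout it sees.
    static_assert(sizeof(SharedFrame) == 4 + 4 + 8 + 256, "unexpected SharedFrame size");
    static_assert(offsetof(SharedFrame, timestamp) == 8,  "unexpected member offset");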

    Read the article

  • What are some of the best JavaScript memory leak detection tools?

    - by Philip Fourie
    Our team is faced with a slow but serious JavaScript memory leak. We have read up on the usual causes of memory leaks in JavaScript (e.g. closures and circular references). We tried to avoid those pitfalls in the code, but it is likely we still have unknown mistakes left. I started searching for available tools, but I would like input from people with actual experience of them. Some of the tools I have found so far (with no idea yet how good or useful they would be for our problem): Sieve, Drip, and the JavaScript Memory Leak Detector. Our search is not limited to free tools; free would be a bonus, but more importantly we need something that gets the job done. In our JavaScript code we do the following: make AJAX calls to a .NET WCF back-end that sends back JSON data, manipulate the DOM, and keep a fairly large object model in JavaScript to store current state.

    Read the article

  • Can SHA-1 algorithm be computed on a stream? With low memory footprint?

    - by raoulsson
    I am looking for a way to compute SHA-1 checksums of very large files without having to load them into memory in full. I don't know the details of the SHA-1 implementation and therefore would like to know whether it is even possible to do that. If you know the SAX XML parser, then what I am looking for is something similar: computing the SHA-1 checksum by only ever loading a small part of the data into memory at a time. All the examples I found, at least in Java, always depend on fully loading the file/byte array/string into memory. If you know of implementations (in any language), please let me know!
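
    It is possible: SHA-1 processes its input in 64-byte blocks and only keeps a small fixed-size state between blocks, so incremental (streaming) hashing is exactly how the primitive is meant to be used; in Java, MessageDigest.update() called in a read loop does this. Since any language was invited, here is a sketch in C++ using OpenSSL's EVP interface, assuming OpenSSL is available; the buffer size is arbitrary.

    #include <openssl/evp.h>
    #include <cstdio>
    #include <vector>

    // Hash a file of any size while holding only one small buffer in memory.
    bool sha1_file(const char* path, unsigned char out[EVP_MAX_MD_SIZE], unsigned int* out_len)
    {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return false;

        EVP_MD_CTX* ctx = EVP_MD_CTX_create();       // EVP_MD_CTX_new() in OpenSSL 1.1+
        EVP_DigestInit_ex(ctx, EVP_sha1(), nullptr);

        std::vector<unsigned char> buf(64 * 1024);   // fixed 64 KB window, never the whole file
        size_t n;
        while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0)
            EVP_DigestUpdate(ctx, buf.data(), n);    // feed one chunk at a time

        EVP_DigestFinal_ex(ctx, out, out_len);       // 20-byte SHA-1 digest
        EVP_MD_CTX_destroy(ctx);                     // EVP_MD_CTX_free() in OpenSSL 1.1+
        std::fclose(f);
        return true;
    }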

    Read the article

  • How do I force my std::map to deallocate memory used?

    - by monkeyking
    I'm using a std::map, and I can't seem to free its memory back to the OS. It looks like this:

    int main()
    {
        aMap m;
        while (keepGoing)
        {
            while (fillUpMap)
            {
                // populate m
            }
            doWhatIwantWithMap(m);
            m.clear();
            // flush some buffered values into the map for the next iteration
            flushIntoMap(m);
        }
    }

    Each fillUpMap pass allocates around 1 gig, so I'm very much interested in getting this memory back before it eats up all of my RAM. I've experienced the same with std::vector, but there I could force it to free by doing a swap with an empty std::vector; this doesn't work with map. When I use valgrind it says that all memory is freed, so it's not a leak, since everything is cleaned up nicely after a run.
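
    For what it's worth, the swap idiom does compile for std::map as well and really destroys the old tree; a sketch follows, with aMap standing in for the map type from the question. Whether the freed node memory then goes back to the OS is up to the C runtime heap: a vector's single large block is often returned directly, while a map's many small node allocations tend to stay cached inside the process, which would explain valgrind reporting no leak while the process size stays high.

    #include <map>
    #include <string>

    typedef std::map<int, std::string> aMap;   // placeholder for the question's map type

    void release(aMap& m)
    {
        // m.clear() destroys the elements, but the allocator may keep the node
        // memory cached for reuse; swapping with an empty temporary destroys the
        // whole tree and makes its memory eligible for return to the OS.
        aMap().swap(m);      // C++03 idiom
        // m = aMap();       // C++11: assigning an empty map achieves the same
    }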

    Read the article

  • How do you make your Java application memory efficient?

    - by Boune
    How do you optimize the heap usage of an application that has a lot (millions) of long-lived objects (a big cache, or lots of records loaded from a DB)? Ideas so far: use the right data types; avoid using java.lang.String to represent other data types; avoid duplicated objects; use enums if the values are known in advance; use object pools; String.intern() (good idea?); load/keep only the objects you need. I am looking for general programming or Java-specific answers, not funky compiler switches. Edit: optimize the memory representation of a POJO that can appear millions of times in the heap. Use cases: load a huge CSV file into memory (converted into POJOs); use Hibernate to retrieve millions of records from a database. Summary of answers so far: use the flyweight pattern; copy on write; instead of loading 10M objects with 3 properties each, is it more efficient to have 3 arrays (or another data structure) of size 10M? (It could be a pain to manipulate the data, but if you are really short on memory...)

    Read the article

  • Which is faster in memory, ints or chars? And file-mapping or chunk reading?

    - by Nick
    Okay, so I've written a (rather unoptimized) program before to encode images to JPEGs; however, now I am working with MPEG-2 transport streams and the H.264 encoded video within them. Before I dive into programming all of this, I am curious what the fastest way to deal with the actual file is. Currently I am memory-mapping the .mts file to work on it, although I am not sure if it would be faster to (for example) read 100 MB of the file into memory in chunks and deal with it that way. These files require a lot of bit-shifting and such to read flags, so I am wondering, when I reference some of the memory, whether it is faster to read 4 bytes at once as an integer or 1 byte as a character. I thought I read somewhere that x86 processors are optimized for a 4-byte granularity, but I'm not sure if this is true... Thanks!
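
    As a point of comparison, a sketch of the chunked approach, assuming a plain std::ifstream is acceptable: it reads whole 188-byte TS packets into a reusable buffer and assembles a 32-bit header with shifts. Building the value byte by byte is alignment- and endian-safe; on x86 an unaligned 4-byte load also works, but the difference is rarely what dominates next to the I/O itself. The file name and chunk size below are invented.

    #include <cstdint>
    #include <fstream>
    #include <vector>

    int main()
    {
        std::ifstream in("input.mts", std::ios::binary);     // hypothetical file name
        std::vector<unsigned char> buf(188 * 4096);           // ~752 KB of whole TS packets

        while (in)
        {
            in.read(reinterpret_cast<char*>(buf.data()), buf.size());
            std::size_t got = static_cast<std::size_t>(in.gcount());

            for (std::size_t p = 0; p + 188 <= got; p += 188)
            {
                // First 4 bytes of a TS packet: sync byte, TEI, PUSI, priority, PID, ...
                std::uint32_t header = (std::uint32_t(buf[p])     << 24) |
                                       (std::uint32_t(buf[p + 1]) << 16) |
                                       (std::uint32_t(buf[p + 2]) <<  8) |
                                        std::uint32_t(buf[p + 3]);
                std::uint16_t pid = (header >> 8) & 0x1FFF;   // 13-bit PID field
                (void)pid;                                    // real flag parsing goes here
            }
        }
    }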

    Read the article

  • How much memory is reserved when I declare a string?

    - by Bhagya
    What exactly happens, in terms of memory, when I declare something like: char arr[4];? How many bytes are reserved for arr? How is the terminating null accommodated when I strcpy a string of length 4 into arr? I was writing a socket program, and when I tried to put a NUL at arr[4] (i.e. the 5th memory location), I ended up overwriting the values of some other variables in the program (a buffer overflow) and got into a big mess. Any descriptions of how compilers (gcc is what I used) manage memory?
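
    For reference, a tiny sketch of the sizes involved: char arr[4] reserves exactly four bytes (indices 0 to 3, with no hidden slot for a terminator), so a four-character string such as "abcd" needs a five-byte array to hold the trailing '\0'.

    #include <cstdio>
    #include <cstring>

    int main()
    {
        char arr[4];                        // exactly 4 bytes: arr[0]..arr[3]
        std::printf("%zu\n", sizeof arr);   // prints 4

        // std::strcpy(arr, "abcd");        // DON'T: "abcd" occupies 5 bytes with its '\0',
                                            // so this writes one byte past the end of arr
                                            // (undefined behaviour - the overflow described above)

        char ok[5];                         // room for 4 characters plus the terminator
        std::strcpy(ok, "abcd");            // fine: ok[4] holds '\0'
        std::printf("%s\n", ok);
    }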

    Read the article

  • Which of FILE* or ifstream has better memory usage?

    - by Viet
    I need to read a fixed number of bytes at a time from files whose sizes are around 50MB; to be more precise, I read one frame at a time from YUV 4:2:0 CIF/QCIF files (~25KB to ~100KB per frame). Not a very huge amount, but I don't want the whole file to be in memory. I'm using C++; in such a case, which of FILE* or ifstream has better (less/minimal) memory usage? Please kindly advise. Thanks! EDIT: I read a fixed number of bytes: 25KB or 100KB (depending on QCIF/CIF format). The reading is in binary mode and forward-only; no seeking needed, no writing needed, only reading. EDIT: If identifying the better of the two is hard, which one does not require loading the whole file into memory?
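
    Neither API loads the whole file: both FILE* and std::ifstream keep only a small internal buffer (typically a few KB) plus whatever you read into, so either fits the forward-only, frame-at-a-time pattern. A sketch with std::ifstream, assuming QCIF 4:2:0 (176x144, which is 38016 bytes per frame) and an invented file name:

    #include <cstddef>
    #include <fstream>
    #include <vector>

    int main()
    {
        const std::size_t kFrameBytes = 176 * 144 * 3 / 2;    // one QCIF 4:2:0 frame = 38016 bytes

        std::ifstream in("clip_qcif.yuv", std::ios::binary);  // hypothetical file name
        std::vector<char> frame(kFrameBytes);                  // the only sizable allocation

        while (in.read(frame.data(), frame.size()))
        {
            // process one frame here; the buffer is reused, so memory use stays
            // around 37 KB whether the file is 50 MB or 5 GB
        }
    }

    The fread() version has essentially the same memory profile; the choice is more about error-handling style than footprint.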

    Read the article

  • How to protect against GHC7 compiled programs taking all memory?

    - by Petr Pudlák
    When playing with various algorithms in Haskell, it often happens that I create a program with a memory leak, as so easily happens with lazy evaluation. A program taking all the memory isn't really fun; I often have difficulty killing it if I realize what is going on too late. When using GHC6, I simply had export GHCRTS='-M384m' in my .bashrc. But GHC7 added a security measure: unless a program is compiled with -rtsopts, it simply fails when given any RTS option, either as a command line argument or in GHCRTS. Unfortunately, almost no Haskell programs are compiled with this flag, so setting this variable makes everything fail (as I discovered in "After upgrading to GHC7, all programs suddenly fail saying 'Most RTS options are disabled. Link with -rtsopts to enable them.'"). Any ideas how to still make use of GHCRTS with GHC7, or another convenient way to prevent my programs from taking all the memory?

    Read the article

  • Using arrays to get the best memory alignment and cache use: is it necessary?

    - by Alberto Toglia
    I'm all about performance these days because I'm developing my first game engine. I'm no C++ expert, but after some research I discovered the importance of the cache and of memory alignment. Basically, what I found is that it is recommended to have memory well aligned, especially if you need to access the items together, for example in a loop. Now, in my project I'm building my game object manager, and I was thinking of having an array of GameObjects, meaning I would have the actual memory of my objects laid out one after the other: static const size_t MaxNumberGameObjects = 20; GameObject mGameObjects[MaxNumberGameObjects]; But, as I will be having a list of components per object (a component-based design: Mesh, RigidBody, Transformation, etc.), will I be gaining anything from the array at all? Anyway, I have seen some people just use a simple std::map for storing game objects. So what do you guys think? Am I better off using a pure component model?
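
    Judging by the usual advice, the cache win tends to come from keeping each component type in its own contiguous array and iterating one array at a time, rather than from packing whole GameObjects back to back. A sketch of that layout, with the component types and update loop invented for illustration:

    #include <cstddef>

    static const std::size_t MaxNumberGameObjects = 20;

    struct Transformation { float x, y, z; };
    struct RigidBody      { float vx, vy, vz, mass; };

    // One tightly packed array per component type (structure-of-arrays): a system
    // that only needs transforms and bodies walks two contiguous blocks and nothing else.
    struct GameObjectManager
    {
        Transformation transforms[MaxNumberGameObjects];
        RigidBody      bodies[MaxNumberGameObjects];

        void integrate(float dt)
        {
            for (std::size_t i = 0; i < MaxNumberGameObjects; ++i)
            {
                transforms[i].x += bodies[i].vx * dt;   // sequential, cache-friendly access
                transforms[i].y += bodies[i].vy * dt;
                transforms[i].z += bodies[i].vz * dt;
            }
        }
    };

    A std::map of separately allocated objects gives up exactly this contiguity, which is why it tends to be the slower choice for per-frame iteration.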

    Read the article

  • Under Windows CE, how can I check which RAM based DLLs are loaded in virtual memory space?

    - by Michal Drozdowicz
    I have a problem with loading a DLL under Windows Mobile 5.0. I'm pretty confident that it is caused by running out of the application's virtual memory (the 32 MB slot of the process, as explained in Windows CE .NET Advanced Memory Management). I'm looking for a way to actually make sure that this is the issue, and to check whether my efforts are bringing the expected results. Do you know of a way to inspect the contents of the application's virtual memory slot? Any applications that can help me with this task?

    Read the article

  • Why and how to avoid event handler memory leaks?

    - by gillyb
    Hey there, I just came to realize, by reading some questions and answers on StackOverflow, that adding event handlers using += in C# (or, I guess, other .NET languages) can cause common memory leaks... I have used event handlers like this many times in the past and never realized that they can cause, or have caused, memory leaks in my applications. How does this work (meaning, why does this actually cause a memory leak)? How can I fix this problem? Is using -= on the same event handler enough? Are there common design patterns or best practices for handling situations like this? Example: how am I supposed to handle an application that has many different threads using many different event handlers to raise several events on the UI? Are there any good and simple ways to monitor this efficiently in an already-built, large application? Thanks in advance!

    Read the article

  • Shared memory of same DLL in different 32 bit processes is sometimes different in a terminal session

    - by KBrusing
    We have a 32-bit application consisting of several processes. They communicate via shared memory in a DLL used by every process. The shared memory is built from global variables in C++ using "#pragma data_seg ("Shared")". When running this application, sometimes while starting a new process alongside an existing (first) process, we observe that the shared memory of the two processes is not the same: none of the newly started processes can communicate with the first process. After stopping all of our processes and restarting the application (with some processes) everything works fine. But sooner or later, after successfully starting and finishing new processes, the problem occurs again. Running on all other Windows versions, or in terminal sessions on Windows Server 2003, our application never had this problem. Is there any new "feature" in Windows Server 2008 that might disturb the harmony of our application?

    Read the article

  • How to manage memory on iOS with large CSV files?

    - by Pell000
    I'm new to iOS development, and I'm running into issues relating to memory management and my approach to dealing with large datasets. Right now I am loading the CSV files and storing the relevant data as objects in memory at app initialization. Some of the CSV files are larger than 1MB, and in total my app uses about 180MB of memory. This is obviously far too high a number (unless the info I found is wrong and this is acceptable; if so, please let me know). I feel as though there is a fundamental flaw in my approach: is there a way I can avoid storing the CSV files in the project itself? Or is there a kind of "lazy" loading I can do, so that I can simply look up info in the CSV file instead of loading all of its data at once? Any help would do. I think I need a new perspective on how to manage this more efficiently.

    Read the article

  • How to interpret output from Linux 'top' command?

    - by Ali
    Following a discussion made HERE about how PHP-FPM consumes memory, I have just found a problem with reading the memory figures in the top command. Here is a screenshot of my top just after restarting PHP-FPM. Everything is normal: about 20 PHP-FPM processes, each consuming 5.5MB of memory (0.3% of the total). Here is the aged server right before a restart of PHP-FPM (one day after the previous restart). Here we still have about 25 PHP-FPM processes, but with double the memory usage (10MB, i.e. 0.5% of the total). So the total memory used should be 600-700MB. Why, then, has 1.6GB of memory been used?

    Read the article

  • 'Memory read error': server hardware error?

    - by wss8848
    Hello, I got an error on my server, which is running CentOS 5.5:

    MCE 20
    HARDWARE ERROR. This is *NOT* a software problem!
    Please contact your hardware vendor
    CPU 1 BANK 8 TSC 6ab9ff9745f62 [at 2394 Mhz 9 days 1:50:52 uptime (unreliable)]
    MISC cf36ad0100081186 ADDR 203376500
    MCG status:
    MCi status:
    MCi_MISC register valid
    MCi_ADDR register valid
    MCA: MEMORY CONTROLLER RD_CHANNELunspecified_ERR
    Transaction: Memory read error
    STATUS 8c0000400001009f MCGSTATUS 0

    What is the matter? Is this a memory module error or a memory controller error?

    Read the article

  • Apache consuming large memory with Apache Mobile Filter

    - by VarunGupta
    I installed and configured the Apache Mobile Filter module for Apache to redirect users to the mobile version of our website, depending on their user agents, and I configured the module to use WURFL. But as soon as I start Apache it hogs a large amount of memory (without any incoming web requests): resident memory 300 to 400 MB, virtual memory 300 to 650 MB. Without this module, Apache was consuming much less memory (4 to 10 MB). What could be the reason?

    Read the article
