Search Results

Search found 12802 results on 513 pages for 'memory profiler'.

  • Help with strange memory behavior. Looking for leaks both in my brain and in my code.

    - by BastiBechtold
    I spent the last few days trying to find memory leaks in a program we are developing. First of all, I tried several leak detectors; after fixing a few issues, they no longer report any leaks. However, I am also monitoring my application with perfmon.exe, and Performance Monitor reports that 'Private Bytes' and 'Working Set - Private' rise steadily as the app is used. To me, this suggests that the program uses more and more memory the longer it runs. Internal resources seem to be stable, however, so this sounds like leaking to me.

    The program loads a DLL at runtime. I suspect that these leaks, or whatever they are, occur in that library and get purged when the library is unloaded, hence they won't get picked up by the leak detectors. I used both DevPartner BoundsChecker and Visual Leak Detector to look for memory leaks, and both supposedly catch leaks in DLLs. Also, the memory consumption increases in steps, and those steps roughly, but not exactly, coincide with certain GUI actions I perform in the application. If these were errors in our code, they should be triggered every single time the actions are performed, not just most of the time.

    Whenever I am confronted with so much strangeness, I begin to question my basic assumptions. So I turn to you, who know everything, for suggestions. Is there a flaw in my assumptions? How would you go about troubleshooting a problem like this?

    Edit: I am currently using Microsoft Visual C++ (x86) on 64-bit Windows 7.

    Edit 2: I just used IBM Purify to hunt for leaks. First of all, it lists a full 30% of the program as leaked memory, which cannot be true; I guess it is identifying the whole DLL as leaked, or something like that. However, if I search for new leaks every few actions, it reports leaks that correspond to the size increase reported by Performance Monitor. This could be a lead to a leak. Sadly, I am only using the trial version of Purify, so it won't show me the actual locations of those leaks. (These leaks only show up at runtime; when the program exits, no tool reports any leaks whatsoever.)
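    A minimal sketch of that "search for new leaks every few actions" workflow, assuming an MSVC debug build; note the CRT debug heap only sees allocations made through the CRT this code links against, so a DLL carrying its own statically linked CRT would not show up here:

        #define _CRTDBG_MAP_ALLOC
        #include <cstdlib>
        #include <crtdbg.h>

        int main()
        {
            // Report anything still allocated when the process exits.
            _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

            // Snapshot-diff: checkpoint before and after a suspect GUI
            // action and dump only the difference between the two states.
            _CrtMemState before, after, diff;
            _CrtMemCheckpoint(&before);
            void* suspect = malloc(128);   // stands in for the GUI action
            _CrtMemCheckpoint(&after);
            if (_CrtMemDifference(&diff, &before, &after))
                _CrtMemDumpStatistics(&diff);
            free(suspect);
            return 0;
        }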

    Read the article

  • Determining if Memory Pointer is Valid - C++

    - by Jim Fell
    It has been my observation that if free( ptr ) is called where ptr is not a valid pointer to system-allocated memory, an access violation occurs. Let's say that I call free like this: LPVOID ptr = (LPVOID)0x12345678; free( ptr ); This will most definitely cause an access violation. Is there a way to test that the memory location pointed to by ptr is valid system-allocated memory?

    It seems to me that the memory management part of the Windows OS kernel must know what memory has been allocated and what memory remains for allocation; otherwise, how could it know whether enough memory remains to satisfy a given request? (Rhetorical.) That said, it seems reasonable to conclude that there must be a function (or set of functions) that would allow a user to determine whether a pointer is valid system-allocated memory. Perhaps Microsoft has not made these functions public. If Microsoft has not provided such an API, I can only presume that it was for an intentional and specific reason. Would providing such a hook into the system pose a significant threat to system security?

    Situation report: although knowing whether a memory pointer is valid could be useful in many scenarios, this is my particular situation. I am writing a driver for a new piece of hardware that is to replace an existing piece of hardware that connects to the PC via USB. My mandate is to write the new driver such that calls to the existing API for the current driver continue to work in the PC applications that use it; thus the only required change to existing applications is to load the appropriate driver DLL(s) at startup. The problem here is that the existing driver uses a callback to send received serial messages to the application: a pointer to allocated memory containing the message is passed from the driver to the application via the callback, and it is then the application's responsibility to call another driver API to free the memory by passing that same pointer back to the driver. In this scenario the second API has no way to determine whether the application has actually passed back a pointer to valid memory. (A sketch of one possible workaround follows.)
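    There is no supported Windows API for asking whether an arbitrary pointer is a valid heap block (HeapValidate checks heap consistency but is not guaranteed safe on garbage pointers, and functions like IsBadReadPtr are deprecated as unreliable). A minimal sketch of one workaround for the driver scenario, assuming the replacement DLL can do its own bookkeeping: remember every pointer handed to the application, and have the free API reject anything it doesn't recognize. DriverAlloc/DriverFree are hypothetical names standing in for the real API:

        #include <cstdlib>
        #include <mutex>
        #include <unordered_set>

        // Hypothetical bookkeeping inside the driver DLL: every pointer
        // given to the application is recorded here.
        static std::mutex g_lock;
        static std::unordered_set<void*> g_live;

        void* DriverAlloc(std::size_t n)
        {
            void* p = std::malloc(n);
            if (p) {
                std::lock_guard<std::mutex> guard(g_lock);
                g_live.insert(p);
            }
            return p;
        }

        bool DriverFree(void* p)      // returns false instead of crashing
        {
            std::lock_guard<std::mutex> guard(g_lock);
            if (g_live.erase(p) == 0)
                return false;         // never allocated by us, or already freed
            std::free(p);
            return true;
        }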

    Read the article

  • Can Anything be Done to Make Improv (a 1993 Win 3.1 App) handle larger Files?

    - by user75185
    My very favorite spreadsheet is Improv, a 1993 Windows 3.1 application. It still puts Excel to shame for building spreadsheets and writing formulas. The only problem is that, because Improv was written when 1 MB of RAM was state of the art, it becomes unstable when working with larger spreadsheets and often crashes and/or corrupts the data file. I am working on a project that greatly exceeds Improv's limits. Although it will ultimately require more robust database capability, I could save a lot of critical time if I could delay that headache and continue working in Improv for now.

    To that end, I moved to the only product I could find that comes close, Quantrix, which is essentially Improv updated to handle large spreadsheets and utilize today's technologies. The problems with Quantrix are its speed (significantly slower than Improv) and its $1000 price (which I cannot afford). I have already had three 15-day extensions after the initial 30-day trial, so my time to use Quantrix as a bridge is at its end. Searches for Improv over the years have gotten me nowhere and, not surprisingly after reading some posts on this site, I got nothing for the money and time invested in finding a programmer to write code to "fix" this problem. Improv is freely available as "abandonware" at http://vetusware.com/download/LotusImprov2.1/?id=5797 , and the best background info can be found on Wikipedia and at "Moose's Greatest Software Products of All Time - Lotus Improv": http://moosevalley.fhost.com.au/mooses_review_page_lotus_improv.html

    It is critically urgent for me to focus on analyzing the data as soon as possible, and working in a stable Improv would, without question, be the fastest route. To that end, I am looking for answers to the following questions and anything else that might be helpful:
    1) Is it lawful to hire someone to fix Improv for my own use? If so,
    2) About how much should it cost?
    3) About how long should it take?
    4) What skills should I be looking for, and/or how should a post be worded?
    5) Is there a niche site where it should be posted?
    6) What questions can I ask to quickly screen candidates? Since I am not a programmer, I need questions whose answers leave no room to confuse me, whether intentionally or not. For example, what tools or players should someone with an acceptable competency level have knowledge of?

    Read the article

  • CLR Profiler Allocated Bytes and XNA ContentManager

    - by Vackup
    I've been fighting with the XNA ContentManager and memory allocations for some weeks because I'm trying to port my game from XNA (Windows) to ExEn / MonoTouch (iPhone). The problem is that after playing a few levels, my game exits unexpectedly on a real iPhone device (not the simulator).

    Profiling memory usage on Windows with CLR Profiler, I found some useful stuff, but I also found something I don't understand. If I use two ContentManagers (one for shared assets and one for level assets), "Allocated Bytes" grows and grows, level after level, but memory consumption as measured by Windows Task Manager stays constant (down when I unload the content manager and up again when I load content). Obviously, I call contentManager.Unload() when a level ends. After a few levels, my game exits unexpectedly on an iPhone device.

    If I use one ContentManager, "Allocated Bytes" in CLR Profiler stays constant on Windows and on the iPhone, and I can play the game normally without unexpected exits. I use the same assets level after level. It seems as if on iOS (iPhone), loading and unloading the same assets allocates memory until all device memory is consumed, at which point iOS kills the app. Can anybody explain how this really works? I've read quite a bit, but I still don't understand what's going on.

    Read the article

  • iOS6 MKMapView using a ton of memory, to the point of crashing the app, anyone else notice this?

    - by Jeremy Fox
    Has anyone else who is using maps in their iOS 6 apps noticed extremely high memory use, to the point of receiving memory warnings over and over until the app crashes? I've run the app through Instruments and I'm not seeing any leaks, and until the map view is created the app consistently runs at around ~3 MB Live Bytes. Once the map is created and the tiles are downloaded, Live Bytes jumps up to ~13 MB. Then, as I move the map around and zoom in and out, Live Bytes continues to climb until the app crashes at around ~40 MB. This is on an iPhone 4, by the way; on an iPod touch it crashes even earlier. I am reusing annotation views properly and nothing is leaking. Is anyone else seeing this same high memory usage with the new iOS 6 maps? And does anyone have a solution?

    Read the article

  • code throws std::bad_alloc, not enough memory or can it be a bug?

    - by Andreas
    I am parsing using a pretty large grammar (1.1 GB; it's data-oriented parsing). The parser I use (bitpar) is said to be optimized for highly ambiguous grammars. I'm getting this error:

        terminate called after throwing an instance of 'std::bad_alloc'
          what():  St9bad_alloc
        dotest.sh: line 11: 16686 Aborted    bitpar -p -b 1 -s top -u unknownwordsm -w pos.dfsa /tmp/gsyntax.pcfg /tmp/gsyntax.lex arbobanko.test arbobanko.results

    Is there hope? Does it mean that it has run out of memory? It uses about 15 GB before it crashes, and the machine I'm using has 32 GB of RAM, plus swap as well. It crashes before outputting a single parse tree. The parser is an efficient CYK chart parser using bit-vector representations; I presume it is already near the limit of memory efficiency. If it really requires too much memory I could sample from the grammar rules, but that would of course decrease parse accuracy.
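    For what it's worth, std::bad_alloc only means that a single allocation request was refused, which on Linux can happen well before RAM plus swap is exhausted (for example under a ulimit -v cap). A quick hedged check, assuming Linux:

        #include <sys/resource.h>
        #include <iostream>
        #include <new>
        #include <vector>

        int main()
        {
            // An RLIMIT_AS soft limit below ~15 GB would explain an abort
            // at that footprint even on a 32 GB machine.
            rlimit as{};
            getrlimit(RLIMIT_AS, &as);
            std::cout << "address-space limit: " << as.rlim_cur << " bytes\n";

            try {
                // Deliberately absurd request (64 TiB): refused up front
                // under typical overcommit settings, demonstrating that
                // bad_alloc does not necessarily mean "RAM is full".
                std::vector<char> huge(1ull << 46);
            } catch (const std::bad_alloc&) {
                std::cout << "bad_alloc: the request was refused\n";
            }
            return 0;
        }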

    Read the article

  • Why am I seeing Zero errors in non-ECC RAM?

    - by Alexander Shcheblikin
    According to sources, memory errors are a very probable event. Some say the probability of a DRAM error is 95% in just 3 days of operation of a computer with just 4 GB of RAM; others say 32% of servers experience at least one error in a month, with 8% of DIMMs being at fault. Contrary to those horrors, in my more than 10 years of personal computer use I have seen exactly zero memory errors. I admit I never paid special attention to the subject; however, I have ventured multi-hour memtest86 runs a couple of times and never saw an error either.

    Some of the factors that, in my opinion, should aggravate memory problems:
    - I build my computers out of the most "bulk commodity" parts: mainstream budget motherboards and the next-to-cheapest memory.
    - I usually max out the technology available, e.g. in the times of 32-bit OSes I used 4 GB of RAM, and with current desktop CPUs and the newer 64-bit OSes I use 32 GB of RAM.
    - Memory usage is moderately heavy, with lots of virtual machines up and running small and big tasks 24/7/365.

    But nevertheless, no memory-related problems have ever been found! How's that?

    Read the article

  • Apache memory leak with Subversion server

    - by bruce grissom
    Does anyone know of a way to fix the Apache memory leak that occurs in connection with Subversion server? We have a Windows Server 2003 machine running Apache to host Subversion. From day one, we have had memory leak issues and have not found a solution yet. All we do is monitor the server, and when memory use reaches near the maximum it can handle, we restart Apache.

    Read the article

  • Ubuntu 12.04 LTS vs Ubuntu 14.04 LTS memory usage

    - by geoffroy
    My droplet has 512 MB of memory and is running 64-bit Ubuntu 12.04 LTS with a Rails 4 application plus several workers. It runs well. I tried to deploy the same thing on a 64-bit Ubuntu 14.04 LTS droplet and ran into plenty of memory-related problems ("can't fork"). Does Ubuntu 14.04 LTS use significantly more RAM than Ubuntu 12.04 LTS? Is there something I should know to lower memory usage? Or should I stick with Ubuntu 12.04 LTS?

    Read the article

  • Memory usage on debian webserver keeps going up

    - by Steven De Groote
    My web server runs Apache 1.3.x for a PHP application, along with MySQL on the same machine. Most of the time it runs fine, with CPU usage still at a comfortable margin, but somehow memory usage keeps growing throughout uptime. While some of it appears to be reclaimed in chunks from time to time, I've had moments where the server went down because it was out of memory, and restarting Apache or MySQL only reduced memory usage by about 100 MB. Attached is an overview of monthly memory usage; the two massive drops are server restarts after out-of-memory situations. http://imageshack.us/photo/my-images/51/memorymonth.png/ Any explanation for this behaviour, or ideas on how I could solve it? Thanks! Steven

    Read the article

  • Simple tool to graph memory usage?

    - by dbr
    Is there a script that will show memory usage as a graph, for example as a pie chart with each process being a separate slice? I'm not looking for something like Munin to graph memory usage over time, but rather something to show the memory usage per process at a single point in time. To make my request even more obscure, it is for a headless server (so no X applications). The simplest approach would be to write a PNG file, or possibly an HTML file (which could use JavaScript to allow filtering of processes, changing between graph types, and so on).

    Read the article

  • Memory consumption of each accept() call on server running on Windows 2008 [migrated]

    - by Atul
    I've written a simple and small server application on Windows 2008 that just accepts connections and does nothing else. I am assessing the memory footprint of socket calls, and what I found is that each connection (after accept()) consumes at least 2.5 KB of memory. Interestingly, the memory is not consumed by the process making the accept() call but by an OS process; I believe this is because of data structures being created inside the OS for each connection. Now, I have two key questions (a sketch of the kind of test server I mean follows):
    1) Is it possible by any means to reduce this memory footprint (by changing parameters, configuration, etc.)? If yes, how? (2.5 KB for each connection would be too much if we plan for the server to accept millions of connections.)
    2) If my server is intended to accept a million connections, is Windows 2008 a good choice, or should I switch to some other OS? Please advise.
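    A minimal sketch of such a test server, assuming Winsock2; the per-connection cost likely lives in kernel structures (TCP control block plus socket buffers), which is why it shows up against a system process rather than this one:

        #include <winsock2.h>
        #pragma comment(lib, "ws2_32.lib")

        int main()
        {
            WSADATA wsa;
            if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

            SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            sockaddr_in addr = {};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = INADDR_ANY;
            addr.sin_port = htons(9000);          // arbitrary test port
            bind(listener, (sockaddr*)&addr, sizeof(addr));
            listen(listener, SOMAXCONN);

            for (;;) {
                SOCKET client = accept(listener, nullptr, nullptr);
                if (client == INVALID_SOCKET) break;
                // One common mitigation for very high connection counts is
                // shrinking the kernel socket buffers; zero is sometimes
                // used with overlapped I/O so the stack buffers nothing.
                int size = 0;
                setsockopt(client, SOL_SOCKET, SO_RCVBUF,
                           (const char*)&size, sizeof(size));
                setsockopt(client, SOL_SOCKET, SO_SNDBUF,
                           (const char*)&size, sizeof(size));
            }
            WSACleanup();
            return 0;
        }

    Whether that meaningfully reduces the ~2.5 KB per-connection floor is something to measure rather than assume.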

    Read the article

  • Tool for viewing used and free memory on windows system

    - by patrick
    I'm looking for a tool that will allow me to see all of the memory used by my Windows XP/2003 machine. I know that Process Explorer and others will show the memory used by each process, but I need a full graphical view of all used and unused memory on the machine. I have seen an app that does this but cannot remember its name. Does anyone know of a tool like this?

    Read the article

  • Mac has become insanely slow: Processes SystemUIServer, UserEventAgent and loginwindow using a lot of memory

    - by SatheeshJM
    I have been using my Mac for many months without any problem, but recently, all of a sudden, it became insanely slow. I opened Activity Monitor to see what was happening: for three processes, SystemUIServer, UserEventAgent and loginwindow, memory gradually increases, reaching up to 2 GB for each process. This completely hangs my Mac. I have tried the following:
    1. Restarting the Mac
    2. Restarting the Mac in safe mode
    3. Manually killing the processes
    4. Removing Date and Time from the menu bar (according to many users, this was supposed to fix the SystemUIServer process's memory use)
    5. Removing the externally connected keyboard and mouse (some had suggested this for UserEventAgent's memory use)
    No luck with any of those; the moment I log in, the memory spikes up. Any idea what the hell is happening? Please help.

    Read the article

  • Slow transfer with memory stick (819 kb/s)

    - by Nrew
    What can I do to optimize the file transfer rate of a Memory Stick Duo? The transfer was not this slow when the stick was new. Can reformatting give new life to a memory stick? It takes about 20 minutes just to transfer 1 GB of files from the computer to the memory stick. The computer is decent enough: a 2.50 GHz processor and 2 GB of RAM.

    Read the article

  • windows xp blue screen dumping physical memory

    - by dotnet-practitioner
    I get the following blue screen after running my laptop for an hour:

        A problem has been detected and Windows has been shut down to prevent damage to your computer. If this is the first time you've seen this stop error screen, restart your computer. If this screen appears again, follow these steps: Check to be sure you have adequate disk space. If a driver is identified in the stop message, disable the driver or check with the manufacturer for driver updates. Try changing video adapters. Check with your hardware vendor for any BIOS updates. Disable BIOS memory options such as caching or shadowing. If you need to use safe mode to remove or disable components, restart your computer, press F8 to select advanced startup options, and then select safe mode.

        Technical Information:
        *** STOP: 0x0000008E (0xc0000005, 0x805B03F5, 0xF703DC7C, 0x00000000)

        Beginning dump of physical memory
        Physical memory dump complete.
        Contact your system administrator or technical support group for further assistance.

    So, if this is faulty memory, where could I buy RAM for the following laptop: TOSHIBA SATELLITE A45-S250? My local Fry's store does not carry memory for this laptop.

    Read the article

  • Macbook memory upgrade question

    - by James Evans
    I've read some conflicting articles on MacBooks and memory upgrades. Some say you have to buy "special" Mac memory (bulls$%t), others say memory from manufacturers like Patriot and OCZ will work fine. My MacBook (non-Pro) is about six months old and has 2 GB of memory (SO-DIMM, 1066 MHz DDR3). Does anyone have any definitive information on what will work? Thanks!

    Read the article

  • Reducing apache VIRT and RES memory usage

    - by lisa
    On a quad-core server with 8 GB of RAM, I have Apache processes that use up to 2.3 GB RES and 2.6 GB VIRT memory. Here is a copy of the top -c output: http://imgur.com/x8Lq9.png Is there a way to reduce the memory usage of these Apache processes? These are my httpd.conf settings:

        Timeout 160
        TraceEnable Off
        ServerSignature Off
        ServerTokens ProductOnly
        FileETag None
        StartServers 6
        <IfModule prefork.c>
            MinSpareServers 4
            MaxSpareServers 16
        </IfModule>
        ServerLimit 400
        MaxClients 320
        MaxRequestsPerChild 10000
        KeepAlive On
        KeepAliveTimeout 4
        MaxKeepAliveRequests 80

    Read the article

  • virtual memory committed

    - by vinu
    After a server bounce, and after a period of around 40-45 days, we receive continuous "Committed Virtual Memory" alerts indicating swap-space usage in the magnitude of 4 GB. This also causes the application to perform very slowly and to experience a number of stalled transactions.

    Server setup: 4 Tomcat servers (version 7.0.22) that are load-balanced (not clustered) by 2 Apache servers; the Apache servers themselves serve static content and route requests to the 4 Tomcat servers.

    Java runtime version:
        java version "1.6.0_30"
        Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
        Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)

    Memory startup parameters:
        MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"

    Monitoring: Wily monitoring is available on all the production servers; it monitors key server parameters and sends out configurable alert emails based on predefined settings. Note: each of the servers also hosts two other separate Tomcat domains that run different applications.

    What has been investigated so far:
    - There is no heap memory leak, and GC runs fine without any issues over any period of time.
    - The current busy thread count corresponds directly to application usage: weekends and nights have fewer threads than business hours.
    - ThreadLocal uses a WeakReference internally. If the ThreadLocal itself is not strongly referenced, it will be garbage-collected, even though various threads have values stored via that ThreadLocal. Additionally, ThreadLocal values are actually stored in the Thread; if a thread dies, all of the values associated with that thread through a ThreadLocal are collected. If you have a ThreadLocal as a final class member, that's a strong reference, and it cannot be collected until the class is unloaded; but that is how any class member works and isn't considered a memory leak. The cited problem only comes into play when the value stored in a ThreadLocal strongly references that ThreadLocal itself (a circular reference of sorts). In our case the value (a SimpleDateFormat) has no backward reference to the ThreadLocal, so there is no memory leak in this code.

    Can anyone please let me know what could be the cause of this, and what should be monitored?

    Read the article

  • SQL Server performance, virtual memory usage

    - by user45641
    Hello, I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory used greatly exceeds the amount of physical memory available. Currently, physical memory is 10 GB (10238 MB), whereas the virtual memory returned is significantly more: 8388607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed.

        USE [master];
        GO
        SELECT cpu_count
             , hyperthread_ratio
             , physical_memory_in_bytes / 1048576 AS 'mem_MB'
             , virtual_memory_in_bytes / 1048576 AS 'virtual_mem_MB'
             , max_workers_count
             , os_error_mode
             , os_priority_class
        FROM sys.dm_os_sys_info;

    Read the article

  • How to get the installed Memory Type

    - by balexandre
    Windows 7 could be better at this: it tells you everything about the computer's CPU but only the amount of memory. Microsoft should add information about DDR type, speed, and maybe CAS latency as well. Until that happens, what is the best and easiest way to check the installed memory so we can buy an upgrade? I was thinking of a simple utility; I don't want to install the full SiSoft Sandra, for example, just something small that covers only the memory part.

    Read the article

  • Reporting memory usage per process/program

    - by Nick Retallack
    How can I get the current memory usage (preferably in bytes, so the numbers can be added up accurately) for all running processes individually? Can I roll the summaries for child processes up into the process that spawned them (e.g., all Apache processes together)? Sometimes my server runs out of memory and becomes unresponsive, and I want to discover what is using up all the memory. Unfortunately, it's likely not to be a single process: some programs spawn hundreds of processes, each using very little memory, but it adds up. On a side note, is it normal for Apache to spawn 200+ processes? (A rough sketch of one approach follows.)
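    A rough sketch of the data-collection half, assuming Linux; it groups by command name as an approximation of rolling children up into their parent (true parentage would need the PPid field from the same file):

        #include <dirent.h>
        #include <cctype>
        #include <fstream>
        #include <iostream>
        #include <map>
        #include <string>

        int main()
        {
            // Resident-set bytes per command name, children rolled up by name.
            std::map<std::string, long long> rssByName;

            DIR* proc = opendir("/proc");
            if (!proc) return 1;
            for (dirent* e; (e = readdir(proc)) != nullptr; ) {
                std::string pid = e->d_name;
                if (!isdigit((unsigned char)pid[0])) continue;

                std::ifstream status("/proc/" + pid + "/status");
                std::string key, name;
                long long rssKb = 0;
                while (status >> key) {
                    if (key == "Name:") status >> name;  // assumes one-token names
                    else if (key == "VmRSS:") { status >> rssKb; break; }
                    else status.ignore(4096, '\n');
                }
                rssByName[name] += rssKb;   // kernel threads report no VmRSS -> 0
            }
            closedir(proc);

            // Plain-text output; pipe through `sort -n` for a ranked view.
            for (const auto& kv : rssByName)
                std::cout << kv.second * 1024 << " bytes\t" << kv.first << "\n";
            return 0;
        }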

    Read the article

  • Google Chrome is running my system out of memory

    - by jasondavis
    I am running Windows 7 x64 with 12 GB of RAM. I often have multiple windows and a ton of tabs open, and I use the extension Session Buddy to restore all my windows and tabs once memory use gets too high. My 12 GB of RAM will get up to around 93% used because of Chrome; I can then close Chrome, restore the same set of windows and tabs, and it will only use about 25% of memory, then climb back up to the 90% zone over several hours. It seems that when I close tabs, the memory is not freed, which is why usage keeps increasing as tabs are opened and closed; it just adds up, and that sounds like a huge bug in Chrome. Just as an example: I re-booted my system and opened 1 window with 4 tabs, and Task Manager showed 29 chrome.exe processes. I then killed all Chrome processes and opened a Chrome window with just 1 tab, and that produced 27 chrome.exe processes. Is this an issue that others have? More importantly, is there a fix?

    UPDATE: I just read that each plugin and extension creates a chrome.exe process. I then counted 24 extensions, which helps explain a portion of the many processes. I'm still not sure why memory isn't being freed up, though!

    Read the article

  • Fresh install CentOS 6.4 64b with directadmin slowly consumes all memory and crashes

    - by Coen Ponsen
    Dear Server Fault community, this is my first question on Server Fault. I'm new to server (mis)configuration, so please forgive me for asking something stupid :) I'm running DirectAdmin on a CentOS 6.4 64-bit virtual machine with 4 GB of memory and over 10000Gh. I migrated my websites because my former VPS couldn't keep up anymore; only half of the websites from that 1 GB machine have been migrated yet. So the migration is still in progress, and already my server crashes every day. Server performance up until that moment is perfect, and the DirectAdmin log files show nothing out of the ordinary. Yesterday only the MySQL server crashed, but it has also crashed the entire machine before. Memory usage in DirectAdmin seems normal:

        directadmin (pid 3923 22158 22159 22160 22161 22162)   8.75 MB
        dovecot     (pid 3851)                                 47.8 MB
        exim        (pid 1350)                                 1.29 MB
        httpd       (pid 21525 21528 21529 21530 21531 21532
                     21546 21571 21742 21743 21744)            490.4 MB
        mysqld      (pid 1299)                                 287.8 MB
        named       (pid 3807)                                 16.3 MB
        proftpd     (pid 1481)                                 1.91 MB
        sshd        (pid 1173 21494)                           5.16 MB

    Restarting services immediately frees up memory, but usage slowly increases again over time (about 24 hours to a crash). The commands

        sync
        echo 3 > /proc/sys/vm/drop_caches

    free all memory correctly. I could just create a cron job for that, but it seems the wrong way around to me; I can't seem to pinpoint the cause. Any advice, references or tips are highly appreciated! Greetings, Coen

    Edit: free -m after drop_caches:

                     total       used       free     shared    buffers     cached
        Mem:          3830        735       3095          0          0         21
        -/+ buffers/cache:        712       3117
        Swap:          991          0        991

    I'll post another one this evening.

    Read the article
