Search Results

Search found 25996 results on 1040 pages for 'memory address'.


  • What are private bytes, virtual bytes, and working set?

    - by Devil Jin
    I am using the perfmon Windows utility to debug a memory leak in a process. Perfmon's explanations:

    Working Set: the current size, in bytes, of the working set of this process. The working set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from working sets; if they are needed, they will be soft-faulted back into the working set before leaving main memory.

    Virtual Bytes: the current size, in bytes, of the virtual address space the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite, and by using too much of it, the process can limit its ability to load libraries.

    Private Bytes: the current size, in bytes, of memory that this process has allocated that cannot be shared with other processes.

    Q1. Is private bytes the counter I should measure to check whether the process is leaking, since it does not involve any shared libraries, so any leak will be coming from the process itself?
    Q2. What is the total memory consumed by the process? Is it the virtual bytes size, or the sum of virtual bytes and working set?
    Q3. Is there any relation between private bytes, working set, and virtual bytes?
    Q4. Is there any tool that gives a better picture of memory usage?
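
    A quick way to watch the same numbers from code is PSAPI's GetProcessMemoryInfo; the sketch below (my illustration, not from the question) uses PROCESS_MEMORY_COUNTERS_EX, whose PrivateUsage field corresponds to perfmon's Private Bytes. For leak hunting, a steadily rising private bytes figure alongside a flat working set is usually the signature to watch.

        #include <windows.h>
        #include <psapi.h>    // link with psapi.lib
        #include <cstdio>

        int main() {
            // PROCESS_MEMORY_COUNTERS_EX adds PrivateUsage, which corresponds
            // to perfmon's "Private Bytes"; WorkingSetSize is "Working Set".
            PROCESS_MEMORY_COUNTERS_EX pmc = {};
            pmc.cb = sizeof(pmc);
            if (GetProcessMemoryInfo(GetCurrentProcess(),
                                     (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc))) {
                printf("working set:   %zu bytes\n", pmc.WorkingSetSize);
                printf("private bytes: %zu bytes\n", pmc.PrivateUsage);
                printf("pagefile use:  %zu bytes\n", pmc.PagefileUsage);
            }
            return 0;
        }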


  • Information about PTEs (page table entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4 KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes.

    Problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes:

    - Since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved if I made a 64-bit executable (didn't try it yet).
    - Even when I only need a few hundred megabytes, the allocations fail at a certain moment.

    The second problem is the bigger one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process.

    My questions (or a cry for tips):

    - Where can I find information about the maximum number of PTEs in a process?
    - Is this different (higher) for 64-bit systems/applications or not?
    - Can the number of PTEs be configured in the application or in Windows?

    Thanks, Patrick

    PS. A note for those who will try to argue that you shouldn't write your own memory manager:

    - My application is rather specific, so I really want full control over memory management (can't give any more details).
    - Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ runtime (it only said "block corrupt" minutes after the actual corruption).
    - We also tried standard Windows utilities (like GFLAGS, ...), but they slowed the application down by a factor of 100 and couldn't find the exact position of the overwrite either.
    - We also tried the "Full Page Heap" functionality of Application Verifier, but with it the application doesn't start up either (probably also running out of PTEs).
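
    For reference, a minimal sketch of the guard-page scheme described above (my reconstruction, not the poster's actual allocator): each allocation gets its own committed page(s) plus a trailing PAGE_NOACCESS page, so an overrun faults at the offending instruction. One aside that may matter for the address-space exhaustion: each separate VirtualAlloc reservation is rounded up to the 64 KB allocation granularity, so every "one page" allocation actually consumes 64 KB of the 2 GB address space.

        #include <windows.h>
        #include <cstdint>

        // One data page (or more) followed by an inaccessible guard page: a
        // write past the end of the block faults immediately. Note that each
        // VirtualAlloc reservation is rounded to the 64 KB allocation
        // granularity, so address space is consumed far faster than RAM.
        void* guarded_alloc(size_t size) {
            SYSTEM_INFO si;
            GetSystemInfo(&si);
            const size_t page = si.dwPageSize;                    // typically 4096
            const size_t data = (size + page - 1) & ~(page - 1);  // round up to pages
            BYTE* base = (BYTE*)VirtualAlloc(NULL, data + page,
                                             MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (!base) return NULL;
            DWORD old;
            VirtualProtect(base + data, page, PAGE_NOACCESS, &old); // trailing guard
            return base + data - size;  // put the block flush against the guard page
        }

        void guarded_free(void* p) {
            // Sketch only: valid for blocks under 64 KB, where the reservation
            // base is the enclosing 64 KB boundary. A real allocator would
            // record the base address in a header instead.
            VirtualFree((void*)((uintptr_t)p & ~(uintptr_t)0xFFFF), 0, MEM_RELEASE);
        }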


  • DDR3 10600 memory running at 533 MHz

    - by Elger
    I bought a second-hand HP DL160 G6 a year ago with 48 GB configured in it. Checking with CPU-Z (and double-checking with Speccy), I discovered the memory runs at only 533 MHz. I checked the configuration with the HP memory configurator and the banks are populated correctly for maximum performance. There are 12 banks populated with Micron and Hynix 4 GB modules, all capable of running at 1333 MHz. What could be wrong here?

      C:\Users\Administrator>wmic memorychip get manufacturer, partnumber, speed, serialnumber, devicelocator, banklabel
      BankLabel  DeviceLocator   Manufacturer  PartNumber         SerialNumber  Speed
      BANK0      PROC 1 DIMM 3A  Micron        36JSZF51272PZ1G4F  C5DF65D7      1333
      BANK1      PROC 1 DIMM 2D  Micron        36JSZF51272PY1G4D  951565E0      1333
      BANK3      PROC 1 DIMM 6B  Micron        36JSZF51272PZ1G4F  3F3160D6      1333
      BANK4      PROC 1 DIMM 5E  Hyundai       HMT151R7BFR4C-H9   E28A3014      1333
      BANK6      PROC 1 DIMM 9C  Micron        36JSZF51272PZ1G4F  26DF7E1A      1333
      BANK7      PROC 1 DIMM 8F  Micron        36JSZF51272PZ1G4G  77FC67D7      1333
      BANK9      PROC 2 DIMM 3A  Hyundai       HMT151R7BFR4C-H9   FB763433      1333
      BANK10     PROC 2 DIMM 2D  Hyundai       HMT151R7BFR4C-H9   E18AA014      1333
      BANK12     PROC 2 DIMM 6B  Hyundai       HMT151R7BFR4C-H9   DF8A1014      1333
      BANK13     PROC 2 DIMM 5E  Hyundai       HMT151R7BFR4C-H9   6968511A      1333
      BANK15     PROC 2 DIMM 9C  Hyundai       HMT151R7BFR4C-H9   F28A7014      1333
      BANK16     PROC 2 DIMM 8F  Micron        36JSZF51272PZ1G4G  76FC67D7      1333


  • lsass.exe memory leak on Windows Server 2003

    - by thelsdj
    In the past month or so I noticed that lsass.exe has started to leak memory, reaching 500 MB+ of RAM in under a week after a reboot. Before this I never noticed it using a significant amount of memory compared to other processes on the system. This is happening on two identical servers, neither of which has anything to do with Active Directory. Could a recent Windows Update have caused this? Any thoughts on things to check? As a side question, is there some way to recycle the memory usage of lsass.exe without rebooting? Edit: In Process Monitor I see thousands of registry open/query/close operations a minute from lsass.exe. How can I track down what is triggering these?


  • Apache out of memory

    - by Sherif Buzz
    Hi all, I have a VPS with 768 MB RAM and a 1.13 GHz processor. I run a PHP/MySQL dating site; the performance is excellent and server load is generally very low. Sometimes I place ads on Facebook, and at peak times I can get 100-150 clicks within a few seconds. This causes the server to run out of memory:

      Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp ....

    and all users receive an error 500 page. I am just wondering if this sounds reasonable or not; to me, 100-150 clicks does not seem like a number that should cause Apache to run out of memory. Any advice or recommendations on how to diagnose the issue are highly appreciated.
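
    One way to sanity-check whether 100-150 near-simultaneous clicks is reasonable: each in-flight request needs its own suphp/PHP child, so peak concurrency times per-child memory must fit in RAM. A back-of-the-envelope sketch (all figures besides the 768 MB are assumptions for illustration, not measured values):

        #include <cstdio>

        int main() {
            // Assumed figures for illustration only.
            const double ram_mb       = 768;  // VPS total RAM
            const double os_mysql_mb  = 300;  // OS + MySQL + misc (assumed)
            const double per_child_mb = 25;   // typical PHP (suphp) child RSS (assumed)

            // How many concurrent PHP children fit before fork failures / OOM?
            int max_children = (int)((ram_mb - os_mysql_mb) / per_child_mb);
            printf("max concurrent PHP children ~ %d\n", max_children);  // ~18
            // 100-150 near-simultaneous clicks ask for 100+ children,
            // far beyond what fits -- hence the allocation failures.
            return 0;
        }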


  • Load-balanced ASP.NET websites and required memory usage

    - by Matt
    Each of my servers has 8 GB RAM and the memory usage hovers around 7 GB. I have a load balancer available to me, but at the moment I'm worried that putting my sites through it will cause the platform to fall over. The load balancer would be configured with a sticky round-robin: a new connection is round-robin, but subsequent connections from the same source IP remain on the same server (until a limit is reached). That's all standard stuff. How do I know what memory usage my sites will need across the platform when I put them through the load balancer? Rather than knowing that a site is using 150 MB on a particular server, I could face a situation where 150 MB is taken up on each of the servers. I know that with only 1 GB free I could have a serious problem on my hands. If I free up some memory, how can I work out how much I need to keep free to prevent this from happening? Thanks, Matt
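
    The pessimistic bound is straightforward: with sticky round-robin, every site can eventually be warm on every server, so each server needs headroom for the sum of all the sites' footprints, not just the ones it currently hosts. A sketch of that worst-case check (site sizes are hypothetical, echoing the 150 MB example above):

        #include <cstdio>

        int main() {
            // Hypothetical per-site working sets in MB, measured on one server.
            const double site_mb[] = {150, 300, 80, 220};
            const double headroom_mb = 1024;   // free RAM per server today (assumed)

            // Worst case under sticky round-robin: every site warms up on every
            // server, so each box needs room for the *sum* of all footprints.
            double worst = 0;
            for (double s : site_mb) worst += s;
            printf("worst-case extra per server: %.0f MB (headroom: %.0f MB)\n",
                   worst, headroom_mb);
            if (worst > headroom_mb)
                printf("not enough headroom: servers would be pushed into paging\n");
            return 0;
        }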


  • SBS 2003 crashes often due to limited memory

    - by Sanoj
    I have a Windows SBS 2003 Std server that crashes regularly, about every 20 days. The only thing I can see in the logs is that used memory increases by about 30 MB/day. The process that uses more and more memory is sqlservr. We don't have much installed on the server: a point-of-sale system that uses Pervasive SQL as its database, and an accounting application. We have just 2 GB of RAM. I could upgrade to 4 GB, but I think that would just delay the problem. Is there any solution to this? Can I limit sqlservr to a fixed amount of memory?


  • My server is running out of memory, despite having all swap free

    - by Biohazard
    I am using Debian 6 (Squeeze). The server has 4 GB of memory and 8 GB of swap. I'm starting to get memory allocation errors at times of high application load, but according to the top command:

      Mem:  4055944k total, 3915436k used, 140508k free, 10444k buffers
      Swap: 7999480k total, 0k used, 7999480k free, 3604496k cached

    The system isn't even trying to use the swap? Why would this be happening? I would like to upgrade the primary memory, but that isn't possible right now. Thanks.


  • How to avoid memory "Hard Faults/sec"

    - by Flavio Oliveira
    I have a problem on my Windows 2008 Server x64 machine, and I cannot understand how to solve it. Looking at Resource Monitor, I see about 100 to 200 hard faults/sec, and generally the machine is slow. From what I've read, a hard fault is caused by a memory page that is no longer available in physical memory, which forces an I/O (disk) operation, and that is a problem. The hardware is an Intel Core 2 Duo E8400 (3.0 GHz) with 6 GB RAM running Windows Server Web 64-bit. The machine currently has about 2 GB of RAM in use, with 4 GB available, so why does it require this high level of disk operations? What can I do to increase performance? Am I experiencing a memory issue? What should be my starting point?


  • iMac invalid memory access and then crash?

    - by Sam McAfee
    My iMac G5 from a few years ago (the square white one) now dies with a message about an invalid memory access. It goes straight to an Open Firmware prompt and doesn't load Mac OS at all. Am I correct in guessing that the memory is probably the issue, and that if I replace it we should be able to boot again as normal? That's my gut feeling anyway, so I ordered more memory and plan to swap it today. Any thoughts? Ideas?


  • Memory upgrade - is the site reliable?

    - by Yuval
    Hi, I have a late-2008 unibody MacBook with 2 GB of RAM, and I'm looking to upgrade to 4 GB. About a month ago I looked at Other World Computing's 4 GB upgrade kit and remember it being around $80. When I looked today, finally ready to buy, it had gone up to almost $100. I found another site, memoryupgrade.pro, which calls itself "Pro memory upgrade" and looks legitimate; it sells its own brand of memory for around $80. The only thing is, I haven't been able to find any reviews of it, and I'm not sure it's actually reliable. Does anybody have any experience with this site? Does anybody have any other suggestions for buying MacBook memory? I have friends who bought from OWC and were happy; should I just spend the extra $30 (including shipping) and buy from them? Thanks!


  • 32-bit application memory usage on 64-bit Windows 7

    - by Brian
    I have an early-2012 MacBook Pro with an Intel i7 and 16 GB of RAM, running Windows 7 Professional 64-bit via Boot Camp. I work in geographical information systems as a programmer, so most of the applications I run are 32-bit, but they tend to use a lot of resources (ArcGIS, SQL Server Express, Visual Studio, etc.). I have noticed that when I have multiple instances of the same 32-bit application (or different 32-bit applications) all working on hefty processing tasks, I still top out at about 30% memory use. I understand that a 32-bit application is limited to less than 4 GB of RAM, but I assumed that one instance could use its own 4 GB while another instance uses another 4 GB, taking full advantage of all the memory I have installed. Can anyone explain how this works, and how I can get my applications to take advantage of all my memory by running multiple instances?
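
    For what it's worth, each 32-bit process on 64-bit Windows gets its own private virtual address space (2 GB by default under WOW64, 4 GB if the executable is linked with /LARGEADDRESSAWARE), so separate instances do not share one 4 GB pool. A small sketch a 32-bit process can run to see its own ceiling:

        #include <windows.h>
        #include <cstdio>

        int main() {
            MEMORYSTATUSEX ms = {};
            ms.dwLength = sizeof(ms);
            GlobalMemoryStatusEx(&ms);
            // ullTotalVirtual is this process's user-mode address space:
            // ~2 GB for a plain 32-bit binary on WOW64, ~4 GB if linked
            // with /LARGEADDRESSAWARE, vastly more for a 64-bit build.
            printf("virtual address space: %.1f GB\n",
                   ms.ullTotalVirtual / (1024.0 * 1024.0 * 1024.0));
            printf("physical RAM:          %.1f GB\n",
                   ms.ullTotalPhys / (1024.0 * 1024.0 * 1024.0));
            return 0;
        }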


  • Memory is free, but still swapping?

    - by japancheese
    Hello, I'm sure this is a pretty basic question, but I'm just trying to get a grasp of what's going on with my Ubuntu (Hardy Heron) server, which runs a Rails-based site. It seems that I have free memory available, yet the system reports that it is still swapping (unless I'm reading this incorrectly?). Here is the "free -m" output:

                   total   used   free   shared   buffers   cached
      Mem:          1024    905    118        0        33      409
      -/+ buffers/cache:    462    561
      Swap:         2047     95   1952

    Could anyone explain some possible reasons why it maintains 95 MB of swap at all times (it is never less)? I'm just looking for some leads on things I could check that would explain exactly how memory is utilized in Linux.
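
    The "-/+ buffers/cache" line is usually the key: buffers and page cache count as "used" in the first line but are reclaimable, so real availability here is the 561 MB figure, and a small constant swap number generally just means pages swapped out under earlier pressure haven't been touched since. A minimal sketch of the same arithmetic straight from /proc/meminfo (field names as on older kernels; newer ones also expose MemAvailable directly):

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>
        #include <fstream>
        #include <string>

        // Read one kB-valued field from /proc/meminfo, e.g. "MemFree:".
        static long meminfo_kb(const char* key) {
            std::ifstream f("/proc/meminfo");
            for (std::string line; std::getline(f, line); )
                if (line.compare(0, strlen(key), key) == 0)
                    return atol(line.c_str() + strlen(key));  // atol skips spaces
            return 0;
        }

        int main() {
            long free_kb    = meminfo_kb("MemFree:");
            long buffers_kb = meminfo_kb("Buffers:");
            long cached_kb  = meminfo_kb("Cached:");
            // Same arithmetic as free's "-/+ buffers/cache" line: buffers and
            // page cache are reclaimed on demand, so they count as available.
            printf("really available: %ld MB\n",
                   (free_kb + buffers_kb + cached_kb) / 1024);
            return 0;
        }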


  • Memory compatibility?

    - by nvillec
    I'm in the process of building a PC intended mostly for gaming, and I've got a few questions. Currently I only have the motherboard and CPU:

      Motherboard: GIGABYTE GA-Z68AP-D3
      CPU: Intel Core i3-2120

    My main question is about memory compatibility. I have four Hynix 2 GB HMT125U7BFR8C-G7 modules with light-to-moderate use lying around, and I'd love to save a few bucks if I can. I've read that this is server memory...

    a) Will that be a problem for PC use?
    b) Is it compatible with the motherboard?

    I've emailed Hynix and checked Crucial, to no avail. If it's incompatible, what memory would be a good fit given the components I have? The motherboard has 4 sockets and supports up to 32 GB, but I don't know that I have the budget for that at the moment. Thanks!!


  • How to debug high memory usage by the registry?

    - by bkr
    I have a Windows 2008 R2 server running ADFS 2.0 and some web apps that is having low-memory issues. Digging around, I found that 5.5 GB was being used as paged kernel memory. Further digging with Poolmon, I discovered that the majority of that (5 GB+) was being used by the CM pool tag (configuration manager), also known as the registry. I'm really not sure how to tell why the registry is using so much memory, however, or how to release it. Looking at the physical registry files, they don't appear that large.

    Edit #1: Using the PowerShell script at http://jdhitsolutions.com/blog/2011/05/get-registry-size-and-age/ confirms what I saw looking at the physical registry files: they're relatively small.

      Computername : (removed)
      Status       : OK
      CurrentSize  : 67
      MaximumSize  : 2048
      FreeSize     : 1981
      PercentFree  : 96.728515625
      Created      : 4/1/2011 11:38:02 AM
      Age          : 454.23:41:28.2540682


  • Force Firefox 14 to free memory when opening/closing lots of popups

    - by aknghiem
    I'm currently trying to run some tests on a web application using Selenium IDE with Firefox 14. The tests mainly consist of loading a page containing thousands of links and clicking on each of those links. Each time a popup shows, I tell Selenium to close it and proceed with the remaining links. However, it seems that even though I close the popups, Firefox is not freeing memory. Usually Firefox ends up crashing after about 1500 popups (around 2.5 GB of memory usage). Is there any way to force the browser to free memory? Maybe something I should set in about:config? Or is there a flaw in Selenium? Thanks.


  • What about IPv4 class E?

    - by Luc
    IPv4's class E network (240.0.0.0/4) contains 268 million addresses. Despite the advertisements for IPv6 claiming we have run out of address space, this block, ironically, is still marked "Reserved for future use". Why hasn't it been freed up yet? Of course IPv6 should be promoted rather than freeing up more IPv4 addresses, but we've seen the address shortage coming for years; there was even a time when it wasn't certain IPv6 could be developed before we ran out of addresses. Why didn't they free up this block already? And is there any chance these addresses will be used in the future, for example when IPv6 is fairly widely implemented but we still need IPv4 for backwards compatibility? IPv4 will be phased out regardless, but then ISPs wouldn't have to employ NAT for IPv4 compatibility.


  • IIS not using available memory?

    - by Herb Caudill
    We recently launched an ASP.NET site running on a single 32-bit Windows Server 2003 box (SQL on a separate server). The server has 4 GB installed, 3 GB of it available. According to Task Manager, the w3wp.exe process is only using between 200-600 MB. The site has tens of thousands of pages and makes heavy use of page output caching, so I would expect it to use a lot more of the available memory. The app pool isn't set to throttle memory usage. Is there anything else that might be limiting the amount of memory that IIS takes?


  • Non-ECC memory with ZFS: a stupid idea?

    - by iconoclast
    I'm the proud new owner of an HP ProLiant MicroServer N40L, and I'm planning to upgrade the (obviously paltry) 2 GB of memory to the maximum of 16 GB. (Theoretically 8 GB is the limit, but 16 GB has been shown empirically to work.) Some guides advise that ECC memory is not that important, but I'm not so sure I believe them. I've installed FreeNAS and am planning to add ZFS volumes as soon as my new hard drives arrive. Would it be stupid to skimp and get non-ECC memory for a ZFS-based NAS? If it's necessary, I'll bite the bullet, but if it's just paranoia, I'll probably skip it.


  • /dev/shm (shared memory) on Linux

    - by Kirzilla
    Hello. Let's imagine that we have 8 GB of RAM on a server, and I mount /dev/shm with a 4 GB size:

      mount -o remount,size=4G /dev/shm

    Will this memory be strictly reserved for shared memory, or, while /dev/shm is empty, can it be used by regular applications (web server, PHP, etc.)? PS: Sorry for my English. I'm asking because I just checked df -h and found

      tmpfs 6.0G 0 6.0G 0% /dev/shm

    on an 8 GB RAM server. I don't know who set this up, but it seems awful to me. Thank you!
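
    As far as I know, the tmpfs size is a ceiling rather than a reservation: pages are only consumed as files are actually written, so an empty /dev/shm costs essentially nothing. A small sketch that demonstrates this with POSIX shared memory (the /demo name is arbitrary; link with -lrt on older glibc):

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstdio>
        #include <cstring>

        int main() {
            // Creates /dev/shm/demo. Until data is written, no RAM is consumed
            // beyond metadata -- the tmpfs "size" is only an upper bound.
            int fd = shm_open("/demo", O_CREAT | O_RDWR, 0600);
            if (fd < 0) return 1;
            ftruncate(fd, 64 * 1024 * 1024);   // 64 MB sparse file, ~0 pages resident
            char* p = (char*)mmap(NULL, 64 * 1024 * 1024, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
            memset(p, 0xAB, 4096);             // now exactly one page is allocated
            printf("wrote one page; compare `df -h /dev/shm` before and after\n");
            munmap(p, 64 * 1024 * 1024);
            close(fd);
            shm_unlink("/demo");               // remove the file, freeing the page
            return 0;
        }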


  • Dying SanDisk Memory Stick Pro Duo

    - by Different55
    I have a Memory Stick Pro Duo, and after an attempt to delete the largest file on it from a Mac, the stick has become unusable. I can almost access it: when I put it in my PC I can open/delete/copy/paste/rename/modify one file or folder, then the card can no longer be detected. If I reinsert the card I can move on to the next file, but this is really annoying, and my PSP won't read the card at all: the memory card access light flashes for a really long time before it says that every file is corrupted. When I try to format it with the PC, the PSP, or a camera that uses a Memory Stick Pro Duo, it fails. I've tried all the different options in Windows, and I tried formatting it through CMD, but nothing works. Should I copy every file off one by one, or is there a way to fix it?


  • Good Choice of Memory for Asus K52F-BBR5

    - by Christopher Painter
    I recently purchased an Asus K52F-BBR5 notebook. It's a basic laptop with an Intel P6100 CPU and the Mobile Intel® HM55 Express chipset. It came with 3 GB of DDR3 SODIMM memory, and I'd like to expand it to 8 GB. I'm a little confused by DDR3 nomenclature and not up to date on my knowledge of chipsets, and I'd like to make a good choice when selecting memory. Crucial's database suggests using either PC3-8500 with CAS 7 or PC3-10600 with CAS 9. Is the 8500 better because of its CAS 7, or will my chipset run the memory asynchronously at a higher speed and get better performance? Which would be a better choice for my chipset and CPU? The price difference is negligible.
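
    The two options can be compared directly: first-word latency is CAS cycles divided by the I/O clock, which is half the MT/s rating. A quick sketch of that arithmetic (it deliberately ignores whether the HM55/P6100 platform will down-clock the faster module):

        #include <cstdio>

        // First-word latency in nanoseconds: CAS cycles / clock (MHz).
        // DDR's I/O clock is half its transfer rate (MT/s).
        static double latency_ns(double mts, int cas) {
            return cas / (mts / 2.0) * 1000.0;
        }

        int main() {
            // PC3-8500  = 1066 MT/s, CAS 7  -> ~13.1 ns
            // PC3-10600 = 1333 MT/s, CAS 9  -> ~13.5 ns
            printf("PC3-8500  CL7: %.1f ns\n", latency_ns(1066, 7));
            printf("PC3-10600 CL9: %.1f ns\n", latency_ns(1333, 9));
            // Nearly identical latency, but the 10600 module offers ~25% more
            // bandwidth if the platform actually runs it at full speed.
            return 0;
        }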


  • Limiting memory usage and minimizing swap thrashing on Unix / Linux

    - by camelccc
    I have a few machines that I use for running large numbers of jobs, where I try to limit the number of jobs so as not to exceed the available RAM of the machine. Occasionally I misestimate how much memory some of the jobs will take, and the machine starts thrashing the swap file. I resolve this by sending kill -s STOP to one of the jobs so that it can get swapped out. Does anyone know of a utility that will monitor a server for processes with a specific name and then, if total memory consumption reaches a desired threshold, pause the one with the smallest memory footprint, so that the larger ones can run and complete with a minimum of swap-file thrashing? Paused processes would then need to be resumed once some of the running processes have completed.
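
    I'm not aware of a stock utility that does exactly this, but the logic is small enough to sketch: scan /proc for processes whose name matches, total their resident sets, and SIGSTOP the smallest one when over a threshold (the process name and threshold below are placeholders):

        #include <sys/types.h>
        #include <dirent.h>
        #include <signal.h>
        #include <cstdio>
        #include <cstdlib>
        #include <fstream>
        #include <string>
        #include <vector>

        struct Job { pid_t pid; long rss_kb; };

        // Collect processes whose /proc/<pid>/comm matches 'name' (comm is
        // truncated to 15 characters), along with their resident set size.
        static std::vector<Job> find_jobs(const std::string& name) {
            std::vector<Job> jobs;
            DIR* d = opendir("/proc");
            if (!d) return jobs;
            for (dirent* e; (e = readdir(d)) != NULL; ) {
                pid_t pid = atoi(e->d_name);
                if (pid <= 0) continue;
                std::string base = std::string("/proc/") + e->d_name;
                std::ifstream cf(base + "/comm");
                std::string comm;
                std::getline(cf, comm);
                if (comm != name) continue;
                std::ifstream sf(base + "/status");
                for (std::string line; std::getline(sf, line); )
                    if (line.compare(0, 6, "VmRSS:") == 0)
                        jobs.push_back(Job{pid, atol(line.c_str() + 6)});
            }
            closedir(d);
            return jobs;
        }

        int main() {
            const std::string name = "myjob";            // placeholder job name
            const long threshold_kb = 12L * 1024 * 1024; // 12 GB cap (assumed)

            std::vector<Job> jobs = find_jobs(name);
            long total_kb = 0;
            for (size_t i = 0; i < jobs.size(); ++i) total_kb += jobs[i].rss_kb;
            if (total_kb > threshold_kb && !jobs.empty()) {
                // The policy from the question: pause the job with the smallest
                // footprint so the larger ones can finish; resume with SIGCONT.
                Job* smallest = &jobs[0];
                for (size_t i = 1; i < jobs.size(); ++i)
                    if (jobs[i].rss_kb < smallest->rss_kb) smallest = &jobs[i];
                kill(smallest->pid, SIGSTOP);
                printf("stopped pid %d (%ld kB resident)\n",
                       (int)smallest->pid, smallest->rss_kb);
            }
            return 0;
        }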


  • Design: How to declare a specialized memory handler class

    - by Michael Dorgan
    On an embedded-type system, I have created a small object allocator that piggybacks on top of our standard memory allocation system. This allocator is a boost::simple_segregated_storage<> class, and it does exactly what I need: O(1) alloc/dealloc time on small objects, at the cost of a touch of internal fragmentation. My question is how best to declare it. Right now it's declared as a file-scope static in our memory code module, which is probably fine, but it feels a bit exposed there and is now linked to that module forever. Normally I would declare it as a monostate or a singleton, but those use the dynamic memory allocator (which is where this lives). Furthermore, our dynamic memory allocator is initialized and used before static object initialization occurs on our system (since the memory manager is pretty much the most fundamental component of an engine). To get around this catch-22, I added an "if the small object allocator exists yet" check, which must now run on every small object allocation. In the scheme of things this is nearly negligible, but it still bothers me. So the question is: is there a better way to declare this portion of the memory manager that decouples it from the memory module and perhaps avoids that extra isInitialized() if statement? If the method uses dynamic memory, please explain how to get around the lack of initialization of the small object portion of the manager.
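
    One pattern that sidesteps the static-initialization-order problem is construct-on-first-use: a function-local static is initialized the first time control reaches it, whenever that happens, and simple_segregated_storage's default constructor allocates nothing, so the dynamic allocator is never involved. A sketch under those assumptions (the arena size and chunk size are placeholders):

        #include <boost/pool/simple_segregated_storage.hpp>
        #include <cstddef>

        // Construct-on-first-use: works even when called before static
        // initialization runs, and never touches the dynamic allocator.
        // The "is it initialized yet?" branch is still here, but it is the
        // same cost as the hand-written one, and the allocator is no longer
        // tied to any particular module. (Pre-C++11, add a lock if the first
        // call can race across threads.)
        boost::simple_segregated_storage<std::size_t>& small_object_storage() {
            static char arena[64 * 1024];   // backing block; size is an assumption
            static bool seeded = false;
            static boost::simple_segregated_storage<std::size_t> storage;
            if (!seeded) {                  // one-time carve-up into 64-byte chunks
                storage.add_block(arena, sizeof(arena), 64);
                seeded = true;
            }
            return storage;
        }

        // Usage sketch:
        //   void* p = small_object_storage().malloc();   // O(1) pop from free list
        //   small_object_storage().free(p);              // O(1) push back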


  • Flushing writes from the memory controller's buffer to the DDR device

    - by Rohit
    At some point in my code, I need to push the writes in my code all the way to the DIMM or DDR device. My requirement is to ensure the write reaches the row, bank, and column of the DDR device on the DIMM. I need to read back what I've written from main memory; I do not want caching to give me the value. Instead, after writing, I want to fetch the value from main memory (the DIMMs). So far I've been using Intel's x86 instruction wbinvd (write back and invalidate cache). However, while this flushes the caches and sends write-back requests to main memory, there is still a reasonable amount of time the data might reside in the write buffer of the memory controller (Intel calls it the integrated memory controller, or IMC), and the memory controller might take some more time depending on the algorithm it runs to handle writes. Is there a way to force all existing or pending writes in the memory controller's write buffer out to the DRAM devices? What I am looking for is something more direct and lower-level than wbinvd. If you could point me to the right documents or specs that describe this, I would be grateful. Generally the IMC has several registers which can be written or read, but from looking at the specs for the chipset I could not find anything useful. Thanks for taking the time to read this.
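
    For something lighter than wbinvd, the per-line tools are clflush plus fences, sketched below (SSE2 intrinsics). Be aware of the limit the question runs into: these architecturally evict the lines from the cache hierarchy toward the memory subsystem, but to my knowledge x86 exposes no instruction that guarantees the IMC's write queue has drained into the DRAM array itself; that guarantee only comes from platform features (e.g. ADR on persistent-memory systems).

        #include <emmintrin.h>   // _mm_clflush, _mm_mfence (SSE2)
        #include <cstddef>
        #include <cstdint>

        // Flush every cache line backing [addr, addr+len) out of the cache
        // hierarchy. After the second fence the lines are no longer cached,
        // so the next load is served by the memory subsystem -- but the data
        // may still sit in the integrated memory controller's write queue;
        // x86 has no architectural "drain the IMC to DRAM" operation.
        void flush_range(const void* addr, size_t len) {
            const uintptr_t line = 64;  // cache-line size on current Intel parts
            uintptr_t p   = (uintptr_t)addr & ~(line - 1);
            const uintptr_t end = (uintptr_t)addr + len;
            _mm_mfence();                        // order earlier stores first
            for (; p < end; p += line)
                _mm_clflush((const void*)p);
            _mm_mfence();                        // flushes done before later loads
        }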

