Search Results

Search found 13619 results on 545 pages for 'memory mapped'.

Page 20 of 545

  • Why does my SQL Server use AWE memory? And why is this not visible in RAMMap?

    - by Marnix Klooster
    We have a Windows Server 2008 R2 (64-bit) 8 GB server where, according to Sysinternals RAMMap, 2 GB of memory is allocated using AWE. As far as I understand, this means that these pages stay in physical memory and are never paged out, which causes other apps to be pushed out of physical memory. In RAMMap, on the Physical Pages tab, the Process column is empty for all of the AWE pages. We run SQL Server on that box, but (through SQL Server Management Studio, under Server Properties - Memory, under Server memory options) it says it is configured not to use AWE. However, when stopping SQL Server, the AWE pages are suddenly gone, so it really is the culprit. So I have three questions: Why does RAMMap not know/show that a SQL Server process is responsible for that AWE memory? Why does SQL Server Management Studio say that AWE memory is not used? How do we configure SQL Server to really not use AWE memory?

    Read the article

  • free memory in Linux

    - by user32425
    Hi, I did free -tm on my system and got the output below. Is the free buffers/cache part of the used memory, and can we therefore consider it free memory?

                         total       used       free     shared    buffers     cached
        Mem:              5721       5689         32          0        137       4664
        -/+ buffers/cache:            887       4834
        Swap:             6000         13       5987
        Total:           11722       5703       6019

    Thanks
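
    For reference, the 887 on the -/+ buffers/cache line is simply used minus buffers minus cached (5689 - 137 - 4664 = 888 MB; the off-by-one comes from kB-to-MB rounding). The same arithmetic can be reproduced with a minimal C++ sketch, assuming the usual Linux /proc/meminfo layout of one "Key:   value kB" pair per line:

        #include <fstream>
        #include <iostream>
        #include <map>
        #include <sstream>
        #include <string>

        int main() {
            std::ifstream meminfo("/proc/meminfo");
            std::map<std::string, long> kb;
            std::string line;
            while (std::getline(meminfo, line)) {
                std::istringstream iss(line);
                std::string key;
                long value = 0;
                if (iss >> key >> value && !key.empty()) {
                    key.pop_back();      // strip the trailing ':'
                    kb[key] = value;     // values are reported in kB
                }
            }
            // free's "-/+ buffers/cache" free column: MemFree + Buffers + Cached
            long effectivelyFreeMb = (kb["MemFree"] + kb["Buffers"] + kb["Cached"]) / 1024;
            std::cout << "Effectively free: " << effectivelyFreeMb << " MB\n";
        }

    Dividing the kB totals by 1024 reproduces the MB figures that free -m prints.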

    Read the article

  • Tomcat memory usage

    - by Adrian Mester
    I'm running Tomcat on an Ubuntu 10.04 VPS with 512 MB of RAM (1024 burstable). I'm using it for development, so performance isn't an issue, but memory is. Tomcat is currently using about 250 MB without any apps installed (I compared memory usage with Tomcat stopped and running), and I also need to run lighttpd and MySQL. Is there any way to get that number down? I don't need it to be able to handle a large number of requests at once.

    Read the article

  • How can I free memory on Linux

    - by user35153
    When I use top to see memory usage, I have 65 GB of RAM but only 1.3 GB of it is free, and the remainder is shown as used. When I run my program it gives a memory-insufficiency error, although no other program is using the remaining 63.7 GB; it is just being held. How can I free the unused RAM?

    Read the article

  • Glassfish V3 using up all available memory

    - by Mannaz
    I have a virtual server with 1 GB of RAM. When I start GlassFish with asadmin start-domain, it instantly allocates all available memory, although I defined -Xmx128m in my domain.xml. Am I missing an option here? How can I prevent GlassFish from using all free memory?

    Read the article

  • Java Runtime.freeMemory() returning bizarre results when adding more objects

    - by Sotirios Delimanolis
    For whatever reason, I wanted to see how many objects I could create and populate a LinkedList with. I used Runtime.getRuntime().freeMemory() to get the approximation of free memory in my JVM. I wrote this:

        import java.util.LinkedList;
        import java.util.List;
        import java.util.Random;
        import java.util.Scanner;

        public static void main(String[] arg) {
            Scanner kb = new Scanner(System.in);
            List<Long> mem = new LinkedList<Long>();
            while (true) {
                System.out.println("Max memory: " + Runtime.getRuntime().maxMemory()
                        + ". Available memory: " + Runtime.getRuntime().freeMemory()
                        + " bytes. Press enter to use more.");
                String s = kb.nextLine();
                if (s.equals("m"))
                    for (int i = 0; i < 1000000; i++) {
                        mem.add(new Long((new Random()).nextLong()));
                    }
            }
        }

    If I write in m, the app adds a million Long objects to the list. You would think the more objects (to which we have references, so they can't be GC'ed), the less free memory. Running the code:

        Max memory: 1897725952. Available memory: 127257696 bytes.
        m
        Max memory: 1897725952. Available memory: 108426520 bytes.
        m
        Max memory: 1897725952. Available memory: 139873296 bytes.
        m
        Max memory: 1897725952. Available memory: 210632232 bytes.
        m
        Max memory: 1897725952. Available memory: 137268792 bytes.
        m
        Max memory: 1897725952. Available memory: 239504784 bytes.
        m
        Max memory: 1897725952. Available memory: 169507792 bytes.
        m
        Max memory: 1897725952. Available memory: 259686128 bytes.
        m
        Max memory: 1897725952. Available memory: 189293488 bytes.
        m
        Max memory: 1897725952. Available memory: 387686544 bytes.

    The available memory fluctuates. How does this happen? Is the GC cleaning up other things (what other things are there on the heap to really clean up?), or is the freeMemory() method returning an approximation that's way off? Am I missing something, or am I crazy?

    Read the article

  • Memory card capacities need to be the same?

    - by balalakshmi
    I am not a hardware guy; I just heard this from a service engineer: memory cards of unequal capacities should not be used. That is, if there is a 1 GB card already in the slot, we need to add another 1 GB card only, not 512 MB or 2 GB. Is there a problem if we use memory cards of unequal capacities?

    Read the article

  • VMMap - awesome memory analysis tool

    VMMap is a process virtual and physical memory analysis utility. It shows a breakdown of a process's committed virtual memory types as well as the amount of physical memory (working set) assigned by the operating system to those types. Besides graphical representations of memory usage, VMMap also shows summary information and a detailed process memory map. Powerful filtering and refresh capabilities allow you to identify the sources of process memory usage and the memory cost of application features. Besides flexible views for analyzing live processes, VMMap supports the export of data in multiple forms, including a native format that preserves all the information so that you can load it back in. It also includes command-line options that enable scripting scenarios. VMMap is the ideal tool for developers wanting to understand and optimize their application's memory resource usage.

    Read the article

  • Motherboard memory question

    - by JERiv
    I am currently drawing up specs on a new workstation for my office. I am considering the Asus P6X58D for a motherboard. This board's specs list it as supporting 24 gigs of memory. Suppose I were to use six four-gig memory modules and then two video cards with 1 gig of memory apiece. Is the maximum supported memory similar to how 32-bit operating systems only have enough address space for 4 gigs of memory? Simply: will the board post? If so, will the system be able to address all the memory, both the 24 gigs on the DDR3 bus and the 2 gigs on the graphics cards?

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 4

    - by MarkPearl
    Learning Outcomes

    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems

    There are key characteristics of memory:

    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of Transfer – the number of bits read out of or written into memory at a time
    - Access Method – sequential, direct, random or associative

    From a user's perspective the two most important characteristics of memory are:

    - Capacity
    - Performance – access time, memory cycle time, transfer rate

    The trade-off for memory happens along three axes:

    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads people to use a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs:

    1. Decreasing cost per bit
    2. Increasing capacity
    3. Increasing access time
    4. Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above.

    A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways:

    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data designated for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows: when a processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied; if it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures:

    - Cache Addresses
    - Cache Size
    - Mapping Function
    - Replacement Algorithm
    - Write Policy
    - Line Size
    - Number of Caches

    Cache Addresses. Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory. The disadvantage of virtual addressing is that most virtual memory systems supply each application with the same virtual address space (each application sees virtual memory starting at address 0), which means the cache must either be completely flushed on each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size. We would like the size of the cache to be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function. Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used (see the sketch after these notes):

    - Direct – the simplest technique; maps each block of main memory into only one possible cache line
    - Associative – permits each main memory block to be loaded into any line of the cache
    - Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

    For detailed explanations of each approach, read the textbook (pages 148–154).

    Replacement Algorithm. For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches:

    - LRU (least recently used)
    - FIFO (first in, first out)
    - LFU (least frequently used)
    - Random selection

    Write Policy. When a block resident in the cache is to be replaced, there are two cases to consider: if no writes to that block have happened in the cache, discard it; if a write has occurred, a process needs to be initiated whereby the changes in the cache are propagated back to main memory. There are several approaches to achieve this, including:

    - Write Through – all writes to the cache are done to main memory as well at the point of the change
    - Write Back – updates are made only in the cache; when a block is replaced, it is written back to main memory if its dirty bit is set

    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size. When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play:

    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.

    The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.

    Pentium 4 and ARM cache organizations

    The processor core consists of four major components:

    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources
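
    To make the mapping-function notes concrete, here is a toy C++ sketch (my own illustration, not from the textbook) of how a direct-mapped cache would decompose an address, assuming a 64 KB cache with 64-byte lines:

        #include <cstdint>
        #include <cstdio>

        constexpr uint32_t kLineBytes = 64;    // 2^6 -> 6 offset bits
        constexpr uint32_t kNumLines  = 1024;  // 2^10 -> 10 index bits (64 KB cache)

        struct CacheAddress {
            uint32_t tag;     // identifies which memory block occupies the line
            uint32_t index;   // the one line this block can map to (direct mapping)
            uint32_t offset;  // byte within the line
        };

        CacheAddress decompose(uint32_t addr) {
            return { addr / (kLineBytes * kNumLines),
                     (addr / kLineBytes) % kNumLines,
                     addr % kLineBytes };
        }

        int main() {
            // Two addresses exactly one cache-size (64 KB) apart: same index,
            // different tag, so in a direct-mapped cache they evict each other.
            for (uint32_t addr : {0x12340u, 0x22340u}) {
                CacheAddress a = decompose(addr);
                std::printf("addr 0x%05x -> tag %u, line %u, offset %u\n",
                            addr, a.tag, a.index, a.offset);
            }
        }

    Addresses that differ by exactly the cache size land on the same line and evict each other; associative and set-associative designs exist precisely to relax this constraint.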

    Read the article

  • Does 64bit Windows 8 have the same 75% memory-usage limitation for applications as Windows 7?

    - by Barleyman
    64-bit Windows 7 (and Windows Vista) has a built-in limit that prevents applications from using the last 25% of RAM. You will get a low memory warning when you get close to the limit. Even if you disable that warning, applications will run out of memory and crash, since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed, when machines had 1 GB of total memory, but it is pretty daft for today's 8 GB machines. Yes, the system will run cache, etc. in that extra 2 GB, but running out of memory when you have "merely" 2 GB left.... NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior. The system will cram the page file to the last byte while refusing to touch that quarter of the RAM. Does Windows 8 change this behavior? Is there now some fixed minimum free RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low memory limit? EDIT: Here is an older post which discusses this same behavior on Windows 7. There is a fixed 25% limit in Windows 7, and I'd like to know if it's still in Windows 8: Windows 7 / Page File Disabled / 12 GB RAM / 2+ GB RAM free and "your computer is running low on memory". EDIT 2: Here is another link discussing the low memory warning and how to disable it. Note that he claims the limit for RAM usage is 80%, not 75%. That seems correct, as you can in fact allocate 6.4 GB of RAM on an 8 GB machine; anything above and beyond that goes to the page file, though. http://halflight.com.au/2011/04/06/how-to-disable-low-memory-warnings-and-the-advantages-of-removing-the-page-file/ EDIT 3: Here are a couple of Process Explorer screenshots that demonstrate how it goes down. Exhibit 1: https://dl.dropbox.com/u/42068601/sysinfo.jpg Exhibit 2: https://dl.dropbox.com/u/42068601/sysint2.jpg You can see that Windows 7 will use the memory above 6.4 GB only as the very last resort. I have the low memory warning switched off here, so programs crashed at the last screenshot's allocation. With the low memory warning turned on, it starts nagging before you can push the OS to use that remaining 1.6 GB. The question is not "Is it OK that Windows does not want to allocate the last 20% of RAM because X", it's "Does Windows 8 still behave this way?" With 16 GB this really becomes dumb.

    Read the article

  • Where is the used memory in Task Manager & Resource Monitor coming from?

    - by Sam Adams
    On Windows 7, working set memory usage plus private memory does not add up to the total used memory shown in Task Manager and the Windows 7 Resource Monitor. How do you find out where the used memory is coming from? The cached memory can't account for it, because sometimes the total cache is greater than the total in use. Commit memory plus the working set also doesn't add up to the total in use; and even if it did, that shouldn't be significant, since commit is virtual.
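
    For anyone wanting to inspect the per-process side of these numbers programmatically, here is a minimal C++ sketch using the documented psapi call GetProcessMemoryInfo (link against psapi.lib); it reads the working set and private commit for the current process:

        #include <windows.h>
        #include <psapi.h>
        #include <cstdio>

        // Prints the per-process counters behind Task Manager's columns:
        // working set (physical memory assigned) and private commit.
        int main() {
            PROCESS_MEMORY_COUNTERS_EX pmc = {};
            pmc.cb = sizeof(pmc);
            if (GetProcessMemoryInfo(GetCurrentProcess(),
                                     reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                                     sizeof(pmc))) {
                std::printf("working set:      %llu KB\n",
                            static_cast<unsigned long long>(pmc.WorkingSetSize) / 1024);
                std::printf("private (commit): %llu KB\n",
                            static_cast<unsigned long long>(pmc.PrivateUsage) / 1024);
            }
            return 0;
        }

    Summing such per-process values across all processes still will not reach the system-wide total, since nonpaged kernel pools and the mapped-file cache are not charged to any process.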

    Read the article

  • Reducing memory for worker MPM in Apache

    - by ShyM
    I've moved from the prefork MPM to the worker MPM due to a process limit I was hitting on my VPS. However, memory usage increased after switching over (which is odd, since the worker MPM is supposed to have a smaller memory footprint). Most of the memory belongs to php-cgi processes. Is there something I'm doing wrong? I have around 20 sites on the server, each with a different FastCGI wrapper script. Could that be the reason?

    Read the article

  • overview/history of resident memory usage

    - by kapet
    I have a fairly complicated program (Python with SWIG'ed C++ code, a long-running server) that shows constantly growing resident memory usage. I've been digging for the leak with the usual tools (valgrind, Python's gc module, etc.) but to no avail so far. I'm a bit afraid that the actual problem is memory fragmentation within Python- and/or libc-managed memory. Anyway, my question is more specific right now: is there a tool to visualize resident memory usage and ideally show how it develops over time? I think the raw data is in /proc/$PID/smaps, but I was hoping there's some tool that shows me a nice graph of the amounts used by mmap'ed files vs. anonymous mmap'ed memory vs. heap over time, so that it's easier to see (literally) what's changing. I couldn't find anything, though. Does anybody know of a ready-to-use tool that graphs memory usage over space and time in an intuitive way?
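
    Absent a ready-made tool, the raw data in /proc/$PID/smaps can be bucketed with a few lines of C++. The sketch below (assuming the standard smaps layout of a mapping header line followed by fields like "Rss:   123 kB") sums resident memory into heap, file-backed, and anonymous buckets; sampling it periodically yields a time series to plot:

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        // Rough resident-memory breakdown for one process, summed from
        // /proc/<pid>/smaps: heap, file-backed mappings, anonymous mappings.
        int main(int argc, char** argv) {
            if (argc < 2) { std::cerr << "usage: smapsum <pid>\n"; return 1; }
            std::ifstream smaps("/proc/" + std::string(argv[1]) + "/smaps");
            enum Bucket { HEAP, FILE_BACKED, ANON };
            Bucket current = ANON;
            long rssKb[3] = {0, 0, 0};
            std::string line;
            while (std::getline(smaps, line)) {
                std::istringstream iss(line);
                std::string first;
                iss >> first;
                if (first.find('-') != std::string::npos) {
                    // Mapping header: "start-end perms offset dev inode [path]"
                    std::string perms, offset, dev, inode, path;
                    iss >> perms >> offset >> dev >> inode;
                    std::getline(iss, path);  // remainder of the line, may be empty
                    if (path.find("[heap]") != std::string::npos)  current = HEAP;
                    else if (path.find('/') != std::string::npos)  current = FILE_BACKED;
                    else                                           current = ANON;
                } else if (first == "Rss:") {
                    long kb = 0;
                    iss >> kb;
                    rssKb[current] += kb;  // attribute this mapping's resident kB
                }
            }
            std::cout << "heap: " << rssKb[HEAP]
                      << " kB, file-backed: " << rssKb[FILE_BACKED]
                      << " kB, anonymous: " << rssKb[ANON] << " kB\n";
        }

    The heap-versus-anonymous split is only approximate, since glibc malloc also serves large allocations from anonymous mmap regions.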

    Read the article

  • How to get available memory in C++/g++?

    - by Agito
    I want to allocate my buffers according to the memory available, such that when I do processing and memory usage goes up, usage still remains within the available-memory limits. Is there a way to get the available memory? (I don't know whether virtual or physical memory status will make any difference.) The method has to be platform-independent, as it is going to be used on Windows, OS X, Linux and AIX. (And if possible, I would also like to reserve some of the available memory for my application, so that it doesn't change during execution.)
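
    There is no standard C++ API for this, so portable code typically hides one query per platform behind a single function. A sketch covering the Windows and Linux branches follows; the OS X (host_statistics) and AIX (perfstat) ports are omitted and would slot into the final #else:

        #include <cstdint>
        #include <iostream>

        #if defined(_WIN32)
        #include <windows.h>
        #elif defined(__linux__)
        #include <unistd.h>
        #endif

        // Best-effort "available physical memory" in bytes; only the Windows
        // and Linux branches are sketched here.
        uint64_t availablePhysicalBytes() {
        #if defined(_WIN32)
            MEMORYSTATUSEX status;
            status.dwLength = sizeof(status);
            if (!GlobalMemoryStatusEx(&status)) return 0;
            return status.ullAvailPhys;
        #elif defined(__linux__)
            long pages = sysconf(_SC_AVPHYS_PAGES);   // currently free pages
            long pageSize = sysconf(_SC_PAGESIZE);
            if (pages < 0 || pageSize < 0) return 0;
            return static_cast<uint64_t>(pages) * static_cast<uint64_t>(pageSize);
        #else
            return 0;  // OS X: host_statistics(HOST_VM_INFO); AIX: perfstat_memory_total
        #endif
        }

        int main() {
            std::cout << "Available physical memory: "
                      << availablePhysicalBytes() / (1024 * 1024) << " MB\n";
        }

    Note that on Linux _SC_AVPHYS_PAGES understates what is really usable, since the page cache (the buffers/cached figures discussed elsewhere on this page) can be reclaimed on demand.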

    Read the article

  • Virtual Memory and SSD

    - by Zombian
    While studying for the A+ exam I was reading about SSDs, and I thought to myself that if you had a motherboard with a low RAM limit, you could use a dedicated SSD purely as virtual RAM. I looked up some info online, and what I found said this is poor practice, but didn't explain why. Why shouldn't SSDs be used for virtual memory, and what are your thoughts on a dedicated virtual-memory drive? Thank you!

    Read the article

  • What exactly is a memory page fault?

    - by dontWatchMyProfile
    From the docs: Note: Core Data avoids the term unfaulting because it is confusing. There's no "unfaulting" a virtual memory page fault. Page faults are triggered, caused, fired, or encountered. Of course, you can release memory back to the kernel in a variety of ways (using the functions vm_deallocate, munmap, or sbrk). Core Data describes this as "turning an object into a fault". Is a fault in Core Data essentially a memory page fault? I have only a slight idea of what a memory page is. I believe it's a kind of "piece of code in memory" which is needed to execute procedures and such, and as the app is running, pieces of code are pulled into memory as "pages" and thrown away when they're not needed anymore. Probably 99% wrong ;) Anyone?
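
    A page fault can be observed directly. The C++ sketch below (POSIX: mmap plus getrusage) maps pages of address space and watches the minor-fault counter climb as each page is touched for the first time; the "fault" is just the kernel wiring a physical page into the mapping on first access:

        #include <sys/mman.h>
        #include <sys/resource.h>
        #include <cstdio>

        // Counts the minor page faults incurred by touching fresh mmap'ed pages.
        static long minorFaults() {
            rusage ru;
            getrusage(RUSAGE_SELF, &ru);
            return ru.ru_minflt;
        }

        int main() {
            const size_t kPageSize = 4096;  // assumed; sysconf(_SC_PAGESIZE) is exact
            const size_t kPages = 256;
            // An anonymous mapping reserves address space; no physical pages yet.
            char* buf = static_cast<char*>(mmap(nullptr, kPages * kPageSize,
                                                PROT_READ | PROT_WRITE,
                                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
            if (buf == MAP_FAILED) { std::perror("mmap"); return 1; }

            long before = minorFaults();
            for (size_t i = 0; i < kPages; ++i)
                buf[i * kPageSize] = 1;     // first touch of each page faults it in
            long after = minorFaults();

            std::printf("minor faults while touching %zu pages: %ld\n",
                        kPages, after - before);
            munmap(buf, kPages * kPageSize);
        }

    This also suggests why Core Data's "fault" is only an analogy: a virtual-memory page fault is serviced invisibly by the kernel, whereas firing a Core Data fault means the framework fetches an object's data on first use.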

    Read the article

  • How to research unmanaged memory leaks in .NET?

    - by Brandon
    I have a WCF service running over MSMQ. Memory gradually increases over time, indicating that there is some sort of memory leak. I ran the service locally and monitored some counters using PerfMon. Total CLR memory managed heap bytes remains relatively constant, while the process' private bytes increases over time. This leads me to believe that there is some sort of unmanaged memory leak. Assuming that unmanaged memory leak is the issue, how do I address the issue? Are there any tools I could use to give me hints as to what is causing the unmanaged memory leak? Also, all my service is doing is reading from the transactional queue and writing to a database, all as part of a DTC transaction (handled under the hood by requiring a transaction on the service contract). I am not doing anything explicitly with COM or DllImports. Thanks in advance!

    Read the article

  • SQL Server Memory Manager Changes in Denali

    - by SQLOS Team
    The next version of SQL Server will contain significant changes to the memory manager component. The memory manager component has been rewritten for Denali. In the previous versions of SQL Server there were two distinct memory managers: one which handled allocation sizes of 8k or less and another for those greater than 8k. For Denali there will be one memory manager for all allocation sizes. The majority of the changes will be transparent to the end user. However, some changes will be visible to the user. These are listed below:

    - The 'max server memory' configuration option has new lower limits. Specifically, 32-bit versions of SQL Server will have a lower limit of 64 MB. The 64-bit versions will have a lower limit of 128 MB.
    - All memory allocations by SQL Server components will observe the 'max server memory' configuration option. In previous SQL versions only the 8k allocations were limited by the 'max server memory' configuration option. Allocations larger than 8k weren't constrained.
    - DMVs which refer to memory manager internals have been modified. This includes adding or removing columns and changing column names.
    - The memory manager configuration messages in the error log have minor changes.
    - DBCC memorystatus output has been changed.
    - Address Windowing Extensions (AWE) has been deprecated.

    In the next blog post I will discuss the changes to the memory manager DMVs in greater detail. In future blog posts I will discuss the other changes in greater detail.

    Read the article

  • When and why will an OS initialise memory to 0xCD, 0xDD, etc. on malloc/free/new/delete?

    - by LeopardSkinPillBoxHat
    I know that the OS will sometimes initialise memory with certain patterns such as 0xCD and 0xDD. What I want to know is when and why this happens. When Is this specific to the compiler used? Do malloc/new and free/delete work in the same way with regard to this? Is it platform specific? Will it occur on other operating systems, such as Linux or VxWorks? Why My understanding is this only occurs in Win32 debug configuration, and it is used to detect memory overruns and to help the compiler catch exceptions. Can you give any practical examples as to how this initialisation is useful? I remember reading something (maybe in Code Complete 2) that it is good to initialise memory to a known pattern when allocating it, and certain patterns will trigger interrupts in Win32 which will result in exceptions showing in the debugger. How portable is this?
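
    For context, the 0xCD/0xDD values come from the Microsoft Visual C++ debug heap (not the OS): 0xCD marks freshly allocated heap memory and 0xDD marks freed memory, and the filling happens only in debug builds. The hypothetical C++ wrappers below re-implement the idea so the effect can be seen on any platform:

        #include <cstdlib>
        #include <cstring>
        #include <cstdio>

        // Hypothetical wrappers mimicking the MSVC debug heap: fill new memory
        // with 0xCD ("clean") and freed memory with 0xDD ("dead") so that stray
        // reads of uninitialized or freed memory show a recognizable pattern.
        void* debugAlloc(std::size_t n) {
            void* p = std::malloc(n);
            if (p) std::memset(p, 0xCD, n);   // marks "allocated but uninitialized"
            return p;
        }

        void debugFree(void* p, std::size_t n) {
            if (p) std::memset(p, 0xDD, n);   // marks "freed" just before release
            std::free(p);
        }

        int main() {
            unsigned char* buf = static_cast<unsigned char*>(debugAlloc(8));
            std::printf("freshly allocated byte: 0x%02X\n", buf[0]);  // prints 0xCD
            debugFree(buf, 8);
            // Any read of buf from here on is use-after-free; under the MSVC debug
            // heap such memory reads back as 0xDD, which makes the bug visible.
        }

    Seeing 0xCDCDCDCD in a crash dump therefore reads as "allocated but never initialized" and 0xDDDDDDDD as "used after free"; in release builds no filling happens, and the memory holds whatever was there before.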

    Read the article

  • Memory Troubles with UIImagePicker

    - by Dan Ray
    I'm building an app that has several different sections to it, all of which are pretty image-heavy. It ties in with my client's website and they're a "high-design" type outfit. One piece of the app is images uploaded from the camera or the library, and a tableview that shows a grid of thumbnails. Pretty reliably, when I'm dealing with the camera version of UIImagePickerControl, I get hit for low memory. If I bounce around that part of the app for a while, I occasionally and non-repeatably crash with "status:10 (SIGBUS)" in the debugger. On low memory warning, my root view controller for that aspect of the app goes to my data management singleton, cruises through the arrays of cached data, and kills the biggest piece, the image associated with each entry. Thusly:

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];

            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Low Memory Warning"
                                                            message:@"Cleaning out events data"
                                                           delegate:nil
                                                  cancelButtonTitle:@"All right then."
                                                  otherButtonTitles:nil];
            [alert show];
            [alert release];

            NSInteger spaceSaved = 0;  // tally of bytes reclaimed (PNG-size approximation)
            DataManager *data = [DataManager sharedDataManager];
            for (Event *event in data.eventList) {
                spaceSaved += [(NSData *)UIImagePNGRepresentation(event.image) length];
                event.image = nil;
                spaceSaved -= [(NSData *)UIImagePNGRepresentation(event.image) length];
            }
            NSString *titleString = [NSString stringWithFormat:@"Saved %d on event images", spaceSaved];

            for (WondrMark *mark in data.wondrMarks) {
                spaceSaved += [(NSData *)UIImagePNGRepresentation(mark.image) length];
                mark.image = nil;
                spaceSaved -= [(NSData *)UIImagePNGRepresentation(mark.image) length];
            }
            NSString *messageString = [NSString stringWithFormat:@"And total %d on event and mark images", spaceSaved];
            NSLog(@"%@ - %@", titleString, messageString);

            // Relinquish ownership of any cached data, images, etc. that aren't in use.
        }

    As you can see, I'm making a (poor) attempt to eyeball the memory space I'm freeing up. I know it's not telling me about the actual memory footprint of the UIImages themselves, but it gives me SOME numbers at least, so I can see that SOMETHING'S happening. (Sorry for the hamfisted way I build that NSLog message too--I was going to fire another UIAlertView, but realized it'd be more useful to log it.) Pretty reliably, after toodling around in the image portion of the app for a while, I'll pull up the camera interface and get the low memory UIAlertView like three or four times in quick succession. Here's the NSLog output from the last time I saw it:

        2010-05-27 08:55:02.659 EverWondr[7974:207] Saved 109591 on event images - And total 1419756 on event and mark images
        wait_fences: failed to receive reply: 10004003
        2010-05-27 08:55:08.759 EverWondr[7974:207] Saved 4 on event images - And total 392695 on event and mark images
        2010-05-27 08:55:14.865 EverWondr[7974:207] Saved 4 on event images - And total 873419 on event and mark images
        2010-05-27 08:55:14.969 EverWondr[7974:207] Saved 4 on event images - And total 4 on event and mark images
        2010-05-27 08:55:15.064 EverWondr[7974:207] Saved 4 on event images - And total 4 on event and mark images

    And then pretty soon after that we get our SIGBUS exit. So that's the situation. Now my specific questions: The time I see this happening is when the UIImagePicker's camera iris shuts. I click the button to take the picture, it does the "click" animation, and Instruments shows my memory footprint going from about 10 MB to about 25 MB, and sitting there until the image is delivered to my UIViewController, where usage drops back to 10 or 11 MB again. If we make it through that without a memory warning, we're golden, but most likely we don't. Anything I can do to make that not be so expensive? Second, I have NSZombies enabled. Am I understanding correctly that that's actually preventing memory from being freed? Am I subjecting my app to an unfair test environment? Third, is there some way to programmatically get my memory usage? Or at least the usage for a UIImage object? I've scoured the docs and don't see anything about that.

    Read the article
