Search Results

Search found 16794 results on 672 pages for 'memory usage'.

  • Does the speed of memory define its voltage?

    - by Zak
    What I'm asking is: if I order PC3200 memory, is it all going to be 1.8 V, or will some be 2.5 V, some 1.8 V, etc.? I don't mean variation within a specific part number, but across part numbers: is there variation such that some PC3200 memory would be incompatible with other PC3200 memory because one is 2.5 V and the other is 1.8 V?

  • SUnreclaim size increasing in sync with a TCP CLOSE_WAIT until application restart

    - by maver1k
    I recently found a behaviour where my application had a connection stuck in the TCP CLOSE_WAIT state until the app was restarted (after about 5 hours). During this period the SUnreclaim space was also increasing constantly, and it went down on restart. The application is running on a RHEL 5 OS and I'm not very familiar with the memory management system. I would appreciate it if someone could tell me what exactly the SUnreclaim space is and why it is increasing in sync with the CLOSE_WAIT. Thanks.

  • Avoid linux out-of-memory application teardown

    - by Eddie Parker
    I'm finding that on occasion my Linux box runs out of memory and starts tearing down random processes to deal with it. I'm curious what administrators do to avoid this. Is the only real solution to increase the amount of physical memory (will upping the swap alone help?), or are there better ways to set up the box with software to avoid this (e.g., quotas or some such)? I'd appreciate some feedback. Cheers, -e-
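
    One common mitigation (a minimal sketch, assuming a Linux kernel that exposes /proc/<pid>/oom_score_adj; the PID below is a hypothetical placeholder, and writing a negative value requires appropriate privileges) is to bias the OOM killer away from the processes you care about:

        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class OomProtect {
            public static void main(String[] args) throws Exception {
                long pid = 1234; // hypothetical PID of the process to protect
                // -1000 exempts the process from OOM killing entirely;
                // values between -999 and 1000 merely bias the killer's choice.
                Files.write(Paths.get("/proc/" + pid + "/oom_score_adj"),
                            "-1000\n".getBytes());
            }
        }

    Adding swap or tuning vm.overcommit_memory changes when the OOM killer fires at all; oom_score_adj controls which process it picks when it does.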

  • Ubuntu 9.10 server requires frequent reboots to free up memory

    - by bcmcfc
    I'm running Ubuntu 9.10 on my server. It works fine; it's just that over time (usually a couple of days) the memory usage grows and grows until it invariably runs out and the server needs to be rebooted. It's running Apache, Samba, ProFTPd, Postfix, Munin and Webmin. Is there anything that can be done to free up the memory that it doesn't need anymore?

  • Dealing with UIImagePickerController to minimize memory usage

    - by Gordon Fontenot
    So, I have read the SO post on UIImagePickerController, UIImage, Memory and More, and I read the post on Memory Leak Problems with UIImagePickerController in iPhone. I have VASTLY increased my memory efficiency between these two posts, and I thank the OPs and the people that provided the answers. I just have a question on the answer provided in the Memory Leak question, which was (essentially): only have one instance of the controller throughout the program's runtime. What would be the best way to go about this without causing memory leaks? Right now I am creating it and releasing it on every use from within the view, and I am seeing exactly what the answer describes (memory warnings and a crash after about 20 uses). Should I instantiate the UIImagePickerController when I need it, but use a separate class unrelated to the view to control it? How should I deal with releasing the controller if I do it this way?

  • How to test video card memory

    - by oki
    I want to test the memory of my video card because lately there have been vertical lines on my screen. I did some basic troubleshooting and it seems that the problem is in the video card. Therefore, I want to confirm the error by using a video card memory test program. I found one that works with NVIDIA cards with CUDA support, but my card is an NVIDIA GeForce 7600 without CUDA support.

  • Why does my SQL Server use AWE memory? and why is this not visible in RAMMap?

    - by Marnix Klooster
    We have a Windows Server 2008 R2 (64-bit) 8 GB server where, according to Sysinternals RAMMap, 2 GB of memory is allocated using AWE. As far as I understand, this means that these pages stay in physical memory and are never paged out, which causes other apps to be pushed out of physical memory. In RAMMap, on the Physical Pages tab, the Process column is empty for all of the AWE pages. We run SQL Server on that box, but (through SQL Server Management Studio, under Server Properties - Memory, under Server memory options) it is configured not to use AWE. However, when SQL Server is stopped, the AWE pages suddenly disappear, so it really is the culprit. So I have three questions: Why does RAMMap not know/show that a SQL Server process is responsible for that AWE memory? Why does SQL Server Management Studio say that AWE memory is not used? How do we configure SQL Server to really not use AWE memory?

  • free memory in Linux

    - by user32425
    Hi, I did free -tm on my system and got the output below. Are the buffers/cache counted as part of the used memory, and can we therefore consider them free memory?

                         total       used       free     shared    buffers     cached
        Mem:              5721       5689         32          0        137       4664
        -/+ buffers/cache:            887       4834
        Swap:             6000         13       5987
        Total:           11722       5703       6019

    Thanks
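
    A quick check of the arithmetic (assuming the usual semantics of free, in which buffers and cache are reclaimable): free + buffers + cached = 32 + 137 + 4664 = 4833 MB, matching the 4834 MB on the -/+ buffers/cache line up to rounding, and used - buffers - cached = 5689 - 137 - 4664 = 888 ≈ 887 MB. So the -/+ buffers/cache line already treats buffers and cache as memory that can be handed back to applications when they need it.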

  • How can I free memory on Linux

    - by user35153
    When I use top to see memory usage, I have 65 GB of RAM but only 1.3 GB of it is free, and the remainder is shown as used. When I run my program, it gives a memory-insufficiency error, even though no other program is using the remaining 63.7 GB of RAM that is being held. How can I free the unused RAM?
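
    Much of that "used" memory is typically the kernel's page cache, which is released automatically when applications ask for memory. A minimal sketch of seeing through this (assuming a Linux system; MemFree, Buffers and Cached are standard /proc/meminfo fields):

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.HashMap;
        import java.util.Map;

        public class MemAvailable {
            public static void main(String[] args) throws Exception {
                // Each /proc/meminfo line looks like "MemFree:  123456 kB".
                Map<String, Long> meminfo = new HashMap<>();
                for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                    String[] parts = line.split("\\s+");
                    meminfo.put(parts[0].replace(":", ""), Long.parseLong(parts[1]));
                }
                long availableKb = meminfo.get("MemFree")
                                 + meminfo.get("Buffers")
                                 + meminfo.get("Cached");
                System.out.println("~" + (availableKb / 1024) + " MB effectively available");
            }
        }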

  • Glassfish V3 using up all available memory

    - by Mannaz
    I have a virtual server with 1 GB of RAM. When I start GlassFish with asadmin start-domain, it instantly allocates all available memory, although I defined -Xmx128m in my domain.xml. Am I missing an option here? How can I prevent GlassFish from using all free memory?

  • Java Runtime.freeMemory() returning bizarre results when adding more objects

    - by Sotirios Delimanolis
    For whatever reason, I wanted to see how many objects I could create and populate a LinkedList with. I used Runtime.getRuntime().freeMemory() to get an approximation of the free memory in my JVM. I wrote this:

        import java.util.LinkedList;
        import java.util.List;
        import java.util.Random;
        import java.util.Scanner;

        public class MemTest { // enclosing class and imports added so the snippet compiles
            public static void main(String[] arg) {
                Scanner kb = new Scanner(System.in);
                List<Long> mem = new LinkedList<Long>();
                while (true) {
                    System.out.println("Max memory: " + Runtime.getRuntime().maxMemory()
                        + ". Available memory: " + Runtime.getRuntime().freeMemory()
                        + " bytes. Press enter to use more.");
                    String s = kb.nextLine();
                    if (s.equals("m"))
                        for (int i = 0; i < 1000000; i++) {
                            mem.add(new Long((new Random()).nextLong()));
                        }
                }
            }
        }

    If I type m, the app adds a million Long objects to the list. You would think that the more objects there are (to which we hold references, so they can't be GC'ed), the less free memory. Running the code:

        Max memory: 1897725952. Available memory: 127257696 bytes.
        m
        Max memory: 1897725952. Available memory: 108426520 bytes.
        m
        Max memory: 1897725952. Available memory: 139873296 bytes.
        m
        Max memory: 1897725952. Available memory: 210632232 bytes.
        m
        Max memory: 1897725952. Available memory: 137268792 bytes.
        m
        Max memory: 1897725952. Available memory: 239504784 bytes.
        m
        Max memory: 1897725952. Available memory: 169507792 bytes.
        m
        Max memory: 1897725952. Available memory: 259686128 bytes.
        m
        Max memory: 1897725952. Available memory: 189293488 bytes.
        m
        Max memory: 1897725952. Available memory: 387686544 bytes.

    The available memory fluctuates. How does this happen? Is the GC cleaning up other things (what other things are there on the heap to really clean up?), or is the freeMemory() method returning an approximation that's way off? Am I missing something, or am I crazy?
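
    A plausible explanation (my reasoning, not from the original post): freeMemory() reports the free space within the currently committed heap, and the committed heap itself grows toward maxMemory() as the list expands, so "free" can jump upward right after the JVM enlarges the heap; garbage collection of the throwaway Random instances also runs between measurements. Tracking usage as the difference from totalMemory() gives a steadier picture:

        public class HeapUsed {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                // used = committed heap minus the free part of it; this tracks
                // live objects much more steadily than freeMemory() alone.
                long used = rt.totalMemory() - rt.freeMemory();
                System.out.println("Used: " + used + " bytes of max "
                    + rt.maxMemory() + " bytes");
            }
        }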

  • Do memory cards' capacities need to be the same?

    - by balalakshmi
    I am not a hardware guy; I just heard this from a service engineer: memory cards of unequal capacities should not be used. That is, if there is a 1 GB card already in the slot, we need to add another 1 GB card only, not 512 MB or 2 GB. Is there a problem if we use memory cards of unequal capacities?

  • VMMap - awesome memory analysis tool

    VMMap is a process virtual and physical memory analysis utility. It shows a breakdown of a process's committed virtual memory types as well as the amount of physical memory (working set) assigned by the operating system to those types. Besides graphical representations of memory usage, VMMap also shows summary information and a detailed process memory map. Powerful filtering and refresh capabilities allow you to identify the sources of process memory usage and the memory cost of application features. Besides flexible views for analyzing live processes, VMMap supports the export of data in multiple forms, including a native format that preserves all the information so that you can load it back in later. It also includes command-line options that enable scripting scenarios. VMMap is the ideal tool for developers wanting to understand and optimize their application's memory resource usage.

  • Motherboard memory question

    - by JERiv
    I am currently drawing up specs for a new workstation for my office. I am considering the Asus P6X58D for a motherboard. This board's specs list it as supporting 24 GB of memory. Suppose I were to use six 4 GB memory modules and two video cards with 1 GB of memory apiece. Is the maximum supported memory limited in a way similar to how 32-bit operating systems only have enough address space for 4 GB of memory? Simply: will the board POST? If so, will the system be able to address all the memory, both the 24 GB on the DDR3 bus and the 2 GB on the graphics cards?

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes
    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems

    There are key characteristics of memory:
    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of Transfer – the number of bits read out of or written into memory at a time
    - Access Method – sequential, direct, random or associative

    From a user's perspective, the two most important characteristics of memory are:
    - Capacity
    - Performance – access time, memory cycle time, transfer rate

    The trade-off for memory happens along three axes:
    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads to a tiered approach in the use of memory. As one goes down the hierarchy, the following occur:
    1. Decreasing cost per bit
    2. Increasing capacity
    3. Increasing access time
    4. Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache, and it improves performance in two ways:
    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data destined for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows: when the processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied; if it is not, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, a few basic design elements serve to classify and differentiate cache architectures:
    - Cache Addresses
    - Cache Size
    - Mapping Function
    - Replacement Algorithm
    - Write Policy
    - Line Size
    - Number of Caches

    Cache Addresses

    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory.
    The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at address 0), which means the cache must either be completely flushed on each application context switch or carry extra bits on each line to identify which virtual address space the address refers to.

    Cache Size

    We would like the size of the cache to be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function

    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used (a worked illustration of direct mapping follows this section):
    - Direct – the simplest technique; maps each block of main memory into only one possible cache line
    - Associative – allows each main memory block to be loaded into any line of the cache
    - Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
    For detailed explanations of each approach, read the textbook (pages 148–154).

    Replacement Algorithm

    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches:
    - LRU (least recently used)
    - FIFO (first in, first out)
    - LFU (least frequently used)
    - Random selection

    Write Policy

    When a block resident in the cache is to be replaced, there are two cases to consider:
    - If no writes to that block have happened in the cache, discard it.
    - If a write has occurred, the changes in the cache must be propagated back to main memory.
    There are several approaches to achieve this, including:
    - Write Through – all writes to the cache are done to main memory as well at the point of the change
    - Write Back – when a block is replaced, all dirty lines are written back to main memory
    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size

    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future; as the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play:
    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.
    The relationship between block size and hit ratio is complex, and no single approach is judged to be the best in all circumstances.
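
    As a worked illustration of direct mapping (the numbers here are my own, not from the notes): consider a 64 KB cache with 16-byte lines, giving 4096 lines. Each memory address maps to exactly one line, line = (address / lineSize) mod numLines, and the remaining high-order bits form the tag:

        public class DirectMapped {
            public static void main(String[] args) {
                final int LINE_SIZE = 16;    // bytes per cache line
                final int NUM_LINES = 4096;  // 64 KB cache / 16-byte lines
                long address = 0x12345678L;
                long line = (address / LINE_SIZE) % NUM_LINES; // -> 0x567
                long tag  = (address / LINE_SIZE) / NUM_LINES; // -> 0x1234
                System.out.println("Address 0x" + Long.toHexString(address)
                    + " maps to line 0x" + Long.toHexString(line)
                    + " with tag 0x" + Long.toHexString(tag));
            }
        }

    Any two addresses that share a line index but differ in tag evict each other, which is why direct mapping is the simplest scheme but also the most prone to conflict misses.
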
    Pentium 4 and ARM cache organizations

    The Pentium 4 processor core consists of four major components:
    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes them into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future.
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources

  • Where is the used memory in Task Manager & Resource Monitor coming from?

    - by Sam Adams
    On Windows 7, the working set memory usage plus private memory does not add up to the total used memory shown in Task Manager and the Windows 7 Resource Monitor. How do you find out where the used memory is coming from? The cached memory can't be part of it, because sometimes the total cache is greater than the total in use. Commit memory plus the working set also doesn't add up to the total in use; but even if it did, that wouldn't mean much, since commit is virtual.

  • How to get available memory in C++/g++?

    - by Agito
    I want to allocate my buffers according to the memory available, such that when I do processing and memory usage goes up, it still remains within the available memory limits. Is there a way to get the available memory (I don't know whether virtual or physical memory status would make any difference)? The method has to be platform-independent, as it's going to be used on Windows, OS X, Linux and AIX. (And if possible, I would also like to reserve some of the available memory for my application, so that it doesn't change during execution.)

  • Virtual Memory and SSD

    - by Zombian
    While studying for the A+ exam I was reading about SSDs, and I thought to myself that if you had a mobo with a low RAM limit, you could use a dedicated SSD purely for virtual memory. I looked up some info online, but what I found said this was poor practice without explaining why. Why shouldn't SSDs be used for virtual memory, and what are your thoughts on a dedicated virtual memory drive? Thank you!

  • What exactly is a memory page fault?

    - by dontWatchMyProfile
    From the docs: Note: Core Data avoids the term unfaulting because it is confusing. There's no “unfaulting” a virtual memory page fault. Page faults are triggered, caused, fired, or encountered. Of course, you can release memory back to the kernel in a variety of ways (using the functions vm_deallocate, munmap, or sbrk). Core Data describes this as “turning an object into a fault”. Is a fault in Core Data essentially a memory page fault? I have only a slight idea of what a memory page is. I believe it's a kind of "piece of code in memory" which is needed to execute procedures and things like that, and as the app is running, pieces of code are pulled into memory as "pages" and thrown away when they're not needed anymore. Probably 99% wrong ;) Anyone?

  • How to research unmanaged memory leaks in .NET?

    - by Brandon
    I have a WCF service running over MSMQ. Memory gradually increases over time, indicating some sort of memory leak. I ran the service locally and monitored some counters using PerfMon. The total CLR managed heap bytes remain relatively constant, while the process's private bytes increase over time. This leads me to believe that there is an unmanaged memory leak. Assuming that an unmanaged memory leak is the issue, how do I address it? Are there any tools I could use to give me hints as to what is causing it? Also, all my service does is read from the transactional queue and write to a database, all as part of a DTC transaction (handled under the hood by requiring a transaction on the service contract). I am not doing anything explicitly with COM or DllImports. Thanks in advance!
