Search Results

Search found 14113 results on 565 pages for 'memory stick'.

Page 23 of 565

  • Mapping of memory addresses to physical modules in Windows XP

    - by Josef Grahn
    I plan to run 32-bit Windows XP on a workstation with dual processors based on Intel's Nehalem microarchitecture and triple-channel RAM. Even though XP is limited to 4 GB of RAM, my understanding is that it will function with more than 4 GB installed but will only expose 4 GB (or slightly less). My question is: assuming that 6 GB of RAM is installed as six 1 GB modules, which physical 4 GB will Windows actually map into its address space? In particular:
    - Will it use all six 1 GB modules, taking advantage of all memory channels? (My guess is yes, and that the mapping to individual modules within a group happens in hardware.)
    - Will it map 2 GB of address space to each of the two NUMA nodes (as each processor has its own memory interface), or will one processor get fast access to 3 GB of RAM while the other only has 1 GB? Thanks!

    Read the article

  • idle processes and high memory bad? uwsgi/django

    - by JimJimThe3rd
    I have a VPS with 256MB of RAM. I'm running nginx, uWSGI and PostgreSQL on Ubuntu 12.04 for a soon-to-be Django site. About 200MB of RAM is being used despite the website not being active; the uWSGI processes seem to just be idling. Is this bad? I once heard that having a lot of free memory isn't necessarily a good metric, because memory that is in use can often be freed up easily. I mean, it is possible that the server is storing commonly used "stuff" in case it is accessed, but is more than happy to dump it if the RAM is needed. But I'm really not sure, hence me asking this question. If it is bad, I could set some of the application-loading options for uWSGI, like "cheap" or "idle" mode. Screenshot of my htop

    Read the article

  • Apache2 memory usage when uploading large files

    - by abhaga
    Hi, I am running Apache 2.2.12 along with PHP 5.2.10. PHP is configured to run as a separate process through fcgid. The problem is that when users upload a file, the size of the Apache process swells by almost the same amount. So if somebody tries to upload a 200 MB file, one of the child processes swells to its current size + 200 MB. If 2 users simultaneously start uploading, my server crashes. Now, it is the virtual memory size which is increasing, but since I am on an OpenVZ-based VPS, that is what counts. My questions are: Is this normal Apache behavior, or can I do something to fix it? If not, is there a more memory-efficient way of handling big file uploads? Going by the current behavior, I will need 1 GB of free RAM for every Apache child accepting an upload. Thanks! Abhaya

    Read the article

  • reduce memory footprint of java virtual machine

    - by Lorenzo Boccaccia
    I have a Citrix server where multiple users each run a Java application. Is there a way to reduce the memory footprint of the JVM itself? The max heap is already set fairly low (64MB), as is the permgen space (32MB), and we're at the point where the JVM itself uses far more memory than the application does (the committed area is around 350MB). I'm looking for a way to reduce the JVM's RAM usage, or to make all the applications run within the same JVM, or any other way of sharing common pages between running JVMs (if possible), or to switch to a JVM that has optimizations for this scenario. Currently using Windows 2003 Server and the Sun Java virtual machine 1.6.

    Read the article

  • Task Manager does not show memory usage

    - by Robin
    I just noticed this yesterday. I selected different memory columns, none of them worked, and I've tried showing processes from all users. I'm using Win 7. It doesn't slow down my computer or do anything else; I just want to know why and how to fix it. Could anyone help me with this? Thank you. I cannot post pix :( It looks like this - the Memory column only shows "K", without an actual number:

      Image Name     User Name    CPU    Memory (Private Working Set)    Description
      System         SYSTEM       01     K                               NT Kernel & System
      Smss.exe       SYSTEM       00     K                               Win Session Manager
      Wininit.exe    SYSTEM       00     K                               Win Start-up Applic

    It's pretty much the same as http://www.sevenforums.com/general-discussion/56891-my-task-manager-doesnt-show-ram-usage-each-program.html - that is the only one I found on Google.

    Read the article

  • How do you record how much memory an app is using on OS X

    - by Ace Legend
    I'm on a Mac Mini with OS X 10.8.2. I am an app developer, but in this case I am building an app in C++, so I cannot use Xcode for this question. I would like to track how much memory my app is using, but I don't want to record it manually. How do I do this? MORE INFO: I want to record it all day long. I will have the app running all day, so that I can compare peaks in memory. I am not opposed to 3rd-party apps, as long as they are reliable. Thanks.
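    One low-effort approach (a sketch, not a polished tool): sample the process's resident set size with ps on a timer and append it to a CSV you can graph later. The script below assumes Python 3 is available and that the app's executable name is known; the name "myapp", the interval and the log file name are placeholders, not anything from the question.

      import csv
      import subprocess
      import time
      from datetime import datetime

      PROCESS_NAME = "myapp"     # placeholder: replace with the app's executable name
      INTERVAL_SEC = 60          # one sample per minute, all day long
      LOGFILE = "memory_log.csv"

      def rss_kib(name):
          # pgrep -x finds PIDs by exact name; ps -o rss= prints resident set size in KiB
          pids = subprocess.run(["pgrep", "-x", name],
                                capture_output=True, text=True).stdout.split()
          total = 0
          for pid in pids:
              out = subprocess.run(["ps", "-o", "rss=", "-p", pid],
                                   capture_output=True, text=True).stdout.strip()
              if out:
                  total += int(out)
          return total

      with open(LOGFILE, "a", newline="") as f:
          writer = csv.writer(f)
          while True:
              writer.writerow([datetime.now().isoformat(timespec="seconds"),
                               rss_kib(PROCESS_NAME)])
              f.flush()
              time.sleep(INTERVAL_SEC)

    Instruments (the Allocations template) or a periodic top -l 1 would give the same information; the point of the script is only that nothing has to be written down by hand.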

    Read the article

  • Memory limit on PHP + Apache + Windows 32 bit?

    - by thkala
    I am considering using 32-bit Apache for a Moodle installation on a Windows 2008 R2 64-bit/16GB server. Since the available memory affects the number of concurrent users that can be served, I was wondering how the 2GB memory limit on 32-bit Windows processes affects Apache+PHP. Is it a collective limit for the whole server, or is it applied separately to each Apache child process/thread? If it is separate, how many of those children are launched on Windows? One per request? One per processor core? Something in the middle? Is this somehow configurable?

    Read the article

  • Motherboard Issue - 3 Beep Bios (memory error) despite new RAM

    - by Glenn
    I have an Intel dG43RK motherboard, bought new and sealed, and have tried two different brands and speeds of RAM, with a 3-beep BIOS code indicating a memory error; the beeps also occur without any RAM installed (as they should). The memory tried is:
    - 1x4GB Kingston HyperX DDR3 1333 (new and sealed)
    - 2x4GB Team Elite DDR3 1066 (new and sealed)
    I have tried multiple configurations and seating layouts and still no luck. I also have a GT520 graphics card installed, as I dislike in-built graphics in most cases and had it at hand (also new and sealed). The only used parts are the CPU, which worked in my previous tower and was taken directly from that PC into the new set-up, and the CPU fan, which will be replaced with a new one in the foreseeable future once this is resolved. I've run out of ideas myself and any help is appreciated.

    Read the article

  • Would the shell command join cause an out-of-memory error?

    - by Hancy
    I have two files to join.

    FILE 1:
      a A1
      a A2
      a A3
      ...
      c C1
      c C2
      ...

    FILE 2:
      a feature1_of_a
      a feature2_of_a
      ...
      a featureN_of_a
      ...
      ...
      c feature1_of_c
      c feature2_of_c
      ...

    After the join, I want to get a file like this:

      A1 feature1_of_a
      A2 feature1_of_a
      A3 feature1_of_a
      A1 feature2_of_a
      A2 feature2_of_a
      A3 feature2_of_a
      ...
      A1 featureN_of_a
      A2 featureN_of_a
      A3 featureN_of_a
      ...

    In order to do that I wrote the shell command join -11 -21 -o1.2,2.2 file1 file2. But the problem is that N might be huge, so if join reads all the features of a into memory at once, there might not be enough memory. I don't know how join is implemented. Would memory become a problem? If so, is there any way to get what I want?
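    As far as I know, join has to buffer the group of matching lines for the current key in order to form the cross product, so a huge N for a single key can indeed be a memory problem. Below is a minimal Python sketch of a streaming alternative (my own workaround, not how join itself works): it holds only file1's values for the current key (A1, A2, A3, ...) in memory and streams file2 one line at a time, assuming both files are sorted on the key in plain lexicographic order, as join already requires.

      import itertools

      def file1_groups(path):
          # yield (key, [values]) for consecutive file1 lines that share a key
          with open(path) as f:
              rows = (line.split(None, 1) for line in f if line.strip())
              for key, grp in itertools.groupby(rows, key=lambda r: r[0]):
                  yield key, [value.strip() for _, value in grp]

      def streaming_join(path1, path2, out_path):
          groups = file1_groups(path1)
          key1, vals1 = next(groups, (None, []))
          with open(path2) as f2, open(out_path, "w") as out:
              for line in f2:
                  if not line.strip():
                      continue
                  key2, feat = line.split(None, 1)
                  feat = feat.strip()
                  # advance file1 until its key catches up with file2's key
                  while key1 is not None and key1 < key2:
                      key1, vals1 = next(groups, (None, []))
                  if key1 == key2:
                      # one file2 line paired with every file1 value for this key;
                      # only the (small) file1 group is ever held in memory
                      for value in vals1:
                          out.write(f"{value} {feat}\n")

      streaming_join("file1", "file2", "joined.txt")

    The output file name joined.txt is made up; the output order matches the example above (each feature is paired with A1, A2, A3 before moving on to the next feature).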

    Read the article

  • Windows Vista Home memory usage problem

    - by lordg
    Hi, I have a Windows Vista Home laptop from a client that is running on 1GB of RAM. The laptop is used for super basic things: Word, internet, Outlook, etc. What makes zero sense is that the RAM is being completely consumed, causing the PC to sometimes hang when it can't take any more. However, in Task Manager the processes appear to be consuming only maybe 100MB in total (Private Working Set). The client literally has a simple setup and is running Kaspersky, though nothing indicates that it is the cause of the excessive memory usage. Does anyone have a suggestion on how to resolve the memory issue, or how to track down what is actually happening and fix it? G

    Read the article

  • Swap 95%+, but a lot of free RAM

    - by Paolo_NL_FR
    I am running CentOS 5.8 with cPanel. Lately I am getting reports that my swap is full, but there is a lot of free memory to use:

      top - 10:33:43 up 133 days, 17:00, 1 user, load average: 0.05, 0.03, 0.05
      Tasks: 170 total, 1 running, 169 sleeping, 0 stopped, 0 zombie
      Cpu(s): 2.1%us, 0.5%sy, 0.0%ni, 97.2%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
      Mem: 24726100k total, 8255368k used, 16470732k free, 599560k buffers
      Swap: 1046520k total, 984740k used, 61780k free, 3641828k cached

    How do I solve this? The unused RAM should be used instead of the swap. Or should I increase the swap (and how do I do that)? Thanks
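    Pages that were swapped out during an earlier memory squeeze are not pulled back into RAM just because memory is free later, so a nearly full but quiet swap is often leftover history rather than an ongoing problem. Here is a Python sketch to see which processes own the swapped-out pages, with the caveat that the VmSwap field in /proc/<pid>/status only exists on kernels 2.6.34 and newer, so the stock CentOS 5 kernel may not report it:

      import glob
      import re

      def swap_by_process(top_n=15):
          """Sum VmSwap per process from /proc/*/status and print the biggest users."""
          usage = []
          for status in glob.glob("/proc/[0-9]*/status"):
              try:
                  text = open(status).read()
              except OSError:
                  continue  # the process exited between glob() and open()
              name = re.search(r"^Name:\s+(.+)$", text, re.M)
              swap = re.search(r"^VmSwap:\s+(\d+) kB", text, re.M)
              if name and swap and int(swap.group(1)) > 0:
                  usage.append((int(swap.group(1)), name.group(1)))
          for kb, name in sorted(usage, reverse=True)[:top_n]:
              print(f"{kb:>10} kB  {name}")

      swap_by_process()

    If the owners turn out to be long-idle daemons, lowering vm.swappiness or simply leaving things alone is usually enough; swapoff -a followed by swapon -a forces everything back into RAM, but it stalls the machine while it runs.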

    Read the article

  • Cherokee high virtual memory usage even after disabling I/O Cache

    - by nidheeshdas
    I have Ubuntu 10.04 LTS 64-bit running in an OpenVZ container, and Cherokee 1.0.8 compiled from source. The virtual memory usage of cherokee-worker is around 430 MB even after disabling the I/O cache (Advanced - I/O Cache - NOT enabled). Is this issue particular to OpenVZ? Many people report having successfully reduced virtual memory usage by disabling the I/O cache. htop output: http://imgur.com/z5JEL.jpg (newbies are not allowed to post images). Thanks in advance. nidheeshdas

    Read the article

  • How to identify who is using Hardware Reserved Memory in Windows 7

    - by blasteralfred
    I run Windows 7 x86 Home Premium. I have 4 GB of physical memory installed, of which 2.96 GB is usable (My Computer Properties). I checked the memory usage using Resource Monitor and found that 3036 MB of 4096 MB is available. I noticed that 1060 MB is unavailable because it is reserved by some "hardware component(s)". I would like to know which hardware component is using this 1060 MB. Is there any way or tool to identify this? Note: I know that Windows 7 Home Premium x86 supports a maximum of 4GB of RAM.

    Read the article

  • Memory works fine separately, but not together

    - by patersonjs
    I've been given four 1GB Corsair DDR2 memory modules and am trying to fit them into my computer, but I am getting BSODs in Windows XP and errors in Memtest86+. I've tried to identify whether one particular module is faulty by testing them in pairs. They work fine in pairs, but when all four are inserted, Memtest86+ reports errors. The motherboard is an Asus P5N-E with dual-channel support, and the modules are all the same model (same speed, capacity and timings), but one pair is a different hardware revision: one is v2.1 and the other is v2.2; the voltages are the same too. Could this minor difference be the cause of the problem? I've got the BIOS memory timing settings all on AUTO - should I set the timings manually?

    Read the article

  • How to Track CPU and Memory Usage Per Process

    - by Mjsk
    I have seen this question asked here before, but I was unable to follow the answer that was given. I would like to monitor a process's CPU, memory, and possibly GPU usage over a given time, and the data would be most useful presented in a graph. It would be nice if I could do this using Performance Monitor, but I am open to alternative solutions as well. I have tried Performance Monitor, and my problem is that I'm not sure which performance counters to use, since there are so many. I've been looking at Process, Processor, Memory, etc., but I'm not sure which counters within those categories are the ones of interest. My OS is Windows 7.
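    In Performance Monitor the per-process numbers live under the Process object (pick the instance named after your process): "% Processor Time" for CPU and "Working Set - Private" for memory; a Data Collector Set can log them all day and graph them afterwards. As an alternative, here is a hedged sketch using the third-party psutil package (pip install psutil); the process name "notepad.exe", the sampling interval and the output file are placeholders:

      import csv
      import time
      import psutil

      TARGET = "notepad.exe"   # placeholder process name
      INTERVAL_SEC = 5

      def find_process(name):
          # return the first running process whose executable name matches
          for proc in psutil.process_iter(["name"]):
              if proc.info["name"] == name:
                  return proc
          return None

      proc = find_process(TARGET)
      with open("usage_log.csv", "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["time", "cpu_percent", "rss_bytes"])
          while proc is not None and proc.is_running():
              cpu = proc.cpu_percent(interval=INTERVAL_SEC)  # blocks for the interval
              rss = proc.memory_info().rss
              writer.writerow([time.strftime("%H:%M:%S"), cpu, rss])
              f.flush()

    GPU usage is not covered by either route; on Windows 7 that generally needs a vendor-specific tool.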

    Read the article

  • How to get sharing options to stick - ubuntu 12.04

    - by Devin
    I'm having a really hard time trying to get my sharing set up on my 12.04 system. I've tried both the desktop version and the server version - I'm a bit of a Linux n00b, so I need a GUI; the command line is beyond me and there's no time to learn it (not till after I get the shares set up, at least). My problem is that whenever I try to set permissions in Nautilus, it just reverts back to the default, which is "none". Basically, when I choose an option... it doesn't stick. I can create shares, and it asks me if I want to add permissions automatically - but they do not stick either. When I go to look at the shared folders in Windows (or even on my Android phone, or a Mac) it gives me permissions errors and doesn't let me log in, despite me clicking "allow guest access". I have no idea what to do or where to go. I've tried searching forums and Google, and I've tried everything I've come across - to no avail. I've even tried Mint builds to see if it's any different; no change there either. Please help! I really want to set up a server to share my media files and do backups in my house. Thanks for your help!

    Read the article

  • How do I install Ubuntu from a USB stick?

    - by Sophia
    When I go to the boot menu on my computer and select the USB stick, the screen goes black and a flickering underscore appears, as if I could type something. But I can't - whatever I press, nothing happens, except the PrintScrn/SysRq button; when I push it, my computer beeps. I get no menu to choose from. Nothing. I found out the USB stick is in msdos format. So what format should I use, and how can I format it? I am not a computer geek who knows everything - I'm just a beginner, and only 16 years old. I've got a new problem: the screen isn't black anymore. Now an error message appears:

      SYSLINUX 4.04 CHS 20110518 Copyright (C) 1994-2011 H. Peter Anvin et al
      ERROR: No configuration file found
      No DEFAULT or UI configuration directive found!
      boot:

    And when I write something:

      boot: example
      Could not find kernel image: example
      boot:

    Why does this fail every time? PS: I'm using Ubuntu Oneiric 11.10.

    Read the article

  • How can pointers to functions point to something that doesn't exist in memory yet? Why do prototypes have different addresses?

    - by Kacy Raye
    To my knowledge, functions do not get added to the stack until run-time, after they are called in the main function. So how can a pointer to a function hold a function's memory address if the function doesn't exist in memory? For example:

      #include <iostream>
      using namespace std;

      void func() { }

      int main() {
          void (*ptr)() = func;
          cout << reinterpret_cast<void*>(ptr) << endl;  // prints 0x8048644 even though func never gets added to the stack
      }

    Also, this next question is a little less important to me, so if you only know the answer to my first question, that is fine. But anyway, why does the value of the pointer (the memory address of the function) differ when I declare a function prototype and implement the function after main? In the first example, it printed 0x8048644 no matter how many times I ran the program. In the next example, it printed 0x8048680 no matter how many times I ran the program. For example:

      #include <iostream>
      using namespace std;

      void func();

      int main() {
          void (*ptr)() = func;
          cout << reinterpret_cast<void*>(ptr) << endl;
      }

      void func() { }

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    Hi All, I have a heavy application running on a CentOS server and I'm seeing some strange memory behavior. Here is a snapshot of a munin graph: As you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is memory that has been freed up but not yet cleaned by the OS and put back into the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give some tips to find the cause of the problem and/or make CentOS reclaim the inactive memory? Thanks. Some extra info:
    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).
    2) cat /proc/meminfo (at a later stage than the image) gives:

      MemTotal:     14371428 kB
      MemFree:       1207108 kB
      Buffers:         35440 kB
      Cached:        4276628 kB
      SwapCached:     785316 kB
      Active:        9038924 kB
      Inactive:      3902876 kB
      HighTotal:           0 kB
      HighFree:            0 kB
      LowTotal:     14371428 kB
      LowFree:       1207108 kB
      SwapTotal:    10223608 kB
      SwapFree:      6438320 kB
      Dirty:          627792 kB
      Writeback:           0 kB
      AnonPages:     7844560 kB
      Mapped:          49304 kB
      Slab:           146676 kB
      PageTables:      27480 kB
      NFS_Unstable:        0 kB
      Bounce:              0 kB
      CommitLimit:  17409320 kB
      Committed_AS: 16471488 kB
      VmallocTotal: 34359738367 kB
      VmallocUsed:    275852 kB
      VmallocChunk: 34359462007 kB
      HugePages_Total:     0
      HugePages_Free:      0
      HugePages_Rsvd:      0
      Hugepagesize:     2048 kB

    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
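    Inactive memory is not only clean, easily reclaimed cache: the inactive list also holds anonymous pages, and tmpfs data (the growing /tmp) is counted under Cached but can only be reclaimed by deleting the files or swapping them out. With AnonPages accounting for more than half of RAM and Committed_AS close to the CommitLimit, the pressure here looks like real allocations plus the tmpfs rather than a clean-up failure. Here is a small Python sketch (an illustration of how to read the numbers, not a fix) that summarizes /proc/meminfo:

      def meminfo():
          """Parse /proc/meminfo into a dict of values in kB (unitless fields stay as counts)."""
          info = {}
          with open("/proc/meminfo") as f:
              for line in f:
                  key, value = line.split(":", 1)
                  info[key.strip()] = int(value.split()[0])
          return info

      m = meminfo()
      print(f"MemTotal   : {m['MemTotal']:>12} kB")
      print(f"MemFree    : {m['MemFree']:>12} kB")
      print(f"Cached     : {m['Cached']:>12} kB  (includes tmpfs files, which are not droppable)")
      print(f"AnonPages  : {m['AnonPages']:>12} kB  (process heaps/stacks; reclaimable only via swap)")
      print(f"Inactive   : {m['Inactive']:>12} kB")
      # a rough "could be freed quickly" figure, minus whatever lives on the tmpfs /tmp
      print(f"roughly reclaimable: {m['MemFree'] + m['Buffers'] + m['Cached']} kB")

    If the inactive growth tracks the tmpfs, the practical fix is trimming /tmp (or remounting it with a smaller size=), rather than trying to force the kernel to reclaim.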

    Read the article
