Search Results

Search found 12398 results on 496 pages for 'in memory oltp'.


  • (12)Cannot allocate memory: couldn't spawn child process: /usr/lib/cgi-bin/mailman/admin

    - by virtuallight
    Hi, I'm trying to install mailman + postfix + apache2 on a VPS running Ubuntu 8.10. I think I set everything up according to the official Ubuntu docs, but I'm getting this error when trying to access mailman's admin page:
      [Wed Jun 09 21:36:02 2010] [error] [client 77.65.61.4] (12)Cannot allocate memory: couldn't create child process: 12: admin
      [Wed Jun 09 21:36:02 2010] [error] [client 77.65.61.4] (12)Cannot allocate memory: couldn't spawn child process: /usr/lib/cgi-bin/mailman/admin
    I have no idea where the problem might be. Someone please help me :)
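    Error (12) is ENOMEM: Apache could not fork the mailman CGI because the VPS ran out of memory (or hit its memory limit), rather than anything mailman-specific. A quick diagnostic sketch, run on the VPS:
      free -m                              # how much memory the VPS really has available
      ps -o pid,vsz,rss,cmd -C apache2     # how large each Apache child already is
    If memory is indeed exhausted, lowering StartServers and MaxClients in apache2.conf (or asking the provider for a larger memory guarantee) usually lets the mailman CGIs spawn again.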

    Read the article

  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specification: 2 vCPUs, 4 GB RAM, 160 GB disk space, 400 Mb/s network, system image Ubuntu 12.04 LTS. I am only running Magento CE 1.7.0.2 on this server, nothing else. Usually the server has a loading time of 4-5 seconds. Recently this has climbed to over 30 seconds, and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000 ms. Running top and sorting the processes returns the following:
      top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91
      Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie
      Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st
      PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
      31901 www-data 20 0  360m  71m  5840 R 7    1.8  1:39.51 apache2
      32084 www-data 20 0  362m  72m  5548 R 7    1.8  1:31.56 apache2
      32089 www-data 20 0  348m  59m  5660 R 7    1.5  1:41.74 apache2
      32295 www-data 20 0  343m  54m  5532 R 7    1.4  2:00.78 apache2
      32303 www-data 20 0  354m  65m  5260 R 7    1.6  1:38.76 apache2
      32304 www-data 20 0  346m  56m  5544 R 7    1.4  1:41.26 apache2
      32305 www-data 20 0  348m  59m  5640 R 7    1.5  1:50.11 apache2
      32291 www-data 20 0  358m  69m  5256 R 6    1.7  1:44.26 apache2
      32517 www-data 20 0  345m  56m  5532 R 6    1.4  1:45.56 apache2
      30473 www-data 20 0  355m  66m  5680 R 6    1.7  2:00.05 apache2
      32093 www-data 20 0  352m  63m  5848 R 6    1.6  1:53.23 apache2
      32302 www-data 20 0  345m  56m  5512 R 6    1.4  1:55.87 apache2
      32433 www-data 20 0  346m  57m  5500 S 6    1.4  1:31.58 apache2
      32638 www-data 20 0  354m  65m  5508 R 6    1.6  1:36.59 apache2
      32230 www-data 20 0  347m  57m  5524 R 6    1.4  1:33.96 apache2
      32231 www-data 20 0  355m  66m  5512 R 6    1.7  1:37.47 apache2
      32233 www-data 20 0  354m  64m  6032 R 6    1.6  1:59.74 apache2
      32300 www-data 20 0  355m  66m  5672 R 6    1.7  1:43.76 apache2
      32510 www-data 20 0  347m  58m  5512 R 6    1.5  1:42.54 apache2
      32521 www-data 20 0  348m  59m  5508 R 6    1.5  1:47.99 apache2
      32639 www-data 20 0  344m  55m  5512 R 6    1.4  1:34.25 apache2
      32083 www-data 20 0  345m  56m  5696 R 5    1.4  1:59.42 apache2
      32085 www-data 20 0  347m  58m  5692 R 5    1.5  1:42.29 apache2
      32293 www-data 20 0  353m  64m  5676 R 5    1.6  1:52.73 apache2
      32301 www-data 20 0  348m  59m  5564 R 5    1.5  1:49.63 apache2
      32528 www-data 20 0  351m  62m  5520 R 5    1.6  1:36.11 apache2
      31523 mysql    20 0  3460m 576m 8288 S 5    14.4 2:06.91 mysqld
      32002 www-data 20 0  345m  55m  5512 R 5    1.4  2:01.88 apache2
      32080 www-data 20 0  357m  68m  5512 S 5    1.7  1:31.30 apache2
      32163 www-data 20 0  347m  58m  5512 S 5    1.5  1:58.68 apache2
      32509 www-data 20 0  345m  56m  5504 R 5    1.4  1:49.54 apache2
      32306 www-data 20 0  358m  68m  5504 S 4    1.7  1:53.29 apache2
      32165 www-data 20 0  344m  55m  5524 S 4    1.4  1:40.71 apache2
      32640 www-data 20 0  345m  56m  5528 R 4    1.4  1:36.49 apache2
      31888 www-data 20 0  359m  70m  5664 R 4    1.8  1:57.07 apache2
      32511 www-data 20 0  357m  67m  5512 S 3    1.7  1:47.00 apache2
      32054 www-data 20 0  357m  68m  5660 S 2    1.7  1:53.10 apache2
      1     root     20 0  24452 2276 1232 S 0    0.1  0:01.58 init
    Moreover, running free -m returns the following:
                         total   used   free  shared  buffers  cached
      Mem:                4003   3919     83       0      118     901
      -/+ buffers/cache:          2899   1103
      Swap:                  0      0      0
    To investigate this further, I installed Apache Buddy; it recommended that I reduce the MaxClients setting, which I did. I also installed MySQLTuner, which suggests that I set my innodb_buffer_pool_size to >= 3.0G. However, I cannot do that, since the whole server only has 4 GB of memory.
    Here is the output from Apache Buddy:
      ### GENERAL REPORT ###
      Settings considered for this report:
        Your server's physical RAM: 4003MB
        Apache's MaxClients directive: 40
        Apache MPM Model: prefork
        Largest Apache process (by memory): 73.77MB
      [ OK ] Your MaxClients setting is within an acceptable range.
        Max potential memory usage: 2950.8 MB
        Percentage of RAM allocated to Apache: 73.72 %
    And this is the output of MySQLTuner:
      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M)
      [--] Reads / Writes: 45% / 55%
      [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads)
      [OK] Maximum possible memory usage: 2.5G (64% of installed RAM)
      [OK] Slow queries: 0% (0/675K)
      [OK] Highest usage of available connections: 26% (40/151)
      [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M
      [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads)
      [OK] Query cache efficiency: 92.5% (500K cached / 541K selects)
      [!!] Query cache prunes per day: 302886
      [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts)
      [!!] Joins performed without indexes: 12135
      [OK] Temporary tables created on disk: 25% (8K on disk / 32K total)
      [OK] Thread cache hit rate: 90% (1K created / 12K connections)
      [!!] Table cache hit rate: 17% (400 open / 2K opened)
      [OK] Open file limit used: 12% (123/1K)
      [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks)
      [!!] InnoDB buffer pool / data size: 2.0G/3.5G
      [OK] InnoDB log waits: 0
      -------- Recommendations -----------------------------------------------------
      General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        MySQL started within last 24 hours - recommendations may be inaccurate
        Enable the slow query log to troubleshoot bad queries
        Adjust your join queries to always utilize indexes
        Increase table_cache gradually to avoid file descriptor limits
        Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
      Variables to adjust:
        query_cache_size (> 64M)
        join_buffer_size (> 128.0K, or always use indexes with joins)
        table_cache (> 400)
        innodb_buffer_pool_size (>= 3G)
    Last but not least, the server still has more than 60% of its disk space free. Now, based on the above, I have a few questions: Are these numbers normal? Do they make sense? Do I need to upgrade the server? If I don't need to upgrade and my configuration is not correct, how do I optimize it?
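    The trade-off between the two reports above can be checked with simple arithmetic: Apache's worst case is MaxClients times the largest child, and whatever is promised to InnoDB has to fit in what is left. A back-of-the-envelope sketch (the 1500 MB figure for mysqld plus OS headroom is an assumption, not a measured value):
      # rough sizing for a 4 GB box running Apache prefork + MySQL (illustrative numbers)
      awk 'BEGIN {
          ram = 4003        # MB, total from "free -m"
          per_child = 74    # MB, largest Apache child reported by Apache Buddy
          mysql_os = 1500   # MB assumed for mysqld plus OS headroom - adjust to the buffer pool you choose
          printf "MaxClients <= %d\n", (ram - mysql_os) / per_child
      }'
    With only 4 GB in the box, a 3 GB InnoDB buffer pool and forty ~70 MB Apache children cannot coexist; either the buffer pool stays smaller, MaxClients comes down further, or the server gets more RAM.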

    Read the article

  • Print spooler consumes over 1GB of memory

    - by Stephen Jennings
    Suddenly, on a Windows Vista Business workstation I manage, the Windows print spooler service is consuming over 1 GB of memory. I got the call this morning that the user could not print. I discovered all printers were missing from the Printers applet in Control Panel. I rebooted the machine, and at first the printers were still missing, but after a few minutes (and much banging my head against the wall) they suddenly appeared. I stopped worrying about it until it happened again later today on the same workstation. To my knowledge, nothing has changed on the computer: no new printers have been added, no new print drivers would have been installed, and no new software is being used. I tried clearing out the spooler folder (C:\Windows\System32\spool\PRINTERS), which did have four print jobs from this morning, but the problem persists after restarting the spooler service. When the service starts, it uses 824 KB of memory; after about 20 seconds it starts creeping up by about 10 MB per second until it stabilizes around 1.8 GB.

    Read the article

  • Debian server doesn't free memory after backup

    - by stan31337
    I have a production server running Debian 6.0.6 Squeeze.
      # uname -a
      Linux debsrv 2.6.32-5-xen-amd64 #1 SMP Sun Sep 23 13:49:30 UTC 2012 x86_64 GNU/Linux
    Every day cron executes a backup script as root:
      # crontab -e
      0 5 * * * /root/sites_backup.sh > /dev/null 2>&1
      # nano /root/sites_backup.sh
      #!/bin/bash
      str=`date +%Y-%m-%d-%H-%M-%S`
      tar pzcf /home/backups/sites/mysite-$str.tar.gz /var/sites/mysite/public_html/www
      mysqldump -u mysite -pmypass mysite | gzip -9 > /home/backups/sites/mysite-$str.sql.gz
      cd /home/backups/sites/
      sha512sum mysite-$str* > /home/backups/sites/mysite-$str.tar.gz.DIGESTS
      cd ~
    Everything works perfectly, but I notice that Munin's memory graph shows an increase in cache and buffers after the backup. Then I download the backup files and delete them, and after the deletion Munin's graph returns cache and buffers to the state they were in before the backup. Here's the Munin graph: unfortunately I don't have enough rep to add an image here, so here's a link:
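    What Munin is showing after the backup is almost certainly the Linux page cache filling up with the blocks that tar and mysqldump just read and wrote; that memory is still reclaimable, which is why deleting the archives appears to "free" it. A quick way to confirm (run as root; dropping caches is harmless but rarely necessary):
      free -m                                    # the "buffers"/"cached" columns are reclaimable, not lost
      sync && echo 3 > /proc/sys/vm/drop_caches  # proves the point: the cache shrinks back without touching the backups
    Nothing in the backup script is leaking; the kernel simply keeps recently used file data around until something else needs the RAM.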

    Read the article

  • Mapping of memory addresses to physical modules in Windows XP

    - by Josef Grahn
    I plan to run 32-bit Windows XP on a workstation with dual processors, based on Intel's Nehalem microarchitecture, and triple-channel RAM. Even though XP is limited to 4 GB of RAM, my understanding is that it will function with more than 4 GB installed, but will only expose 4 GB (or slightly less). My question is: assuming that 6 GB of RAM is installed in six 1 GB modules, which physical 4 GB will Windows actually map into its address space? In particular: Will it use all six 1 GB modules, taking advantage of all memory channels? (My guess is yes, and that the mapping to individual modules within a group happens in hardware.) Will it map 2 GB of address space to each of the two NUMA nodes (as each processor has its own memory interface), or will one processor get fast access to 3 GB of RAM while the other only has 1 GB? Thanks!

    Read the article

  • Are idle processes and high memory usage bad? uwsgi/django

    - by JimJimThe3rd
    I have a VPS with 256 MB of RAM. I'm running nginx, uwsgi and postgresql on Ubuntu 12.04 for a soon-to-be Django site. About 200 MB of RAM is in use even though the website isn't active yet; the uwsgi processes seem to just be idling. Is this bad? I once heard that the amount of free memory isn't necessarily a good metric, because memory that is "in use" can often be freed easily. In other words, the server may be caching commonly used data in case it is accessed again, but is happy to drop it if the RAM is needed. I'm really not sure, hence this question. If it is bad, I could set some of uwsgi's application-loading options such as "cheap" or "idle" mode. Screenshot of my htop
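    If the idle workers do turn out to be a problem, the "cheap" and "idle" options mentioned above look like the right knobs. A minimal uWSGI ini sketch (the option values are illustrative guesses, not tuned settings):
      [uwsgi]
      master = true
      # cap the number of workers on a 256 MB VPS
      workers = 2
      # don't spawn workers until the first request arrives
      cheap = true
      # go back to cheap mode after 300 seconds with no requests
      idle = 300
    The usual caveat is that the first request after an idle period pays the cost of loading Django again, so this trades a little latency for a smaller resident footprint.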

    Read the article

  • Apache2 memory usage when uploading large files

    - by abhaga
    Hi, I am running apache 2.2.12 along with PHP 5.2.10. PHP is configured to run as a separate process through fcgid. The problem is that when users upload a file, the size of the Apache process swells by almost the same amount. So if somebody tries to upload a 200 MB file, one of the child processes swells to its current size + 200 MB. If two users start uploading simultaneously, my server crashes. Now, it is the virtual memory size which is increasing, but since I am on an OpenVZ-based VPS, that is what counts. My questions are: Is this normal Apache behavior, or can I do something to fix it? If not, is there a more memory-efficient way of handling big file uploads? Going by the current behavior, I will need 1 GB of free RAM for every Apache child accepting an upload. Thanks! Abhaya
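    With mod_fcgid, the request body is normally spooled rather than held in RAM, and there are directives that control how much of it may stay in Apache's memory. A hedged httpd.conf sketch (directive names are from mod_fcgid 2.3+; older releases used MaxRequestInMem / MaxRequestLen, and the values here are only examples):
      <IfModule mod_fcgid.c>
          # keep at most 64 KB of a request body in Apache's memory; spool the rest to a temp file
          FcgidMaxRequestInMem 65536
          # still allow request bodies (uploads) up to roughly 250 MB
          FcgidMaxRequestLen 262144000
      </IfModule>
    On an OpenVZ VPS it is virtual memory that is accounted, so it is also worth checking whether the swelling is address space (VSZ) rather than resident pages (RSS) before assuming a child really needs 1 GB of RAM per upload.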

    Read the article

  • reduce memory footprint of java virtual machine

    - by Lorenzo Boccaccia
    I have a Citrix server where multiple users each run multiple Java applications. Is there a way to reduce the memory footprint of the JVM itself? The max heap is already set fairly low (64 MB), as is the permgen space (32 MB), and we're at the point where the JVM itself uses far more memory than the application (the committed area is around 350 MB). I'm looking for a way to reduce the JVM's RAM usage, or to make all the applications run within the same JVM, or any other way of sharing common pages between running JVMs (if possible), or to switch to a JVM that has optimizations for this scenario, if one exists. We are currently using Windows 2003 Server and the Sun Java virtual machine 1.6.
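    On a Sun 1.6 JVM a fair amount of the per-process footprint comes from thread stacks, the collector choice and duplicated class data, so the launch flags are the first lever to try. A hedged sketch (app.jar and the exact sizes are placeholders for however the application is actually launched):
      java -Xmx64m -XX:MaxPermSize=32m -Xss256k -XX:+UseSerialGC -Xshare:on -jar app.jar
    -Xss shrinks each thread's stack, -XX:+UseSerialGC avoids the parallel collector's extra overhead, and -Xshare:on enables class data sharing so the core class metadata is mapped read-only and shared between all the JVMs on the box (supported on the 32-bit client VM, which is also the smaller VM to begin with).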

    Read the article

  • Task Manager does not show memory usage

    - by Robin
    I just noticed this yesterday. I've selected different memory columns (none of them worked) and tried showing processes from all users. I'm using Windows 7. It doesn't slow down my computer or do anything else; I just want to know why it happens and how to fix it. Could anyone help me with this? Thank you. I can't post a picture, but it looks like this - the memory column only shows "K", without the actual number:
      Image Name    User Name   CPU   Memory (Private Working Set)   Description
      System        SYSTEM      01    K                              NT Kernel & System
      smss.exe      SYSTEM      00    K                              Win Session Manager
      wininit.exe   SYSTEM      00    K                              Win Start-up Applic
    It's pretty much the same as http://www.sevenforums.com/general-discussion/56891-my-task-manager-doesnt-show-ram-usage-each-program.html - that is the only one I found on Google.

    Read the article

  • How do you record how much memory an app is using on OS X

    - by Ace Legend
    I'm on a Mac Mini with OS X 10.8.2. I am an app developer, but in this case I am building an app in C++, so I cannot use Xcode for this. I would like to track how much memory my app is using, but I don't want to record it manually. How do I do this? MORE INFO: I want to record it all day long. I will have the app running all day so that I can compare peaks in memory usage. I am not opposed to third-party apps, as long as they are reliable. Thanks.
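    One approach that needs no third-party tools is to sample ps from a shell loop and append to a log all day. A minimal sketch ("MyApp" and the 60-second interval are placeholders):
      # sample the app once a minute and append a timestamped line to a log
      while true; do
          ps -axo rss,vsz,comm | grep "[M]yApp" | \
              awk -v ts="$(date '+%Y-%m-%d %H:%M:%S')" '{ printf "%s rss=%sKB vsz=%sKB\n", ts, $1, $2 }' >> ~/myapp_mem.log
          sleep 60
      done
    RSS is the resident memory in kilobytes; the log can later be grepped for peaks or imported into a spreadsheet for a graph.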

    Read the article

  • Memory limit on PHP + Apache + Windows 32 bit?

    - by thkala
    I am considering using 32-bit Apache for a Moodle installation on a Windows 2008 R2 64-bit / 16 GB server. Since the available memory affects the number of concurrent users that can be served, I was wondering how the 2 GB memory limit on 32-bit Windows processes affects Apache + PHP. Is it a collective limit for the whole server, or is it applied separately to each Apache child process/thread? If it is separate, how many of those children are launched on Windows? One per request? One per processor core? Something in between? Is this somehow configurable?
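    For what it is worth, the Windows MPM is not prefork: Apache on Windows runs one control process plus a single multi-threaded child, so the 2 GB address-space limit applies per process, in practice to that one child as a whole (and separately to each php-cgi process if PHP runs via FastCGI). The concurrency knob is ThreadsPerChild; a hedged httpd.conf sketch with illustrative values:
      <IfModule mpm_winnt_module>
          # all requests are served by threads inside this single child process
          ThreadsPerChild 150
          # never recycle the child based on request count
          MaxRequestsPerChild 0
      </IfModule>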

    Read the article

  • Motherboard Issue - 3 Beep Bios (memory error) despite new RAM

    - by Glenn
    I have an Intel dG43RK motherboard, bought new and sealed. I have tried two different brands and speeds of RAM and get a 3-beep BIOS code indicating a memory error, which also occurs with no RAM installed (as it should). The memory tried is:
      1 x 4 GB Kingston HyperX DDR3-1333 (new and sealed)
      2 x 4 GB Team Elite DDR3-1066 (new and sealed)
    I have tried multiple configurations and seating layouts and still no luck. I also have a GT520 graphics card installed, as I dislike on-board graphics in most cases and had it at hand (also new and sealed). The only used parts are the CPU, which worked in my previous tower and was taken directly from that PC into the new set-up, and the CPU fan, which will be replaced with a new one once this is resolved. I've run out of ideas myself and any help is appreciated.

    Read the article

  • Would the shell command 'join' cause an out-of-memory error?

    - by Hancy
    I have two files to join.
    FILE 1:
      a A1
      a A2
      a A3
      ...
      c C1
      c C2
      ...
    FILE 2:
      a feature1_of_a
      a feature2_of_a
      ...
      a featureN_of_a
      ...
      ...
      c feature1_of_c
      c feature2_of_c
      ...
    After the join, I want to get a file like this:
      A1 feature1_of_a
      A2 feature1_of_a
      A3 feature1_of_a
      A1 feature2_of_a
      A2 feature2_of_a
      A3 feature2_of_a
      ...
      A1 featureN_of_a
      A2 featureN_of_a
      A3 featureN_of_a
      ...
    To do that, I wrote the shell command join -11 -21 -o1.2,2.2 file1 file2. But the problem is that the number N might be huge, so if join reads all the features of a into memory at once, there might not be enough memory. I don't know how join is implemented. Would memory become a problem? If so, is there any way to get what I want?
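    As far as I can tell, GNU join streams its two (sorted) inputs but buffers every line belonging to the current key so it can emit the cross product, so a single id with a huge N of features can indeed use a lot of memory. A sketch of an alternative that keeps only FILE 1 (the short id list) in memory and streams FILE 2 one line at a time (it assumes the two-column layout shown above):
      # load FILE 1 into memory (id -> "A1 A2 A3"), then stream FILE 2 a line at a time
      awk 'NR == FNR { ids[$1] = ($1 in ids) ? ids[$1] " " $2 : $2; next }
           $1 in ids { n = split(ids[$1], a, " "); for (i = 1; i <= n; i++) print a[i], $2 }' file1 file2
    Memory use is then proportional to the size of FILE 1 only, no matter how large N grows.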

    Read the article

  • Windows Vista Home memory usage problem

    - by lordg
    Hi, I have a Windows Vista Home laptop from a client that is running on 1 GB of RAM. The laptop is used for super basic things: Word, the internet, Outlook, etc. What makes zero sense is that the RAM is being completely consumed, causing the PC to hang sometimes when it can't take it anymore. However, in Task Manager the processes appear to be consuming only about 100 MB (Private Working Set). The client literally has a simple setup and is running Kaspersky, though that does not seem to be the cause of the excessive memory usage. Does anyone have a suggestion on how to resolve the memory issue, or how to track down what is actually happening and fix it? G

    Read the article

  • Swap 95%+ , but a lot of free ram memory

    - by Paolo_NL_FR
    I am running CentOS 5.8 with cPanel. Lately I am getting reports that my swap is full, even though there is a lot of free memory:
      top - 10:33:43 up 133 days, 17:00, 1 user, load average: 0.05, 0.03, 0.05
      Tasks: 170 total, 1 running, 169 sleeping, 0 stopped, 0 zombie
      Cpu(s): 2.1%us, 0.5%sy, 0.0%ni, 97.2%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
      Mem:  24726100k total, 8255368k used, 16470732k free, 599560k buffers
      Swap:  1046520k total,  984740k used,    61780k free, 3641828k cached
    How do I solve this? The unused RAM should be used instead of the swap. Or should I increase the swap (and how do I do that)? Thanks
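    The kernel does not proactively move pages back out of swap just because RAM later becomes free; swapped pages only return when they are touched. If the box really has 16 GB free, a hedged sketch for pulling everything back in and making the kernel less eager to swap in the first place (run as root; swapoff needs enough free RAM to absorb the ~1 GB currently swapped, which this box has):
      sysctl vm.swappiness            # the default is usually 60
      sysctl -w vm.swappiness=10      # prefer dropping page cache over swapping application memory
      swapoff -a && swapon -a         # forces the swapped pages back into RAM and empties the swap area
    To keep the setting across reboots, add vm.swappiness = 10 to /etc/sysctl.conf.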

    Read the article

  • Cherokee high virtual memory usage even after disabling I/O Cache

    - by nidheeshdas
    I have Ubuntu 10.04 LTS 64-bit running in an OpenVZ container and Cherokee 1.0.8 compiled from source. The virtual memory usage of cherokee-worker is around 430 MB even after disabling the I/O cache (Advanced -> I/O Cache -> NOT enabled). Is this issue particular to OpenVZ? Many people have reported successfully reducing virtual memory usage by disabling the I/O cache. htop output: http://imgur.com/z5JEL.jpg (newbies are not allowed to post images). Thanks in advance, nidheeshdas
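    Virtual size mostly measures reserved address space (thread stacks, mmap() regions, allocator arenas) rather than RAM actually in use, and OpenVZ tends to make it look worse because its beancounters typically account for address space. A quick check (the beancounters file only exists inside OpenVZ containers):
      ps -C cherokee-worker -o pid,vsz,rss,cmd                 # VSZ = address space, RSS = memory actually resident
      grep -E 'failcnt|privvmpages' /proc/user_beancounters    # a rising failcnt means the container's limit is being hit
    If RSS is small and failcnt stays at zero, the 430 MB figure is mostly harmless address-space reservation rather than real memory pressure.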

    Read the article

  • How to identify who is using Hardware Reserved Memory in Windows 7

    - by blasteralfred
    I run Windows 7 x86 Home Premium. I have 4 GB of physical memory installed, of which 2.96 GB is usable (My Computer > Properties). I checked the memory usage using Resource Monitor and found 3036 MB of 4096 MB available. I noticed that 1060 MB is unavailable because it is reserved by some hardware component(s). I would like to know which hardware component is using this 1060 MB. Is there any way or tool to identify this? Note: I know that Windows 7 Home Premium x86 supports a maximum of 4 GB of RAM.

    Read the article

  • Memory works fine separately, but not together

    - by patersonjs
    I've been given four 1 GB Corsair DDR2 memory modules and am trying to fit them into my computer, but I'm getting BSODs on Windows XP and errors in Memtest86+. I've tried to identify whether one particular module is faulty by testing them in pairs. They work fine in pairs, but when all four are inserted, Memtest86+ reports errors. The motherboard is an Asus P5N-E with dual-channel support, and the modules are all the same model (same speed, capacity and timings), but one pair is a different hardware revision: one is v2.1 and the other is v2.2. The voltages are the same too. Would this minor difference be a possible cause of the problem? I've got the BIOS memory timing settings all at AUTO - should I set the timings manually?

    Read the article

  • How to Track CPU and Memory Usage Per Process

    - by Mjsk
    I have seen this question asked on here before but was unable to follow the answer that was given. I would like to monitor a process's CPU, memory, and possibly GPU usage over a given time period. The data would be most useful presented in a graph. It would be nice if I could do this using Performance Monitor, but I am open to alternative solutions as well. I have tried using Performance Monitor and my problem is that I'm not sure which performance counters to use, since there are so many. I've been looking at Process, Processor, Memory, etc., but I'm not sure which counters within those categories will be of interest to me. My OS is Windows 7.
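    The per-process counters live under the Process object, for example \Process(<name>)\% Processor Time and \Process(<name>)\Working Set - Private; Windows 7 has no built-in per-process GPU counter. The same counters can be logged to a CSV (and graphed later) with typeperf; a hedged one-liner where notepad and the 5-second interval are placeholders:
      typeperf "\Process(notepad)\% Processor Time" "\Process(notepad)\Working Set - Private" -si 5 -o proc_usage.csv
    The resulting CSV opens directly in Performance Monitor or a spreadsheet for graphing.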

    Read the article

  • How can pointers to functions point to something that doesn't exist in memory yet? Why do prototypes have different addresses?

    - by Kacy Raye
    To my knowledge, functions do not get added to the stack until run-time, after they are called in the main function. So how can a pointer to a function hold a function's memory address if the function doesn't exist in memory? For example:
      #include <iostream>
      using namespace std;

      void func() { }

      int main() {
          void (*ptr)() = func;
          cout << reinterpret_cast<void*>(ptr) << endl;   // prints 0x8048644 even though func never gets added to the stack
      }
    Also, this next question is a little less important to me, so if you only know the answer to my first question, that is fine. Why does the value of the pointer (the memory address of the function) differ when I declare a function prototype and implement the function after main? The first example printed 0x8048644 no matter how many times I ran the program; the next example printed 0x8048680 no matter how many times I ran the program. For example:
      #include <iostream>
      using namespace std;

      void func();

      int main() {
          void (*ptr)() = func;
          cout << reinterpret_cast<void*>(ptr) << endl;
      }

      void func() { }

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    I have a heavy application running on a CentOS server and I'm seeing some strange memory behavior. Here is a snapshot of a Munin graph: as you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. My understanding is that inactive memory is memory that has been freed up but not yet cleaned by the OS and put back into the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give me some tips to find the cause of the problem and/or get CentOS to reclaim the inactive memory? Thanks. Some extra info:
    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).
    2) cat /proc/meminfo (at a later stage than the image) gives:
      MemTotal:     14371428 kB
      MemFree:       1207108 kB
      Buffers:         35440 kB
      Cached:        4276628 kB
      SwapCached:     785316 kB
      Active:        9038924 kB
      Inactive:      3902876 kB
      HighTotal:           0 kB
      HighFree:            0 kB
      LowTotal:     14371428 kB
      LowFree:       1207108 kB
      SwapTotal:    10223608 kB
      SwapFree:      6438320 kB
      Dirty:          627792 kB
      Writeback:           0 kB
      AnonPages:     7844560 kB
      Mapped:          49304 kB
      Slab:           146676 kB
      PageTables:      27480 kB
      NFS_Unstable:        0 kB
      Bounce:              0 kB
      CommitLimit:  17409320 kB
      Committed_AS: 16471488 kB
      VmallocTotal: 34359738367 kB
      VmallocUsed:    275852 kB
      VmallocChunk: 34359462007 kB
      HugePages_Total:     0
      HugePages_Free:      0
      HugePages_Rsvd:      0
      Hugepagesize:     2048 kB
    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
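    Regarding extra info point 1: files on a tmpfs live in the page cache, but they cannot be dropped without being swapped out, so a steadily growing /tmp shows up as cached and inactive memory that the kernel cannot simply reclaim. A quick check (sketch):
      df -h /tmp && du -sh /tmp                              # how much the tmpfs is actually holding
      grep -E 'Inactive|Dirty|Committed_AS' /proc/meminfo    # watch these alongside the Munin graph over time
    If /tmp keeps growing at roughly the same rate as the inactive/committed figures, cleaning old files out of the tmpfs (or moving it to disk) is the place to start.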

    Read the article
